TechcraftingAI NLP
Brad Edwards
271 episodes
4 days ago
TechcraftingAI NLP brings you daily summaries of the latest arXiv Computation and Language research.
Technology
Ep. 256 - Part 2 - June 6, 2024
TechcraftingAI NLP
38 minutes 43 seconds
1 year ago

arXiv NLP research for Thursday, June 6, 2024.


00:20: The syntax-semantics interface in a child's path: A study of 3- to 11-year-olds' elicited production of Mandarin recursive relative clauses

02:17: Ask LLMs Directly, "What shapes your bias?": Measuring Social Bias in Large Language Models

03:39: Explainability and Hate Speech: Structured Explanations Make Social Media Moderators Faster

04:36: Intention and Face in Dialog

05:48: Uncovering Limitations of Large Language Models in Information Seeking from Tables

07:15: Are We Done with MMLU?

08:41: Legal Judgment Reimagined: PredEx and the Rise of Intelligent AI Interpretation in Indian Courts

09:53: Do Language Models Understand Morality? Towards a Robust Detection of Moral Content

11:47: Every Answer Matters: Evaluating Commonsense with Probabilistic Measures

12:49: Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness

14:26: Pointer-Guided Pre-Training: Infusing Large Language Models with Paragraph-Level Contextual Awareness

15:35: Confabulation: The Surprising Value of Large Language Model Hallucinations

16:42: DICE: Detecting In-distribution Contamination in LLM's Fine-tuning Phase for Math Reasoning

18:25: Legal Documents Drafting with Fine-Tuned Pre-Trained Large Language Model

19:32: ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models

20:50: mCSQA: Multilingual Commonsense Reasoning Dataset with Unified Creation Strategy by Language Models and Humans

22:21: What Do Language Models Learn in Context? The Structured Task Hypothesis

23:38: Rethinking LLM and Linguistic Steganalysis: An Efficient Detection of Strongly Concealed Stego

24:58: BEADs: Bias Evaluation Across Domains

26:41: FairytaleQA Translated: Enabling Educational Question and Answer Generation in Less-Resourced Languages

28:03: Benchmark Data Contamination of Large Language Models: A Survey

29:02: Transformers need glasses! Information over-squashing in language tasks

30:26: Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models

31:58: Characterizing Similarities and Divergences in Conversational Tones in Humans and LLMs by Sampling with People

33:44: ABEX: Data Augmentation for Low-Resource NLU via Expanding Abstract Descriptions

35:19: What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages

36:41: PaCE: Parsimonious Concept Engineering for Large Language Models
