muckrAIkers
Jacob Haimes and Igor Krawczuk
18 episodes
1 month ago
Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, providing some much-needed contextualization, constructive critique, and even a smidge of good-willed teasing to the conversation, trying to find the meaning under all of this muck.
Technology, Science, Mathematics
Episodes (18)
AI Safety for Who?

Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms today. We discuss what actual safety would look like, drawing on self-driving car regulations.

Chapters

  • (00:00) - Introduction & AI Investment Insanity
  • (01:43) - The Problem with AI Safety
  • (08:16) - Anthropomorphizing AI & Its Dangers
  • (26:55) - Mental Health, Wellness, and AI
  • (39:15) - Censorship, Bias, and Dual Use
  • (44:42) - Solutions, Community Action & Final Thoughts

Links

AI Ethics & Philosophy

  • Foreign affairs article - The Cost of the AGI Delusion
  • Nature article - Principles alone cannot guarantee ethical AI
  • Xeiaso blog post - Who Do Assistants Serve?
  • Argmin article - The Banal Evil of AI Safety
  • AI Panic News article - The Rationality Trap

AI Model Bias, Failures, and Impacts

  • BBC news article - AI Image Generation Issues
  • The New York Times article - Google Gemini German Uniforms Controversy
  • The Verge article - Google Gemini's Embarrassing AI Pictures
  • NPR article - Grok, Elon Musk, and Antisemitic/Racist Content
  • AccelerAId blog post - How AI Nudges are Transforming Up- and Cross-Selling
  • AI Took My Job website

AI Mental Health & Safety Concerns

  • Euronews article - AI Chatbot Tragedy
  • Popular Mechanics article - OpenAI and Psychosis
  • Psychology Today article - The Emerging Problem of AI Psychosis
  • Rolling Stone article - AI Spiritual Delusions Destroying Human Relationships
  • The New York Times article - AI Chatbots and Delusions

Guidelines, Governance, and Censorship

  • Preprint - R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model
  • Minds & Machines article - The Ethics of AI Ethics: An Evaluation of Guidelines
  • SSRN paper - Instrument Choice in AI Governance
  • Anthropic announcement - Claude Gov Models for U.S. National Security Customers
  • Anthropic documentation - Claude's Constitution
  • Reuters investigation - Meta AI Chatbot Guidelines
  • Swiss Federal Council consultation - Swiss AI Consultation Procedures
  • Grok Prompts Github Repo
  • Simon Willison blog post - Grok 4 Heavy
1 month ago
49 minutes

The Co-opting of Safety

We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safety engineering differs from AI "alignment," the myth of the alignment tax, and why this semantic confusion matters for actual safety.

  • (00:00) - Intro
  • (00:21) - Mecha-Hitler Grok
  • (10:07) - "Safety"
  • (19:40) - Under-specification
  • (53:56) - This time isn't different
  • (01:01:46) - Alignment Tax myth
  • (01:17:37) - Actually making AI safer

Links
  • JMLR article - Underspecification Presents Challenges for Credibility in Modern Machine Learning
  • Trail of Bits paper - Towards Comprehensive Risk Assessments and Assurance of AI-Based Systems
  • SSRN paper - Uniqueness Bias: Why It Matters, How to Curb It

Additional Referenced Papers

  • NeurIPS paper - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
  • ICML paper - AI Control: Improving Safety Despite Intentional Subversion
  • ICML paper - DarkBench: Benchmarking Dark Patterns in Large Language Models
  • OSF preprint - Current Real-World Use of Large Language Models for Mental Health
  • Anthropic preprint - Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

Inciting Examples

  • Ars Technica article - US government agency drops Grok after MechaHitler backlash, report says
  • The Guardian article - Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats
  • BBC article - Update that made ChatGPT 'dangerously' sycophantic pulled

Other Sources

  • London Daily article - UK AI Safety Institute Rebrands as AI Security Institute to Focus on Crime and National Security
  • Vice article - Prominent AI Philosopher and ‘Father’ of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv
  • LessWrong blogpost - "notkilleveryoneism" sounds dumb (see comments)
  • EA Forum blogpost - An Overview of the AI Safety Funding Situation
  • Book by Dmitry Chernov and Didier Sornette - Man-made Catastrophes and Risk Information Concealment
  • Euronews article - OpenAI adds mental health safeguards to ChatGPT, saying chatbot has fed into users’ ‘delusions’
  • Pleias website
  • Wikipedia page on Jaywalking
3 months ago
1 hour 24 minutes

AI, Reasoning or Rambling?

In this episode, we redefine AI's "reasoning" as mere rambling, exposing the "illusion of thinking" and "Potemkin understanding" in current models. We contrast the classical definition of reasoning (requiring logic and consistency) with Big Tech's new version, which is a generic statement about information processing. We explain how Large Rambling Models generate extensive, often irrelevant, rambling traces that appear to improve benchmarks, largely due to best-of-N sampling and benchmark gaming.

Words and definitions actually matter! Carelessness leads to misplaced investments and an overestimation of systems that are currently just surprisingly useful autocorrects.
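To make the best-of-N effect concrete, here is a minimal, hypothetical sketch (Python; generate() and is_correct() are made-up stand-ins for a model API and a benchmark's answer checker, not any real library). If a single sample solves a task with probability p, reporting "did any of N samples pass?" inflates that to 1 - (1 - p)^N, so a 10% model looks like a 65% model at N = 10:

    import random

    def generate(prompt):
        # Hypothetical stand-in: one stochastic completion,
        # correct ~10% of the time.
        return "right" if random.random() < 0.10 else "wrong"

    def is_correct(sample):
        # Hypothetical stand-in for the benchmark's answer checker.
        return sample == "right"

    def best_of_n(prompt, n=10):
        # Draw n independent samples; count the task as solved if ANY passes.
        # Reporting this as plain "accuracy" conflates search with skill.
        return any(is_correct(generate(prompt)) for _ in range(n))

    trials = 10_000
    single = sum(is_correct(generate("task")) for _ in range(trials)) / trials
    best10 = sum(best_of_n("task") for _ in range(trials)) / trials
    print(f"single-sample: {single:.2f}")  # ~0.10
    print(f"best-of-10:    {best10:.2f}")  # ~1 - 0.9**10 = 0.65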

  • (00:00) - Intro
  • (00:40) - OBB update and Meta's talent acquisition
  • (03:09) - What are rambling models?
  • (04:25) - Definitions and polarization
  • (09:50) - Logic and consistency
  • (17:00) - Why does this matter?
  • (21:40) - More likely explanations
  • (35:05) - The "illusion of thinking" and task complexity
  • (39:07) - "Potemkin understanding" and surface-level recall
  • (50:00) - Benchmark gaming and best-of-n sampling
  • (55:40) - Costs and limitations
  • (58:24) - Claude's anecdote and the Vending Bench
  • (01:03:05) - Definitional switch and implications
  • (01:10:18) - Outro

Links
  • Apple paper - The Illusion of Thinking
  • ICML 2025 paper - Potemkin Understanding in Large Language Models
  • Preprint - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling

Theoretical understanding

  • Max M. Schlereth Manuscript - The limits of AGI part II
  • Preprint - (How) Do Reasoning Models Reason?
  • Preprint - A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers
  • NeurIPS 2024 paper - How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad

Empirical explanations

  • Preprint - How Do Large Language Monkeys Get Their Power (Laws)?
  • Andon Labs Preprint - Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents
  • LeapLab, Tsinghua University and Shanghai Jiao Tong University paper - Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
  • Preprint - RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs
  • Preprint - Mind The Gap: Deep Learning Doesn't Learn Deeply
  • Preprint - Measuring AI Ability to Complete Long Tasks
  • Preprint - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

Other sources

  • Zuck's Haul webpage - Meta's talent acquisition tracker
    • Hacker News discussion - Opinions from the AI community
  • Interconnects blogpost - The rise of reasoning machines
  • Anthropic blog - Project Vend: Can Claude run a small shop?
4 months ago
1 hour 11 minutes

One Big Bad Bill

In this episode, we break down Trump's "One Big Beautiful Bill" and its dystopian AI provisions: automated fraud detection systems, centralized citizen databases, military AI integration, and a 10-year moratorium blocking all state AI regulation. We explore the historical parallels with authoritarian data consolidation and why this represents a fundamental shift away from limited government principles once held by US conservatives.

  • (00:00) - Intro
  • (01:13) - Bill, general overview
  • (05:14) - Bill, AI overview
  • (07:54) - Medicaid fraud detection systems
  • (11:20) - Bias in AI Systems and Ethical Concerns
  • (17:58) - Centralization of data
  • (30:04) - Military integration of AI
  • (37:05) - Tax incentives for development
  • (40:57) - Regulatory moratorium
  • (47:58) - One big bad authoritarian regime

Links
  • Congress page on the One Big Beautiful Bill Act
  • NYMag article - Republicans Admit They Didn’t Even Read Their Big Beautiful Bill
  • Everything is Horrible Blogpost - They Did Vote For This (GOP House Edition)

Authoritarianism

  • Historical context
    • Holocaust Encyclopedia article - Gleichschaltung: Coordinating the Nazi State
    • Wikipedia article - 1943 Amsterdam civil registry office bombing
    • Wikipedia article - Four Ds
  • Conservative leaning, pro-privacy, anti-government
    • Data Governance Hub blogpost - Review and Literature Guide of Trump’s “One Big Beautiful Dataset”
    • Cato Institute blogpost - If You Value Privacy, Resist Any Form of National ID Cards
    • American Enterprise Institute blogpost - The Dangerous Road to a “Master File”—Why Linking Government Databases Is a Terrible Idea
    • EFF blogpost - The Dangers of Consolidating All Government Information
  • ACLU against national ID cards
    • ACLU main page on national ID cards
    • ACLU blogpost - National Identification Cards: Why Does the ACLU Oppose a National I.D. System?
    • ACLU blogpost - 5 Problems with National ID Cards
  • Inherent unfairness of ML
    • Lighthouse Reports investigation - The Limits of Ethical AI
    • Lighthouse Reports investigation - Suspicion Machines
    • Amazon Science publication - Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law
    • Michigan Technology Law Review article - The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default
    • Wired article - Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms

Military

  • Wall Street Journal article - The Army’s Newest Recruits: Tech Execs From Meta, OpenAI and More
  • Trump executive order - Unleashing American Drone Dominance
  • Anthropic press release - Claude Gov Models for U.S. National Security Customers

Moratorium on State AI Regulation

  • TechPolicy.Press article - The State AI Laws Likeliest To Be Blocked by a Moratorium
  • Forbes article - Colorado’s AI Law Still Stands After Update Effort Fails

Other Sources

  • KPMG report - Incentives and credits tax provisions in “One Big Beautiful Bill Act”
  • The Register article - Trump team leaks AI plans in public GitHub repository
  • Wall Street Journal article - To Feed Power-Wolfing AI, Lawmakers Are Embracing Nuclear
  • CBS Austin article - IRS direct file program exceeded its expectations but faces uncertain future
5 months ago
53 minutes

Breaking Down the Economics of AI

Jacob and Igor tackle the wild claims about AI's economic impact by examining three main clusters of arguments: automating expensive tasks like programming, removing "cost centers" like call centers and corporate art, and claims of explosive growth. They dig into the actual data, debunk the hype, and explain why most productivity claims don't hold up in practice. Plus: MIT denounces a paper with fabricated data, and Grok randomly promotes white genocide myths.


  • (00:00) - Recording date + intro
  • (00:52) - MIT denounces paper
  • (04:09) - Grok's white genocide
  • (06:23) - Butthole convergence
  • (07:13) - AI and the economy
  • (14:50) - Automating profit centers
  • (29:46) - Removing the last cost centers
  • (47:16) - "This time is different" (explosive growth)
  • (57:55) - Alpha Evolve, optimization, and slippage


Links
  • University of Chicago working paper - Large Language Models, Small Labor Market Effects
  • OECD working paper - Miracle or Myth? Assessing the macroeconomic productivity gains from Artificial Intelligence
  • Epoch AI blogpost - Explosive Growth from AI: A Review of the Arguments
  • Business Insider article - Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months
  • Preprint - Transformative AGI by 2043 is <1% likely

Automating profit centers

  • Pivot to AI blogpost - If AI is so good at coding … where are the open source contributions?
  • Ben Evans' Mastodon post - "Show me the pull requests"
  • NY Times article - Your A.I. Radiologist Will Not Be With You Soon
  • FastCompany article - More companies are adopting 'AI-first' strategies. Here's how it could impact the environment
  • Forbes article - Business Tech News: Shopify CEO Says AI First Before Employees
  • Newsroom article - IBM Study: CEOs Double Down on AI While Navigating Enterprise Hurdles
  • PNAS research article - Evidence of a social evaluation penalty for using AI
  • Ars Technica article - AI use damages professional reputation, study suggests

Removing cost centers

  • The Register article - Anthropic's law firm blames Claude hallucinations for errors
  • Fortune article - Klarna plans to hire humans again, as new landmark survey reveals most AI projects fail to deliver
  • Wikipedia article - The Market for Lemons

AlphaEvolve

  • Deepmind press release - AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
  • Deepmind white paper - AlphaEvolve: A coding agent for scientific and algorithmic discovery

Off Topic

  • VelvetShark blogpost - Why do AI company logos look like buttholes?
  • MIT Economics press release - Assuring an accurate research record
  • Pivot to AI blogpost - How to make a splash in AI economics: fake your data
  • Pivot to AI blogpost - Even Elon Musk can’t make Grok claim a ‘white genocide’ in South Africa
5 months ago
1 hour 6 minutes

DeepSeek: 2 Months Out

DeepSeek has been out for over 2 months now, and things have begun to settle down. We take this opportunity to contextualize the developments that have occurred in its wake, both within the AI industry and the world economy. As systems get more "agentic" and users are willing to spend increasing amounts of time waiting for their outputs, the value of supposed "reasoning" models continues to be peddled by AI system developers, but does the data really back these claims?

Check out our DeepSeek minisode for a snappier overview!

EPISODE RECORDED 2025.03.30


  • (00:40) - DeepSeek R1 recap
  • (02:46) - What makes it new?
  • (08:53) - What is reasoning?
  • (14:51) - Limitations of reasoning models (why we hate reasoning)
  • (31:16) - Claims about R1 training on OpenAI
  • (37:30) - “Deep Research”
  • (49:13) - Developments and drama in the AI industry
  • (56:26) - Proposed economic value
  • (01:14:20) - US government involvement
  • (01:23:28) - OpenAI uses MCP
  • (01:28:15) - Outro


Links
  • DeepSeek website
  • DeepSeek paper
  • DeepSeek docs - Models and Pricing
  • DeepSeek repo - 3FS

Understanding DeepSeek/DeepResearch

  • Explainers
    • Language Models & Co. article - The Illustrated DeepSeek-R1
    • Towards Data Science article - DeepSeek-V3 Explained 1: Multi-head Latent Attention
    • Jina.ai article - A Practical Guide to Implementing DeepSearch/DeepResearch
    • Han, Not Solo blogpost - The Differences between Deep Research, Deep Research, and Deep Research
  • Analysis and Research
    • Preprint - Understanding R1-Zero-Like Training: A Critical Perspective
    • Blogpost - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study
    • Preprint - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling
    • Preprint - Chain-of-Thought Reasoning In The Wild Is Not Always Faithful

Fallout coverage

  • TechCrunch article - OpenAI calls DeepSeek 'state-controlled,' calls for bans on 'PRC-produced' models
  • The Verge article - OpenAI has evidence that its models helped train China’s DeepSeek
  • Interesting Engineering article - $6M myth: DeepSeek’s true AI cost is 216x higher at $1.3B, research reveals
  • Ars Technica article - Microsoft now hosts AI model accused of copying OpenAI data
  • The Signal article - Nvidia loses nearly $600 billion in DeepSeek crash
  • Yahoo Finance article - The 'Magnificent 7' stocks are having their worst quarter in more than 2 years
  • Reuters article - Microsoft pulls back from more data center leases in US and Europe, analysts say

US governance

  • National Law Review article - Three States Ban DeepSeek Use on State Devices and Networks
  • CNN article - US lawmakers want to ban DeepSeek from government devices
  • House bill - No DeepSeek on Government Devices Act
  • Senate bill - Decoupling America's Artificial Intelligence Capabilities from China Act of 2025

Leaderboards

  • aider
  • LiveBench
  • LM Arena
  • Konwinski Prize
  • Preprint - SWE-Bench+: Enhanced Coding Benchmark for LLMs
  • Cybernews article - OpenAI study proves LLMs still behind human engineers in over 1400 real-world tasks

Other References

  • Anthropic report - The Anthropic Economic Index
  • METR Report - Measuring AI Ability to Complete Long Tasks
  • The Information article - OpenAI Discusses Building Its First Data Center for Storage
    • Deepmind report backing up this idea
  • TechCrunch article - OpenAI adopts rival Anthropic's standard for connecting AI models to data
  • Reuters article - OpenAI, Meta in talks with Reliance for AI partnerships, The Information reports
  • 2024 AI Index report
  • NDTV article - Ghibli-Style Images To Memes: White House Embraces Alt-Right Online Culture
  • Elk post on DOGE and AI
7 months ago
1 hour 31 minutes

DeepSeek Minisode

DeepSeek R1 has taken the world by storm, causing a stock market crash and prompting further calls for export controls within the US. Since this story is still very much in development, with follow-up investigations and calls for governance being released almost daily, we thought it best to hold off for a little while longer to be able to tell the whole story. Nonetheless, it's a big story, so we provide a brief overview of all that's out there so far.

  • (00:00) - Recording date
  • (00:04) - Intro
  • (00:37) - DeepSeek drop and reactions
  • (04:27) - Export controls
  • (08:05) - Skepticism and uncertainty
  • (14:12) - Outro


Links
  • DeepSeek website
  • DeepSeek paper
  • Reuters article - What is DeepSeek and why is it disrupting the AI sector?

Fallout coverage

  • The Verge article - OpenAI has evidence that its models helped train China’s DeepSeek
  • The Signal article - Nvidia loses nearly $600 billion in DeepSeek crash
  • CNN article - US lawmakers want to ban DeepSeek from government devices
  • Fortune article - Meta is reportedly scrambling ‘war rooms’ of engineers to figure out how DeepSeek’s AI is beating everyone else at a fraction of the price
  • Dario Amodei's blogpost - On DeepSeek and Export Controls
  • SemiAnalysis article - DeepSeek Debates
  • Ars Technica article - Microsoft now hosts AI model accused of copying OpenAI data
  • Wiz Blogpost - Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History

Investigations into "reasoning"

  • Blogpost - There May Not be Aha Moment in R1-Zero-like Training — A Pilot Study
  • Preprint - s1: Simple test-time scaling
  • Preprint - LIMO: Less is More for Reasoning
  • Blogpost - Reasoning Reflections
  • Preprint - Token-Hungry, Yet Precise: DeepSeek R1 Highlights the Need for Multi-Step Reasoning Over Speed in MATH
9 months ago
15 minutes

Understanding AI World Models w/ Chris Canal

Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.

A seasoned software developer, Chris started EquiStamp as a way to improve our current understanding of model failure modes and capabilities in late 2023. Now a key contractor for METR, EquiStamp evaluates the next generation of LLMs from frontier model developers like OpenAI and Anthropic.

EquiStamp is hiring, so if you're a software developer interested in a fully remote opportunity with flexible working hours, join the EquiStamp Discord server and message Chris directly; oh, and let him know muckrAIkers sent you!


  • (00:00) - Recording date
  • (00:05) - Intro
  • (00:29) - Hot off the press
  • (02:17) - Introducing Chris Canal
  • (19:12) - World/risk models
  • (35:21) - Competencies + decision making power
  • (42:09) - Breaking models down
  • (01:05:06) - Timelines, test time compute
  • (01:19:17) - Moving goalposts
  • (01:26:34) - Risk management pre-AGI
  • (01:46:32) - Happy endings
  • (01:55:50) - Causal chains
  • (02:04:49) - Appetite for democracy
  • (02:20:06) - Tech-frame based fallacies
  • (02:39:56) - Bringing back real capitalism
  • (02:45:23) - Orthogonality Thesis
  • (03:04:31) - Why we do this
  • (03:15:36) - Equistamp!


Links

  • EquiStamp
  • Chris's Twitter
  • METR Paper - RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts
  • All Trades article - Learning from History: Preventing AGI Existential Risks through Policy by Chris Canal
  • Better Systems article - The Omega Protocol: Another Manhattan Project

Superintelligence & Commentary

  • Wikipedia article - Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
  • Reflective Altruism article - Against the singularity hypothesis (Part 5: Bostrom on the singularity)
  • Into AI Safety Interview - Scaling Democracy w/ Dr. Igor Krawczuk

Referenced Sources

  • Book - Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility
  • Artificial Intelligence Paper - Reward is Enough
  • Wikipedia article - Capital and Ideology by Thomas Piketty
  • Wikipedia article - Pantheon

LeCun on AGI

  • "Won't Happen" - Time article - Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk
  • "But if it does, it'll be my research agenda latent state models, which I happen to research" - Meta Platforms Blogpost - I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI

Other Sources

  • Stanford CS Senior Project - Timing Attacks on Prompt Caching in Language Model APIs
  • TechCrunch article - AI researcher François Chollet founds a new AI lab focused on AGI
  • White House Fact Sheet - Ensuring U.S. Security and Economic Strength in the Age of Artificial Intelligence
  • New York Post article - Bay Area lawyer drops Meta as client over CEO Mark Zuckerberg’s ‘toxic masculinity and Neo-Nazi madness’
  • OpenEdition Academic Review of Thomas Piketty
  • Neural Processing Letters Paper - A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks
  • BFI Working Paper - Do Financial Concerns Make Workers Less Productive?
  • No Mercy/No Malice article - How to Survive the Next Four Years by Scott Galloway
9 months ago
3 hours 19 minutes

NeurIPS 2024 Wrapped 🌯

What happens when you bring over 15,000 machine learning nerds to one city? If your guess didn't include racism, sabotage and scandal, belated epiphanies, a spicy SoLaR panel, and many fantastic research papers, you wouldn't have captured my experience. In this episode we discuss the drama and takeaways from NeurIPS 2024.

Posters available at time of episode preparation can be found on the episode webpage.

EPISODE RECORDED 2024.12.22


  • (00:00) - Recording date
  • (00:05) - Intro
  • (00:44) - Obligatory mentions
  • (01:54) - SoLaR panel
  • (18:43) - Test of Time
  • (24:17) - And now: science!
  • (28:53) - Downsides of benchmarks
  • (41:39) - Improving the science of ML
  • (53:07) - Performativity
  • (57:33) - NopenAI and Nanthropic
  • (01:09:35) - Fun/interesting papers
  • (01:13:12) - Initial takes on o3
  • (01:18:12) - WorkArena
  • (01:25:00) - Outro


Links

Note: many workshop papers had not yet been published to arXiv as of preparing this episode; the OpenReview submission page is provided in these cases.

  • NeurIPS statement on inclusivity
  • CTOL Digital Solutions article - NeurIPS 2024 Sparks Controversy: MIT Professor's Remarks Ignite "Racism" Backlash Amid Chinese Researchers’ Triumphs
  • (1/2) NeurIPS Best Paper - Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction
  • Visual Autoregressive Model report (this link now provides a 404 error)
    • Don't worry, here it is on archive.is
  • Reuters article - ByteDance seeks $1.1 mln damages from intern in AI breach case, report says
  • CTOL Digital Solutions article - NeurIPS Award Winner Entangled in ByteDance's AI Sabotage Accusations: The Two Tales of an AI Genius
  • Reddit post on Ilya's talk
  • SoLaR workshop page

Referenced Sources

  • Harvard Data Science Review article - Data Science at the Singularity
  • Paper - Reward Reports for Reinforcement Learning
  • Paper - It's Not What Machines Can Learn, It's What We Cannot Teach
  • Paper - NeurIPS Reproducibility Program
  • Paper - A Metric Learning Reality Check

Improving Datasets, Benchmarks, and Measurements

  • Tutorial video + slides - Experimental Design and Analysis for AI Researchers (I think you need to have attended NeurIPS to access the recording, but I couldn't find a different version)
  • Paper - BetterBench: Assessing AI Benchmarks, Uncovering Issues, and Establishing Best Practices
  • Paper - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
  • Paper - A Systematic Review of NeurIPS Dataset Management Practices
  • Paper - The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track
  • Paper - Benchmark Repositories for Better Benchmarking
  • Paper - Croissant: A Metadata Format for ML-Ready Datasets
  • Paper - Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox
  • Paper - Evaluating Generative AI Systems is a Social Science Measurement Challenge
  • Paper - Report Cards: Qualitative Evaluation of LLMs

Governance Related

  • Paper - Towards Data Governance of Frontier AI Models
  • Paper - Ways Forward for Global AI Benefit Sharing
  • Paper - How do we warn downstream model providers of upstream risks?
    • Unified Model Records tool
  • Paper - Policy Dreamer: Diverse Public Policy Creation via Elicitation and Simulation of Human Preferences
  • Paper - Monitoring Human Dependence on AI Systems with Reliance Drills
  • Paper - On the Ethical Considerations of Generative Agents
  • Paper - GPAI Evaluation Standards Taskforce: Towards Effective AI Governance
  • Paper - Levels of Autonomy: Liability in the age of AI Agents

Certified Bangers + Useful Tools

  • Paper - Model Collapse Demystified: The Case of Regression
  • Paper - Preference Learning Algorithms Do Not Learn Preference Rankings
  • LLM Dataset Inference paper + repo
  • dattri paper + repo
  • DeTikZify paper + repo

Fun Benchmarks/Datasets

  • Paloma paper + dataset
  • RedPajama paper + dataset
  • Assemblage webpage
  • WikiDBs webpage
  • WhodunitBench repo
  • ApeBench paper + repo
  • WorkArena++ paper

Other Sources

  • Paper - The Mirage of Artificial Intelligence Terms of Use Restrictions
10 months ago
1 hour 26 minutes

OpenAI's o1 System Card, Literally Migraine Inducing

The idea of model cards, introduced as a measure to increase the transparency and understanding of LLMs, has been perverted into a marketing gimmick, as exemplified by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned, there's a lot of muck in this one.

Note: All figures/tables discussed in the podcast can be found on the podcast website at https://kairos.fm/muckraikers/e009/


  • (00:00) - Recorded 2024.12.08
  • (00:54) - Actual intro
  • (03:00) - System cards vs. academic papers
  • (05:36) - Starting off sus
  • (08:28) - o1, continued
  • (12:23) - Rant #1: figure 1
  • (18:27) - A diamond in the rough
  • (19:41) - Hiding copyright violations
  • (21:29) - Rant #2: Jacob on "hallucinations"
  • (25:55) - More ranting and "hallucination" rate comparison
  • (31:54) - Fairness, bias, and bad science comms
  • (35:41) - System, dev, and user prompt jailbreaking
  • (39:28) - Chain-of-thought and Rao-Blackwellization
  • (44:43) - "Red-teaming"
  • (49:00) - Apollo's bit
  • (51:28) - METR's bit
  • (59:51) - Pass@???
  • (01:04:45) - SWE Verified
  • (01:05:44) - Appendix bias metrics
  • (01:10:17) - The muck and the meaning


Links
  • o1 system card
  • OpenAI press release collection - 12 Days of OpenAI


Additional o1 Coverage

  • NIST + AISI report - US AISI and UK AISI Joint Pre-Deployment Test
  • Apollo Research's paper - Frontier Models are Capable of In-context Scheming
  • VentureBeat article - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro
  • The Atlantic article - The GPT Era Is Already Ending


On Data Labelers

  • 60 Minutes article + video - Labelers training AI say they're overworked, underpaid and exploited by big American tech companies
  • Reflections article - The hidden health dangers of data labeling in AI development
  • Privacy International article - Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets


Chain-of-Thought Papers Cited

  • Paper - Measuring Faithfulness in Chain-of-Thought Reasoning
  • Paper - Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
  • Paper - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
  • Paper - Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models


Other Mentioned/Relevant Sources

  • Andy Jones blogpost - Rao-Blackwellization
  • Paper - Training on the Test Task Confounds Evaluation and Emergence
  • Paper - Best-of-N Jailbreaking
  • Research landing page - SWE Bench
  • Code Competition - Konwinski Prize
  • Lakera game - Gandalf
  • Kate Crawford's Atlas of AI
  • BlueDot Impact's course - Intro to Transformative AI


Unrelated Developments

  • Cruz's letter to Merrick Garland
  • AWS News Blog article - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance
  • BleepingComputer article - Ultralytics AI model hijacked to infect thousands with cryptominer
  • The Register article - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs
  • Fox Business article - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure
11 months ago
1 hour 16 minutes

How to Safely Handle Your AGI

While on the campaign trail, Trump made claims about repealing Biden's Executive Order on AI, but what will actually be changed when he gets into office? We take this opportunity to examine policies being discussed or implemented by leading governments around the world.


  • (00:00) - Intro
  • (00:29) - Hot off the press
  • (02:59) - Repealing the AI executive order?
  • (11:16) - "Manhattan" for AI
  • (24:33) - EU
  • (30:47) - UK
  • (39:27) - Bengio
  • (44:39) - Comparing EU/UK to USA
  • (45:23) - China
  • (51:12) - Taxes
  • (55:29) - The muck


Links
  • SFChronicle article - US gathers allies to talk AI safety as Trump's vow to undo Biden's AI policy overshadows their work
  • Trump's Executive Order on AI (the AI governance executive order at home)
  • Biden's Executive Order on AI
  • Congressional report brief which advises a "Manhattan Project for AI"

Non-USA

  • CAIRNE resource collection on CERN for AI
  • UK Frontier AI Taskforce report (2023)
  • International interim report (2024)
  • Bengio's paper - AI and Catastrophic Risk
  • Davidad's Safeguarded AI program at ARIA
  • MIT Technology Review article - Four things to know about China’s new AI rules in 2024
  • GovInsider article - Australia’s national policy for ethical use of AI starts to take shape
  • Future of Privacy forum article - The African Union’s Continental AI Strategy: Data Protection and Governance Laws Set to Play a Key Role in AI Regulation

Taxes

  • Macroeconomic Dynamics paper - Automation, Stagnation, and the Implications of a Robot Tax
  • CESifo paper - AI, Automation, and Taxation
  • GavTax article - Taxation of Artificial Intelligence and Automation

Perplexity Pages

  • CERN for AI page
  • China's AI policy page
  • Singapore's AI policy page
  • AI policy in Africa, India, Australia page

Other Sources

  • Artificial Intelligence Made Simple article - NYT's "AI Outperforms Doctors" Story Is Wrong
  • Intel report - Reclaim Your Day: The Impact of AI PCs on Productivity
  • Heise Online article - Users on AI PCs slower, Intel sees problem in unenlightened users
  • The Hacker News article - North Korean Hackers Steal $10M with AI-Driven Scams and Malware on LinkedIn
  • Futurism article - Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They're Underage
  • Vice article - 'AI Jesus' Is Now Taking Confessions at a Church in Switzerland
  • Politico article - Ted Cruz: Congress 'doesn't know what the hell it's doing' with AI regulation
  • US Senate Committee on Commerce, Science, and Transportation press release - Sen. Cruz Sounds Alarm Over Industry Role in AI Czar Harris’s Censorship Agenda
11 months ago
58 minutes

The End of Scaling?

Multiple news outlets, including The Information, Bloomberg, and Reuters [see sources], are reporting an "end of scaling" for the current AI paradigm. In this episode we look into these articles, as well as a wide variety of economic forecasting, empirical analysis, and technical papers, to understand the validity and impact of these reports. We also use this as an opportunity to contextualize the realized versus promised fruits of "AI".


  • (00:23) - Hot off the press
  • (01:49) - The end of scaling
  • (10:50) - "Useful tools" and "agentic" "AI"
  • (17:19) - The end of quantization
  • (25:18) - Hedging
  • (29:41) - The end of upwards mobility
  • (33:12) - How to grow an economy
  • (38:14) - Transformative & disruptive tech
  • (49:19) - Finding the meaning
  • (56:14) - Bursting AI bubble and Trump
  • (01:00:58) - The muck


Links
  • The Information article - OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows
  • Bloomberg article - OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
  • Reuters article - OpenAI and others seek new path to smarter AI as current methods hit limitations
  • Paper on the end of quantization - Scaling Laws for Precision
  • Tim Dettmers Tweet on "Scaling Laws for Precision"

Empirical Analysis

  • WU Vienna paper - Unslicing the pie: AI innovation and the labor share in European regions
  • IMF paper - The Labor Market Impact of Artificial Intelligence: Evidence from US Regions
  • NBER paper - Automation, Career Values, and Political Preferences
  • Pew Research Center report - Which U.S. Workers Are More Exposed to AI on Their Jobs?

Forecasting

  • NBER/Acemoglu paper - The Simple Macroeconomics of AI
  • NBER/Acemoglu paper - Harms of AI
  • IMF report - Gen-AI: Artificial Intelligence and the Future of Work
  • Submission to Open Philanthropy AI Worldviews Contest - Transformative AGI by 2043 is <1% likely

Externalities and the Bursting Bubble

  • NBER paper - Bubbles, Rational Expectations and Financial Markets
  • Clayton Christensen lecture capture - Clayton Christensen: Disruptive innovation
  • The New Republic article - The “Godfather of AI” Predicted I Wouldn’t Have a Job. He Was Wrong.
  • Latent Space article - $2 H100s: How the GPU Rental Bubble Burst

On Productization

  • Palantir press release on introduction of Claude to US security and defense
  • Ars Technica article - Claude AI to process secret government data through new Palantir deal
  • OpenAI press release on partnering with Condé Nast
  • Candid Technology article - Shutterstock and Getty partner with OpenAI and BRIA
  • E2B
  • Stripe agents
  • Robopair

Other Sources

  • CBS News article - Google AI chatbot responds with a threatening message: "Human … Please die."
  • Biometric Update article - Travelers to EU may be subjected to AI lie detector
  • Techcrunch article - OpenAI’s tumultuous early years revealed in emails from Musk, Altman, and others
  • Richard Ngo Tweet on leaving OpenAI


1 year ago
1 hour 7 minutes

US National Security Memorandum on AI, Oct 2024

October 2024 saw a National Security Memorandum and US framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways.

  • (00:48) - The memorandum
  • (06:28) - What the press is saying
  • (10:39) - What's in the text
  • (13:48) - Potential harms
  • (17:32) - Miscellaneous notable stuff
  • (31:11) - What's the US government's take on AI?
  • (45:45) - The civil side - comments on reporting
  • (49:31) - The commenters
  • (01:07:33) - Our final hero
  • (01:10:46) - The muck


Links
  • United States National Security Memorandum on AI
  • Fact Sheet on the National Security Memorandum
  • Framework to Advance AI Governance and Risk Management in National Security

Related Media

  • CAIS Newsletter - AI Safety Newsletter #43
  • NIST report - Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
  • ACLU press release - ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections
  • Wikipedia article - Presidential Memorandum
  • Reuters article - White House presses gov't AI use with eye on security, guardrails
  • Forbes article - America’s AI Security Strategy Acknowledges There’s No Stopping AI
  • DefenseScoop article - New White House directive prods DOD, intelligence agencies to move faster adopting AI capabilities
  • NYTimes article - Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools
  • Forbes article - 5 Things To Know About The New National Security Memorandum On AI – And What ChatGPT Thinks
  • Federal News Network interview - A look inside the latest White House artificial intelligence memo
  • Govtech article - Reactions Mostly Positive to National Security AI Memo
  • The Information article - Biden Memo Encourages Military Use of AI

Other Sources

  • Physical Intelligence press release - π0: Our First Generalist Policy
  • OpenAI press release - Introducing ChatGPT Search
  • WhoPoo App!!
1 year ago
1 hour 16 minutes

Understanding Claude 3.5 Sonnet (New)

Frontier developers continue their war on sane versioning schema to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now.


  • (00:00) - Intro
  • (00:22) - Hot off the press
  • (05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000
  • (09:23) - Breaking down "computer use"
  • (13:16) - Our understanding
  • (16:03) - Diverging business models
  • (32:07) - Why has Anthropic chosen this strategy?
  • (43:14) - Changing the frame
  • (48:00) - Polishing the lily

Links

  • Anthropic press release - Introducing Claude 3.5 Sonnet (New)
  • Model Card Addendum

Other Anthropic Relevant Media

  • Paper - Sabotage Evaluations for Frontier Models
  • Anthropic press release - Anthropic's Updated RSP
  • Alignment Forum blogpost - Anthropic's Updated RSP
  • Tweet - Response to scare regarding Anthropic training on user data
  • Anthropic press release - Developing a computer use model
  • Simon Willison article - Initial explorations of Anthropic’s new Computer Use capability
  • Tweet - ARC Prize performance
  • The Information article - Anthropic Has Floated $40 Billion Valuation in Funding Talks

Other Sources

  • LWN.net article - OSI readies controversial Open AI definition
  • National Security Memorandum
  • Framework to Advance AI Governance and Risk Management in National Security
  • Reuters article - Mother sues AI chatbot company Character.AI, Google over son's suicide
  • Medium article - A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models
  • The Guardian article - Google's solution to accidental algorithmic racism: ban gorillas
  • TIME article - Ethical AI Isn’t to Blame for Google’s Gemini Debacle
  • Latacora article - The SOC2 Starting Seven
  • Grandview Research market trends - Robotic Process Automation Market Trends
1 year ago
1 hour

Winter is Coming for OpenAI

Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode we look at OpenAI's recent massive funding round and ask "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready!


  • (00:00) - Intro
  • (00:28) - Hot off the press
  • (02:43) - Why listen?
  • (06:07) - Why might VCs invest?
  • (15:52) - What are people saying
  • (23:10) - How *is* OpenAI making money?
  • (28:18) - Is AI hype dying?
  • (41:08) - Why might big companies invest?
  • (48:47) - Concrete impacts of AI
  • (52:37) - Outcome 1: OpenAI as a commodity
  • (01:04:02) - Outcome 2: AGI
  • (01:04:42) - Outcome 3: best plausible case
  • (01:07:53) - Outcome 1*: many ways to bust
  • (01:10:51) - Outcome 4+: shock factor
  • (01:12:51) - What's the muck
  • (01:21:17) - Extended outro

Links

  • Reuters article - OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia
  • Goldman Sachs report - GenAI: Too Much Spend, Too Little Benefit
  • Apricitas Economics article - The AI Investment Boom
  • Discussion of "The AI Investment Boom" on YCombinator
  • State of AI in 13 Charts
  • Fortune article - OpenAI sees $5 billion loss in 2024 and soaring sales as big ChatGPT fee hikes planned, report says

More on AI Hype (Dying)

  • Latent Space article - The Winds of AI Winter
  • Article by Gary Marcus - The Great AI Retrenchment has Begun
  • TimmermanReport article - AI: If Not Now, When? No, Really - When?
  • MIT News article - Who Will Benefit from AI?
  • Washington Post article - The AI Hype bubble is deflating. Now comes the hard part.
  • Andreessen Horowitz article - Why AI Will Save the World

Other Sources

  • Human-Centered Artificial Intelligence Foundation Model Transparency Index
  • Cointelegraph article - Europe gathers global experts to draft ‘Code of Practice’ for AI
  • Reuters article - Microsoft's VP of GenAI research to join OpenAI
  • Twitter post from Tim Brooks on joining DeepMind
  • Edward Zitron article - The Man Who Killed Google Search
1 year ago
1 hour 22 minutes

Open Source AI and 2024 Nobel Prizes

The Open Source AI Definition is out after years of drafting, will it reestablish brand meaning for the “Open Source” term? Also, the 2024 Nobel Prizes in Physics and Chemistry are heavily tied to AI; we scrutinize not only this year's prizes, but also Nobel Prizes as a concept.

 

  • (00:00) - Intro
  • (00:30) - Hot off the press
  • (03:45) - Open Source AI background
  • (10:30) - Definitions and changes in RC1
  • (18:36) - “Business source”
  • (22:17) - Parallels with legislation
  • (26:22) - Impacts of the OSAID
  • (33:58) - 2024 Nobel Prize Context
  • (37:21) - Chemistry prize
  • (45:06) - Physics prize
  • (50:29) - Takeaways
  • (52:03) - What’s the real muck?
  • (01:00:27) - Outro

Links

  • Open Source AI Definition, Release Candidate 1
  • OSAID RC1 announcement
  • All Nobel Prizes 2024

More Reading on Open Source AI

  • Kairos.FM article - Open Source AI is a lie, but it doesn't have to be
  • The Register article - The open source AI civil war approaches
  • MIT Technology Review article - We finally have a definition for open-source AI

On Nobel Prizes

  • Paper - Access to Opportunity in the Sciences: Evidence from the Nobel Laureates
  • Physics prize - scientific background, popular info
  • Chemistry prize - scientific background, popular info
  • Reuters article - Google's Nobel prize winners stir debate over AI research
  • Wikipedia article - Nobel disease

Other Sources

  • Pivot to AI article - People are ‘blatantly stealing my work,’ AI artist complains
  • Paper - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
  • Paper - Reclaiming AI as a Theoretical Tool for Cognitive Science | Computational Brain & Behavior

 

1 year ago
1 hour 1 minute

SB1047

Why is Mark Ruffalo talking about SB1047, and what is it anyway? Tune in for our thoughts on the now vetoed California legislation that had Big Tech scared.

  • (00:00) - Intro
  • (00:31) - Updates from a relatively slow week
  • (03:32) - Disclaimer: SB1047 vetoed during recording (still worth a listen)
  • (05:24) - What is SB1047
  • (12:30) - Definitions
  • (17:18) - Understanding the bill
  • (28:42) - What are the players saying about it?
  • (46:44) - Addressing critiques
  • (55:59) - Open Source
  • (01:02:36) - Takeaways
  • (01:15:40) - Clarification on impact to big tech
  • (01:18:51) - Outro

Links
  • SB1047 legislation page
  • SB1047 CalMatters page
  • Newsom vetoes SB1047
  • CAIS newsletter on SB1047
  • Prominent AI nerd letter
  • Anthropic's letter
  • SB1047 ~explainer


Additional SB1047 Related Coverage

  • Opposition to SB1047 'makes no sense'
  • Newsom on SB1047
  • Andreessen Horowitz on SB1047
  • Classy move by Dan
  • Ex-OpenAI employee says Altman doesn't want regulation


Other Sources

  • o1 doesn't measure up in new benchmark paper
  • OpenAI losses and gains
  • OpenAI crypto hack
  • "Murati out" -Mira Murati, probably
  • Altman pitching datacenters to White House
  • Sam Altman, 'podcast bro'
  • Paper: Contract Design with Safety Inspections


1 year ago
1 hour 19 minutes

OpenAI's o1, aka. Strawberry

OpenAI's new model is out, and we are going to have to rake through a lot of muck to get the value out of this one!

⚠ Opt out of LinkedIn's GenAI scraping ➡️ https://lnkd.in/epziUeTi

  • (00:00) - Intro
  • (00:25) - Other recent news
  • (02:57) - Hot off the press
  • (03:58) - Why might someone care?
  • (04:52) - What is it?
  • (06:49) - How is it being sold?
  • (10:45) - How do they explain it, technically?
  • (27:09) - Reflection AI Drama
  • (40:19) - Why do we care?
  • (46:39) - Scraping away the muck

Note: at around 32 minutes, Igor says the incorrect Llama model version for the story he is telling. Jacob dubbed over those mistakes with the correct versioning.

Links relating to o1

  • OpenAI blogpost
  • System card webpage
  • GitHub collection of o1 related media
  • AMA Twitter thread
  • Francois Chollet Tweet on reasoning and o1
  • The academic paper doing something very similar to o1

Other stuff we mention

  • OpenAI's huge valuation hinges on upending corporate structure
  • Meta acknowledges it’s scraping all public posts for AI training
  • White House announces new private sector voluntary commitments to combat image-based sexual abuse
  • Sam Altman wants you to be grateful
  • The Zuck is done apologizing
  • IAPS report on technical safety research at AI companies
  • Llama2 70B is "about as good" as GPT-4 at summarization tasks
1 year ago
50 minutes
