AI: post transformers
mcgrof
316 episodes
2 days ago
The transformer architecture revolutionized the world of neural networks and was a springboard for what we know today as modern artificial intelligence. This podcast reviews modern, state-of-the-art research papers, starting from the transformer and moving onward.
Technology
Mechanistic interpretability: Decoding the AI's Inner Logic: Circuits and Sparse Features
AI: post transformers
29 minutes 19 seconds
5 days ago

This episode draws on ten sources: excerpts from academic papers and technical reports on mechanistic interpretability and sparse autoencoders in large language models (LLMs) and vision-language models (VLMs). It explores the state of the art in **Mechanistic Interpretability** (MI), focusing on how researchers decompose LLMs and multimodal models (MLLMs) into understandable building blocks. A central theme is the power of **Sparse Autoencoders (SAEs)**, which address polysemanticity (a single neuron representing many unrelated concepts) by training an overcomplete basis that extracts sparse, **monosemantic features**. The episode details the successful scaling of SAEs to production models such as Claude 3 Sonnet and Claude 3.5 Haiku, showing that these techniques reveal features that are often abstract and multilingual, and that even generalize across modalities (from text to images). Listeners learn how advanced techniques like **Specialized SAEs (SSAEs)** use dense retrieval to target and interpret rare or domain-specific "dark matter" concepts, such as specialized physics knowledge or toxicity patterns, that general methods often miss. The fundamental goal is a linear representation of concepts that enables precise understanding and, crucially, manipulation of model internals.
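
To make the SAE idea concrete, here is a minimal sketch of a dictionary-learning sparse autoencoder in PyTorch. It is not taken from any of the papers above; the dimensions, the ReLU activation, and the `l1_coeff` penalty weight are illustrative assumptions rather than the configurations used for production models like Claude 3 Sonnet.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal dictionary-learning SAE: encodes an activation vector of width
    d_model into an overcomplete set of n_features sparse latents, then
    reconstructs the original activation from those latents."""

    def __init__(self, d_model: int, n_features: int, l1_coeff: float = 1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # overcomplete: n_features >> d_model
        self.decoder = nn.Linear(n_features, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))      # non-negative, mostly-zero latents
        recon = self.decoder(features)
        return recon, features

    def loss(self, acts: torch.Tensor) -> torch.Tensor:
        recon, features = self(acts)
        recon_loss = (recon - acts).pow(2).mean()        # reconstruction fidelity
        sparsity = features.abs().sum(dim=-1).mean()     # L1 penalty pushes features toward monosemanticity
        return recon_loss + self.l1_coeff * sparsity
```

After training on a model's stored activations, each latent dimension is inspected (for example, by looking at the inputs that activate it most strongly) to check whether it tracks a single human-interpretable concept.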


The second half of the episode applies these features to tracing computational pathways, or **circuits**, using tools like **attribution graphs** and causal interventions. We explore concrete discoveries about LLM reasoning, such as the modular circuit components (queried-rule-locating, fact-processing, and decision heads) that execute propositional logic and multi-step reasoning. We review how these mechanistic insights enable **precise control**, such as editing a model's diagnostic hypothesis (e.g., in medical scenarios) or circumventing refusal behaviors (jailbreaks) by overriding harmful-request features. We also cover cutting-edge intervention methods such as **Attenuation via Posterior Probabilities (APP)**, which leverages the cleaner separation of concepts achieved by SAEs to perform highly effective, minimally disruptive concept erasure.
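
As an illustration of the intervention side, the sketch below shows a generic feature-suppression edit built on the hypothetical `SparseAutoencoder` above. It is not the APP method from the episode, only the common pattern of decoding SAE features, scaling one of them down, and rebuilding the activation while preserving the SAE's reconstruction error.

```python
import torch

def attenuate_feature(acts: torch.Tensor, sae: "SparseAutoencoder",
                      feature_idx: int, scale: float = 0.0) -> torch.Tensor:
    """Suppress one SAE feature in a batch of activations and return the
    edited activations, keeping whatever the SAE fails to reconstruct."""
    recon, features = sae(acts)
    error = acts - recon                    # residual the SAE does not explain; leave it unchanged
    features = features.clone()
    features[..., feature_idx] *= scale     # scale=0.0 erases the concept, 0<scale<1 attenuates it
    return sae.decoder(features) + error    # rebuild the activation with the feature suppressed
```

Hooked into the model's forward pass at the layer where the SAE was trained (e.g., via a PyTorch forward hook), the same pattern supports the causal experiments described above: clamping a feature up to steer behavior, or down to test whether it is necessary for a refusal or a diagnostic hypothesis.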


Sources:

1. 2025, Carnegie Mellon University: https://aclanthology.org/2025.findings-naacl.87.pdf (source for Specialized Sparse Autoencoders)

2. 2025, OpenAI: "Weight-Sparse Transformers Have Interpretable Circuits" (implicit source: paper PDF attributed to an OpenAI author)

3. 2024, Anthropic: "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet", published May 21, 2024 (implied source URL)

4. 2024, Anthropic: "The Claude 3 Model Family: Opus, Sonnet, Haiku" (document cited in the circuit-analysis work)

5. 2024, Gemma Team: https://arxiv.org/abs/2408.00118 (Gemma 2: Improving Open Language Models at a Practical Size)

6. 2024, OpenAI: https://openai.com/index/learning-to-reason-with-llms/ (Learning to Reason with LLMs)

7. 2023, Transformer Circuits Thread: https://transformer-circuits.pub/2023/monosemantic-features/index.html (Towards Monosemanticity: Decomposing Language Models With Dictionary Learning)

8. 2022, AI Alignment Forum: https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing (Causal Scrubbing)

9. 2022, Transformer Circuits Thread: https://transformer-circuits.pub/2022/solu/index.html (Softmax Linear Units)

10. 2021, Transformer Circuits Thread: https://transformer-circuits.pub/2021/framework/index.html (A Mathematical Framework for Transformer Circuits)
