AI: post transformers
mcgrof
340 episodes
16 hours ago
The transformer architecture revolutionized the world of neural networks. It was a springboard for what we know today as modern artificial intelligence. This podcast focuses on reviews of modern state-of-the-art research papers, starting from the transformer onward.
Technology
NeurIPS 2025: MoBA: Mixture of Block Attention for Long-Context LLMs
17 minutes 4 seconds
1 month ago

This paper introduces Mixture of Block Attention (MoBA) to address the prohibitive quadratic computational overhead inherent in traditional attention mechanisms when scaling large language models (LLMs) for long contexts. MoBA is a novel architecture that strategically applies the established Mixture of Experts (MoE) paradigm directly to the attention mechanism itself. Instead of attending to the entire sequence, MoBA partitions the context into discrete blocks and utilizes a dynamic gating network to selectively route queries to only the most relevant blocks of keys and values. This block-sparse approach drastically increases computational efficiency, achieving sub-quadratic complexity and demonstrating speedups of up to 16 times when processing sequences up to 10 million tokens. Crucially, the research demonstrates that MoBA maintains performance comparable to full attention across scaling laws and real-world benchmarks. Furthermore, the architecture is highly flexible, allowing for seamless transitions between sparse MoBA and full attention layers during both training and inference.


Source: https://openreview.net/pdf?id=RlqYCpTu1P
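A minimal PyTorch sketch of the block-routing idea described above, assuming a single attention head and two-dimensional (sequence, dim) tensors for clarity. It omits the causal masking and the always-attend-to-current-block rule used in the paper, and the mean-pooled gating score, block_size, and top_k values are illustrative simplifications rather than the authors' implementation.

import torch
import torch.nn.functional as F

def moba_attention(q, k, v, block_size=64, top_k=3):
    """q: (Tq, d), k and v: (Tkv, d).
    Routes each query to its top_k key/value blocks via a gating score,
    then attends only within the selected blocks (block-sparse attention).
    Simplified sketch of a MoBA-style layer, not the reference implementation."""
    Tq, d = q.shape
    Tkv = k.shape[0]
    n_blocks = (Tkv + block_size - 1) // block_size

    # Pad keys/values so the sequence splits evenly into blocks.
    pad = n_blocks * block_size - Tkv
    k_blocks = F.pad(k, (0, 0, 0, pad)).view(n_blocks, block_size, d)
    v_blocks = F.pad(v, (0, 0, 0, pad)).view(n_blocks, block_size, d)

    # Gating: score each block by the dot product between the query and the
    # mean-pooled keys of that block, then keep the top_k blocks per query.
    block_repr = k_blocks.mean(dim=1)                          # (B, d)
    gate = q @ block_repr.T                                    # (Tq, B)
    top_idx = gate.topk(min(top_k, n_blocks), dim=-1).indices  # (Tq, K)

    # Gather the selected blocks for every query and attend within them only.
    k_sel = k_blocks[top_idx].reshape(Tq, -1, d)               # (Tq, K*S, d)
    v_sel = v_blocks[top_idx].reshape(Tq, -1, d)
    scores = (q.unsqueeze(1) * k_sel).sum(-1) / d ** 0.5       # (Tq, K*S)

    # Mask out the padded key positions inside the selected blocks.
    pos = torch.arange(n_blocks * block_size).view(n_blocks, block_size)
    valid = (pos < Tkv)[top_idx].reshape(Tq, -1)
    scores = scores.masked_fill(~valid, float("-inf"))

    attn = scores.softmax(dim=-1)
    return torch.einsum("qs,qsd->qd", attn, v_sel)             # (Tq, d)

# Toy usage: 1,000 keys/values, each query attends to only 3 blocks of 64.
q = torch.randn(8, 32)
k = torch.randn(1000, 32)
v = torch.randn(1000, 32)
out = moba_attention(q, k, v)   # shape (8, 32)

Because each query only ever touches top_k * block_size keys, the cost grows with the number of selected blocks rather than the full sequence length, which is the source of the sub-quadratic scaling discussed in the episode.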
