AI: post transformers
mcgrof
316 episodes
1 day ago
The transformer architecture revolutionized the world of neural networks. It was a springboard for what we know today as modern artificial intelligence. This podcast focuses on reviews of modern state-of-the-art research papers, from the transformer onward.
Technology
Doubly Stochastic Attention for Transformers
AI: post transformers
35 minutes 23 seconds
1 week ago

The four papers we review, spanning 1967 to 2025 (two from 2025), collectively discuss the mathematical properties and deep learning applications of **doubly stochastic matrices**: nonnegative matrices whose rows and columns each sum to one. The 1967 paper, "Concerning Nonnegative Matrices and Doubly Stochastic Matrices," provides the **foundational mathematical theory**: iterative row and column scaling (the Sinkhorn algorithm) converges to a unique doubly stochastic matrix, provided the original matrix has "total support." Two papers focus on **Transformer architecture enhancements** that enforce **doubly stochastic attention matrices**: "Sinkformers" replaces the standard row-wise softmax attention with the Sinkhorn algorithm, yielding improved performance and theoretical properties such as a connection to the Wasserstein metric, while "ESPFormer" achieves doubly stochastic attention through expected sliced transport plans. Finally, the "Gradient Multi-Normalization" paper introduces a **stateless optimizer** built on a multi-normalization procedure, including a "Square-Root Sinkhorn" variant, and demonstrates its efficacy and efficiency in training large language models.
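
To make the core idea concrete, here is a minimal NumPy sketch of Sinkhorn-style doubly stochastic attention; this is not code from any of the reviewed papers, and the function name and iteration count are illustrative. Standard attention applies a row-wise softmax to the score matrix; the Sinkhorn variant exponentiates the scores and then alternately rescales rows and columns until both sum to one.

```python
import numpy as np

def sinkhorn_attention(scores, n_iters=50):
    """Sketch: doubly stochastic attention via Sinkhorn iteration.

    exp() makes every entry strictly positive, which guarantees
    "total support", so the alternating row/column rescaling below
    converges to the unique doubly stochastic matrix of the form
    D1 @ K @ D2 (Sinkhorn's theorem).
    """
    K = np.exp(scores - scores.max())        # positive kernel, numerically stable
    for _ in range(n_iters):
        K /= K.sum(axis=1, keepdims=True)    # rows sum to 1 (like softmax)
        K /= K.sum(axis=0, keepdims=True)    # columns sum to 1
    return K

rng = np.random.default_rng(0)
A = sinkhorn_attention(rng.normal(size=(4, 4)))
print(np.round(A.sum(axis=1), 6))  # ~[1. 1. 1. 1.] at convergence
print(np.round(A.sum(axis=0), 6))  # exactly 1 after the final column pass
```

One row-wise pass of this loop is exactly the usual softmax; the subsequent column passes are what add the doubly stochastic constraint.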


Sources:

1967: Concerning Nonnegative Matrices and Doubly Stochastic Matrices
https://projecteuclid.org/journalArticle/Download?urlId=pjm%2F1102992505

June 24, 2022: Sinkformers: Transformers with Doubly Stochastic Attention
https://arxiv.org/pdf/2110.11773

February 10, 2025: Gradient Multi-Normalization for Stateless and Scalable LLM Training
https://arxiv.org/pdf/2502.06742

July 12, 2025: ESPFormer: Doubly-Stochastic Attention with Expected Sliced Transport Plans
https://arxiv.org/pdf/2502.07962
