State of AI
Ali Mehedi
23 episodes
1 week ago
Stay ahead in the fast-evolving world of Artificial Intelligence with State of AI, the podcast that explores how AI is reshaping enterprises and society. Each week, we delve into groundbreaking innovations, real-world applications, ethical dilemmas, and the societal impact of AI. From enterprise solutions to cultural shifts, we bring you expert insights, industry trends, and thought-provoking discussions. Perfect for business leaders, tech enthusiasts, and curious minds looking to understand the future of AI.
Tech News
News
State of AI: The Scaling Law Myth - Why Bigger Isn’t Always Better
State of AI
27 minutes 51 seconds
4 weeks ago

In this episode of State of AI, we dissect one of the most provocative recent findings in AI research: “Scaling Laws Are Unreliable for Downstream Tasks” by Nicholas Lourie, Michael Y. Hu, and Kyunghyun Cho of NYU. The study delivers a reality check to one of deep learning’s core assumptions: that increasing model size, data, and compute always leads to better downstream performance.

The paper’s meta-analysis across 46 tasks reveals that predictable, linear scaling occurs only 39% of the time — meaning the majority of tasks show irregular, noisy, or even inverse scaling, where larger models perform worse.
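To make that claim concrete, here is a minimal sketch with invented numbers (not code or data from the paper): it fits a log-linear scaling law to hypothetical pretraining losses using NumPy, then shows a hypothetical downstream accuracy series that plateaus and jumps, exactly the kind of behaviour a scaling-law extrapolation would mispredict.

import numpy as np

# Hypothetical model sizes (parameters) and pretraining losses that follow
# a clean power law, loss = a * N^(-b). These numbers are invented for
# illustration; they are not measurements from the paper.
model_sizes = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
pretrain_loss = 5.0 * model_sizes ** -0.07

# Fit the scaling law in log-log space with ordinary least squares.
slope, intercept = np.polyfit(np.log(model_sizes), np.log(pretrain_loss), 1)
print(f"fitted exponent: {slope:.3f}")  # recovers roughly -0.07 by construction

# Extrapolating the fit to a 1e11-parameter model looks trustworthy...
predicted_loss = float(np.exp(intercept + slope * np.log(1e11)))
print(f"extrapolated pretraining loss at 1e11 params: {predicted_loss:.3f}")

# ...but a hypothetical downstream accuracy on the same models is flat and
# then jumps, so a straight-line extrapolation from the smaller models
# would badly mispredict the larger ones.
downstream_acc = np.array([0.25, 0.26, 0.25, 0.41, 0.62])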

We explore:

  • ⚖️ Why downstream scaling laws often break, even when pretraining scales perfectly.

  • 🧩 How dataset choice, validation corpus, and task formulation can flip scaling trends (see the short sketch after this list).

  • 🔄 Why some models show “breakthrough scaling” — sudden jumps in capability after long plateaus.

  • 🧠 What this means for the future of AI forecasting, model evaluation, and cost-efficient research.

  • 🧪 The implications for reproducibility and why scaling may be investigator-specific.
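As an illustration of the evaluation-choice point above (again with invented numbers, not results from the study), the same family of models can look like a scaling success or a case of inverse scaling depending purely on which corpus is used to score them.

import numpy as np

# Invented accuracies for one model family evaluated on two different corpora.
model_sizes = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
acc_corpus_a = np.array([0.52, 0.57, 0.61, 0.66, 0.70])  # clean upward trend
acc_corpus_b = np.array([0.48, 0.50, 0.49, 0.46, 0.44])  # inverse scaling

for name, acc in [("corpus A", acc_corpus_a), ("corpus B", acc_corpus_b)]:
    slope = np.polyfit(np.log(model_sizes), acc, 1)[0]
    trend = "improves" if slope > 0 else "degrades"
    print(f"{name}: accuracy {trend} with scale (fitted slope {slope:+.4f})")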

If you’ve ever heard “just make it bigger” as the answer to AI progress — this episode will challenge that belief.

📊 Keywords: AI scaling laws, NYU AI research, Kyunghyun Cho, deep learning limits, downstream tasks, inverse scaling, emergent abilities, AI reproducibility, model evaluation, State of AI podcast.
