Large Language Models might sound smart, but can they predict what happens when a cat sees a cucumber? In this episode, host Emily Laird throws LLMs into the philosophical ring with World Models: AI systems that learn from watching, poking, and pushing stuff around (kind of like toddlers). Meta's Yann LeCun isn't impressed by chatbots, and honestly, he might have a point. We break down why real intelligence might need both brains and brawn, or at least a good sense of gravity.

Join the AI Weekly Meetups

Connect with Us: If you enjoyed this episode or have questions, reach out to Emily Laird on LinkedIn. Stay tuned for more insights into the evolving world of generative AI. And remember, you now know more about world models vs LLMs, and that's pretty cool.
Connect with Emily Laird on LinkedIn