AI is accelerating at a breakneck pace, but model quality isn’t the only constraint we face. Running AI at scale also demands major investments in infrastructure, energy, security, and data pipelines. This week on Chain of Thought, Cisco’s President and Chief Product Officer Jeetu Patel joins host Conor Bronsdon to reveal what it actually takes to build the critical foundation for the AI era.
Jeetu breaks down the three bottlenecks he sees holding AI back today:
• Infrastructure limits: not enough power, compute, or data center capacity
• A trust deficit: non-deterministic models powering systems that must be predictable
• A widening data gap: human-generated data plateauing while machine data explodes
Jeetu then shares how Cisco is tackling these challenges through secure AI factories, edge inference, open multi-model architectures, and global partnerships with Nvidia, G42, and sovereign cloud providers. Jeetu also explains why he thinks enterprises will soon rely on thousands of specialized models — not just one — and how routing, latency, cost, and security shape this new landscape.
Conor and Jeetu also explore high-performance leadership and team culture, discussing building high-trust teams, embracing constructive tension, staying vigilant in moments of success, and the personal experiences that shaped Jeetu’s approach to innovation and resilience.
If you want a clearer picture of the global AI infrastructure race, how high-level leaders are thinking about the future, and what it all means for enterprises, developers, and the future of work, this conversation is essential.
Chapters:
00:00 – Welcome to Chain of Thought
00:48 – AI and Jobs: Beyond the Hype
06:15 – The Real AI Opportunity: Original Insights
10:00 – Three Critical AI Constraints: Infrastructure, Trust, and Data
16:27 – Cisco's AI Strategy and Platform Approach
19:18 – Edge Computing and Model Innovation
22:06 – Strategic Partnerships: Nvidia, G42, and the Middle East
29:18 – Acquisition Strategy: Platform Over Products
32:03 – Power and Infrastructure Challenges
36:06 – Building Trust Across Global Partnerships
38:03 – US vs. China: The AI Infrastructure Race
40:33 – America's Venture Capital Advantage
42:06 – Acquisition Philosophy: Strategy First
45:45 – Defining Cisco's True North
48:06 – Mission-Driven Innovation Culture
50:15 – Hiring for Hunger, Curiosity, and Clarity
56:27 – The Power of Constructive Conflict
1:00:00 – Career Lessons: Continuous Learning
1:02:24 – The Email Question
1:04:12 – Joe Tucci's Four-Column Exercise
1:08:15 – Building High-Trust Teams
1:10:12 – The Five Dysfunctions Framework
1:12:09 – Leading with Vulnerability
1:16:18 – Closing Thoughts and Where to Connect
Connect with Jeetu Patel:
LinkedIn – https://www.linkedin.com/in/jeetupatel/
X (Twitter) – https://x.com/jpatel41
Cisco – https://www.cisco.com/
Connect with Conor Bronsdon:
Substack – https://conorbronsdon.substack.com/
LinkedIn – https://www.linkedin.com/in/conorbronsdon/
X (Twitter) – https://x.com/ConorBronsdon
The transformer architecture has dominated AI since 2017, but it’s not the only way to build LLMs, and new architectures are bringing these models to edge devices.
Maxime Labonne, Head of Post-Training at Liquid AI and creator of the 67,000+ star LLM Course, joins Conor Bronsdon to challenge the AI architecture status quo. Liquid AI’s hybrid architecture, combining transformers with convolutional layers, delivers faster inference, lower latency, and dramatically smaller footprints without sacrificing capability.
This alternative architectural philosophy produces models that run effectively on phones and laptops.
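To make the hybrid idea concrete, here is a minimal PyTorch sketch of a block that interleaves a short causal convolution (cheap, local token mixing) with standard self-attention (global mixing). It illustrates the general pattern Maxime describes, not Liquid AI's actual LFM architecture; every name and hyperparameter below is illustrative.

```python
# A minimal sketch of a conv + attention hybrid block (illustrative only).
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, dim: int, n_heads: int = 4, kernel_size: int = 4):
        super().__init__()
        # Depthwise causal convolution: cheap, local token mixing with a
        # small memory footprint.
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        h = self.norm1(x).transpose(1, 2)        # (batch, dim, seq_len)
        h = self.conv(h)[..., : x.size(1)]       # trim padding -> causal conv
        x = x + h.transpose(1, 2)                # local mixing
        a = self.norm2(x)
        x = x + self.attn(a, a, a, need_weights=False)[0]  # global mixing
        return x
```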
But reimagined architecture is only half the story. Maxime unpacks the post-training reality most teams struggle with: the challenges and opportunities of synthetic data, how to balance helpfulness against safety, Liquid AI’s approach to evals, RAG architectural approaches, how he sees AI on edge devices evolving, hard-won lessons from shipping LFM1 through 2, and much more.
If you're tired of surface-level AI takes and want to understand the architectural and engineering decisions behind production LLMs from someone building them in the trenches, this is your episode.
Connect with Maxime Labonne:
LinkedIn – https://www.linkedin.com/in/maxime-labonne/
X (Twitter) – @maximelabonne
About Maxime – https://mlabonne.github.io/blog/about.html
HuggingFace – https://huggingface.co/mlabonne
The LLM Course – https://github.com/mlabonne/llm-course
Liquid AI – https://liquid.ai
Connect with Conor Bronsdon:
X (Twitter) – @conorbronsdon
Substack – https://conorbronsdon.substack.com/
LinkedIn – https://www.linkedin.com/in/conorbronsdon/
00:00 Intro — Welcome to Chain of Thought
00:27 Guest Intro — Maxime Labonne of Liquid AI
02:21 The Hybrid LLM Architecture Explained
06:30 Why Bigger Models Aren’t Always Better
11:10 Convolution + Transformers: A New Approach to Efficiency
18:00 Running LLMs on Laptops and Wearables
22:20 Post-Training as the Real Moat
25:45 Synthetic Data and Reliability in Model Refinement
32:30 Evaluating AI in the Real World
38:11 Benchmarks vs Functional Evals
43:05 The Future of Edge-Native Intelligence
48:10 Closing Thoughts & Where to Find Maxime Online
Most AI agents are built backwards, starting with models instead of system architecture.
Aishwarya Srinivasan, Head of AI Developer Relations at Fireworks AI, joins host Conor Bronsdon to explain the shift required to build reliable agents: stop treating them as model problems and start architecting them as complete software systems. Benchmarks alone won't save you.
Aish breaks down the evolution from prompt engineering to context engineering, revealing how production agents demand careful orchestration of multiple models, memory systems, and tool calls. She shares battle-tested insights on evaluation-driven development, the rise of open source models like DeepSeek v3, and practical strategies for managing autonomy with human-in-the-loop systems. The conversation addresses critical production challenges, ranging from LLM-as-judge techniques to navigating compliance in regulated environments.
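One of the techniques mentioned here, LLM-as-judge, is simple to sketch. The snippet below is a generic illustration with a placeholder `call_llm` helper and an invented rubric; it is not Fireworks AI's or Galileo's API.

```python
# Generic LLM-as-judge sketch; `call_llm` and the rubric are placeholders.
import json

JUDGE_PROMPT = """You are grading an AI agent's answer.
Question: {question}
Answer: {answer}
Score the answer from 1 (unusable) to 5 (excellent) for factual accuracy
and instruction-following. Respond as JSON: {{"score": <int>, "reason": "<why>"}}"""

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your model provider and return its text."""
    raise NotImplementedError

def judge(question: str, answer: str) -> dict:
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return json.loads(raw)  # in production, validate the schema

# Evaluation-driven development: run the judge over a fixed test set on every
# change and track the aggregate score instead of eyeballing single outputs.
def eval_run(test_cases: list[dict], generate) -> float:
    scores = [judge(c["question"], generate(c["question"]))["score"]
              for c in test_cases]
    return sum(scores) / len(scores)
```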
Connect with Aishwarya Srinivasan:
LinkedIn: https://www.linkedin.com/in/aishwarya-srinivasan/
Instagram: https://www.instagram.com/the.datascience.gal/
Connect with Conor: https://www.linkedin.com/in/conorbronsdon/
00:00 Intro — Welcome to Chain of Thought
00:22 Guest Intro — Aishwarya Srinivasan of Fireworks AI
02:37 The Challenge of Responsible AI
05:44 The Hidden Risks of Reward Hacking
07:22 From Prompt to Context Engineering
10:14 Data Quality and Human Feedback
14:43 Quantifying Trust and Observability
20:27 Evaluation-Driven Development
30:10 Open Source Models vs. Proprietary Systems
34:56 Gaps in the Open-Source AI Stack
38:45 When to Use Different Models
45:36 Governance and Compliance in AI Systems
50:11 The Future of AI Builders
56:00 Closing Thoughts & Follow Aish Online
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
This week, we're doing something special and sharing an episode from another podcast we love: The Humans of AI by our friends at Writer. We're huge fans of their work, and you might remember Writer's CEO, May Habib, from the inaugural episode of our own show.
From The Humans of AI:
Learn how Melisa Russak, lead research scientist at WRITER, stumbled upon fundamental machine learning algorithms, completely unaware of existing research — twice. Her story reveals the power of approaching problems with fresh eyes and the innovative breakthroughs that can occur when constraints become catalysts for creativity.
Melisa explores the intersection of curiosity-driven research, accidental discovery, and systematic innovation, offering valuable insights into how WRITER is pushing the boundaries of enterprise AI. Tune in to learn how her journey from a math teacher in China to a pioneer in AI research illuminates the future of technological advancement.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Check out Writer’s YouTube channel to watch the full interviews.
Learn more about WRITER at writer.com.
Follow Melisa on LinkedIn
Follow May on LinkedIn
Check out Galileo
Try Galileo
The incredible velocity of AI coding tools has shifted the critical bottleneck in software development from code generation to code reviews.
Greg Foster, Co-Founder & CTO of Graphite, joins the conversation to explore this new reality, outlining the three waves of AI that are leading to autonomous agents spawning pull requests in the background. He argues that as AI automates the "inner loop" of writing code, the human-centric "outer loop"—reviewing, merging, and deploying—is now under immense pressure, demanding a complete rethinking of our tools and processes.
The conversation then gets tactical, with Greg detailing how a technique called "stacking" can break down large code changes into manageable units for both humans and AI. He also identifies an emerging hiring gap, as experienced engineers with strong architectural context become "lethal" with AI tools. This episode is an essential guide to navigating the new bottlenecks in software development and understanding the skills that will define the next generation of high-impact engineers.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Greg on LinkedIn
Follow Greg on X
Graphite Website: graphite.dev
Check out Galileo
Try Galileo
What’s the first step to building an enterprise-grade AI tool?
Malte Ubl, CTO of Vercel, joins us this week to share Vercel’s playbook for agents, explaining how agents are a new type of software for solving flexible tasks. He shares how Vercel's developer-first ecosystem, including tools like the AI SDK and AI Gateway, is designed to help teams move from a quick proof-of-concept to a trusted, production-ready application.
Malte explores the practicalities of production AI, from the importance of eval-driven development to debugging chaotic agents with robust tracing. He offers a critical lesson on security, explaining why prompt injection requires a totally different solution (constraining tools) than traditional threats like SQL injection. This episode is a deep dive into the infrastructure and mindset, from sandboxes to specialized SLMs, required to build the next generation of AI tools.
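To illustrate the tool-constraint idea in the abstract: rather than trying to sanitize untrusted text the way you would escape SQL, you limit which capabilities the agent's runtime will execute at all, so an injected instruction has nothing dangerous to call. This is a hypothetical sketch, not the AI SDK's actual API.

```python
# Hypothetical tool-constraint sketch: the model may ask for any tool, but
# the runtime only executes an allow-listed, low-risk subset.
ALLOWED_TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",            # read-only
    "get_order_status": lambda order_id: {"id": order_id, "status": "shipped"},
}
# Deliberately absent: refund_order, delete_account, send_email, run_sql...

def execute_tool_call(name: str, args: dict):
    tool = ALLOWED_TOOLS.get(name)
    if tool is None:
        # An injected prompt can convince the model to request a dangerous
        # tool, but the capability simply doesn't exist in this context.
        return {"error": f"tool {name!r} is not available in this context"}
    return tool(**args)
```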
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Malte on LinkedIn
Follow Malte on X (formerly Twitter)
Learn more about Vercel
Check out Galileo
Try Galileo
With technological moats eroding in the AI era, what new factors separate a successful startup from the rest?
Aurimas Griciūnas, CEO of SwirlAI, joins the show to break down the realities of building in this new landscape. Startup success now hinges on speed, strong financial backing, or immediate distribution. Aurimas warns against the critical mistake of prioritizing shiny tools over fundamental engineering, and points to the market gaps this creates.
Discover the new moats for AI companies, built on a culture of relentless execution, tight feedback loops, and the surprising skills that define today's most valuable engineers. The episode also looks to the future, with bold predictions about a slowdown in LLM leaps and the coming impact of coding agents and self-improving systems.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Aurimas on LinkedIn
Aurimas' Course: End-to-End AI Engineering Bootcamp
Check out Galileo
As we enter the era of the AI engineer, the biggest challenge isn't technical: it's a shift in mindset. Hamel Husain, a leading AI consultant and luminary in the eval space, joins the podcast to explore the skills and processes needed to build reliable AI.
Hamel explains why teams relying on vanity dashboards and a "buffet of metrics" fall into a false sense of security, and why these are no substitute for customized evals tailored to domain-specific risks. The solution? A disciplined process of error analysis, grounded in manually looking at the data to identify real-world failures.
This discussion is an essential guide to building the continuous learning loops and "experimentation mindset" required to take AI products from prototype to production with confidence. Listen to learn the playbook for building AI reliability and for turning qualitative insights from log data into customized quantitative guardrails.
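As a concrete (and assumption-laden) sketch of that loop: read your interaction logs, attach failure-mode labels by actually reading the traces, and count which modes dominate. The JSONL field names and categories below are illustrative, not from any particular tool.

```python
# Minimal error-analysis sketch: load traces, label failure modes, count them
# so the most common modes become candidate evals and guardrails.
import json
from collections import Counter

def load_traces(path: str) -> list[dict]:
    with open(path) as f:
        return [json.loads(line) for line in f]

def label_failure(trace: dict) -> str | None:
    """Replace with your own manual or assisted labeling; None means the
    trace looks fine."""
    answer = trace.get("answer", "")
    if not answer.strip():
        return "empty_response"
    if trace.get("retrieved_docs") == []:
        return "retrieval_miss"
    return None  # read more traces before inventing new categories

traces = load_traces("interactions.jsonl")
failures = Counter(label for t in traces if (label := label_failure(t)))
print(failures.most_common(5))  # top modes -> customized quantitative checks
```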
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Hamel on LinkedIn
Follow Hamel on X/Twitter
Check out his blog: hamel.dev
Check out Galileo
What if your next competitor is not a startup, but a solo builder on a side project shipping features faster than your entire team?
For Claire Vo, that's not a hypothetical. As the founder of ChatPRD, formerly the Chief Product and Technology Officer at LaunchDarkly, and host of the How I AI podcast, she has a unique vantage point on the driving forces behind a new blueprint for success.
She argues that AI accountability must be driven from the top by an "AI czar" and reveals how a culture of experimentation is the key to overcoming organizational hesitancy. Drawing from her experience as a solo founder, she warns that for incumbents, the cost of moving slowly is the biggest threat, and details how AI can finally be used to tackle legacy codebases. The conversation closes with bold predictions on the rise of the "super IC", who can achieve top-tier impact and salary without managing a team, and the death of product management.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Claire on LinkedIn
Follow Claire on X/Twitter
Claire’s podcast How I AI
Check out Galileo
How do you build an AI-native company to a $7M run rate in just six months?
According to Marcel Santilli, Founder and CEO of GrowthX, the secret isn't chasing the next frontier model; it's mastering the "messy middle." Drawing on his deep experience at Scale AI and Deepgram, Marcel joins host Conor Bronsdon to share his framework for building durable, customer-obsessed businesses.
Marcel argues that the most critical skills for the AI era aren't technical but philosophical: first-principles thinking and the art of delegation.
Tune in to learn why GrowthX first focused on services to codify expert work, how AI can augment human talent instead of replacing it, and why speed and brand are a startup's greatest competitive advantages. This conversation offers a clear playbook for building a resilient company by prioritizing culture and relentless shipping.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Marcel on LinkedIn
Follow Marcel on X (formerly Twitter)
Learn more about GrowthX
Check out Galileo
AI isn't just changing healthcare; it's providing the essential help needed to unlock a trillion-dollar opportunity for better care.
Andreas Cleve, CEO & Co-founder of Corti, steps in to shed light on AI's immense, yet often misunderstood, transformative potential in this high-stakes environment. Andreas refutes the narrative that healthcare is a slow adopter, emphasizing its high bar for trustworthy technology and its constant embrace of new tools. He reveals how purpose-built AI models are already alleviating the "pajama time" burden of documentation for clinicians, enabling faster and more accurate assessments in various specializations. This quiet, impactful adoption is seeing companies grow "like weeds" beyond common expectations.
The conversation addresses how AI can tackle the looming global shortage of 10 million healthcare professionals by 2030, reallocating a trillion dollars' worth of administrative work back into care. Andreas details Corti’s approach to building invisible, reliable AI through rigorous, compliance-first evaluation, ensuring accuracy and efficiency in real-time. He emphasizes that AI's true role is not replacement, but augmentation, empowering professionals to deliver more care, attract talent, and drive organizational growth.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
LinkedIn: linkedin.com/in/andreascleve
X (formerly Twitter): andreascleve
Corti Website: corti.ai
Check out Galileo
AI agents offer unprecedented power, but mastering reliability is the ultimate challenge in getting agentic systems to actually work in production.
Mikiko Chandrashekar, Staff Developer Advocate at MongoDB, whose background spans the entire data-to-AI pipeline, unveils MongoDB's vision as the memory store for agents, supporting complex multi-agent systems from data storage and vector search to debugging chat logs. She highlights how MongoDB, reinforced by the acquisition of Voyage, empowers developers to build production-scale agents across various industries, from solo projects to major enterprises. This robust data layer is foundational to ensure agent performance and improve the end user experience.
Mikiko advocates for treating agents as software products, applying rigorous engineering best practices to ensure reliability, even for non-deterministic systems. She details MongoDB's unique position to balance GPU/CPU loads and manage data for performance and observability, including Galileo's integrations.
The conversation emphasizes the profound need to rethink observability, evaluations, and guardrails in the era of agents, showcasing Luna-2, Galileo's family of small language models for real-time guardrailing, and the Insights Engine for automated failure analysis. Discover how building trustworthiness through systematic evaluation, beyond just "vibe checks," is essential for AI agents to scale and deliver value in high-stakes use cases.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Mikiko on LinkedIn
Follow Mikiko on X/Twitter
Explore Mikiko's YouTube channel
Check out Mikiko's Substack
Connect with MongoDB on LinkedIn
Connect with MongoDB on YouTube
Check out Galileo
The age of ubiquitous AI agents is here, bringing immense potential and unprecedented risk.
Hosts Conor Bronsdon and Vikram Chatterji open the episode by discussing the urgent need for building trust and reliability into next-generation AI agents. Vikram unveils Galileo's free AI reliability platform for agents, featuring Luna 2 SLMs for real-time guardrails and its Insights Engine for automatic failure mode analysis. This platform enables cost-effective, low-latency production evaluations, significantly transforming debugging. Achieving trustworthy AI agents demands rigorous testing, continuous feedback, and robust guardrailing—complex challenges requiring powerful solutions from partners like Elastic.
Conor welcomes Philipp Krenn, Director of Developer Relations at Elastic, to discuss their collaboration in ensuring AI agent reliability, including how Elastic leverages Galileo's platform for evaluation. Philipp details Elastic's evolution from a search powerhouse to a key AI enabler, transforming data access with Retrieval-Augmented Generation (RAG) and new interaction modes. He discusses Elastic's investment in SLMs for efficient re-ranking and embeddings, emphasizing robust evaluation and observability for production. This collaborative effort aims to equip developers to build reliable, high-performing AI systems for every enterprise.
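For readers newer to the pattern, the retrieval-augmented generation flow discussed here has a simple shape: retrieve broadly, re-rank with a smaller model, then ground the answer in what survives. The sketch below is generic; `search`, `rerank`, and `call_llm` are placeholders rather than Elastic's or Galileo's APIs.

```python
# Generic RAG shape: retrieve candidates, re-rank with a small model, and
# ground the generation in the surviving context. All helpers are placeholders.
def search(query: str, k: int = 20) -> list[str]:
    """Placeholder for a lexical/vector search over your index."""
    raise NotImplementedError

def rerank(query: str, docs: list[str], top_n: int = 5) -> list[str]:
    """Placeholder for an SLM-based re-ranker scoring (query, doc) pairs."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    raise NotImplementedError

def answer(query: str) -> str:
    candidates = search(query)            # broad, cheap recall
    context = rerank(query, candidates)   # a small model buys precision
    prompt = (
        "Answer using only the context below. If it isn't there, say so.\n\n"
        + "\n---\n".join(context)
        + f"\n\nQuestion: {query}"
    )
    return call_llm(prompt)               # evaluate and observe this in prod
```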
Chapters:
00:00 Introduction
01:09 Galileo's AI Reliability Platform
01:43 Challenges in AI Agent Reliability
06:17 Insights Engine and Its Importance
11:00 Luna 2: Small Language Models
14:42 Custom Metrics and Agent Leaderboard
19:16 Galileo's Integrations and Partnerships
21:04 Philipp Krenn from Elastic
24:47 Optimizing LLM Responses
25:41 Galileo and Elastic: A Powerful Partnership
28:20 Challenges in AI Production and Trust
30:02 Guardrails and Reliability in AI Systems
32:17 The Future of AI in Customer Interaction
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Philipp on LinkedIn
Learn more about Elastic
Check out Galileo
The Internet of Agents is rapidly taking shape, necessitating innovative foundational standards, protocols, and evaluation methods for its success.
Recorded at Cisco's office in San Jose, we welcome Giovanna Carofiglio, Distinguished Engineer and Senior Director at Outshift by Cisco. As a leader of the AGNTCY Collective (an open-source initiative by Cisco, Galileo, LangChain, and many other participating companies), Giovanna outlines the vision for agents to collaborate seamlessly across the enterprise and the internet. She details the collective's pillars, from agent discovery and deployment using new agentic protocols like Slim, to ensuring a secure, low-latency communication transport layer. This groundbreaking work aims to make distributed agentic communication a reality.
The conversation then explores the critical role of observability and evaluation in building trustworthy agent applications, including defining an interoperable standard schema for communications. Giovanna highlights the complex challenges of scaling agents to thousands or millions, emphasizing the need for robust security (agent identity with OSF schema) and predictable agent behavior through extensive testing and characterization. She distinguishes between protocols like MCP (agent-to-tool) and A2A (agent-to-agent), advocating for open standards and underlying transport layers akin to TCP.
Chapters:
00:00 Introduction
01:00 Overview of Agent Interoperability
02:20 What is AGNTCY
03:45 Agent Discovery and Composition
04:38 Agent Protocols and Communication
05:45 Observability and Evaluation
07:00 Metrics and Standards for Agents
09:45 Challenges in Agent Evaluation
14:15 Low Latency and Active Evaluation
23:34 Synthetic Data and Ground Truth
25:07 Interoperable Agent Schema
26:37 MCP & A2A
30:17 Future of Agent Communication
32:03 Security and Agent Identity
34:37 Collaboration and Community Involvement
38:28 Conclusion
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
AGNTCY Collective: agntcy.org
Connect with Giovanna on LinkedIn
Learn more about Outshift: outshift.cisco.com
Check out Galileo
When AI makes creating content and code nearly free, how do you stand out? Differentiation now hinges on two things: unique taste and effective distribution.
This week, Bharat Vasan, founder & CEO at Intangible and a "recovering VC," explains why the AI landscape compelled him to return to founding. He sees AI sparking a new creative revolution, similar to the early internet, that makes it easier than ever to bring ideas to life. The conversation delivers essential advice for founders, revealing why relentless shipping is the ultimate clarifier for a business and why resilience, not just intelligence, is the key to survival.
Drawing from his experience on both sides of the venture table, Bharat breaks down the brutally competitive VC landscape and shares Intangible's mission: to simplify 3D creative tools with AI, finally bridging the gap between human vision and machine power. Listeners will gain insights on company building, brand strategy, and why customer obsession is the ultimate moat in the AI age.
Chapters:
00:00 Introduction
00:45 From Founder to VC and Back
03:17 Human Creativity in the Age of AI
07:50 The Role of Taste and Distribution
11:49 Building a Brand in the AI Era
16:17 The Venture Capital Landscape for AI Startups
20:11 Advice for Founders in the AI Boom
23:55 Incumbents vs. Startups
27:10 The New Generation of Innovators
29:19 Pirate Mentality in Startups
30:00 Building a Brand
36:28 Shipping and Resilience
41:49 Customer Obsession
46:58 The Vision for Intangible
51:52 Conclusion
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Bharat on LinkedIn.
Follow Bharat on X.
Learn more about Intangible at intangible.ai.
Check out Galileo
Unlocking AI agents for knowledge work automation and scaling intelligent, multi-agent systems within enterprises fundamentally requires measurability, reliability, and trust.
João Moura, founder & CEO of CrewAI, joins Galileo’s Conor Bronsdon and Vikram Chatterji to unpack and define the emerging AI agent stack. They explore how enterprises are moving beyond initial curiosity to tackle critical questions around provisioning, authentication, and measurement for hundreds or thousands of agents in production. The discussion highlights a crucial "gold rush" among middleware providers, all racing to standardize the orchestration and frameworks needed for seamless agent deployment and interoperability. This new era demands a re-evaluation of everything from cloud choices to communication protocols as agents reshape the market.
João and Vikram then dive into the complexities of building for non-deterministic multi-agent systems, emphasizing the challenges of increased failure modes and the need for rigorous testing beyond traditional software. They detail how CrewAI is democratizing agent access with a focus on orchestration, while Galileo provides the essential reliability platform, offering advanced evaluation, observability, and automated feedback loops. From specific use cases in financial services to the re-emergence of core data science principles, discover how companies are building trustworthy, high-quality AI products and prepare for the coming agent marketplace.
Chapters:
00:00 Introduction and Guest Welcome
02:04 Defining the AI Agent Stack
03:49 Challenges in Building AI Agents
05:52 The Future of AI Agent Marketplaces
06:59 Infrastructure and Protocols
09:05 Interoperability and Flexibility
20:18 Governance and Security Concerns
24:12 Industry Adoption and Use Cases
25:57 Unlocking Faster Development with Success Metrics
28:40 Challenges in Managing Complex Systems
30:10 Introducing the Insights Engine
30:33 The Importance of Observability and Control
32:33 Democratizing Access with No-Code Tools
35:39 Ensuring Quality and Reliability in Production
41:08 Future of Agentic Systems and Industry Transformation
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Joao Moura: LinkedIn | X/Twitter
CrewAI: crewai.com | X/Twitter
Check out Galileo
How is an open ecosystem powering the next generation of AI for developers and leaders?
Broadcasting live from the heart of the action at AMD's Advancing AI 2025, Chain of Thought host Conor Bronsdon welcomes AMD’s Anush Elangovan, VP of AI Software, and Sharon Zhou, VP of AI. They unpack AMD's groundbreaking transformation from a hardware giant to a leader in full-stack AI, committed to an open ecosystem. Discover how new MI350 GPUs deliver mind-blowing performance with advanced data types and why ROCm 7 and AMD Developer Cloud offer Day Zero support for frontier models.
Conor then sits down with Sharon to discuss making AMD's powerful software stack truly accessible and how to drive developer curiosity. Sharon explains strategies for creating a "happy path" for community contributions, fostering engagement through teaching, and listening to developers at every stage. She shares her predictions for the future, including the rise of self-improving AI, the critical role of heterogeneous compute, and the potential of "vibes-based feedback" to guide models. This vision for democratizing access to high-performance AI, driven by a deep understanding of the developer journey, promises to unlock the next generation of applications.
Chapters:
00:00 Live from AMD's Advancing AI 2025 Event
00:30 Introduction to Anush Elangovan
01:38 The MI350 GPU Series Unveiled
04:57 CDNA4 Architecture Explained
07:00 The Future of AI Infrastructure
08:32 AMD's Developer Cloud and ROCm 7
11:50 Cultural Shift at AMD
14:48 Open Source and Community Contributions
18:35 Software Longevity and Ecosystem Strategy
22:19 AI Agents and Performance Gains
27:36 AI's Role in Solving Power Challenges
28:11 Thanking Anush
28:42 Introduction to Sharon Zhou
29:45 Sharon's Focus at AMD
30:39 Engaging Developers with AMD's AI Tools
31:24 Listening to the AI Community
33:56 Open Source and AI Development
45:04 Future of AI and Self-Improving Models
48:04 Final Thoughts and Farewell
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Anush Elangovan: LinkedIn
Sharon Zhou: LinkedIn
AMD Official Site: amd.com
AMD Developer Resources: AMD Developer Central
Check out Galileo
What if the most valuable data in your enterprise—the key to your AI future—is sitting dormant in your backups, treated like an insurance policy you hope to never use?
Join Conor Bronsdon with Greg Statton, VP of AI Solutions at Cohesity, for an inside look at how they are turning this passive data into an active asset to power generative AI applications. Greg details Cohesity’s evolution from an infinitely scalable file system built for backups into a data intelligence powerhouse, managing hundreds of exabytes of enterprise data globally. He recounts how early successes in using this data for security and anomaly detection paved the way for more advanced AI applications. This foundational work was crucial in preparing Cohesity to meet the new demands of generative AI.
Greg offers a candid look at the real-world challenges enterprises face, arguing that establishing data hygiene and a cross-functional governance model is the most critical step before building reliable AI applications. He shares the compelling story of how Cohesity's focus on generative AI was sparked by an internal RAG experiment he built to solve a "semantic divide" in team communication, which quickly grew into a company-wide initiative. He also provides essential advice for data professionals, emphasizing the need to focus on solving core business problems.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Company Website: cohesity.com
LinkedIn: Gregory Statton
Check out Galileo
Try Galileo
What if the pixels and polygons of your favorite video games were the secret architects of today's AI revolution?
Carly Taylor, Field CTO for Gaming at Databricks and founder of ggAI, joins host Conor Bronsdon to illuminate the direct line from video game innovation to the current AI landscape. She explains how the gaming industry's relentless pursuit of better graphics and performance not only drove pivotal GPU advancements and cost reductions, but also fundamentally shaped our popular understanding of artificial intelligence by popularizing the very term "AI" through decades of in-game experiences. Carly shares her personal journey, from a childhood passion for games like Rollercoaster Tycoon ignited while playing with her mom, to becoming a data scientist for Call of Duty.
The discussion then confronts a long-standing tension in game development: how the critical need to ship titles often relegates vital game data to a secondary concern, a dynamic Carly explains is now being reshaped by AI. She details the inherent challenges game studios face in capturing and leveraging telemetry, from disparate development processes to the lengthy pipeline required for updates. Carly illuminates how modern AI, particularly generative AI, presents a massive opportunity for studios to finally unlock their vast data troves for everything from self-service analytics and community insight generation to revolutionizing QA processes. This pivotal intersection of evolving game data practices and new AI capabilities is poised to redefine how games are made, understood, and ultimately experienced.
Chapters:
00:00 Introduction
00:28 The Role of Gaming in AI Development
05:35 Personal Gaming Experiences
08:18 The Intersection of AI and Gaming
12:45 Importance of Data in Game Development
18:55 User Testing and QA in Gaming
25:41 Postmortems and Telemetry
27:13 Beta Testing and Data Preparedness
29:10 Traditional AI vs Generative AI
31:23 Challenges of Implementing AI in Games
35:49 Leveraging AI for Data Analytics
39:33 Automated QA and Reinforcement Learning
41:53 AI for Localization and Sentiment Analysis
44:13 Future of AI in Gaming
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Carly on LinkedIn
Subscribe to Carly’s Substack: Good At Business
Check out Galileo
AI in 2025 promises intelligent action, not just smarter chat. But are enterprises prepared for the agentic shift and the complex reliability hurdles it brings?
Join Conor Bronsdon on Chain of Thought with fellow co-hosts and Galileo co-founders, Vikram Chatterji (CEO) and Atindriyo Sanyal (CTO), as they explore this pivotal transformation. They discuss how generative AI is evolving from a simple tool into a powerful engine for enterprise task automation, a significant advance driving the pursuit of substantial ROI. This shift is also fueling what Vikram observes as a "gold rush" for middleware and frameworks, alongside healthy skepticism about making widespread agentic task completion a practical reality.
As these AI systems grow into highly complex, compound structures—often incorporating multimodal inputs and multi-agent designs—Vikram and Atin address the critical challenges around debugging, achieving reliability, and solving the profound measurement problem. They share Galileo's vision for an AI reliability platform designed to tame these intricate systems through robust guardrailing, advanced metric engines like Luna, and actionable developer insights. Tune in to understand how the industry is moving beyond point-in-time evaluations to continuous AI reliability, crucial for building trustworthy, high-performing AI applications at scale.
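Conceptually, the real-time guardrailing described here boils down to scoring a draft response with small, fast models before it reaches the user and falling back when a check fails. The sketch below is a generic illustration with invented placeholder scorers and thresholds; it is not Galileo's Luna API.

```python
# Generic real-time guardrail sketch: score the draft with small, fast models
# before returning it. Scorer names and thresholds are placeholders.
THRESHOLDS = {"toxicity": 0.2, "groundedness": 0.7}

def score_toxicity(text: str) -> float:
    """Placeholder for a small classifier returning a 0-1 risk score."""
    raise NotImplementedError

def score_groundedness(text: str, context: str) -> float:
    """Placeholder for a small model checking the answer against its sources."""
    raise NotImplementedError

def guarded_response(draft: str, context: str) -> str:
    if score_toxicity(draft) > THRESHOLDS["toxicity"]:
        return "I can't help with that."
    if score_groundedness(draft, context) < THRESHOLDS["groundedness"]:
        return "I'm not confident in that answer; escalating to a human."
    return draft  # low-latency checks make this viable on every request
```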
Chapters:
00:00 Welcome and Introductions
01:05 Generative AI and Task Completion
02:13 Middleware and Orchestration Systems
03:17 Enterprise Adoption and Challenges
05:55 Multimodal AI and Future Plans
08:37 AI Reliability and Evaluation
11:08 Complex AI Systems and Developer Challenges
13:45 Galileo's Vision and Product Roadmap
18:59 Modern AI Evaluation Agents
20:10 Galileo's Powerful SDK and Tools
21:24 The Importance of Observability and Robust Testing
22:27 The Rise of Vibe Coding
24:48 Balancing Creativity and Reliability in AI
31:26 Enterprise Adoption of AI Systems
36:59 Challenges and Opportunities in Regulated Industries
42:10 Future of AI Reliability and Industry Impact
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Website: galileo.ai
Read: Galileo Optimizes Enterprise-Scale Agentic AI Stack with NVIDIA
Check out Galileo