In this episode, we cover the biggest AI hardware announcements from CES including Nvidia's Vera Rubin AI supercomputer platform, Samsung's plan to ship 800 million Galaxy AI devices, and Intel's Panther Lake chips built on their new 18A manufacturing process. Nvidia claims Vera Rubin delivers five times the training performance of the Blackwell architecture and is designed for agentic AI workloads with significantly lower inference costs. Samsung is doubling down on AI across smartphones, tablets, TVs, and appliances powered by Google's Gemini models, while Intel positions Panther Lake as a turning point for their manufacturing comeback with 60 percent better performance than Lunar Lake.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Nvidia, Samsung, Intel, Google, AMD, Apple, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or technical advice. Affiliate links are included to help support the podcast at no additional cost to you.
In this episode, we discuss OpenAI's Grove AI talent program, Google's 2026 AI Agent Trends Report, and analyst predictions about potential AI market consolidation. OpenAI is accepting applications for its selective fifteen-person Grove cohort designed to develop the next generation of AI leaders through mentorship and hands-on experience with frontier AI systems. Google's new report outlines five major trends showing how AI agents are transitioning from experimental pilots into core enterprise infrastructure, with emphasis on multi-step planning, workflow automation, and human-in-the-loop oversight aligned with their Gemini and Workspace roadmap. We also examine Pivotal Research's cautionary 2026 outlook warning that compute costs and ROI pressures could create a shakeout in the AI sector, with analysts suggesting only the strongest AI platforms will survive what they compare to dot-com era consolidation.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by OpenAI, Google, Pivotal Research, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or investment advice. Affiliate links are included to help support the podcast.
In this episode, we reflect on the remarkable AI advancements of 2025, including GPT 5.2, Google's Gemini evolution, and Meta's smart glasses AI integrations. As the major AI players like OpenAI, Google, Meta, Anthropic, Nvidia, and Microsoft take a rare holiday pause, we look back at a year of relentless AI innovation and preview what 2026 may bring. We also touch on emerging global AI regulation efforts, including China's draft rules for emotionally interactive AI systems. Join us as we celebrate the AI breakthroughs of 2025 and prepare for another exciting year of artificial intelligence development.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by OpenAI, Google, Meta, Anthropic, Nvidia, Microsoft, or any other entities mentioned unless explicitly stated. The content provided is for informational and entertainment purposes only and does not constitute professional, financial, or legal advice. Affiliate links are included to help support the podcast at no additional cost to you.
In this episode, we discuss the AI industry's critical shift toward proving real business value in 2026, OpenAI's decision to retire voice mode from its ChatGPT Mac app, and how holiday usage spikes revealed both the promise and limitations of coding agents. We explore why analysts are declaring 2026 the year AI must pay for itself, examining how enterprises are moving beyond hype to demand measurable ROI from AI deployments. The conversation covers how semi-autonomous agents still face reliability and trust challenges, while coding assistants have emerged as the clearest productivity win for businesses. We also break down OpenAI's voice mode consolidation strategy and what the holiday period capacity tests by OpenAI and Anthropic revealed about infrastructure scaling constraints and developer demand for AI coding tools.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by OpenAI, Anthropic, Axios, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or technical advice. Affiliate links are included to help support the podcast, and we may earn a commission if you make a purchase through them.
In this episode, we cover Meta's acquisition of AI agent startup Manus, Nvidia's potential deal to acquire AI21 Labs, and the Texas AI Governance Act now in effect as one of the strongest state-level AI oversight frameworks in the country. Meta's two-billion-dollar acquisition of Singapore-based Manus brings autonomous, task-oriented AI agents to platforms like WhatsApp, Facebook, and Meta AI. We also examine Nvidia's reported negotiations, valued at two to three billion dollars, with Israeli language model company AI21 Labs, a move that would expand Nvidia beyond hardware into owning frontier AI model technology. On the regulatory front, we break down how the Texas AI Governance Act creates a ten-member oversight council, civil penalties of up to one hundred thousand dollars, and a thirty-six-month regulatory sandbox for AI innovation.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Meta, Manus, Nvidia, AI21 Labs, Mobileye, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or legal advice. Affiliate links are included to help support the production of this podcast.
In this episode, we explore Microsoft CEO Satya Nadella's bold declaration that 2026 marks AI's transition from research to real-world deployment, signaling a major shift in enterprise AI strategy. We examine Microsoft's internal restructuring efforts to reduce dependence on OpenAI, including the expanded CoreAI unit and Nadella's hands-on founder mode approach to accelerating AI execution. The discussion also covers the growing attention on AI safety researchers at organizations like METR and Redwood Research, who are raising concerns about deception, misalignment, and loss of control as AI capabilities continue to advance. From Microsoft's Copilot deployment strategy to the critical work being done on AI governance and alignment, we analyze what this new phase means for businesses and the broader AI industry.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Microsoft, OpenAI, METR, Redwood Research, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or legal advice. Affiliate links are included to help support the podcast.
In this episode, we cover the explosive growth of Model Context Protocol servers surpassing ten thousand active deployments, Meta smart glasses going viral with a new AI-powered Conversation Focus feature, and enterprises officially making agentic AI their default strategy heading into 2026. We explore how MCP is becoming the universal connector allowing AI agents to securely plug into enterprise tools and databases across OpenAI, Anthropic, and Google ecosystems. We also break down Meta's solution to the cocktail party problem using AI beamforming and on-device processing, plus why McKinsey and major consulting firms are now declaring autonomous agents the next core computing paradigm for enterprise productivity.
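For listeners curious what an MCP connector actually looks like in code, here is a minimal sketch of an MCP server exposing a single tool, written against the open source MCP Python SDK. The server name, tool, and returned data are illustrative placeholders, not taken from any of the deployments discussed in the episode.

```python
# Minimal Model Context Protocol (MCP) server sketch, for illustration only.
# Uses the open source MCP Python SDK's FastMCP helper; the tool below is a
# hypothetical stand-in for the kind of enterprise lookup an agent might call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-enterprise-tools")

@mcp.tool()
def lookup_customer(customer_id: str) -> dict:
    """Return basic account details for a customer ID (hypothetical data source)."""
    # A real deployment would query a CRM or database behind proper authentication.
    return {"customer_id": customer_id, "plan": "enterprise", "status": "active"}

if __name__ == "__main__":
    # Serves over stdio by default, so an MCP-capable agent client can connect to it.
    mcp.run()
```

Once a server like this is running, an MCP-aware assistant can discover the lookup_customer tool and call it mid-conversation, which is the plug-in pattern the episode describes.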
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Meta, OpenAI, Anthropic, Google, McKinsey, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or technical advice. Some links in this description are affiliate links, which means we may earn a small commission at no additional cost to you if you make a purchase through them.
In this episode, we cover Alphabet's massive $4.75 billion acquisition of clean energy developer Intersect, Anthropic's new Chrome extension bringing Claude directly into your browser, and OpenAI's customizable personality dial for ChatGPT. Alphabet's Intersect acquisition addresses the growing power demands of AI data centers with roughly ten gigawatts of renewable energy capacity by 2028. Anthropic's Claude Chrome extension positions the AI assistant where users already work, while OpenAI's ChatGPT personality feature lets users fine-tune tone and warmth across web and mobile. We break down what these moves mean for AI infrastructure, browser-based AI tools, and the future of personalized AI interactions.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Alphabet, Google, Anthropic, OpenAI, Intersect, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or legal advice. Affiliate links are included to help support the podcast at no additional cost to you.
In this episode, we explore Microsoft's new Model Context Protocol support bringing system-wide AI agents to Windows 11, the FBI's expanded use of AI in federal investigations, and Google making Gemini 3 Flash the default model in its consumer apps. Microsoft's MCP integration enables AI assistants like Copilot to securely interact with apps and services across Windows through natural language commands, with File Explorer, Settings, and Copilot all receiving agent hooks for cross-app workflows. We also examine the FBI's confirmation that AI tools for video analysis, speech-to-text, and vehicle recognition are now key components of their investigative operations. Finally, we break down Google's decision to roll out Gemini 3 Flash as the default Gemini experience, prioritizing speed and efficiency while maintaining strong reasoning capabilities for everyday AI usage.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Microsoft, Google, the FBI, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, legal, or technical advice. All trademarks, logos, and copyrights mentioned are the property of their respective owners. Some links included are affiliate links, and we may earn a small commission at no additional cost to you.
In this episode, we cover Gainsight's acquisition of UpdateAI, OpenAI's ongoing safety refinements to GPT 5.2, and Google DeepMind's latest stability update for Gemini 3. Gainsight brings AI-native customer intelligence to enterprise customer success teams by acquiring UpdateAI, which uses AI agents to analyze meetings and customer signals automatically. OpenAI confirms it is actively tuning GPT 5.2 and GPT 5.2 Pro after launch, focusing on reasoning depth and reducing hallucinations as usage scales across regions. Google DeepMind pushes a backend update to Gemini 3 Pro and Gemini 3 Deep Think, improving latency and tool-calling reliability across Google Workspace and the Gemini app.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Gainsight, UpdateAI, OpenAI, Google DeepMind, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or legal advice. Some links in this description are affiliate links, and we may receive a small commission at no extra cost to you if you make a purchase through them.
In this episode, we explore Meta's new AI models codenamed Mango and Avocado, OpenAI and Anthropic's underage user detection systems, and Google's proactive AI agent called CC. Meta is developing Mango for image and video generation alongside Avocado, a next-generation large language model focused on text and coding tasks, with both AI models expected to launch in the first half of 2026. We also examine how OpenAI and Anthropic are implementing new AI safety measures to detect underage users, including OpenAI's age prediction model and Anthropic's conversational clue detection for Claude. Finally, we cover Google's CC agent, a personal briefing AI that proactively pulls from Gmail, Calendar, and Docs to draft emails and surface tasks before you ask.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Meta, OpenAI, Anthropic, Google, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or legal advice. Affiliate links are included and may provide compensation to the podcast at no additional cost to you.
In this episode, we cover Google's official launch of Gemini 3 Flash, the latest AI model bringing significant improvements in reasoning capabilities, multimodal processing, and response latency across Google Search, the Gemini app, and developer tools like Vertex AI. We also discuss Sergey Brin's surprising warning about using Gemini Live while driving, where the Google cofounder candidly described the current public version as "ancient" compared to internal versions he tests during his own commutes. Finally, we explore Google's expansion of native Gemini access to iPhone and iPad users through Chrome, replacing Google Lens with a one-tap Gemini icon for page summaries and on-page analysis. These developments show Google pushing Gemini capabilities and cross-platform accessibility simultaneously.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Google, Google DeepMind, Apple, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional or technical advice. All trademarks, logos, and copyrights mentioned are the property of their respective owners. Affiliate links are included to help support the podcast.
In this episode, we discuss OpenAI's new ChatGPT Images feature, Sam Altman's cryptic product tease, and Microsoft's major leadership restructuring focused on AI development. OpenAI launched ChatGPT Images on December 16th, bringing image generation and editing directly into the ChatGPT interface using natural language prompts, positioning it as a direct competitor to Google's Nano Banana model. We also explore Sam Altman's mysterious social media announcement hinting at something "really fun" launching soon, sparking widespread speculation about new multimodal or agentic AI capabilities. Finally, we break down Microsoft CEO Satya Nadella's organizational overhaul, including the elevation of Judson Althoff to commercial CEO and the introduction of weekly AI accelerator sessions designed to speed up innovation and amplify technical voices across the company.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by OpenAI, Microsoft, Google, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or technical advice. Some links in this description are affiliate links, meaning we may earn a small commission at no additional cost to you if you make a purchase through them.
In this episode, we discuss OpenAI's mysterious new launch teased by Sam Altman, OpenAI's open source circuit sparsity model release, and Nvidia's Nemotron 3 family of open source AI models. We explore how circuit sparsity enables more efficient AI by activating only necessary parts of neural networks, reducing compute costs while maintaining capability. Nvidia's Nemotron 3 Nano model marks a strategic push toward transparent, open source AI development, positioning the US ecosystem as a strong alternative amid growing competition. From OpenAI's research contributions on Hugging Face to Nvidia's enterprise-focused open source strategy, we examine how openness and efficiency are becoming central themes in the evolving AI landscape.
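As a rough illustration of the sparsity idea mentioned above, the toy snippet below keeps only the strongest activations in a layer and zeroes the rest. This is a generic sparse-activation sketch in NumPy, not OpenAI's actual circuit sparsity release, and the array sizes and k value are arbitrary.

```python
# Toy illustration of sparse activation: keep only the k strongest units per row.
# Generic sketch for intuition; it is not OpenAI's circuit sparsity method or code.
import numpy as np

def topk_sparse(activations: np.ndarray, k: int) -> np.ndarray:
    """Zero out everything except the k largest-magnitude activations in each row."""
    out = np.zeros_like(activations)
    idx = np.argsort(np.abs(activations), axis=-1)[..., -k:]  # indices of the k strongest units
    np.put_along_axis(out, idx, np.take_along_axis(activations, idx, axis=-1), axis=-1)
    return out

hidden = np.random.randn(2, 8)       # a toy batch of hidden activations
print(topk_sparse(hidden, 2))        # only 2 of 8 units stay active per row
```

The intuition carries over: if only a small fraction of units needs to fire for a given input, the rest of the computation can be skipped, which is where the compute savings the episode describes come from.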
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by OpenAI, Nvidia, Hugging Face, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or technical advice. Affiliate links are included to help support the podcast at no additional cost to you.
In this episode, we cover NVIDIA's release of the Nemotron 3 open source AI models, Tesla's latest Full Self-Driving software improvements, and OnePlus announcing AI-powered features for their upcoming smartphone. NVIDIA launched the Nemotron 3 family with the Nano version available now and larger variants coming in 2026, designed to handle complex tasks cost-effectively while giving developers full transparency through open source access. Tesla pushed version 14.2.1.25 of their Full Self-Driving software to Early Access users, addressing speed profile issues with testers reporting impressive improvements in real-world driving conditions. OnePlus revealed their Plus Mind AI features ahead of the 15R smartphone launch, continuing the trend of bringing AI capabilities directly onto devices for faster, offline functionality. These developments show AI maturing across different domains, from open source models to autonomous driving to consumer smartphones.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by NVIDIA, Tesla, OnePlus, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or technical advice. Affiliate links are included to help support the podcast at no additional cost to listeners. All trademarks, logos, and copyrights mentioned are the property of their respective owners.
In this episode, we discuss Accenture's massive AI training expansion with Anthropic Claude, Google's Android XR smart glasses reveal codenamed Project Aura, and TIME Magazine naming the Architects of AI as Person of the Year for 2025. Accenture is training thirty thousand employees on Claude and Claude Code for enterprise AI workflows, marking the largest AI deployment in the company's history and complementing their existing ChatGPT Enterprise training program. Google's Project Aura smart glasses feature optical see-through technology with a seventy-degree field of view, positioning the company for augmented reality experiences powered by AI. TIME's recognition highlights Nvidia's Jensen Huang among AI's leading architects, noting ChatGPT's eight hundred million users and the profound transformation AI has driven across industries worldwide.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Accenture, Anthropic, Google, TIME Magazine, Nvidia, OpenAI, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or legal advice. Affiliate links may provide compensation to the podcast at no additional cost to you.
In this episode, we explore Google's new unified conversational search experience on mobile that combines AI Overview and AI Mode, supporting text, voice, and image inputs in one continuous flow. We also dive into Google Workspace Studio, a platform for building AI agents that automate multi-step workflows across Docs, Sheets, and Gmail, enabling enterprise organizations to deploy agent-driven collaboration tools. Additionally, we examine Nvidia's innovative location verification technology for AI chips using telemetry-based tracking to ensure regulatory compliance and prevent unauthorized movement of GPU hardware across borders. From conversational search interfaces to AI agent automation and hardware security, this episode covers how AI technology is maturing across application, platform, and infrastructure layers.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Google, Nvidia, or any other entities mentioned unless explicitly stated. The content provided is for informational and educational purposes only and does not constitute professional, technical, or legal advice. Affiliate links may generate a commission for this podcast at no additional cost to you.
In this episode, we discuss OpenAI's GPT-5.2 launch featuring three new model variants called Instant, Thinking, and Pro, alongside a billion-dollar Disney partnership for Sora video generation. We explore Google's experimental Disco browser powered by Gemini 3 that transforms web tabs into interactive applications, and the reimagined Gemini Deep Research tool now available to developers through API access. Learn how OpenAI's Thinking model achieved a 38% reduction in hallucinations, how Google's Disco browser enables "coding without coding" through natural language, and how the new Deep Research API brings Google's strongest research agent capabilities directly into third-party applications for complex multi-step tasks.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by OpenAI, Google, Disney, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or technical advice. Some links may be affiliate links, which means we may earn a commission at no additional cost to you if you make a purchase through those links.
In this episode, we dive deep into OpenAI's latest GPT 5.2 release, examining what its new capabilities mean for professional work environments. This significant update introduces three specialized variants (Instant for speed, Thinking for complex reasoning, and Pro for premium-quality output) alongside a 400,000-token context window that dramatically expands document processing. We explore how GPT 5.2 elevates knowledge work with enhanced spreadsheet creation, presentations, and multi-step workflows, along with improved agentic tool-calling and vision capabilities. Early benchmark results and feedback from OpenAI CEO Sam Altman and Wharton professor Ethan Mollick suggest this may be OpenAI's most significant upgrade to date for professional applications.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by OpenAI or any other entities mentioned unless explicitly stated. The content provided is for educational and informational purposes only and does not constitute professional advice. All trademarks, logos, and copyrights mentioned are the property of their respective owners.
In this episode, we discuss Meta's potential shift to charging for its next flagship AI model codenamed Avocado, marking a significant departure from the company's open source approach with Llama models. We explore Mistral's release of two powerful open source coding models called Devstral 2, which outperform Claude and GPT-4 on certain coding benchmarks, and the Pentagon's launch of GenAI.mil featuring Google Gemini for Government across military operations. Learn how Meta is restructuring its AI operations under Mark Zuckerberg's leadership, why Mistral's 24-billion-parameter model can run locally on laptops as a true Copilot alternative, and what Google Gemini's Impact Level 5 authorization means for AI deployment in defense applications. These developments reveal contrasting strategies as Meta moves toward monetization, Mistral champions open source accessibility, and the Pentagon embraces generative AI for operational military workflows.
https://www.aiconvocast.com
Help support the podcast by using our affiliate links:
Eleven Labs: https://try.elevenlabs.io/ibl30sgkibkv
Disclaimer:
This podcast is an independent production and is not affiliated with, endorsed by, or sponsored by Meta, Mistral, Google, the Department of Defense, or any other entities mentioned unless explicitly stated. The content provided is for educational and entertainment purposes only and does not constitute professional, financial, or legal advice. Affiliate links may generate a commission that helps support podcast production.