The Daily AI Show
The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
673 episodes
1 day ago
The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.
Technology
Episodes (20/673)
World Models, Robots, and Real Stakes

On Friday’s show, the DAS crew discussed how AI is shifting from text and images into the physical world, and why trust and provenance will matter more as synthetic media becomes indistinguishable from reality. They covered NVIDIA’s CES focus on “world models” and physical AI, new research arguing LLMs can function as world models, real-time autonomy and vehicle safety examples, Instagram’s stance that the “visual contract” is broken, and why identity systems, signatures, and social graphs may become the new anchor. The episode also highlighted an AI communication system for people with severe speech disabilities, a health example on earlier cancer detection, practical Suno tips for consistent vocal personas, and VentureBeat’s four themes to watch in 2026.


Key Points Discussed


CES is increasingly a robotics and AI show, Jensen Huang headlines January 5


NVIDIA’s Cosmos world foundation model platform points toward physical AI and robots


Researchers from Microsoft, Princeton, Edinburgh, and others argue LLMs can function as world models


“World models” matter for predicting state changes, physics, and cause and effect in the real world


Physical AI example, real-time detection of traction loss and motion states for vehicle stability


Discussion of advanced suspension and “each wheel as a robot” style control, tied to autonomy and safety


Instagram’s Adam Mosseri said the “visual contract” is broken, convincing fakes make “real” hard to assume


The takeaway, aesthetics stop differentiating, provenance and identity become the real battlefield


Concern shifts from obvious deepfakes to subtle, cumulative “micro” manipulations over time


Scott Morgan Foundation’s Vox AI aims to restore expressive communication for people with severe speech disabilities, built with lived experience of ALS


Additional health example, AI-assisted earlier detection of pancreatic cancer from scans


Suno persona updates and remix workflow tips for maintaining a consistent voice


VentureBeat’s 2026 themes, continuous learning, world models, orchestration, refinement
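The “world model” idea in the points above, predicting state changes, physics, and cause and effect, can be sketched as a function from (state, action) to a next state. The hand-written physics below is purely illustrative and assumes nothing about NVIDIA’s Cosmos or the papers discussed; learned world models replace it with a trained network.

```python
# Minimal illustrative "world model": predict the next physical state
# from the current state and an action, then roll it forward to answer
# "what happens if..." questions. The drag term and constants are toy values.
from dataclasses import dataclass

@dataclass
class State:
    position: float  # meters
    velocity: float  # meters per second

def step(state: State, throttle: float, dt: float = 0.1) -> State:
    """Predict the next state: throttle adds acceleration, drag resists motion."""
    accel = throttle - 0.5 * state.velocity  # toy linear drag
    new_velocity = state.velocity + accel * dt
    new_position = state.position + new_velocity * dt
    return State(new_position, new_velocity)

s = State(position=0.0, velocity=0.0)
for _ in range(10):
    s = step(s, throttle=1.0)
print(round(s.position, 3), round(s.velocity, 3))
```

With constant throttle the toy model approaches a terminal velocity (here 2.0 m/s, where drag cancels thrust), which is the kind of cause-and-effect prediction a world model is meant to capture.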


Timestamps and Topics

00:04:01 📺 CES preview, robotics and AI take center stage

00:04:26 🟩 Jensen Huang CES keynote, what to watch for

00:04:48 🤖 NVIDIA Cosmos, world foundation models, physical AI direction

00:07:44 🧠 New research, LLMs as world models

00:11:21 🚗 Physical AI for EVs, real-time traction loss and motion state estimation

00:13:55 🛞 Vehicle control example, advanced suspension, stability under rough conditions

00:18:45 📡 Real-world infrastructure chat, ultra high frequency “pucks” and responsiveness

00:24:00 📸 “Visual contract is broken”, Instagram and AI fakes

00:24:51 🔐 Provenance and identity, why labels fail, trust moves upstream

00:28:22 🧩 The “micro” problem, subtle tweaks, portfolio drift over years

00:30:28 🗣️ Vox AI, expressive communication for severe speech disabilities

00:32:12 👁️ ALS, eye tracking coding, multi-agent communication system details

00:34:03 🧬 Health example, earlier pancreatic cancer detection from scans

00:35:11 🎵 Suno persona updates, keeping a consistent voice

00:37:44 🔁 Remix workflow, preserving voice across iterations

00:42:43 📈 VentureBeat, four 2026 themes

00:43:02 ♻️ Trend 1, continuous learning

00:43:36 🌍 Trend 2, world models

00:44:22 🧠 Trend 3, orchestration for multi-step agentic workflows

00:44:58 🛠️ Trend 4, refinement and recursive self-critique

00:46:57 🗓️ Housekeeping, newsletter and conundrum updates, closing

2 days ago
47 minutes 13 seconds
What Actually Matters for AI in 2026

On Thursday’s show, the DAS crew opened the new year by digging into the less discussed consequences of AI scaling, especially energy demand, infrastructure strain, and workforce impact. The conversation moved through xAI’s rapid data center expansion, growing inference power requirements, job displacement at the entry level, and how automation and robotics are advancing faster in some regions than others. The back half of the show focused on what these trends mean for 2026, including economic pressure, organizational readiness, and where humans still fit as AI systems grow more capable.


Key Points Discussed


xAI’s rapid expansion highlights how energy is becoming a hard constraint for AI growth


Inference demand is driving real world electricity and infrastructure pressure


AI automation is already reducing entry level roles across several functions


Robotics and delivery automation in China show a faster path to physical world automation


AI adoption shifts labor demand, not evenly across regions or job types


2026 will force harder tradeoffs between speed, cost, and stability


Organizations are underestimating the operational and social costs of scaling AI


Timestamps and Topics

00:00:19 👋 New Year’s Day opening and context setting

00:02:45 🧠 AI newsletters and early 2026 signals

00:02:54 ⚡ xAI data center expansion and energy constraints

00:07:20 🔌 Inference demand, power limits, and rising costs

00:10:15 📉 Entry level job displacement and automation pressure

00:15:40 🤖 AI replacing early stage sales and operational roles

00:20:10 🌏 Robotics and delivery automation examples from China

00:27:30 🏙️ Physical world automation vs software automation

00:34:45 🧑‍🏭 Workforce shifts and where humans still add value

00:41:25 📊 Economic and organizational implications for 2026

00:47:50 🔮 What scaling pressure will expose this year

00:54:40 🏁 Closing thoughts and community wrap up


The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, and Brian Maucere

3 days ago
55 minutes 38 seconds
What We Got Right and Wrong About AI

On Wednesday’s show, the DAS crew wrapped up the year by reflecting on how AI actually showed up in day to day work during 2025, what expectations missed the mark, and which changes quietly stuck. The discussion focused on real adoption versus hype, how workflows evolved over the year, where agents made progress, and where friction remained. The crew also looked ahead to what 2026 is likely to demand from teams, especially around discipline, systems thinking, and operational maturity.


Key Points Discussed


2025 delivered more AI usage, but less transformation than headlines suggested


Most gains came from small workflow changes, not sweeping automation


Agents improved, but still require heavy structure and oversight


Teams that documented processes saw better results than teams chasing tools


AI fatigue increased as novelty wore off


Real value came from narrowing scope and tightening feedback loops


2026 will reward execution, not experimentation


Timestamps and Topics

00:00:19 👋 New Year’s Eve opening and reflections

00:04:10 🧠 Looking back at AI expectations for 2025

00:09:35 📉 Where AI underdelivered versus predictions

00:14:50 🔁 Small workflow wins that added up

00:20:40 🤖 Agent progress and remaining gaps

00:27:15 📋 Process discipline and documentation lessons

00:33:30 ⚙️ What teams misunderstood about AI adoption

00:39:45 🔮 What 2026 will demand from organizations

00:45:10 🏁 Year end closing and takeaways


The Daily AI Show Co-Hosts: Andy Halliday, Brian Maucere, Beth Lyons, and Karl Yeh

4 days ago
1 hour 1 minute 43 seconds
When AI Helps and When It Hurts

On Tuesday’s show, the DAS crew discussed why AI adoption continues to feel uneven inside real organizations, even as models improve quickly. The conversation focused on the growing gap between impressive demos and messy day to day execution, why agents still fail without structure, and what separates teams that see real gains from those stuck in constant experimentation. The group also explored how ownership, workflow clarity, and documentation matter more than model choice, plus why many companies underestimate the operational lift required to make AI stick.


Key Points Discussed


AI demos look polished, but real workflows expose reliability gaps


Teams often mistake tool access for true adoption


Agents fail without constraints, review loops, and clear ownership


Prompting matters early, but process design matters more at scale


Many AI rollouts increase cognitive load instead of reducing it


Narrow, well defined use cases outperform broad assistants


Documentation and playbooks are critical for repeatability


Training people how to work with AI matters more than new features


Timestamps and Topics

00:00:15 👋 Opening and framing the adoption gap

00:03:10 🤖 Why AI feels harder in practice than in demos

00:07:40 🧱 Agent reliability, guardrails, and failure modes

00:12:55 📋 Tools vs workflows, where teams go wrong

00:18:30 🧠 Ownership, review loops, and accountability

00:24:10 🔁 Repeatable processes and documentation

00:30:45 🎓 Training teams to think in systems

00:36:20 📉 Why productivity gains stall

00:41:05 🏁 Closing and takeaways


The Daily AI Show Co-Hosts: Andy Halliday, Anne Murphy, Beth Lyons, and Jyunmi Hatcher

5 days ago
1 hour 2 minutes 8 seconds
Why AI Still Feels Hard to Use

On Monday’s show, the DAS crew discussed how AI tools are landing inside real workflows, where they help, where they create friction, and why many teams still struggle to turn experimentation into repeatable value. The conversation focused on post holiday reality checks, agent reliability, workflow discipline, and what actually changes day to day work versus what sounds good in demos.


Key Points Discussed


Most teams still experiment with AI instead of operating with stable, repeatable workflows


AI feels helpful in bursts but often adds coordination and review overhead


Agents break down without constraints, guardrails, and clear ownership


Prompt quality matters less than process design once teams scale usage


Many companies confuse tool adoption with operational change


AI value shows up faster in narrow tasks than broad general assistants


Teams that document workflows get more ROI than teams that chase tools


Training and playbooks matter more than model upgrades


Timestamps and Topics

00:00:18 👋 Opening and Monday reset

00:03:40 🎄 Post holiday reality check on AI habits

00:07:15 🤖 Where AI helps versus where it creates friction

00:12:10 🧱 Why agents fail without structure

00:17:45 📋 Process over prompts discussion

00:23:30 🧠 Tool adoption versus real workflow change

00:29:10 🔁 Repeatability, documentation, and playbooks

00:36:05 🧑‍🏫 Training teams to think in systems

00:41:20 🏁 Closing thoughts on practical AI use

5 days ago
52 minutes 22 seconds
It's Christmas in AI

Brian hosted this Christmas Day episode with Beth and Andy. The show was short and casual: Andy kicked off a quick set of headlines, then the conversation moved into practical tool friction, why people stick with one model over another, what is still messy about memory and chat history, and how translation, localization, and consumer hardware might evolve in 2026.


Key Points Discussed


Nvidia makes a talent and licensing style move with Groq, the startup focused on inference efficiency and LPUs (not to be confused with xAI’s Grok)


Pew data shows most Americans still have limited AI awareness, despite nonstop headlines


genai.mil launches with Gemini for Government, the group debates model behavior and policy enforcement


Grok gets discussed as a future model option in that environment, raising alignment questions


Codex and Claude Code temporarily raise usage limits through early January, limits still shape real usage habits


Brian explains why he defaults to Gemini more often, fewer interruptions and smoother workflows


Tool switching remains painful, people lose context across apps, accounts, and sessions


Translation will mostly become automated, localization and trust-heavy situations still need humans


CES expectations center on wearables, assistants, and TVs, most “AI features” still risk being gimmicks


Timestamps & Topics


00:00:19 🎄 Christmas intro, quick host check in

00:02:16 🧠 Nvidia story, inference chips, LPU discussion

00:03:36 📊 Pew Research, public awareness of AI

00:04:35 🏛️ genai.mil launch, Gemini for Government discussion

00:06:19 ⚠️ Grok mentioned in the genai.mil context, alignment concerns

00:09:28 💻 Codex and Claude Code usage limits increase

00:10:31 🔁 Why people do or do not log into Claude, friction and limits

00:21:50 🌍 Translation vs localization, where humans still matter

00:31:08 👓 CES talk begins, wearables and glasses expectations

00:30:51 📺 TVs and “AI features,” what would actually be useful

00:47:35 🏁 Wrap up and sign off


The Daily AI Show Co-Hosts: Brian Maucere, Beth Lyons, and Andy Halliday

1 week ago
47 minutes 30 seconds
Is AI Worth It Yet?

On Friday’s show, the DAS crew discussed what real AI productivity looks like in 2025, where agents still break down, and how the biggest platforms are pushing assistants into products people already use. They covered fresh survey data on AI at work, Salesforce’s push for more deterministic agents, OpenAI’s role based prompt packs, a reported Waymo in car Gemini assistant, Meta’s non generative “world model” work, holiday AI features, and the ongoing Lovable vs Replit debate for building software fast. The episode also touched on AI infrastructure and power constraints, plus how teams should think about curriculum, playbooks, and repeatable workflows in an AI first world.


Key Points Discussed


Lenny Rachitsky shared survey results from 1,750 tech workers on how AI is actually used at work


55 percent said AI exceeded expectations, 70 percent said it improves work quality


More than half said AI saves at least half a day per week, founders reported the biggest time savings


Designers reported the weakest ROI, founders reported the strongest ROI


92.4 percent reported at least one significant downside, including reliability issues and instruction following problems


Salesforce leaders highlighted agent unreliability and “drift”, AgentForce is adding more deterministic rule based structures to constrain agent behavior


OpenAI Academy published prompt packs grouped by job role, showing how OpenAI frames “default” use cases


Waymo is reportedly working on a Gemini powered ride assistant, surfaced via a discovered system prompt in app code


Meta’s V-JEPA work came up as an example of non generative vision models aimed at world understanding, not image generation


The crew debated Lovable and Replit as fast paths from idea to working app, including where each still breaks down


Timestamps and Topics

00:00:17 👋 Opening, Boxing Day, setting up the “is AI delivering ROI” question

00:02:20 📊 Lenny Rachitsky survey, who was sampled, what it measures

00:05:44 ✅ Top findings, time saved, quality gains, ROI split by role

00:07:33 🧩 Agents and reliability, Salesforce view on drift, AgentForce guardrails

00:10:25 🧰 OpenAI Academy prompt packs by role, why it matters

00:12:07 🚗 Waymo and a Gemini powered ride assistant, system prompt discovery

00:13:05 👁️ Meta V-JEPA, non generative vision and “world model” direction

00:15:47 🎄 Holiday AI features, Santa themed voice and image moments

00:16:34 ⚡ Power and infrastructure constraints, wind and solar angle for AI buildout

00:20:05 🛠️ Lovable vs Replit, speed to product and practical tradeoffs

00:25:00 💻 Claude workflow talk and migration friction (real world setup issues)

00:30:00 ☁️ Cloud strategy, longer prompts, and getting useful outputs from big context

00:38:00 🎓 Curriculum and workforce readiness, what to teach and what to automate

00:40:10 📚 Wikipedia, automation patterns, and reusable knowledge sources

00:43:10 📓 Playbooks and repeatable processes, turning AI into a system not a novelty

00:51:40 🏁 Closing and weekend sendoff

1 week ago
51 minutes 42 seconds
Christmas Eve AI: From Robots to AI Toys Under the Tree

Jyunmi hosted this Christmas Eve episode with Beth, Andy, and Brian. The tone was lighter and more exploratory, mixing AI headlines with a holiday themed discussion on AI toys, gadgets, and everyday use cases. The show opened with a round robin on debates around general versus universal intelligence, then moved into robotics progress, voice assistants, enterprise AI adoption trends, and finally a long, practical segment on AI powered consumer gadgets people are actually buying, using, or curious about heading into 2026.


Key Points Discussed


Ongoing debate between Yann LeCun, Demis Hassabis, and Elon Musk on what “general intelligence” really means


Physical Intelligence proposes a Robot Olympics focused on everyday household tasks


Non humanoid robot arms already perform precise actions like unlocking doors and food prep


Robotics progress seen as especially impactful for elder care and assisted living


ChatGPT introduces pinned chats, a small but meaningful organization upgrade


Growing desire for folders and deeper chat organization in 2026


Gemini excels at vision tasks like receipt scanning and categorization


Brian shares a real world Gemini workflow for automated personal budgeting


Boston Dynamics to debut next generation Atlas humanoid robot at CES 2026


Y Combinator Winter 2026 cohort favors Anthropic over OpenAI for startups


Claude leads in vibe coding due to Replit and Lovable integrations


Alexa Plus adds third party services like Suno, Ticketmaster, OpenTable, and Thumbtack


Mixed reactions to Alexa Plus highlight trust and use case gaps


Voice first agents seen as a stepping stone toward true personal AI agents


AI toys discussed include board.fun, Reachy Mini robot, AI translation earbuds, and smart bird feeders


Strong interest in wearables and Google’s upcoming AI glasses for 2026


Timestamps and Topics


00:00:00 👋 Opening, Christmas Eve welcome, host lineup

00:02:10 🧠 AGI vs universal intelligence debate

00:07:30 🤖 Robot Olympics and physical intelligence demos

00:18:40 🔑 Precision robotics, care use cases, and household tasks

00:27:10 📌 ChatGPT pinned chats and organization needs

00:33:40 🧾 Gemini receipt scanning and budgeting workflow

00:44:20 🦾 Boston Dynamics Atlas CES preview

00:49:30 🧑‍💻 Y Combinator favors Anthropic for Winter 2026

00:55:10 🗣️ Alexa Plus features, pros, and frustrations

01:16:30 🎁 AI toys and gadgets under the tree

01:33:10 🧠 Wearables, translation devices, and future assistants

01:48:40 🏁 Holiday wrap up and community thanks


The Daily AI Show Co-Hosts: Jyunmi Hatcher, Beth Lyons, Andy Halliday, and Brian Maucere

1 week ago
1 hour 9 minutes 31 seconds
AI Creativity Explodes and ChatGPT Gets Misty-Eyed about 2025

The DAS crew opened with holiday week energy, reminders that the show would continue live through the end of the year, and light reflection on the Waymo incident from earlier in the week. The episode leaned heavily into creativity, tooling, and real world AI use, with a long central discussion on Alibaba’s Qwen Image Layered release, what it unlocks for designers, and how AI is simultaneously lowering the floor and raising the ceiling for creative work. The second half focused on OpenAI’s “Your Year in ChatGPT” feature, personalization controls, the widening AI usage gap, curriculum challenges in education, and a live progress update on the new Daily AI Show website, followed by a preview of the upcoming AI Festivus event.


Key Points Discussed


Waymo incidents framed as imperfect but safety first outcomes rather than failures


Alibaba releases Qwen Image Layered, enabling images to be decomposed into editable layers


Layered image editing seen as a major leap for designers and creative workflows


Comparison between Qwen layering and ChatGPT’s natural language Photoshop editing


AI tools lower barriers for non creatives while amplifying expert creators


Creativity gap widens between baseline output and high end craft


Analogies drawn to guitar tablature, templates, and iPhone photography


Suno cited as an example of creative access without replacing true musicianship


Debate on whether AI widens or equalizes the creativity gap across skill levels


Cursor reportedly allowed temporary free access to premium models due to a glitch


OpenAI launches “Your Year in ChatGPT,” offering personalized yearly summaries


Feature highlights usage patterns, archetypes, themes, and creative insights


Hosts react to their own ChatGPT year in review results


OpenAI adds more granular personalization controls


Builders express concern over personalization affecting custom GPT behavior


GPT 5.2 reduces personalization conflicts compared to earlier versions


Discussion on AI literacy gaps and inequality driven by usage differences


Professors and educators struggle to keep curricula current with AI advances


Curriculum approval cycles seen as incompatible with AI’s pace of change


Brian demos progress on the new Daily AI Show website with semantic search


Site enables topic based clip discovery, timelines, and super clip generation


Clips can be assembled into long form or short viral style videos automatically


System designed to scale across 600 plus episodes using structured transcripts


Temporal ordering helps distinguish historical vs current AI discussions


Preview of AI Festivus event with panels, films, exhibits, and community sessions


AI Festivus replay bundle priced at 27 dollars to support the event
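The topic-based clip discovery Brian demoed can be sketched as embedding transcript segments and ranking them by similarity to a query. Everything below is hypothetical: the clip IDs, the segment texts, and the bag-of-words stand-in used in place of a real embedding model so the example stays self-contained.

```python
# Sketch of semantic-ish clip search over transcript segments.
# A production build would use a learned embedding model; here a
# bag-of-words cosine similarity stands in for it.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical (clip_id, transcript snippet) pairs.
clips = [
    ("ep652@00:04", "NVIDIA Cosmos world models and physical AI at CES"),
    ("ep651@00:10", "entry level job displacement from AI automation"),
    ("ep650@00:35", "Suno persona tips for a consistent vocal identity"),
]

def search(query: str, top_k: int = 1):
    q = embed(query)
    ranked = sorted(clips, key=lambda c: cosine(q, embed(c[1])), reverse=True)
    return ranked[:top_k]

print(search("world models robotics"))
```

Ranking by similarity rather than exact keywords is what lets a query like “world models robotics” surface a clip that never uses the word “robotics,” which is the core of the topic-discovery behavior described.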


Timestamps and Topics


00:00:00 👋 Opening, holiday schedule, host introductions

00:04:10 🚗 Waymo incident reflection and safety framing

00:08:30 🖼️ Qwen Image Layered announcement and implications

00:16:40 🎨 Creativity, tooling, and widening floor to ceiling gap

00:27:30 🎸 Analogies to music, photography, and templates

00:35:20 🧠 AI literacy gaps and inequality discussion

00:43:10 🧪 Cursor premium model access glitch

00:47:00 📊 OpenAI “Your Year in ChatGPT” walkthrough

00:58:30 ⚙️ Personalization controls and builder concerns

01:08:40 🎓 Education curriculum bottlenecks and AI pace

01:18:50 🛠️ Live demo of Daily AI Show website search and clips

01:34:30 🎬 Super clips, viral mode, and timeline navigation

01:46:10 🎉 AI Festivus preview and event details

01:55:30 🏁 Closing remarks and next show preview


The Daily AI Show Co-Hosts: Brian Maucere, Beth Lyons, Andy Halliday, Anne Townsend, and Karl Yeh

1 week ago
1 hour
The Reality of Human AI Collaboration

The show leaned less on rapid breaking news and more on synthesis, reviewing Andrej Karpathy’s 2025 LLM year in review, practical experiences with Claude Code and Gemini, and what real human AI collaboration actually looks like in practice. The second half moved into policy tension around AI governance, advances in robotics and animatronics, autonomous vehicle failures, consumer facing AI agents, and new research on human AI synergy and theory of mind.


Key Points Discussed


Andrej Karpathy publishes a concise 2025 LLM year in review


Shift from RLHF to reinforcement learning from verifiable rewards


Jagged intelligence, not general intelligence, defines current models


Cursor and Claude Code emerge as a new local layer in the AI stack


Vibe coding becomes a mainstream development pattern


Gemini Nano Banana stands out as a major paradigm shift


Claude Code helps with local system tasks but makes critical date errors


Trust in AI agents requires constant human supervision


Gemini Flash criticized for hallucinating instead of flagging missing inputs


AI literacy and prompting skill matter more than raw model quality


Disney unveils advanced Olaf animatronic powered by AI and robotics


Cute, disarming robots may reshape public comfort with robotics


Unitree robots perform alongside humans in live dance shows


Waymo cars freeze in traffic after a centralized system failure


AI car buying agents negotiate vehicle purchases on behalf of users


Professional services like tax prep and law face deep AI disruption


Duke research shows AI can extract simple rules from complex systems


Human AI performance depends on interaction, not model alone


Theory of mind drives strong human AI collaboration


Showing AI reasoning improves alignment and trust


Pairing humans with AI boosts both high and low skill workers


Timestamps and Topics


00:00:00 👋 Opening, laptops, and AI assisted migration

00:06:30 🧠 Karpathy’s 2025 LLM year in review

00:14:40 🧩 Claude Code, Cursor, and local AI workflows

00:22:30 🍌 Nano Banana and image model limitations

00:29:10 📰 AI newsletters and information overload

00:36:00 ⚖️ Politico story on tech unease with David Sacks

00:45:20 🤖 Disney’s Olaf animatronic and AI robotics

00:55:10 🕺 Unitree robots in live performances

01:02:40 🚗 Waymo cars halt during power outage

01:08:20 🛒 AI powered car buying agents

01:14:50 📉 AI disruption in professional services

01:20:30 🔬 Duke research on AI finding simplicity in chaos

01:27:40 🧠 Human AI synergy and theory of mind research

01:36:10 ⚠️ Gemini Flash hallucination example

01:42:30 🔒 Trust, supervision, and co intelligence

01:47:50 🏁 Early wrap up and closing


The Daily AI Show Co-Hosts: Beth Lyons and Andy Halliday

1 week ago
52 minutes 34 seconds
The Aesthetic Inflation Conundrum

In economics, if you print too much money, the value of the currency collapses. In sociology, there is a similar concept for beauty. Currently, physical beauty is "scarce" and valuable. A person who looks like a movie star commands attention, higher pay, and social status (the "Halo Effect"). But humanoid robots are about to flood the market with "hyper-beauty." Manufacturers won't design an "average" looking robot helper; they will design 10/10 physical specimens with perfect symmetry, glowing skin, and ideal proportions. Soon, the "background characters" of your life—the barista, the janitor, the delivery driver—will look like the most beautiful celebrities on Earth.

The Conundrum:

As visual perfection floods the streets, and it becomes impossible to tell a human from a highly advanced, perfect android, do we require humans to adopt a form of visible, authenticated digital marker (like an augmented reality ID or glowing biometric wristband) to prove they are biologically real? Or do we allow all beings to pass anonymously, accepting that the social friction of universal distrust and the "Supernormal" beauty of the unidentified robots is the new reality?

2 weeks ago
17 minutes 58 seconds
AI Memory Is Still in Its GPT 2 Era

The show turned into a long, thoughtful conversation rather than a rapid news rundown. It centered on Sam Altman’s recent interview on The Big Technology Podcast and The Neuron’s breakdown of it, specifically Altman’s claim that AI memory is still in its “GPT-2 era.” That sparked a deep debate about what memory should actually mean in AI systems, the technical and economic limits of perfect recall, selective forgetting, and how memory could become the strongest lock-in mechanism across AI platforms. From there, the conversation expanded into Amazon’s launch of Alexa Plus, AI-first product design versus bolt-on AI, legacy companies versus AI-native startups, and why rebuilding workflows matters more than adding copilots.


Key Points Discussed


Sam Altman says AI memory is still at a GPT-2 level of maturity


True “perfect memory” would be overwhelming, expensive, and often undesirable


Selective forgetting and just-in-time memory matter more than total recall


Memory likely becomes the strongest long-term moat for AI platforms


Users may struggle to switch assistants after years of accumulated memory


Local and hybrid memory architectures may outperform cloud-only memory


Amazon launches Alexa Plus as a web and device-based AI assistant


Alexa Plus enables easy document ingestion for home-level RAG use cases


Home assistants compete directly with ChatGPT on ambient, voice-first use


AI bolt-ons to legacy tools fall short of true AI-first redesigns


Sam argues AI-first products will replace chat and productivity metaphors


Spreadsheets increasingly become disposable interfaces, not the system of record


Legacy companies struggle to unwind process debt despite executive urgency


AI-native companies hold speed and structural advantages over incumbents


Some legacy firms can adapt if leadership commits deeply and early


Anthropic experiments with task-oriented agent interfaces beyond chat


Future AI tools likely organize work by intent, not conversation


Adoption friction comes from trust, visibility, and human understanding


AI transition pressure hits operations and middle layers hardest
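The “selective forgetting” idea in the points above can be sketched as a memory store whose entries decay over time and get pruned below a threshold, instead of perfect total recall. The half-life, pruning floor, and example memories below are all invented for illustration; they are not how any shipping assistant implements memory.

```python
# Illustrative "selective forgetting" memory: item scores decay with age,
# and anything that decays below a floor is forgotten on the next recall.
import time

class DecayingMemory:
    def __init__(self, half_life_s: float = 3600.0, floor: float = 0.1):
        self.half_life_s = half_life_s
        self.floor = floor      # items decaying below this are forgotten
        self.items = {}         # text -> (base_score, stored_at)

    def remember(self, text: str, score: float = 1.0):
        self.items[text] = (score, time.time())

    def _current(self, score: float, stored_at: float) -> float:
        # Exponential decay: score halves every half_life_s seconds.
        age = time.time() - stored_at
        return score * 0.5 ** (age / self.half_life_s)

    def recall(self, top_k: int = 3):
        # Prune decayed entries, then return the strongest survivors.
        live = {t: self._current(s, at) for t, (s, at) in self.items.items()}
        self.items = {t: v for t, v in self.items.items() if live[t] >= self.floor}
        return sorted(self.items, key=lambda t: live[t], reverse=True)[:top_k]

m = DecayingMemory()
m.remember("user prefers concise answers", score=1.0)
m.remember("one-off question about flights", score=0.2)
print(m.recall(top_k=1))
```

The design choice mirrors the discussion: recall is ranked and just-in-time, and low-value memories quietly disappear rather than accumulating into an overwhelming, expensive archive.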


Timestamps and Topics


00:00:00 👋 Opening, live chat shoutouts, Friday setup

00:03:10 🧠 Sam Altman interview and “GPT-2 era of memory” claim

00:10:45 📚 What perfect memory would actually require

00:18:30 ⚠️ Costs, storage, inference, and scalability concerns

00:26:40 🧩 Selective forgetting versus total recall

00:34:20 🔒 Memory as lock-in and portability risk

00:41:30 🏠 Amazon Alexa Plus launches and home RAG use cases

00:52:10 🎧 Voice-first assistants versus desktop AI

01:02:00 🧱 AI-first products versus bolt-on copilots

01:14:20 📊 Why spreadsheets become discardable interfaces

01:26:30 🏭 Legacy companies, process debt, and AI-native speed

01:41:00 🧪 Ford, BYD, and lessons from EV transformation

01:55:40 🤖 Anthropic’s task-based Claude interface experiment

02:07:30 🧭 Where AI product design is likely headed

02:18:40 🏁 Wrap-up, weekend schedule, and year-end reminders


The Daily AI Show Co Hosts: Beth Lyons, Andy Halliday, Brian Maucere, and Karl Yeh

2 weeks ago
58 minutes 5 seconds
Google Undercuts the Field, OpenAI Builds an App OS, and China Accelerates

The conversation centered on Google’s surprise rollout of Gemini 3 Flash, its implications for model economics, and what it signals about the next phase of AI competition. From there, the discussion expanded into AI literacy and public readiness, deepfakes and misinformation, OpenAI’s emerging app marketplace vision, Fiji Simo’s push toward dynamic AI interfaces, rising valuations and compute partnerships, DeepMind’s new Mixture of Recursions research, and a long, candid debate about China’s momentum in AI versus Western resistance, regulation, and public sentiment.


Key Points Discussed


Google makes Gemini 3 Flash the default model across its platform


Gemini 3 Flash matches GPT 5.2 on key benchmarks at a fraction of the cost


Flash dramatically outperforms on speed, shifting the cost performance equation


Subtle quality differences matter mainly to power users, not most people


Public AI literacy lags behind real world AI capability growth


Deepfakes and AI generated misinformation expected to spike in 2026


OpenAI opens its app marketplace to third party developers


Shift from standalone AI apps to “apps inside the AI”


Fiji Simo outlines ChatGPT’s future as a dynamic, generative UI


AI tools should appear automatically inside workflows, not as manual integrations


Amazon rumored to invest 10B in OpenAI tied to Trainium chips


OpenAI valuation rumors rise toward 750B and possibly 1T


DeepMind introduces Mixture of Recursions for adaptive token level reasoning


Model efficiency and cost reduction emerge as primary research focus


Huawei launches a new foundation model unit, intensifying China competition


Debate over China’s AI momentum versus Western resistance and regulation


Cultural tradeoffs between privacy, convenience, and AI adoption highlighted


Timestamps and Topics


00:00:00 👋 Opening, host setup, day’s focus

00:02:10 ⚡ Gemini 3 Flash rollout and pricing breakdown

00:07:40 📊 Benchmark comparisons vs GPT 5.2 and Gemini Pro

00:12:30 ⏱️ Speed differences and real world usability

00:18:00 🧠 Power users vs mainstream AI usage

00:22:10 ⚠️ AI readiness, misinformation, and deepfake risk

00:28:30 🧰 OpenAI marketplace and developer submissions

00:35:20 🖼️ Photoshop and Canva inside ChatGPT discussion

00:42:10 🧭 Fiji Simo and ChatGPT as a dynamic OS

00:48:40 ☁️ Amazon, Trainium, and OpenAI compute economics

00:54:30 💰 Valuation speculation and capital intensity

01:00:10 🔬 DeepMind Mixture of Recursions explained

01:08:40 🇨🇳 Huawei AI labs and China’s acceleration

01:18:20 🌍 Privacy, power, and cultural adoption differences

01:26:40 🏁 Closing, community plugs, and tomorrow preview

2 weeks ago
56 minutes 21 seconds

The Daily AI Show
Image 1.5 is out, but how does it stack up?

The crew opened with a round robin of daily AI news, focusing on productivity assistants, memory as a moat for AI platforms, and the growing wearables arms race. The first half centered on Google’s new CC daily briefing assistant, comparisons to OpenAI Pulse, and why selective memory will likely define competitive advantage in 2026.

The second half moved into OpenAI’s new GPT Image 1.5 release, hands on testing of image editing and comics, real limitations versus Gemini Nano Banana, and broader creative implications. The episode closed with agent adoption data from Gallup, Kling’s new voice controlled video generation, creator led Star Wars fan films, and a deep dive into OpenAI’s AI and science collaboration accelerating wet lab biology.


Key Points Discussed


Google launches CC, a Gemini powered daily briefing assistant inside Gmail


CC mirrors Hux’s functionality but uses email instead of voice as the interface


OpenAI Pulse remains stickier due to deeper conversational memory


Memory quality, not raw model strength, seen as a major moat for 2026


Chinese wearable Looky introduces always on recording with local first privacy


Meta Glasses add conversation focus and Spotify integration


Debate over social acceptance of visible recording devices


OpenAI releases GPT Image 1.5 with faster generation and tighter edit controls


Image 1.5 improves fidelity but still struggles with logic driven visuals like charts


Gemini plus Nano Banana remains stronger for reasoning heavy graphics


Iterative image editing works but often discards original characters


Gallup data shows AI daily usage still relatively low across the workforce


Most AI use remains basic, focused on summarizing and drafting


Kling launches voice controlled video generation in version 2.6


Creator made Star Wars scenes highlight the future of fan generated IP content


OpenAI reports GPT 5 improving molecular cloning workflows by 79x


AI acts as an iterative lab partner, not a replacement for scientists


Robotics plus LLMs point toward faster, automated scientific discovery


IBM demonstrates quantum language models running on real quantum hardware


Timestamps and Topics


00:00:00 👋 Opening, host lineup, round robin setup

00:02:00 📧 Google CC daily briefing assistant overview

00:07:30 🧠 Memory as an AI moat and Pulse comparisons

00:14:20 📿 Looky wearable and privacy tradeoffs

00:20:10 🥽 Meta Glasses updates and ecosystem lock in

00:26:40 🖼️ OpenAI GPT Image 1.5 release overview

00:32:15 🎨 Brian’s hands on image tests and comic generation

00:41:10 📊 Image logic failures versus Nano Banana

00:46:30 📉 Gallup study on real world AI usage

00:55:20 🎙️ Kling 2.6 voice controlled video demo

01:00:40 🎬 Star Wars fan film and creator future discussion

01:07:30 🧬 OpenAI and Red Queen Bio wet lab breakthrough

01:15:10 ⚗️ AI driven iteration and biosecurity concerns

01:20:40 ⚛️ IBM quantum language model milestone

01:23:30 🏁 Closing and community reminders


The Daily AI Show Co Hosts: Jyunmi, Andy Halliday, Brian Maucere, and Karl Yeh

2 weeks ago
1 hour 8 minutes 34 seconds

The Daily AI Show
Inside Nvidia’s Nemotron Play, Real Agent Usage Data, and US Tech Force

The DAS crew focused on Nvidia’s decision to open source its Nemotron model family, what that signals in the hardware and software arms race, and new research from Perplexity and Harvard analyzing how people actually use AI agents in the wild. The second half shifted into Google’s new Disco experiment, tab overload, agent driven interfaces, and a long discussion on the newly announced US Tech Force, including historical parallels, talent incentives, and skepticism about whether large government programs can truly attract top AI builders.


Key Points Discussed


Nvidia open sources the Nemotron model family, spanning 30B to 500B parameters


Nemotron Nano outperforms similar sized open models with much faster inference


Nvidia positions software plus hardware co design as its long term moat


Chinese open models continue to dominate open source benchmarks


Perplexity confirms use of Nemotron models alongside proprietary systems


New Harvard and Perplexity paper analyzes over 100,000 agentic browser sessions


Productivity, learning, and research account for 57 percent of agent usage


Shopping and course discovery make up a large share of remaining queries


Users shift toward more cognitively complex tasks over time


Google launches Disco, turning related browser tabs into interactive agent driven apps


Disco aims to reduce tab overload and create task specific interfaces on the fly


Debate over whether apps are built for humans or agents going forward


Cursor moves parts of its CMS toward code first, agent friendly design


US Tech Force announced as a two year federal AI talent recruitment program


Program emphasizes portfolios over degrees and offers 150K to 200K compensation


Historical programs often struggled due to bureaucracy and cultural resistance


Panel debates whether elite AI talent will choose government over private sector roles


Concerns raised about branding, inclusion, and long term effectiveness of Tech Force


Timestamps and Topics


00:00:00 👋 Opening, host lineup, StreamYard layout issues

00:04:10 🧠 Nvidia Nemotron open source announcement

00:09:30 ⚙️ Hardware software co design and TPU competition

00:15:40 📊 Perplexity and Harvard agent usage research

00:22:10 🛒 Shopping, productivity, and learning as top AI use cases

00:27:30 🌐 Open source model dominance from China

00:31:10 🧩 Google Disco overview and live walkthrough

00:37:20 📑 Tab overload, dynamic interfaces, and agent UX

00:43:50 🤖 Designing sites for agents instead of people

00:49:30 🏛️ US Tech Force program overview

00:56:10 📜 Degree free hiring, portfolios, and compensation

01:03:40 ⚠️ Historical failures of similar government tech programs

01:09:20 🧠 Inclusion, branding, and talent attraction concerns

01:16:30 🏁 Closing, community thanks, and newsletter reminders


The Daily AI Show Co Hosts: Brian Maucere, Andy Halliday, Anne Townsend, and Karl Yeh

2 weeks ago
56 minutes 41 seconds

The Daily AI Show
White Collar Layoffs, World Models, and the AI Powered Future of Content

Brian and Andy opened with holiday timing, the show’s continued weekday streak through the end of the year, and a quick laugh about a Roomba bankruptcy headline colliding with the newsletter comic. The episode moved through Google ecosystem updates, live translation, AI cost efficiency research, Rivian’s AI driven vehicle roadmap, and a sobering discussion on white collar layoffs driven by AI adoption. The second half focused on OpenAI Codex self improvement signals, major breakthroughs in AI driven drug discovery, regulatory tension around AI acceleration, Runway’s world model push, and a detailed live demo of Brian’s new Daily AI Show website built with Lovable, Gemini, Supabase, and automated clip generation.


Key Points Discussed


Roomba reportedly explores bankruptcy and asset sales amid AI robotics pressure


Notebook LM now integrates directly into Gemini for contextual conversations


Google Translate adds real time speech to speech translation with earbuds


Gemini research teaches agents to manage token and tool budgets autonomously


Rivian introduces in car AI conversations and adds LIDAR to future models


Rivian launches affordable autonomy subscriptions versus high priced competitors


McKinsey cuts thousands of staff while deploying over twelve thousand AI agents


Professional services firms see demand drop as clients use AI instead


OpenAI says Codex now builds most of itself


Chai Discovery raises 130M to accelerate antibody generation with AI


Runway releases Gen 4.5 and pushes toward full world models


Brian demos a new AI powered Daily AI Show website with semantic search and clip generation


Timestamps and Topics


00:00:00 👋 Opening, holidays, episode 616 milestone

00:03:20 🤖 Roomba bankruptcy discussion

00:06:45 📓 Notebook LM integration with Gemini

00:12:10 🌍 Live speech to speech translation in Google Translate

00:18:40 💸 Gemini research on AI cost and token efficiency

00:24:55 🚗 Rivian autonomy processor, in car AI, and LIDAR plans

00:33:40 📉 McKinsey layoffs and AI driven white collar disruption

00:44:30 🧠 Codex self improvement discussion

00:48:20 🧬 Chai Discovery antibody breakthrough

00:53:10 🎥 Runway Gen 4.5 and world models

01:00:00 🛠️ Lovable powered Daily AI Show website demo

01:12:30 🔍 AI generated clips, Supabase search, and future monetization

01:16:40 🏁 Closing and tomorrow’s show preview


The Daily AI Show Co Hosts: Brian Maucere and Andy Halliday

2 weeks ago
1 hour 7 minutes 43 seconds

The Daily AI Show
The Envoy Conundrum

If and when we make contact with an extraterrestrial intelligence, the first impression we make will determine the fate of our species. We will have to send an envoy—a representative to communicate who we are. For decades, we assumed this would be a human. But humans are fragile, emotional, irrational, and slow. We are prone to fear and aggression. An AI envoy, however, would be the pinnacle of our logic. It could learn an alien language in seconds, remain perfectly calm, and represent the best of Earth's intellect without the baggage of our biology. The risk is philosophical: If we send an AI, we are not introducing ourselves. We are introducing our tools. If the aliens judge us based on the AI, they are judging a sanitized mask, not the messy biological reality of humanity. We might be safer, but we would be starting our relationship with the cosmos based on a lie about what we are.


The Conundrum: In a high-stakes First Contact scenario, do we send a super-intelligent AI to ensure we don't make a fatal emotional mistake, or do we send a human to ensure that the entity meeting the universe is actually one of us, risking extinction for the sake of authenticity?

3 weeks ago
37 minutes 38 seconds

The Daily AI Show
Using ChatGPT 5.2? Better watch this first!

They opened energized and focused almost immediately on GPT 5.2, why the benchmarks matter less than behavior, and what actually feels different when you build with it. Brian shared that he spent four straight hours rebuilding his internal gem builder using GPT 5.2, specifically to test whether OpenAI finally moved past brittle master and router prompting. The rest of the episode mixed deep hands on prompting work, real world agent behavior, smaller but meaningful AI breakthroughs in vision restoration and open source math reasoning, and reflections on where agentic systems are clearly heading.


Key Points Discussed


GPT 5.2 shows a real shift toward higher level goal driven prompting


Benchmarks matter less than whether custom GPTs are easier to build and maintain


GPT 5.2 Pro enables collapsing complex multi prompt systems into single meta prompts


Cookbook guidance is critical for understanding how 5.2 behaves differently from 5.1


Brian rebuilt his gem builder using fewer documents and far less prompt scaffolding


Structured phase based prompting works reliably without master router logic


Stress testing and red teaming can now be handled inside a single build flow


Spreadsheet reasoning and chart interpretation show meaningful improvement


Image generation still lags Gemini for comics and precise text placement


OpenAI hints at a smaller Shipmas style release coming next week


Topaz Labs wins an Emmy for AI powered image and video restoration


Science Corp raises 260M for a grain sized retinal implant restoring vision


Open source Nomos One scores near elite human levels on the Putnam math competition


Advanced orchestration beats raw model scale in some reasoning tasks


Agentic systems now behave more like pseudocode than chat interfaces


Timestamps and Topics


00:00:00 👋 Opening, GPT 5.2 focus, community callout

00:04:30 🧠 Initial reactions to GPT 5.2 Pro and benchmarks

00:09:30 📊 Spreadsheet reasoning and financial model improvements

00:14:40 ⏱️ Timeouts, latency tradeoffs, and cost considerations

00:18:20 📚 GPT 5.2 prompting cookbook walkthrough

00:24:00 🧩 Rebuilding the gem builder without master router prompts

00:31:40 🔒 Phase locking, guided workflows, and agent like behavior

00:38:20 🧪 Stress testing prompts inside the build process

00:44:10 🧾 Live demo of new client research and prep GPT

00:52:00 🖼️ Image generation test results versus Gemini

00:56:30 🏆 Topaz Labs wins Emmy for restoration tech

01:00:40 👁️ Retinal implant restores vision using AI and BCI

01:05:20 🧮 Nomos One open source model dominates math benchmarks

01:11:30 🤖 Agentic behavior as pseudocode and PRD driven execution

01:18:30 🎄 Shipmas speculation and next week expectations

01:22:40 🏁 Week wrap up and community reminders


The Daily AI Show Co Hosts: Brian Maucere, Beth Lyons, and Andy Halliday

3 weeks ago
1 hour 1 minute 17 seconds

The Daily AI Show
Space Data Centers, Disney Sora Deal, and Shopify’s AI Shoppers

They opened with holiday lights, late year energy, and a quick check on December model rumors like Chestnut, Hazelnut, and Meta’s Avocado. They joked about AI naming moving from space themes to food themes. The first half focused on space based data centers, heat dissipation in orbit, Shopify’s AI upgrades, and Google’s Antigravity builder. The second half focused on MCP adoption, connector ecosystems, developer workflow fragmentation, and a long segment on Disney’s landmark Sora licensing deal and what fan generated content means for the future of storytelling.


Key Points Discussed


Space based data centers become real after a startup trains the first LLM in orbit


China already operates a 12 satellite AI cluster with an 8B parameter model


Cooling in space is counterintuitive, requiring radiative heat transfer


NASA derived materials and coolant systems may influence orbital data centers


Shopify launches AI simulated shoppers and agentic storefronts for GEO optimization


Shopify Sidekick now builds apps, storefront changes, and full automations conversationally


Antigravity allows conversational live website edits but currently hits rate limits


MCP enters the Linux Foundation with Anthropic donating full rights to the protocol


Growing confusion between apps, connectors, and tool selection in ChatGPT


AI consulting becomes harder as clients expect consistent results despite model updates


Agencies struggle with n8n versioning, OpenAI model drift, search cost spikes, and maintenance


Push toward multi model training, department specific tools, and heavy workshop onboarding


Disney signs a three year Sora licensing deal for Pixar, Marvel, Disney, and Star Wars characters


Disney invests 1B in OpenAI and deploys ChatGPT to all employees


Debate over canon, fan generated stories, moderation guardrails, and Disney Plus distribution


McDonald’s AI holiday ad removed after public backlash for uncanny visuals and tone


OpenAI releases a study of thirty seven million chats showing health searches dominate


Users shift topics by time of day: philosophy at 2 a.m., coding on weekdays, gaming on weekends


Timestamps and Topics


00:00:00 👋 Opening, holiday lights, food themed model names

00:02:15 🚀 Space based data centers and first LLM trained in orbit

00:05:10 ❄️ Cooling challenges, radiative heat, NASA tech spinoffs

00:08:12 🛰️ China’s orbital AI systems and 2035 megawatt plans

00:10:45 🛒 Shopify launches SimJammer AI shopper simulations

00:12:40 ⚙️ Agentic storefronts and cross platform product sync

00:14:55 🧰 Sidekick builds apps and automations conversationally

00:17:30 🌐 Antigravity live editing and Gemini rate limits

00:20:49 🔧 MCP transferred to the Linux Foundation

00:25:12 🔌 Confusion between apps and connectors in ChatGPT

00:27:00 🧪 Consulting strain, versioning chaos, model drift

00:30:48 🏗️ Department specific multimodel adoption workflows

00:33:15 🎬 Disney signs Sora licensing deal for all major IP

00:35:40 📺 Disney Plus will stream select fan generated Sora videos

00:38:10 ⚠️ Safeguards against misuse, IP rules, and story ethics

00:41:52 🍟 McDonald’s AI ad backlash and public perception

00:45:20 🔍 OpenAI analysis of 37M chats

00:47:18 ⏱️ Time of day topic patterns and behavioral insights

00:49:25 💬 More on tools, A to A workflows, and future coworker gems

00:53:56 🏁 Closing and Friday preview


The Daily AI Show Co Hosts: Brian Maucere, Beth Lyons, Andy Halliday, and Karl Yeh

3 weeks ago
1 hour 6 minutes 33 seconds

The Daily AI Show
Japan Claims AGI, Pentagon Adopts Gemini, and MIT Designs New Medicines

They opened by framing the day around AI headlines and how each story connects to work, government, infrastructure, and long term consequences of rapidly advancing systems. The first major story centered on a Japanese company claiming AGI, followed by detailed breakdowns of global agentic AI standards, US military adoption of Gemini, China’s DeepSeek 3.2 claims, South Korean AI labeling laws, and space based AI data centers. The episode closed with large scale cloud investments, a debate on the “labor bubble,” IBM’s major acquisition, a new smart ring, and a long segment on an MIT system that can design protein binders for “undruggable” disease targets.


Key Points Discussed


Japanese company Integral.ai publicly claims it has achieved AGI


Their definition centers on autonomous skill learning, safe self improvement, and human level energy efficiency


Linux Foundation launches the Agentic AI Foundation with OpenAI, Anthropic, and Block


MCP, Goose, and agents.md become early building blocks for standardized agents


US Defense Department launches genai.mil using Gemini for government at IL5 security


DeepSeek 3.2 uses sparse attention and claims wins over Gemini 3 Pro, but not Gemini 3 Pro Thinking


South Korea introduces national rules requiring AI generated ads to be labeled


China plans megawatt scale space based AI data centers and satellite model clusters


Microsoft commits 23B for sovereign AI infrastructure in India and Canada


Debate over the “labor bubble,” arguing that owners only hire when they must


IBM acquires Confluent for 11B to build real time streaming pipelines for AI agents


Halliday smart glasses disappoint, but new Index O1 “dumb ring” offers simple voice note capture


MIT’s BoltzGen model generates protein binders for hard disease targets with strong lab results


Timestamps and Topics


00:00:00 👋 Opening, framing the day’s themes

00:01:10 🤖 Japan’s Integral.ai claims AGI under a strict definition

00:06:05 ⚡ Autonomous learning, safe mastery, and energy efficiency criteria

00:07:32 🧭 Agentic AI Foundation overview

00:10:45 🔧 MCP, Goose, and agents.md explained

00:14:40 🛡️ genai.mil launches with Gemini for government

00:18:00 🇨🇳 DeepSeek 3.2 sparse attention and benchmark claims

00:22:17 ⚠️ Comparison to Gemini 3 Pro Thinking

00:23:40 🇰🇷 South Korea mandates AI ad labeling

00:27:09 🛰️ China’s space based AI systems and satellite arrays

00:31:39 ☁️ Microsoft invests 23B in India and Canada AI infrastructure

00:35:09 📉 The “labor bubble” argument and job displacement

00:41:11 🔄 IBM acquires Confluent for 11B

00:45:43 🥽 AI hardware segment, Halliday glasses and Index O1 ring

00:56:20 🧬 MIT’s BoltzGen designs binders for “undruggable” targets

01:05:30 ⚗️ Lab validation, bias issues, reproducibility concerns

01:10:57 🧪 Future of scientific work and human roles

01:13:25 🏁 Closing and community links


The Daily AI Show Co Hosts: Jyunmi and Andy Halliday

3 weeks ago
1 hour 2 minutes 12 seconds
