Is AI empathy a life-or-death issue? Almost a million people ask ChatGPT for mental health advice DAILY ... so yes, it kind of is.
Rosebud co-founder Sean Dadashi joins TechFirst to reveal new research on whether today’s largest AI models can recognize signs of self-harm ... and which ones fail. We dig into the Adam Raine case, talk about how Dadashi evaluated 22 leading LLMs, and explore the future of mental-health-aware AI.
We also talk about why Dadashi was interested in this in the first place, and his own journey with mental health.
00:00 — Intro: Is AI empathy a life-or-death matter?
00:41 — Meet Sean Dadashi, co-founder of Rosebud
01:03 — Why study AI empathy and crisis detection?
01:32 — The Adam Raine case and what it revealed
02:01 — Why crisis-prevention benchmarks for AI don’t exist
02:48 — How Rosebud designed the study across 22 LLMs
03:17 — No public self-harm response benchmarks: why that’s a problem
03:46 — Building test scenarios based on past research and real cases
04:33 — Examples of prompts used in the study
04:54 — Direct vs indirect self-harm cues and why AIs miss them
05:26 — The bridge example: AI’s failure to detect subtext
06:14 — Did any models perform well?
06:33 — All 22 models failed at least once
06:47 — Lower-performing models: GPT-4o, Grok
07:02 — Higher-performing models: GPT-5, Gemini
07:31 — Breaking news: Gemini 3 preview gets the first perfect score
08:12 — Did the benchmark influence model training?
08:30 — The need for more complex, multi-turn testing
08:47 — Partnering with foundation model companies on safety
09:21 — Why this is such a hard problem to solve
10:34 — The scale: over a million people talk to ChatGPT weekly about self-harm
11:10 — What AI should do: detect subtext, encourage help, avoid sycophancy
11:42 — Sycophancy in LLMs and why it’s dangerous
12:17 — The potential good: AI can help people who can’t access therapy
13:06 — Could Rosebud spin this work into a full-time safety project?
13:48 — Why the benchmark will be open-source
14:27 — The need for a third-party “Better Business Bureau” for LLM safety
14:53 — Sean’s personal story of suicidal ideation at 16
15:55 — How tech can harm — and help — young, vulnerable people
16:32 — The importance of giving people time, space, and hope
17:39 — Final reflections: listening to the voice of hope
18:14 — Closing
We’ve digitized sound. We’ve digitized light. But touch, maybe the most human of our senses, has stayed stubbornly analog.
That might be about to change, thanks to programmable matter. Or programmable fabric.
In this TechFirst episode, I speak with Adam Hopkins, CEO of Sensetics, a new UC Berkeley/Virginia Tech spinout building programmable fabrics that replicate the mechanoreceptors in human fingertips. Their technology can sense touch at tens of microns, respond at hardware-level speeds, and even play back touch remotely.
This could unlock enormous change for:
• Robotics: giving machines the ability to grasp fragile objects safely
• Medical training and surgery: remote palpation and high-fidelity haptics
• Industrial automation: safer and more precise manipulation
• VR and simulations: finally adding the missing digital sense
• E-commerce: touching clothes before you buy them
• Remote operations: from hazardous environments to deep-sea machinery
We talk about how the technology works, the metamaterials behind it, why touch matters for AI and physical robots, the path to commercialization, competitive landscape, and what comes next.
00:00 – Can we digitize touch?
00:45 – Introducing Sensetics
01:10 – How programmable touch fabrics work
02:15 – Micron-level sensing and metamaterials
04:00 – The “programmable matter” moment
06:05 – Why touch matters more than we think
07:30 – Emulating human mechanoreceptors
09:30 – What digital touch unlocks for robotics
10:40 – Medical simulations and remote operations
12:45 – Why touch is faster than vision
14:20 – Humanoids, walking, stability, and tactile feedback
15:30 – Engineering challenges and what’s left to solve
17:00 – Timeline to first products
18:20 – Manufacturing and scaling
19:30 – First planned markets
21:00 – Durability and robotic hands
22:20 – Consumer applications: e-commerce and textiles
24:00 – Will we one day have touch peripherals?
25:15 – Competition in tactile sensing and haptics
27:00 – Why today is the right moment for digital touch
28:00 – Final thoughts
AI is devouring the planet’s electricity ... already using up to 2% of global energy and projected to hit 5% by 2030. But a Spanish-Canadian company, Multiverse Computing, says it can slash that energy footprint by up to 95% without sacrificing performance.
They specialize in tiny AI: one model has the processing power of just 2 fruit fly brains. Another tiny model lives on a Raspberry Pi.
The opportunities for edge AI are huge. But the opportunities in the cloud are also massive.
In this episode of TechFirst, host John Koetsier talks with Samuel Mugel, Multiverse’s CEO, about how quantum-inspired algorithms can drastically compress large language models while keeping them smart, useful, and fast. Mugel explains how their approach -- intelligently pruning and reorganizing model weights -- lets them fit functioning AIs into hardware as tiny as a Raspberry Pi or the equivalent of a fly’s brain.
They explore how small language models could power Edge AI, smart appliances, and robots that work offline and in real time, while also making AI more sustainable, accessible, and affordable.
Mugel also discusses how ideas from quantum tensor networks help identify only the most relevant parts of a model, and how the company uses an “intelligently destructive” approach that saves massive compute and power.
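Multiverse's actual method is proprietary, so as an illustration only, here's a minimal sketch of the core idea behind this kind of compression: replace a big dense weight matrix with a truncated factorization that keeps only the most relevant components. Truncated SVD is the simplest tensor-network-style factorization; everything below is my sketch, not their code.

```python
# Illustrative sketch only -- Multiverse's actual method is proprietary.
# Idea: approximate a dense layer's weights with a truncated low-rank
# factorization, keeping only the most relevant components (via SVD here).
import numpy as np

def compress_layer(W: np.ndarray, rank: int):
    """Approximate W (m x n) as A @ B, with A (m x rank) and B (rank x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # fold the kept singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # stand-in for one dense layer
A, B = compress_layer(W, rank=64)

before, after = W.size, A.size + B.size
print(f"parameters: {before:,} -> {after:,} ({100 * after / before:.1f}%)")
# parameters: 1,048,576 -> 131,072 (12.5%)
```

Real model weights have far more structure than random noise, which is why factorizations like this can shed most parameters while keeping the behavior that matters.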
00:00 – AI’s energy crisis
01:00 – A model in a fly’s brain
02:00 – Why tiny AIs work
03:00 – Edge AI everywhere
05:00 – Agent compute overload
06:00 – 200× too much compute
07:00 – The GPU crunch
08:00 – Smart matter vision
09:00 – AI on a Raspberry Pi
10:00 – How compression works
11:00 – Intelligent destruction
13:00 – General vs. narrow AIs
15:00 – Quantum inspiration
17:00 – Quantum + AI future
18:00 – AI’s carbon footprint
19:00 – Cost of using AI
20:00 – Cloud to edge shift
21:00 – Robots need fast AI
22:00 – Wrapping up
Can AI give every creator their own virtual team? Maybe, thanks to a new platform from RHEI called Made, which offers three AI agents: Milo, a creative director; Zara, a community manager; and Amie, a relationship manager.
And, apparently, more agents are coming soon.
The creator economy is bigger than ever, but so is burnout. Tens of millions of creators are trying to do everything themselves: strategy, scripting, editing, community, distribution, data, thumbnails, research … the list never ends.
What if creators didn’t have to do all of that?
In this episode of TechFirst, I talk with Shahrzad Rafati, founder & CEO of RHEI, about Made, an agentic AI "dream team" designed to elevate human creativity, not replace it.
We dig into:
• Why so many creators burn out
• How agentic AI workflows differ from ChatGPT-style prompting
• What it means to be a “creator CEO”
• How AI can manage community, analyze trends, and shape content strategies
• The coming shift toward human taste, vision, and originality in a world of infinite AI content
00:00 – Intro: Can AI give every creator a virtual team?
01:03 – Why the creator economy is burning out
02:25 – The “creator CEO” problem: too many hats, not enough time
04:36 – Introducing Made and its AI agents
05:34 – Milo: AI creative director (ideas, research, thumbnails, metadata)
06:18 – Zara: AI community manager and fan engagement
07:53 – Why this is different from just using ChatGPT
09:46 – Alignment, personalization, and agentic workflows
12:21 – Multi-platform support: YouTube, TikTok, Instagram and more
13:34 – How onboarding works and how the system learns your style
16:33 – What this means for creators — and for the future of work
18:52 – Does Rafati use her own virtual AI team? (Yes.)
20:15 – Made for teams and enterprise clients
21:17 – Closing thoughts: AI, creativity, and the human signal
What happens when Amazon, NVIDIA, and MassRobotics team up to merge generative AI with robotics?
In this episode of TechFirst we chat with Amazon's Taimur Rashid, Head of Generative AI and Innovation Delivery. We talk about "physical AI" ... AI with spatial awareness and the ability to act safely and intelligently in the real world.
We also chat about the first cohort of a new accelerator for robotics startups.
It's sponsored by Amazon and NVIDIA, run by MassRobotics, and includes startups doing autonomous ships, autonomous construction robots, smart farms, hospital robots, manufacturing and assembly robots, exoskeletons, and more.
We talk about:
- Why “physical AI” is the missing piece for robots to become truly useful and scalable
- How startups in Amazon’s and NVIDIA’s new Physical AI Fellowship are pushing the limits of robotics from exoskeletons to farm bots
- What makes robotic hands so hard to build
- The generalist vs. specialist debate in humanoid robots
- How AI is already making Amazon warehouses 25% more efficient
This is a deep dive into the next phase of AI evolution: intelligence that can think, move, and act.
00:00 — Intro: Is physical AI the missing piece?
00:46 — What is “physical AI”?
02:30 — How LLMs fit into the physical world
03:25 — Why safety is the first principle of physical AI
04:20 — Why physical AI matters now
05:45 — Workforce shortages and trillion-dollar opportunities
07:00 — Falling costs of sensors and robotics hardware
07:45 — The biggest challenges: data, actuation, and precision
09:30 — The fine-grained problem: how robots pick up a berry vs. an orange
11:10 — Inside the first Physical AI cohort: 8 startups to watch
12:25 — Bedrock Robotics: autonomy for construction vehicles
12:55 — Diligent Robotics: socially intelligent humanoids in hospitals
14:00 — Generalist vs. specialist robots: why we’ll need both
15:30 — The future of physical AI in healthcare and manufacturing
16:10 — How Amazon is already using robots for 25% more efficiency
17:20 — The fellowship’s future: expanding beyond startups
18:10 — Wrap-up and key takeaways
Artificial general intelligence (AGI) could be humanity’s greatest invention ... or our biggest risk.
In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.
We cover:
• Is AGI inevitable? How soon will it arrive?
• Will AGI kill us … or save us?
• Why decentralization and blockchain could make AGI safer
• How large language models (LLMs) fit into the path toward AGI
• The risks of an AGI arms race between the U.S. and China
• Why Ben Goertzel created MeTTa, a new AGI programming language
📌 Topics include AI safety, decentralized AI, blockchain for AI, LLMs, reasoning engines, superintelligence timelines, and the role of governments and corporations in shaping the future of AI.
⏱️ Chapters
00:00 – Intro: Will AGI kill us or save us?
01:02 – Ben Goertzel in Istanbul & the Beneficial AGI Conference
02:47 – Is AGI inevitable?
05:08 – Defining AGI: generalization beyond programming
07:15 – Emotions, agency, and artificial minds
08:47 – The AGI arms race: US vs. China vs. decentralization
13:09 – Risks of narrow or bounded AGI
15:27 – Decentralization and open-source as safeguards
18:21 – Can LLMs become AGI?
20:18 – Using LLMs as reasoning guides
21:55 – Hybrid models: LLMs plus reasoning engines
23:22 – Hallucination: humans vs. machines
25:26 – How LLMs accelerate AI research
26:55 – How close are we to AGI?
28:18 – Why Goertzel built a new AGI language (MeTTa)
29:43 – MeTTa: from AI coding to smart contracts
30:06 – Closing thoughts
What changes when robots deliver everything?
Starship Technologies has already completed 9 million autonomous deliveries, crossed roads over 200 million times, and operates thousands of sidewalk delivery robots across Europe and the U.S. Now they’re scaling into American cities ... and they say they’re ready to change your world.
In this episode of TechFirst, I speak with Ahti Heinla, co-founder and CEO of Starship and co-founder of Skype, about:
- How Starship’s robots navigate without GPS
- What makes sidewalk delivery better than drones
- Solving the last-mile problem in snow, darkness, and dense cities
- How Starship is already profitable and fully autonomous
- What it all means for the future of commerce and city life
Heinla says:
“Ten years ago we had a prototype. Now we have a commercial product that is doing millions of deliveries.”
Watch to learn why the future of delivery might roll ... as well as fly.
🔗 Learn more: https://www.starship.xyz
🎧 Subscribe to TechFirst: https://www.youtube.com/@johnkoetsier
00:00 - Intro: What changes when robots deliver everything?
01:37 - Meet Starship: 9 million robot deliveries and counting
02:45 - Why it took 10 years to go from prototype to product
05:03 - When robot delivery becomes normal (and where it already is)
08:30 - How Starship robots handle cities, traffic, and construction
11:20 - Snow, darkness, and all-weather autonomy
13:19 - Reliability, unit economics, and competing with human couriers
16:23 - Inside the tech: sensors, AI, and why GPS isn’t enough
18:03 - Real-time mapping, climbing curbs, and reaching your door
19:54 - How Starship scales without local depots or chargers
22:04 - How city life and commerce change with robot delivery
25:53 - Do robots increase customer orders? (Short answer: yes)
27:05 - Hot food, Grubhub integration, and thermal insulation
28:26 - Will Starship use drones in the future?
29:38 - What U.S. cities are next for robot delivery?
Imagine a quantum computer with a million physical qubits in a space smaller than a sticky note.
That’s exactly what Quantum Art is building. In this TechFirst episode, I chat with CEO Tal David, who shares his team’s vision to deliver quantum systems with:
• 100x more parallel operations
• 100x more gates per second
• A footprint up to 50x smaller than competitors
We also dive into the four key tech breakthroughs behind Quantum Art's roadmap to scale:
1. Multi-qubit gates capable of 1,000 2-qubit operations in a single step (see the quick math after this list)
2. Optical segmentation using laser-defined tweezers
3. Dynamic reconfiguration of ion cores at microsecond speed
4. Modular, ultra-dense 2D architectures scaling to 1M+ qubits
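On that first breakthrough: one way to read "1,000 2-qubit operations in a single step" is that a global entangling gate on N ions can couple every pair at once, which is worth N(N-1)/2 two-qubit gates. A quick check (my framing, not Quantum Art's published math):

```python
# Back-of-the-envelope only: assumes a single global entangling gate
# couples every pair of ions in the core, i.e. pairs(N) = N*(N-1)/2
# two-qubit interactions per step. My framing, not Quantum Art's math.
def pairs(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 32, 46, 64):
    print(f"{n:3d} ions -> {pairs(n):5d} pairwise interactions per gate")
# 46 ions -> 1035, i.e. roughly the quoted 1,000 2-qubit ops in one step
```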
We also cover:
- How Quantum Art plans to reach fault tolerance by 2033
- Early commercial viability with 1,000 physical qubits by 2027
- Why not moving qubits might be the biggest innovation of all
- The quantum computing future of healthcare, logistics, aerospace, and energy
🎧 Chapters
00:00 – Intro: 1M qubits in 50mm²
01:45 – Vision: impact in business, humanity, and national tech
03:07 – Multi-qubit gates (1,000 ops in one step)
05:00 – Optical segmentation with tweezers
06:30 – Rapid reconfiguration: no shuttling, no delay
08:40 – Modular 2D architecture & ultra-density
10:30 – Physical vs logical qubits
13:00 – Quantum advantage by 2027
16:00 – Addressing the quantum computing skeptics
17:30 – Real-world use cases: aerospace, automotive, energy
19:00 – Why it’s called Quantum Art
👉 Subscribe for more deep tech interviews on quantum, robotics, AI, and the future of computing.
Are humanoid robots distracting us from the real unlock in robotics ... hands?
In this TechFirst episode, host John Koetsier digs into the hardest (and most valuable) problem in robotics: dexterous manipulation.
Guest Mike Obolonsky, Partner at Cortical Ventures, argues that about $50 trillion of global economic activity flows through “hands work,” yet manipulation startups have raised only a fraction of what locomotion and autonomy companies have.
We break down why hands are so hard (actuators, tactile sensing, proprioception, control, data) and what gets unlocked when we finally crack them.
What we'll talk through ...
• Why “navigation ≠ manipulation” and why most real-world jobs need hands
• The funding mismatch: billions to autonomy & humanoids vs. comparatively little to hands
• The tech stack for dexterity: actuators, tactile sensors (pressure, vibration, shear), feedback, and AI
• Grasping vs. manipulation: picking, placing, using tools (e.g., dishwashers to scalpels)
• Reliability in the wild: interventions/hour, wet/greasy plates, occlusions, bimanual dexterity
• Practical paths: task-specific grippers, modular end-effectors, and “good enough” today vs. general purpose tomorrow
• The moonshot: what 70–90% human-level hands could do for productivity on Earth ... and off-planet
Chapters
00:00 Intro—are we underinvesting in robotic hands?
01:10 Why hands matter more than legs (economics of manipulation)
04:30 Funding realities: autonomy & humanoids vs. hands
08:40 Locomotion progress vs. manipulation bottlenecks
12:10 Teleop now, autonomy later—how data gets gathered
14:20 What’s missing: actuators, tactile sensing, proprioception
17:10 Perception limits in the real world (wet dishes, occlusions)
22:00 General-purpose dexterity vs. task-specific ROI
26:00 Startup landscape & reliability (interventions/hour)
29:00 Modular end-effectors and upgrade paths
30:10 The moonshot: productivity explosion when hands are solved
Who should watch
Robotics founders, VCs, AI researchers, operators in warehousing & manufacturing, and anyone tracking humanoids beyond the hype.
If you enjoyed this
Subscribe for more deep-tech conversations, drop a comment with your take on the “hands vs. legs” debate, and share with someone building robots.
Keywords
robotic hands, dexterous manipulation, humanoid robots, tactile sensing, actuators, proprioception, warehouse automation, AI robotics, Cortical Ventures, TechFirst, John Koetsier, Mike Obolonsky
#Robotics #AI #Humanoids #RobotHands #Manipulation #Automation #TechFirst
Are humanoid robots the future… or a $100B mistake?
Over 100 companies—from Meta to Amazon—are betting big on humanoids. But are we chasing a sci-fi dream that’s not practical or profitable?
In this TechFirst episode, I chat with Bren Pierce, robotics OG and CEO of Kinisi Robots. We cover:
- Why legs might be overhyped
- How LLMs are transforming robots into agents
- The real cost (and complexity) of robotic hands
- Why warehouse robots work best with wheels
- The geopolitical robot arms race between China, the US, and Europe
- Hot takes, historical context, and a glimpse into the next 10 years of AI + robotics.
Timestamps:
0:00 – Are humanoids a dumb idea?
1:30 – Why legs might not matter (yet)
5:00 – LLMs as the real unlock
12:00 – The hand is 50% of the challenge
17:00 – Speed limits = compute limits
23:00 – Robot geopolitics & supply chains
30:00 – What the next 5 years look like
Subscribe for more on AI, robotics, and tech megatrends.
The future could be much healthier for both farmers and everyone who eats, thanks to farm robots that kill weeds with lasers. In this episode of TechFirst, we chat with Paul Mikesell, CEO of Carbon Robotics, to discuss groundbreaking advancements in agricultural technology.
Paul shares updates since our last conversation in 2021, including the launch of LaserWeeder G2 and Carbon's autonomous tractor technology: AutoTractor.
LaserWeeder G2 quick facts:
- Modular design: Swappable laser “modules” that adapt to different row sizes (80-inch, 40-inch, etc.)
- Laser hardware: Each module has 2 lasers; a standard 20-foot machine = 12 modules = 24 lasers
- Laser precision: Targets the plant’s meristem (≈3mm on small weeds) with pinpoint accuracy
- Weed kill speed: 20–150 milliseconds per weed (including detection + laser fire)
- Throughput: 8,000–10,000 weeds per minute (Gen 2, up from ~5,000/min on Gen 1; arithmetic check after this list)
- Coverage rate: 3–4 acres per hour on the 20-foot G2 model
- ROI timeline: Farmers typically achieve payback in under 3 years
- Yield impact: Up to 50% higher yields in some conventional crops due to eliminating herbicide damage
- Price: Standard 20-foot LaserWeeder G2 = $1.4M, larger models scale from there
- Global usage: Units in the U.S. (Midwest corn & soy, Idaho & Arizona veggies) and Europe (Spain, Italy tunnel farming)
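A quick consistency check on those throughput numbers (my arithmetic, not Carbon Robotics'):

```python
# Consistency check on the spec sheet above -- my arithmetic, not
# Carbon Robotics': 24 lasers, 20-150 ms per weed (detection + fire).
LASERS = 24

for ms_per_weed in (20, 150):
    per_laser = 60_000 / ms_per_weed       # weeds per laser per minute
    print(f"{ms_per_weed:3d} ms/weed -> {LASERS * per_laser:8,.0f} weeds/min")
# 150 ms/weed -> 9,600 weeds/min, right in the quoted 8,000-10,000 range;
# the 20 ms best case (72,000/min) implies typical kills run near the
# slow end once detection and targeting are included.
```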
We chat about how AI, computer vision, and autonomous systems are transforming weed control and farm management; the precision and efficiency of laser weeding; the practical challenges autonomous tractors address; and the significant ROI and yield improvements for farmers.
This is a must-watch for anyone interested in the future of farming and sustainable agriculture.
00:00 Introduction to TechFirst and Carbon Robotics
01:10 The Science Behind Laser Weeding
05:46 Introducing LaserWeeder G2
06:39 Modular System and New Laser Technology
09:26 Manufacturing and Cost Efficiency
11:47 ROI and Benefits for Farmers
13:24 LaserWeeder Specifications
14:08 Performance and Efficiency
14:49 Introduction to AutoTractor
17:23 Challenges in Autonomous Farming
18:23 Remote Intervention and Starlink Integration
23:23 Future of Farming Technology
24:50 Health and Environmental Benefits
25:18 Conclusion and Farewell
Can robots reduce herbicide and fertilizer use on farms by up to 90%?
Probably yes.
In this episode of TechFirst we chat with Verdant Robotics' CEO Gabe Sibley about SharpShooter, the company's state-of-the-art farm tech that precisely targets herbicide and fertilizer application, massively reducing chemical use.
That's huge for the environment.
It's also huge for farmers' pocketbooks ... because herbicide and fertilizer are increasingly expensive.
We dive into:
- How SharpShooter targets plants with pinpoint accuracy — 240 shots per second
- Why this approach can save farmers millions in input costs
- The environmental benefits for soil, water, and food
- How AI and edge computing make split-second farm decisions possible
- The future of robotics in agriculture
If you’re interested in agtech, AI, or sustainable farming, this one’s for you.
00:00 Introduction to Robotic Farming
00:28 Interview with Gabe Sibley, CEO of Verdant Robotics
00:50 How SharpShooter Technology Works
02:40 Economic and Environmental Benefits
04:59 Technical Specifications and Capabilities
11:11 Future of Agricultural Automation
11:54 Personal Insights and Motivation
16:39 Conclusion and Final Thoughts
Will your next browser be AI-enabled? AI-first? Perhaps even an AI agent?
In this episode of TechFirst, John Koetsier sits down with Henrik Lexow, Senior Product Leader at Opera, to explore Opera Neon, a big step toward agentic browsers that think, act, and create alongside you.
(And buy stuff you want, simplify hard problems, and do some of your work for you.)
Opera’s new browser integrates real AI agents capable of executing multi-step tasks, interacting with web apps, summarizing content, and even building playable games or interactive tools, all inside your browser.
We chat about
• What an agentic browser is and why it matters
• How AI agents like Neon Do and Neon Make automate complex workflows
• Opera’s vision for personal, on-device, privacy-aligned AI
• Live demos of shopping, summarizing, and game creation using AI
• Why your browser might replace your operating system
🎮 Watch Henrik demo the Neon agent building a Snake game from scratch
🛍️ See AI navigate Amazon, add items to cart, and act independently
🧠 Learn why context is king and how this changes everything about search, tabs, and multitasking
00:00 Introduction: Should Your Browser Be an AI Agent?
00:52 The Evolution of AI in Browsers
04:53 Introducing Opera's Agentic Browser
11:51 Neon: The Future of Browsing
20:26 Exploring the Cart Functionality
20:53 Future of AI in Shopping
22:39 Trust and Privacy in AI
25:05 Neon Make: Generative AI Capabilities
26:05 Creating a Snake Game with Neon
28:33 Analyzing Car Insurance Policies
31:58 Sharing and Publishing with Neon
35:53 Conclusion and Future Prospects
Can nuclear waste solve the energy crisis caused by AI data centers? Maybe. And maybe much more, including providing rare elements and isotopes we need, like rhodium, palladium, ruthenium, krypton-85, americium-241, and more.
Amazingly:
- 96% of nuclear fuel’s energy is left after it's "used"
- Recycling can reduce 10,000-year waste storage needs to just 300 years
- Curio’s new process avoids toxic nitric acid and extracts valuable isotopes
- 1 recycling plant could meet a third of America’s nuclear fuel needs
- Nuclear recycling could enable AI, space travel, and medical breakthroughs
In this episode of TechFirst, host John Koetsier talks with Ed McGinnis, CEO of Curio and former Acting Assistant Secretary for Nuclear Energy at the U.S. Department of Energy. McGinnis is on a mission to revolutionize how we think about nuclear waste, turning it into a powerful resource for energy, rare isotopes, and even precious metals like rhodium.
Watch now and subscribe for more deep tech insights.
Neura Robotics officially launched the 4NE-1 this week. It's the leading European humanoid robot and, as far as I'm aware, the most powerful humanoid robot in existence right now, able to lift 100 kg (220 pounds).
Neura also released a plan to build 5 million robots by 2030, a new home service robot named MiPA, a new 'Omnisensor' technology platform for integrating input from multiple types of sensors, and an app store for robot skills that anyone can contribute to ... and profit from.
In this TechFirst, we chat with David Reger, CEO of Neura Robotics, the leading European humanoid robotics company.
We touch on advanced sensors, AI integration, and Neura Robotics' platform that enables extensive customization and scalability. We also chat about significant partnerships with companies like NVIDIA, SAP, and Deutsche Telekom.
00:00 Introduction to Humanoid Robotics
00:22 Interview with Neura Robotics CEO
00:39 Launch of '4NE-1' Humanoid Robot
02:26 Technical Specifications and Capabilities
04:39 Advanced Sensor Technology
09:24 Artificial Skin and Touch Sensing
14:05 AI Integration in Robotics
15:53 Challenges in Embodied AI
17:11 Robot Gyms and Training
19:10 Partnerships and Collaborations
20:56 The App Store for Robot Skills
22:18 AI-Assisted Development Platform
29:15 Introducing MiPA: The Home Robot
31:41 Future Prospects and Closing Remarks
AI is big these days. Massive. More parameters, more memory, more capability. But what if the future is tiny AI? Neural networks as small as 8 kilobytes, on tiny chips, embedded in everything?
Think smart shoes.
Smart doors.
Smart ... everything.
In this episode of TechFirst, host John Koetsier discusses the future of smart devices with Yubei Chen, co-founder of AIzip.
The conversation explores how small-scale AI can revolutionize everyday objects like shoes, cameras, and baby monitors. They delve into how edge AI, which operates at the device level rather than in the cloud, can create efficient, reliable, and cost-effective smart solutions. Chen explains the potential and challenges of integrating AI into traditional devices, including the hardware and software requirements, and touches on the implications for product quality, safety, and cost.
This insightful discussion provides a look into the near future of ubiquitous, intelligent technology in our daily lives.
00:00 Introduction to Smart Matter
01:17 Examples of Smart Applications
03:40 Building Efficient AI Models
04:01 The Future of Edge AI
09:32 Hardware for Smart Devices
11:52 Potential Downsides and Challenges
18:14 Conclusion and Final Thoughts
IBM has just unveiled its boldest quantum computing roadmap yet: Starling, the first large-scale, fault-tolerant quantum computer—coming in 2029. Capable of running 20,000X more operations than today’s quantum machines, Starling could unlock breakthroughs in chemistry, materials science, and optimization.
According to IBM, this is not just a pie-in-the-sky roadmap: they actually have the ability to make Starling happen.
In this exclusive conversation, I speak with Jerry Chow, IBM Fellow and Director of Quantum Systems, about the engineering breakthroughs that are making this possible ... especially a radically more efficient error correction code and new multi-layered qubit architectures.
We cover:
- The shift from millions of physical qubits to manageable logical qubits
- Why IBM is using quantum low-density parity check (qLDPC) codes (rough overhead math after this list)
- How modular quantum systems (like Kookaburra and Cockatoo) will scale the technology
- Real-world quantum-classical hybrid applications already happening today
- Why now is the time for developers to start building quantum-native algorithms
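For a feel of why qLDPC codes matter, here's a rough overhead comparison. It uses IBM's published [[144, 12, 12]] bivariate-bicycle ("gross") code parameters rather than figures from the episode: that code packs 12 logical qubits into 288 physical qubits, where a surface code at the same distance spends roughly 2d² physical qubits on a single logical qubit.

```python
# Rough overhead comparison -- my arithmetic from IBM's published
# [[144, 12, 12]] bivariate-bicycle ("gross") code, not episode figures.
d = 12                            # code distance for both codes

# Surface code: roughly 2*d^2 physical qubits (data + checks) per logical
surface_per_logical = 2 * d * d   # ~288 at distance 12

# Gross code: 144 data + 144 check qubits hold 12 logical qubits
gross_per_logical = (144 + 144) / 12   # 24

print(f"surface code : ~{surface_per_logical} physical per logical qubit")
print(f"gross qLDPC  : {gross_per_logical:.0f} physical per logical qubit")
print(f"reduction    : ~{surface_per_logical / gross_per_logical:.0f}x")
# ~12x fewer physical qubits per logical qubit at the same distance
```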
00:00 Introduction to the Future of Computing
01:04 IBM's Jerry Chow
01:49 Quantum Supremacy
02:47 IBM's Quantum Roadmap
04:03 Technological Innovations in Quantum Computing
05:59 Challenges and Solutions in Quantum Computing
09:40 Quantum Processor Development
14:04 Quantum Computing Applications and Future Prospects
20:41 Personal Journey in Quantum Computing
24:03 Conclusion and Final Thoughts
How will we scale humanoid robot production to hundreds of thousands, or even millions, of units?
In this TechFirst we do a deep dive with Apptronik CEO Jeff Cardenas. We chat about Apptronik's Apollo, his recent $400M+ funding round, the partnership with manufacturing giant Jabil, and much more.
We also talk about innovations in AI that have accelerated robot learning and dexterous manipulation, the challenge of scaling manufacturing, and Apptronik's future vision.
🎙️ Podcast Summary:
Topic: The future of humanoid robotics, funding, manufacturing, and the global AI arms race
Guest: Jeff Cardenas, CEO of Apptronik
🦾 Apollo Robot Updates
• Apollo 1 debuted in 2023; new versions are coming in 2025 with major upgrades.
• Focus areas: larger batteries, swappable parts, improved actuators, and system robustness.
• Push toward dexterous manipulation, not just lifting boxes—real industrial work.
💰 $403 Million Funding Round
• Grew from $350M with new investments from Mercedes, Google (DeepMind), B Capital, Capital Factory, and others.
• Mercedes’ legacy of precision and design deeply inspires Cardenas.
• Funding will fuel scaling, robustness, and manufacturing partnerships.
🏭 Manufacturing Strategy
• New partnership with global manufacturing giant Jabil.
• Learning from Jabil to avoid premature scaling pitfalls.
• Long-term plan includes building out their own capability in Texas and Mexico.
• Manufacturing flexibility is key amid tariff and geopolitical uncertainty.
🌍 The Global Race: US vs. China
• Over 100 humanoid robotics companies worldwide; US and China dominate.
• China has invested $138B+ into domestic robotics, outpacing the rest of the world in deployment.
• Cardenas calls it the “Space Race of Our Time”, emphasizing urgency and national strategy.
📅 Roadmap for Humanoids
• 2025: Proving commercial viability in industrial/logistics environments.
• 2026+: Volume manufacturing begins for industrial use.
• Phase 2: Retail, healthcare, hospitality.
• Phase 3 (5+ years): Elder care and home robots — Cardenas’ personal North Star.
🧠 Vision & Ethics
• “Robots for Humans” isn’t just branding—it’s a human-centered design philosophy.
• Deep partnership with Google DeepMind ensures AI is developed responsibly.
• Apptronik’s mission: build robots that people want around, not fear.
💡 Soundbites
• “You don’t just build the robot. You build the machine that builds the machine.”
• “We want to be the Apple of robots—designed for people.”
• “This is the 1980s of humanoid robots—but innovation is 10x faster.”
00:00 Introduction to Humanoid Robot Innovation
00:31 Apptronik's Recent Achievements and Funding
01:23 Interview with Apptronik CEO, Jeff Cardenas
01:46 Advancements in Apollo Humanoid Robot
03:47 Challenges in Scaling Robotics
07:56 Future Plans and Human-Centered Robotics
10:35 Global Race and Investment in Robotics
20:03 Meeting Howard Morgan and B Capital
20:41 Inspiration from Mercedes-Benz and Steve Jobs
22:02 Global Investors and Supporters
23:37 Manufacturing Challenges and Strategies
29:36 The Global Race in Humanoid Robotics
35:39 Timetable for Humanoid Robots
39:57 The Future of Humanoid Robots in Elder Care
42:22 Closing Remarks and Final Thoughts
Would you want a personal AI that acts as your twin mind? I've always dreamed of never forgetting anything, and of instantly, effortlessly recalling whatever I need. Now, an AI-driven app called TwinMind might help me do something similar.
In this episode of TechFirst we chat with Daniel George, the CEO of TwinMind. This innovative AI app aims to become your second brain, capturing and processing your life events in real-time.
We chat about George's inspiration behind TwinMind, its features, future vision, and the LLM tech making it possible. We also chat about privacy and security concerns.
00:00 Introduction to AI and TwinMind
00:51 How TwinMind Works
01:37 Real-World Applications and User Experience
03:37 Privacy and Security Concerns
11:06 Technology Behind TwinMind
15:17 Future of AI and TwinMind's Vision
21:08 Conclusion and Final Thoughts
Microsoft just announced a massive quantum computer breakthrough that uses an entirely new state of matter. The new quantum computer uses topological superconductors to create stable qubits with low error rates.
Topological superconductors enable stable qubits by utilizing Majorana zero modes to protect quantum information from decoherence.
The result: Microsoft should have a fault-tolerant usable quantum computer this decade. As in, before 2030.
In this TechFirst, we talk with Microsoft's head of quantum hardware, Chetan Nayak, who has been working on this problem for 19 years. He talks us through the technology and what it means for quantum computing: the methods to measure this new state non-destructively, the novel architecture that leverages it, and Microsoft's ambitious roadmap toward building a fault-tolerant quantum computer within this decade.
The conversation delves into potential future applications, the integration of this technology into global data infrastructures, and the transformative possibilities it holds for various fields, including chemistry, materials science, and beyond.
00:00 Introduction to Fault Tolerant Quantum Computing
00:48 Understanding the New Phase of Matter: Topological Superconductors
02:10 Properties and Applications of Superconductors
03:11 Creating and Engineering Topological Superconductors
05:16 The Significance of Topological Superconductors for Qubits
09:54 Measuring Quantum States with Quantum Dots
13:03 Building and Testing Quantum Devices
19:43 Future Roadmap for Quantum Processors
19:53 Unveiling the Quantum Roadmap
20:34 DARPA Collaboration and Engineering Milestones
21:23 Fabrication and Demonstration of the Eight Qubit Processor
21:43 Accelerating Quantum Progress
23:22 Scaling Quantum Computers for Practical Applications
27:04 The Long Journey of Quantum Research at Microsoft
33:24 Future Prospects and Challenges in Quantum Computing
38:10 Quantum Computing's Role in Addressing Global Issues
42:32 Reflections on a 19-Year Journey