Embodied AI 101
Shaoqing Tan
62 episodes
1 day ago
 Stay in the loop on research in AI and physical intelligence.
Technology
All content for Embodied AI 101 is the property of Shaoqing Tan and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by PodJoint in any way.
Episodes (20/62)
Episode 63: Robo-Dopamine: Dense Reward Learning for High-Precision Robot Manipulation
In a recent preprint (Tan et al., *Robo-Dopamine: General Process Reward Modeling for High-Precision Robotic Manipulation*), researchers from Peking University, BAAI, Unive...
1 day ago
51 minutes 22 seconds

Episode 62: AstraNav-Memory: Compressing Visual Context for Lifelong Navigation
Recent work by Botao Ren, Junjun Hu, Xinda Xue and others (Alibaba’s Amap, Tsinghua, Peking) introduces **AstraNav-Memory: Contexts Compression for Long Memory**, a method for giv...
2 days ago
21 minutes 53 seconds

Episode 61: DexWM: World Models for Dexterous Manipulation from Human Videos
Dexterous manipulation – the art of using multi-fingered hands to pick, place, twist, or otherwise manipulate objects – remains a towering challenge in robotics. Even seemingly simp...
1 week ago
34 minutes 45 seconds

Episode 60: Video-Action Models: Generalizing Robot Control with Video Diffusion
In a new preprint titled *“mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs”* (Pai et al., arXiv 25 Dec 2025), Jonas Pai and colleagues (from Mimic Ro...
1 week ago
50 minutes 51 seconds

Episode 59: Calibrated Confidence in Controllable Video Models
In the recent preprint *“World Models That Know When They Don't Know: Controllable Video Generation with Calibrated Uncertainty”* (Mei et al., 2025) ([huggingface.co](https://huggingface.co/papers...
1 week ago
50 minutes 25 seconds

Episode 57: Scaling Up Offline Model-Based RL with Action Chunks (MAC)
In *“Scalable Offline Model-Based RL with Action Chunks”* (Park et al., 2025) – by Kwanyoung Park, Seohong Park, Youngwoon Lee, and Sergey Levine (UC Berkeley and Yonsei Univ.) – the autho...
1 week ago
37 minutes 8 seconds

Episode 56: Emergent Human-to-Robot Transfer in Vision-Language-Action Models
**Simar Kareer, Karl Pertsch, James Darpinian, Judy Hoffman, Danfei Xu, Sergey Levine, Chelsea Finn, Suraj Nair** (Physical Intelligence (PI); Georgia Tech) – *“Emergence of Human to ...
1 week ago
32 minutes 43 seconds

Episode 55: DexScrew: Learning Dexterous Manipulation from “Imperfect” Simulations
*Learning Dexterous Manipulation Skills from Imperfect Simulations* is a recent preprint by Hsieh *et al.* (Dec 2025) from UC Berkeley (Elvis Hsieh*, Wen-Han Hsieh*, Yen-Jen Wa...
2 weeks ago
58 minutes 21 seconds

Episode 54: X-Humanoid: Robotizing Human Videos into Humanoid Videos
A new preprint titled **“X-Humanoid: Robotize Human Videos to Generate Humanoid Videos at Scale”** (Pei Yang, Hai Ci, Yiren Song, Mike Zheng Shou, NUS, Dec 2025) tackles a fundamental data b...
2 weeks ago
32 minutes 39 seconds

Episode 53: Decoupled Q-Chunking: Combining Long-Range Value Propagation with Reactive Policies
In *Decoupled Q-Chunking* (Li, Park, Levine, 2025), the authors tackle a long-standing issue in reinforcement learning: how to propagate value efficiently over far...
2 weeks ago
1 hour 6 minutes 41 seconds

Episode 52: F2D2: Joint Distillation for Fast Likelihood and Sampling in Flow Models
In the recent preprint **“Joint Distillation for Fast Likelihood Evaluation and Sampling in Flow-based Models”** by Xinyue Ai, Yutong He, Albert Gu, Ruslan Salakhutdinov, J...
3 weeks ago
52 minutes 35 seconds

Episode 51: Training-Time Action Conditioning for Efficient Real-Time Chunking
In “Training-Time Action Conditioning for Efficient Real-Time Chunking” (Black et al., 2025), researchers from Physical Intelligence (including Kevin Black, Allen Z. Ren, Michael E...
3 weeks ago
28 minutes 24 seconds

Episode 50: π0.6: Learning from Experience for Vision–Language–Action Robotic Models
In a recent technical report, the Physical Intelligence team (led by Ali Amin, Raichelle Aniceto, Ashwin Balakrishna, Kevin Black, Ken Conley, Grace Connors, and dozens of others) introduces **“π0.6: a VLA That Learns From Experience.”** The key ide...
3 weeks ago
54 minutes 51 seconds

Episode 49: Robotic World Model (RWM): Learning Stable Neural Simulators for Long-Horizon Control
Recent work by Chenhao Li, Andreas Krause, and Marco Hutter (ETH Zurich) introduces **Robotic World Model (RWM)** – a learned, recurrent simulator designed to su...
3 weeks ago
33 minutes 5 seconds

Episode 48: Much Ado About Noising: Dispelling the Myths of Generative Robotic Control
In recent years, robotics and control researchers have widely embraced *generative control policies* – that is, parameterizations of robotic policies using flow or diffusio...
3 weeks ago
55 minutes 22 seconds

Episode 47: ReWiND: Language-Guided Reward Learning for Robot Task Adaptation
Modern robot learning systems crave rich supervision, but collecting demonstrations or hand-designing rewards for every new task is impractical. Imagine wanting a robot to stack dif...
3 weeks ago
38 minutes 16 seconds

Episode 46: Nested Learning: Unifying Architecture and Optimization for Memoryful Models
In “Nested Learning: The Illusion of Deep Learning Architectures” (Behrouz *et al.*, NeurIPS 2025), researchers at Google Research (with Peilin Zhong at Columbia) propose...
3 weeks ago
53 minutes 4 seconds

Episode 45: AsyncVLA: Real-Time Vision-Language-Action through Asynchronous Flow Matching
**Context & Motivation.** Robotics is riding a wave of foundation models and vision-language-action (VLA) controllers that promise generalist “pick-and-place” skills fo...
4 weeks ago
1 hour 15 seconds

Episode 44: GigaWorld-0: World Models as a Data Engine for Embodied AI
**1. What Problem Is This Paper Trying to Solve?** Modern embodied AI — controlling robots and agents via vision and language — is bottlenecked by data. Training vision-language-action (V...
4 weeks ago
24 minutes 30 seconds

Episode 43: AAWR: Using Privileged Training to Learn Active Perception in Robotics
Modern robots often must act under partial observability: their onboard sensors don’t immediately reveal all task-relevant information. For instance, a robot may have to repos...
4 weeks ago
21 minutes 2 seconds
