
In this episode of The Private AI Lab, Johan sits down with Frank Denneman to explore the past, present, and future of VMware’s Private AI portfolio.
This conversation goes beyond AI buzzwords and marketing fluff. Johan and Frank dig into the real infrastructure and resource management challenges that emerge when AI workloads enter enterprise environments. GPUs, scheduling, isolation, and platform design all take center stage, viewed through the lens of real-world VMware deployments.
If you are an infrastructure architect, platform engineer, or IT decision-maker building AI behind the firewall, this episode offers grounded insight into what actually matters.
🔍 What you’ll learn in this episode
How VMware’s Private AI strategy has evolved over time
Why AI workloads fundamentally change infrastructure assumptions
The importance of resource management for GPU-backed workloads
Key architectural trade-offs when running AI on-prem
How to think about the future of enterprise AI platforms
🎧 Listen & Subscribe
For more experiments, insights, and behind-the-firewall AI discussions, visit johan.ml.
Experiment complete. Until the next one — stay curious.