In this episode of AI x DevOps, Rohit sits down with Görkem Ercan, CTO at Jozu, a company building a DevOps platform for AI agents and models. Görkem, a veteran with over two decades of software experience (including contributions to the Eclipse Foundation), explains why MLOps is fundamentally different from traditional, deterministic DevOps, and why that difference has led to extreme pipeline fragmentation.
Here are some of our favourite takeaways:
• Standardization is Key: Why OCI is the recognized standard for packaging AI/ML artifacts, and how the Model Packs project (with ByteDance, Red Hat, and Docker) is defining the artifact structure.
• Open Source Headaches: The critical challenge maintainers face when receiving large amounts of untested, verbose, AI-generated code.
• LLM Economics: Discover why running small, fine-tuned LLMs in-house can be cheaper than relying on large generic providers, and can deliver more predictable, consistent results.
• KitOps Solution: How KitOps creates an abstraction that allows data scientists to focus on training while leveraging existing DevOps platforms for deployment.
Tune in now to understand the standardization movement reshaping the future of AI development!
Join host Rohit (Facets Cloud) in conversation with Sanjeev Ganjihal, Senior Specialist Solutions Architect - Containers at AWS and early Kubernetes expert. They discuss the rapid evolution of AI and DevOps, Kubernetes as the new operating system, generative AI in engineering, and the shifting landscape of roles like DevOps, SRE, and AIOps. Sanjeev shares practical advice on using AI assistants, agentic tools, self-hosted models, and the balancing act between automation, productivity, and upskilling in today’s cloud-native world.
This podcast features a discussion with Nathan Hamiel, Director of Research at Kudelski Security, an expert with 25 years in the cybersecurity space who now focuses specifically on AI security.
The conversation centers on navigating the generative AI revolution with a grounded, security-first perspective, particularly for product developers and the security community.
Ultimately, the podcast serves as a grounding discussion for product engineers on how to build and integrate AI solutions in a secure and responsible manner, emphasizing that AI tools should be used to solve tasks effectively rather than treated as a path to superintelligence.
In this episode, Facets.cloud co-founders Rohit and Anshul dive deep into Model Context Protocols (MCPs), explaining how they evolved from basic chat assistants to standardized tool connectors for AI-driven DevOps. You’ll learn best practices for designing MCP servers, naming conventions that reduce hallucinations, dry-run workflows for safe automation, and insights on when and why to adopt MCPs within your organization.
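For readers who want a concrete picture of what this looks like, here is a minimal sketch of an MCP server using the FastMCP helper from the official Python MCP SDK. The server name, tool names, and the Terraform workflow they wrap are hypothetical illustrations of the practices discussed in the episode (descriptive naming to reduce hallucinated calls, and dry-run-first automation), not code from it.

```python
# Minimal MCP server sketch (assumes the official Python SDK: `pip install mcp`).
# Names below are illustrative, not taken from the episode.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("terraform-helper")  # hypothetical server name


@mcp.tool()
def plan_terraform_changes(workdir: str) -> str:
    """Run `terraform plan` in the given directory and return the plan output.

    Read-only: it never applies changes, so an agent can call it freely.
    """
    result = subprocess.run(
        ["terraform", "plan", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    return result.stdout or result.stderr


@mcp.tool()
def apply_terraform_changes(workdir: str, confirm: bool = False) -> str:
    """Apply the planned changes, but only when confirm=True.

    Defaulting to a refusal keeps the destructive path behind an explicit flag.
    """
    if not confirm:
        return "Refusing to apply: call again with confirm=True after reviewing the plan."
    result = subprocess.run(
        ["terraform", "apply", "-auto-approve", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    return result.stdout or result.stderr


if __name__ == "__main__":
    # Serve over stdio so an MCP-capable assistant can attach to this server.
    mcp.run()
```

Because each tool carries an unambiguous name and docstring, an assistant can discover what it does without guessing, and the apply path stays gated behind an explicit confirmation flag while the dry-run plan remains the default behaviour.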
In the very first episode of the AI x DevOps Podcast, we dive into how AI is actually changing infrastructure, not hypothetically, but line by line.
Rohit Raveendran is joined by Vincent De Smet, DevOps engineer at Handshakes.ai, and together they explore what happens when LLMs start writing Terraform, the difference between deterministic and vibe-coded infra, and why CDK might offer a more AI-friendly future than raw HCL.
They talk about the trade-offs of trust, the future of platform engineering in an AI-powered world, and how inner-sourced guardrails could become the foundation for safe, scalable self-service. And yes, they touch on the scary parts too, like what happens when your AI agent starts doing more than you asked.
If you're wondering what it actually looks like to bring AI into DevOps without losing control, this one’s for you.
Wondering how AI-ready your DevOps is? Take a 2-minute survey here to find out.