MLOps.community
Demetrios
490 episodes
4 days ago
Relaxed conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, vibes, etc.)
Technology
How Sierra AI Does Context Engineering
MLOps.community
1 hour 4 minutes 3 seconds
1 month ago

Zack Reneau-Wedeen is the Head of Product at Sierra, leading the development of enterprise-ready AI agents — from Agent Studio 2.0 to the Agent Data Platform — with a focus on richer workflows, persistent memory, and high-quality voice interactions.


How Sierra Does Context Engineering, Zack Reneau-Wedeen // MLOps Podcast #350


Join the Community:

https://go.mlops.community/YTJoinIn

Get the newsletter: https://go.mlops.community/YTNewsletter


// Abstract

Sierra’s Zack Reneau-Wedeen claims we’re building AI all wrong and that “context engineering,” not bigger models, is where the real breakthroughs will come from. In this episode, he and Demetrios Brinkmann unpack why AI behaves more like a moody coworker than traditional software, why testing it with real-world chaos (noise, accents, abuse, even bad mics) matters, and how Sierra’s simulations and model “constellations” aim to fix the industry’s reliability problems. They even argue that decision trees are dead, replaced by goals, guardrails, and speculative execution tricks that make voice AI actually usable. Plus: how Sierra trains grads to become product-engineering hybrids, and why obsessing over customers might be the only way AI agents stop disappointing everyone.
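(For readers who want the flavor of the "goals and guardrails" idea in code: the following is a rough, hypothetical sketch, not Sierra's product or API, and every name in it is made up. It contrasts a hard-coded decision tree with a declarative agent spec that states the outcome and the hard constraints, leaving the model to plan the steps in between.)

```python
# Illustrative sketch only -- not Sierra's API. All names are hypothetical.
from dataclasses import dataclass, field

# Old style: every branch enumerated up front in application code.
def decision_tree(intent: str) -> str:
    if intent == "refund":
        return "ask_for_order_number"
    elif intent == "cancel":
        return "confirm_cancellation"
    else:
        return "escalate_to_human"

# Goals-and-guardrails style: a declarative spec handed to an LLM-backed agent.
@dataclass
class AgentSpec:
    goal: str                                             # outcome the agent should reach
    guardrails: list[str] = field(default_factory=list)   # hard constraints it must not violate

support_agent = AgentSpec(
    goal="Resolve the customer's shipping issue end to end",
    guardrails=[
        "Never promise a refund above $100 without human approval",
        "Verify identity before sharing order details",
        "Hand off to a human after two failed resolution attempts",
    ],
)

# At runtime the spec is rendered into the model's context (prompt)
# rather than being frozen as branches in application code.
prompt = f"Goal: {support_agent.goal}\nGuardrails:\n" + "\n".join(
    f"- {g}" for g in support_agent.guardrails
)
print(prompt)
```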


// Related Links

Website: https://www.zackrw.com/


~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~

Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore

Join our Slack community [https://go.mlops.community/slack]

Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)

Sign up for the next meetup: [https://go.mlops.community/register]

MLOps Swag/Merch: [https://shop.mlops.community/]


Connect with Demetrios on LinkedIn: /dpbrinkm

Connect with Zack on LinkedIn: /zackrw/


Timestamps:

[00:00] Electron cloud vs energy levels

[03:47] Simulation vs red teaming

[06:51] Access control in models

[10:12] Voice vs text simulations

[13:12] Speaker-adaptive turn-taking

[18:26] Accents and model behavior

[23:52] Outcome-based pricing risks

[31:40] AI cross-pollination strategies

[41:26] Ensemble of models explanation

[46:47] Real-time agents vs decision trees

[50:15] Code and no-code mix

[54:04] Goals and guardrails explained

[56:23] Wrap up

[57:31] APX program!
