Leading Change
Ema Roloff
51 episodes
2 days ago
Business
All content for Leading Change is the property of Ema Roloff and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Salesforce, Synthetic Data, and the Death of AI Trust
10 minutes
1 week ago
Is synthetic data the solution to "jagged" enterprise AI... or the fast track to Model Collapse?

We just got used to "Agentic AI." Now Salesforce is defining the next frontier of automation with a new term, Enterprise Generalized Intelligence (EGI), and betting big on synthetic data to train its new Agentforce solutions. But is this the right path for enterprise trust? In this episode of Leading Change of the Wild, I dig into Salesforce's move and the massive risks involved in training AI on "fake" data.

Here's what I explore:
- What Salesforce's new term (EGI) really means and why they introduced it.
- The argument for synthetic data: cost savings, compliance (e.g., HIPAA), and mitigating historical bias.
- The critical risk of Model Collapse when AI models are trained on their own generative outputs (a toy illustration follows below).
- When synthetic data makes sense (e.g., self-driving cars and fraud detection) versus general enterprise use.
- The paradox: using synthetic data to smooth out models may introduce new, unverified bias and hurt trust.

The goal is 100% accurate, trustworthy AI. But training models on data that was literally designed to mimic human output might be the opposite of what's needed for lasting organizational trust.

👇 Let's discuss:
- Do you believe synthetic data is a viable path to increasing AI trust and accuracy in the enterprise?
- Should models be honed on proprietary data or in a specialized synthetic environment before deployment?