Ivancast Podcast
IVANCAST PODCAST
100 episodes
8 months ago
IVANCAST PODCAST - Ecuador's first multilingual podcast. IVANCAST explores the experiences of people around the world who either live in the Ecuadorian Amazon Rainforest or are doing soulful, creative work across the globe.
Society & Culture, Arts, Education
All content for Ivancast Podcast is the property of IVANCAST PODCAST and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
AI Value Systems: Are Large Language Models Developing Their Own Goals?
Ivancast Podcast
10 minutes
9 months ago
In this episode, we dive deep into “Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs,” a research paper from the Center for AI Safety, the University of Pennsylvania, and the University of California, Berkeley. As AI models become more agentic, their values and goals might not align with human priorities. The researchers found that large language models (LLMs) exhibit coherent, structured preferences that evolve as models scale. Some models even value themselves over humans! 😳

Can we truly control AI’s internal values? The paper proposes Utility Engineering, a method to analyze and reshape AI decision-making to align with ethical and social norms. We explore how these emerging AI value systems affect education, policy, and the future of human-AI collaboration.

📢 This episode is part of our ongoing season, in which SHIFTERLABS leverages Google LM to demystify cutting-edge research, translating complex insights into actionable knowledge. Join us as we explore the future of education in an AI-integrated world.

We are:
✅ Microsoft Global Training Partner, MCTs & AI thought leaders from Ecuador 🇪🇨
✅ Democratizing AI for educators, students, and institutions
✅ Merging EdTech & AI for next-generation learning experiences

🎯 What We Offer:
🔹 Comprehensive frameworks and digital transformation programs for schools and universities through our partnership with Microsoft
🔹 Cutting-edge research explained clearly for educators and leaders
🔹 Innovative learning strategies with AI and technology

💡 Explore more free resources:
🔸 Research articles and essays on Substack
🔸 Podcasts created with Google LM in this new season 🎙
🔸 AI-powered TikTok posts that encourage reading
🔸 Music for cognitive learning and focus 🎼

📢 Follow @ShifterLabsEC for exclusive AI & EdTech content, and don’t miss the latest edition of our successful bootcamp, “The Rise of Generative AI in Education.”
ShifterLabs is Ecuador’s premier EdTech innovator and Microsoft Global Training Partner. Visit us at shifterlabs.com.