The HCL Review Podcast
HCI Podcast Network
100 episodes
19 hours ago
Want to listen to your favorite HCL Review article on the go?! We’ve got you covered! Catch all of your favorites right here in your podcast feed!
Business
AI Shaming in Organizations: When Technology Adoption Threatens Professional Identity, by Jonathan H. Westover PhD
The HCL Review Podcast
4 minutes
3 days ago
Abstract: Recent field-experimental evidence reveals that workers systematically reduce their reliance on artificial intelligence recommendations when that usage is visible to evaluators, even at measurable performance cost. This phenomenon, termed "AI shaming," reflects emerging workplace norms in which heavy AI adoption signals a lack of confidence, competence, or independent judgment. Drawing on labor economics, organizational behavior, and technology adoption research, this article examines how image concerns shape AI integration in contemporary organizations. Analysis shows that workers fear visible AI reliance conveys weak independent judgment, a trait increasingly valued in AI-assisted work, leading to systematic under-utilization of algorithmic recommendations. The performance penalty is substantial: accuracy declines by approximately 3.4% when AI use becomes observable, and roughly one in four potentially successful human-AI collaborations is lost to visibility concerns. These effects persist despite explicit performance incentives, reassurances about worker quality, and clear communication that evaluators assess only accuracy on identical AI-assisted tasks. The article synthesizes evidence on organizational responses, including transparency recalibration, distributed evaluation structures, and purpose-driven culture shifts, while highlighting why AI stigma proves particularly resistant to conventional interventions. The findings underscore that realizing AI's productivity promise requires not only better algorithms but also a fundamental rethinking of how organizations frame, monitor, and reward technology adoption.