In this episode, Alex explores the impact of generative AI on writing and pedagogy with Dr. Victoria Livingstone, writer, editor, and author of the recent TIME magazine article, "I Quit Teaching Because of ChatGPT."
Dr. Livingstone discusses her article's wide reception among thinkers and scholars around the world and the implications of generative AI in education, particularly in writing instruction. Together, they explore the challenges of identifying AI-assisted writing, AI literacy, and the evolving role of educators in adapting to and adopting these technologies.
In this episode, Alex explores the intersections of social media, mental health products, consumer trust, and building responsible AI products with Emmanuel Matthews, Product Leader, previously at Spring Health, Course Hero, Google, and DeepMind.
Emmanuel discusses building trust in AI-powered mental health solutions and bringing AI products to market with a focus on user experience, and shares his outlook on the future of AI in connected devices and personal AI assistants.
In this episode, Alex explores the intersections of design, civic engagement, and AI with Sarah Lawrence, Design Director at Design Emporium and Founder of Tallymade.
Sarah discusses driving community engagement through the creation of AI-powered collective art, preserving creativity in the age of AI art, testing usability with her very own "mom test," and leveraging AI for everyday utility, like generating recipes to reduce food waste or mapping errands efficiently.
In this episode, Alex explores AI governance, transparency, and consent in healthcare and the creator economy with Nana Nwachukwu, AI Ethics and Governance Consultant at Saidot. Nana shares her three pillars for AI governance, the challenges of transparency in AI systems, and reveals the most critical question consumers should be asking about AI.
Nana is an accomplished lawyer, knowledge manager, and policy consultant currently specializing in digital governance and AI, with a decade of experience spanning multiple sectors and countries.
In this episode, Alex explores the intersections of ethics, privacy, and AI with Josh Schwartz, CEO and Co-founder of Phaselab. Josh shares the opportunities and risks of operationalizing ethics in a for-profit business, the importance of prioritizing responsible AI in early-stage companies, and responsibility in the context of AI and privacy.
Josh founded Phaselab in 2023 to help companies automate their data privacy programs. Prior to Phaselab, he served as CTO at Chartbeat, leading one of the largest and most influential analytics companies in the world from its early days through its exit. Josh is a machine learning expert by training, and has researched computer vision and AI at MIT CSAIL, Cornell, and the University of Chicago.
In this episode, Alex chats with Sabrina Palme, CEO and Co-founder of Palqee, about language learning, culture, transparency and explainability in AI systems, the power of a diverse founding team, and the future of AI morality. Sabrina also explains her unique hierarchy for data governance.
Sabrina is a certified Data Compliance Officer with broad experience implementing privacy-by-design and security-by-design principles in line with international data protection laws and information security frameworks. She is CEO of Palqee, a UK-based company specializing in Governance, Risk, and Compliance (GRC) and AI Governance automation software, operating across Europe, LatAm, and the US. There, Sabrina is driving the development of an innovative solution designed to detect contextual bias in AI systems and enhance trustworthiness, transparency, and fairness in AI technology.
In this episode, Alex chats with Gerald Carter, CEO and Founder of Destined AI, about the importance of de-biasing data, the power of language, founder perseverance, and what it means to generate diverse, consented data at scale.
Gerald Carter founded Destined AI to help companies detect and mitigate unwanted bias in AI. He started the company after seeing AI mislabel diverse people with derogatory terms, fail to decipher speech by people with southern U.S. accents, and struggle to detect brown skin tones. Through accurate, balanced, and ethically sourced data, Destined AI aims to create a safe and equitable world where AI represents the best that humankind has to offer, not the worst. The company has ethically sourced its data and has worked with over 800 contributors.
In the inaugural episode, Alex and Sharon Zhang, CTO and Co-Founder of Personal.ai, discuss building a consumer AI company, explainability, AI applications for good, and AI's impact on society and individuals.
Sharon Zhang (she/her) is passionate about building AI and NLP products that impact the lives of patients, employees, and everyday people. She is the Co-founder and CTO of Personal.ai, a consumer AI startup on a mission to build private, personal, and trainable AI models for people and brands. Sharon draws on well over a decade of experience in AI and NLP, including leading AI development teams at Nuance, Glint, Kaiser Permanente, and others.
Welcome to The Culture of Machines hosted by Alex de Aranzeta (she/her), the show where responsible AI meets culture.
Together with AI founders, ethicists, and experts, Alex will uncover and explore key questions about responsible AI and its intersections with culture, people, and society.
Follow us on X/Twitter and LinkedIn.