When your AI agent books a rental car, it needs your driver's license, your credit card, access to your calendar and permission to message your contacts, creating what Meredith Whittaker calls a "fundamental backdoor" that threatens apps like Signal.
At ADAPT ADVANCE 2025, Signal Foundation President and AI Now Institute co-founder Meredith Whittaker joined Dr Abeba Birhane for a fireside chat dissecting why "bigger is better" AI is a claim that serves hyperscaler monopolies, not one supported by evidence.
They also cover how AI companions weaponise manipulation psychology documented since the 1960s ELIZA chatbot against minors, why "open source AI" became marketing arbitrage exploiting software-community goodwill, and what sovereign AI actually requires beyond anxiety signifiers: democratic governance, trusted local data, and an answer to "who owns the deployment infrastructure?"
THINGS WE SPOKE ABOUT
* "Bigger is better" AI myth protects hyperscaler monopolies, not users
* Agentic AI demands sweeping permissions, creating an existential privacy backdoor
* AI companions weaponise known psychological manipulation tactics against vulnerable minors
* "Open source AI" exploits software-community goodwill without delivering open source's benefits
* Sovereign AI requires democratic governance, not just geopolitical anxiety signalling
GUEST DETAILS
Meredith Whittaker is President of the Signal Foundation and co-founder of the AI Now Institute, and one of the most trusted voices in AI ethics, transparency and accountability. Her decade of work has profoundly shaped ethical AI frameworks, carrying impact from academia into industry.
At Google, Meredith was a core organizer of the 2018 Google Walkout, in which over 20,000 employees protested the company's military AI work (Project Maven), surveillance, and its handling of sexual misconduct. That pressure contributed to Google letting its military contract lapse and to the departure of implicated executives.
As AI Now Institute co-founder, her research cuts through AI hype, grounding discussion in what truly matters: power concentration, labour exploitation in AI pipelines, and the protection of fundamental rights, including privacy and the rule of law.
Her work exposes corporate capture, debunks "bigger is better" myths, reveals sustainability costs, and provides foundational research on what "open source" really means in AI.
Meredith has testified before the US Congress and leads Signal, one of the most trusted privacy-preserving messaging apps. Her background building large-scale network measurement systems at Google gives her unique expertise in data quality, the manipulation of evaluation criteria, and how benchmark gaming serves hyperscaler interests over real-world effectiveness.
Dr Abeba Birhane is founder and director of the AI Accountability Lab at Trinity College Dublin. Her groundbreaking research audits AI datasets, showing that larger datasets contain higher rates of hateful content and pornography, and debunking the assumption that scale dissipates these problems.
Her work on benchmarks and measurement demonstrates that smaller, purpose-built models trained on appropriate contextual data often outperform larger models in real-world contexts.
Connect with the guests:
* Signal Foundation: signal.org
* AI Now Institute: ainowinstitute.org
* AI Accountability Lab: Contact through ADAPT Centre
* Follow their research and writing on AI accountability
MORE INFORMATION
You can learn more about this and other cutting-edge research at Trinity College Dublin's ADAPT Centre here: www.adaptcentre.ie/
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT's groundbreaking AI and data analytics research visit www.adaptcentre.ie/
KEYWORDS
#TrustedAI #AIaccountability #AIprivacy #AIgovernance #MeredithWhittaker
Explainable AI in Action: From Tutors to Health Tech
Every day, AI systems influence how we learn, shop, and make decisions—but to truly support us, AI must communicate in ways tailored to who we are as individuals.
In this episode, we share a keynote speech from the ADAPT Annual Scientific Conference that explores Explainable AI and its potential to personalize user experiences. The talk discusses how AI can adapt explanations to users’ unique traits and moment-to-moment states, improving trust and understanding. It highlights real-world applications in intelligent tutoring systems, recommender platforms, and healthcare technologies, illustrating how human-centered AI is reshaping interactions.
Join us as we explore the future of AI that not only acts intelligently but also connects meaningfully with each user. Our guest speaker is Cristina Conati, Professor of Computer Science at the University of British Columbia and a pioneer in user modeling, personalization, and explainable AI.
THINGS WE SPOKE ABOUT
* Evolving from generic systems to human-centred AI
* Unlocking personalization through multimodal signals
* Improving user trust, engagement, and learning outcomes
* Real-world applications from intelligent tutoring systems to healthcare technologies
* Complexities and ethical considerations of delivery
GUEST DETAILS
Professor Cristina Conati is a Professor of Computer Science at the University of British Columbia (UBC). She holds a Master’s degree in Computer Science from the University of Milan and both a Master’s and Ph.D. in Intelligent Systems from the University of Pittsburgh. Her research lies at the intersection of Artificial Intelligence (AI), Human-Computer Interaction (HCI), and Cognitive Science, focusing on creating intelligent systems that adapt to individual users' needs. Professor Conati has over 100 peer-reviewed publications and has received multiple Best Paper Awards. She is an ACM Distinguished Member and an AAAI Senior Member.
https://www.cs.ubc.ca/people/cristina-conati
MORE INFORMATION
You can learn more about AI Literacy in the Classroom here: https://ai-literacy-in-the-classroom.adaptcentre.ie/
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT visit www.adaptcentre.ie/
QUOTES
In order to have this AI driven personalization during interaction, what needs to be done is to establish what we call the AI driven personalization loop. - Cristina Conati
We're working towards creating intelligent systems that can understand to whom, when and how, to provide explanations of their behaviors. - Cristina Conati
The explanation should be designed so that a user can choose at what level of detail to go deeper. - Cristina Conati
It would be important to look at different user characteristics that might impact, like user reading proficiency or abilities to process visual information. - Cristina Conati
It’s super important to understand interplay between explanations and under or over reliance with AI. - Cristina Conati
KEYWORDS
#HumancenteredAI #explainableAI #usermodels #multimodal #learning #health