Computer Says Maybe
Alix Dunn
96 episodes
1 day ago
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
Technology
Society & Culture
Episodes (20/96)
Worker Power & Big Tech Bossmen w/ David Seligman (replay)

Litigator David Seligman describes how big tech companies act brazenly as legal bullies to extract wealth and power from the working class in the US.

More like this: The Human in the Loop: The AI Supply Chain

We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!

Alix and David talk about legal devices such as forced arbitration and monopolistic practices like algorithmic price fixing and wage suppression — and the cases that David's team are bringing to fight these practices.

Further reading & resources

  • Seligman for Attorney General Colorado
  • Towards Justice California drivers lawsuit
  • Eichmann in Jerusalem: A Report on the Banality of Evil by Hannah Arendt
  • The Dual State by Ernst Fraenkel
  • Prohibiting Surveillance Prices and Wages by Towards Justice
  • Gill v. Uber — class action led by Towards Justice

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

4 days ago
46 minutes

Reporting on AI’s climate injustices w/ Karen Hao (replay)

Reporting on the tech industry proves a huge challenge due to how opaque it all is — Empire of AI author Karen Hao talks us through her investigative methods in a conversation from November 2024.

More like this: Net 0++ AI Thirst in a Water-Scarce World w/ Julie McCarthy

We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!

AI companies are flagrantly obstructive when it comes to sharing information about their infrastructure — this makes reporting on the climate injustices of AI really hard. Karen shares the tactics that these companies use, and the challenges that she has faced in her investigative reporting.

Further reading:

  • Buy Empire of AI by Karen Hao
  • Microsoft’s Hypocrisy on AI by Karen Hao
  • AI is Taking Water from the Desert by Karen Hao


Post Production by Sarah Myles | Pre Production by Georgia Iacovou

1 week ago
27 minutes

How to (Actually) Keep Kids Safe Online w/ Kate Sim (replay)

A replay of our conversation with Kate Sim, on the state of child safety online.

More like this: Dogwhistles: Networked Transphobia Online

We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!

Child safety is a fuzzy catch-all concept for our broader social anxieties that seems to be everywhere in our conversations about the internet. But child safety isn’t a new concept, and the way our politics focuses on the spectacle isn’t new either.

To help us unpack this is Kate Sim, who has over a decade of experience in sexual violence prevention and response and is currently the Director of the Children’s Online Safety and Privacy Research (COSPR) program at the University of Western Australia’s Tech & Policy Lab. We discuss the growth of ‘child safety’ regulation around the world, and how it often conflates multiple topics: age-gating adult content, explicit attempts to harm children, national security, and even ‘family values’.

Further reading & resources:

  • On COSPR's forthcoming paper on the CSAM detection ecosystem. Here is a fact sheet with an ecosystem map based on it: https://bit.ly/cospr-collateral
  • On the CSAM bottleneck problem: https://doi.org/10.25740/pr592kc5483
  • IBCK episode on the Anxious Generation: https://pod.link/1651876897/episode/47a8aa95c83be96b044dcb3f4e43d158
  • Child psychology expert Candace Odgers debunking Jonathan Haidt's claims in real time here: https://tyde.virginia.edu/event/haidt-odgers/
  • A primer on client-side scanning and CSAM from Mitali Thakor: https://mit-serc.pubpub.org/pub/701yvdbh/release/2
  • On effective CSA prevention and scalability: https://www.prevention.global/resources/read-full-scalability-report

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

1 week ago
49 minutes

Digitisation, Privatisation, and Human Centipedes: Our Learnings from 2025

Before we break for the year, we wanted to reflect on what the podcast brought us in 2025, and what we want to see for 2026.

This week Alix is joined by two members of The Maybe team: Prathm Juneja and Georgia Iacovou. We discuss our favourite episodes from the year while making it clear we love all episodes equally. And this is not your standard clip show: we ask ourselves what we learned, why it was important to us, and what we are hungry for in 2026.

Featured episodes:

  • Is Digitisation Killing Democracy? w/ Marietje Schaake
  • Nodestar: Building Blacksky w/ Rudy Fraser
  • Regulating Privacy in an AI Era w/ Carly Kind
  • To be Seen and Not Watched w/ Tawana Petty
  • The Taiwan Bottleneck w/ Brian Chen
  • Gotcha! How MLMs Ate the Economy w/ Bridget Read
  • Also mentioned in this episode: AI in Gaza Live from Mexico City

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

2 weeks ago
52 minutes

Ben Collins: Computer Says MozFest

The Onion CEO Ben Collins has successfully turned political satire into a sustainable business. He explains why humorous messaging is important to understand times like these — and why he’s dead serious about buying Infowars.

Head to our feed for more conversations from MozFest with Abeba Birhane, Audrey Tang, and Luisa Franco Machado.

Further reading & resources:

  • Read The Onion, America’s finest news source, if you don’t already…
  • The Onion to buy Infowars

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

2 weeks ago
10 minutes

Audrey Tang: Computer Says MozFest

Audrey Tang has some big ideas on how we can use collective needs to shape AI systems — and avoid a future where human life is seen as an obstacle to paper clip production. She also shares what might be the first actual good use-case for AI agents…


Further reading & resources:

  • 6-Pack of Care — a research project by Audrey Tang and Caroline Green as part of the Institute for Ethics in AI
  • More about Kami — the Japanese local spirits Audrey mentions throughout the conversation
  • The Oxford Institute for Ethics in AI

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

2 weeks ago
23 minutes

Luisa Franco Machado: Computer Says MozFest

You can’t build a digital rights movement if you don’t know what you’re fighting for. Luisa says that we’re in a crisis of imagination, and that participation — the non-performative kind — is one big way out of this.

Further reading & resources:

  • Learn more about Equilabs
  • Follow Luisa on Instagram — sorry, email is too ‘analog’
  • Check out her Linktree

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

3 weeks ago
21 minutes

Abeba Birhane: Computer Says MozFest

Earlier this year Abeba Birhane was asked to give a keynote at the UN's AI for Good Summit — and at the eleventh hour the organisers attempted to censor any mention of the genocide in Palestine and of the summit's Big Tech sponsors. She was invited to give her full, uncensored talk at Mozfest.


Further reading & resources:

  • Abeba’s blog post on the UN censoring her talk on AI
  • More about Abeba
  • More about The AI Accountability Lab

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

3 weeks ago
17 minutes

Computer Says MozFest 2025

Mozilla Festival 2025. Barcelona. Three days in a bonanza of interesting people, ideas, and technology politics. These were our highlights!

More like this: FAccT 2025 episodes one and two

This is an extra special episode packed full of conversations and on-site impressions of the biggest Mozfest we’ve had in years. This year Alix moderated three panels, ran an AMA, and even hosted a game show — and somehow also had time to record all of this, for your pleasure.

Included in this episode:

  • A preview of Exposing and Reshaping the Global Footprint of Data Centers, with independent journalist Pablo Jiménez Arandia, Tessa Pang (impact editor for Lighthouse Reports), and Paz Peña (Mozilla Fellow and founder of the Latin American Institute of Terraforming)
  • A conversation with Hana Memon, developer at Gen Z for Change
  • A conversation with creative technologist Malik Afegbua, on his project The Elder Series
  • Nabiha Syed and Helen Turvey also reflect on how this Mozfest went, and what they hope to see for the future of the festival in the coming years

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

3 weeks ago
53 minutes

Who Knows? Fact-Finding in a Failing State w/ HRDAG and Data & Society

Everything is happening so fast. And a lot of it’s bad. What can research and science organizations do when issues are complex, fast-moving, and super important?

More like this: Independent Researchers in a Platform Era w/ Brandi Geurkink

Building knowledge is more important than ever in times like these. This week, we have three guests. Megan Price from the Human Rights Data Analysis Group (HRDAG) shares how statistics and data science can be used to get justice. Janet Haven and Charlton McIlwain from Data & Society explore how research institutions can bridge research knowledge and policy prescription.

Further reading & resources:

  • HRDAG’s involvement in the trial of José Efraín Ríos Montt
  • A profile of Guatemala and timeline of its conflict — BBC (last updated in 2024)
  • To Protect and Serve? — a study on predictive policing by William Isaac and Kristian Lum
  • An article about the above study — The Appeal
  • HRDAG’s stand against tyranny
  • More on Understanding AI — Data & Society’s event series with the New York Public Library
  • About Janet Haven, Executive Director of Data & Society
  • About Charlton McIlwain, board president of Data & Society
  • Bias in Computer Systems by Batya Friedman and Helen Nissenbaum
  • Center for Critical Race and Digital Studies
  • If you want to hear more about the history of D&S, the full conversation is up on YouTube (add link when we have)

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

1 month ago
59 minutes

Who Knows? Independent Researchers in a Platform Era w/ Brandi Geurkink

Imagine doing tech research… but from outside the tech industry? What an idea…

More like this: Nodestar: Turning Networks into Knowledge w/ Andrew Trask

So much of tech research happens within the tech industry itself, because it requires data access, funding, and compute. But what the tech industry has in resources, it lacks in independence, scruples, and a public interest imperative. Alix is joined by Brandi Geurkink from the Coalition for Independent Technology Research to discuss her work at a time when platforms have never been so opaque and funding has never been so sparse.

Further Reading & Resources:

  • More about Brandi and The Coalition
  • Understanding Engagement with U.S. (Mis)Information News Sources on Facebook by Laura Edelson & Damon McCoy
  • More on Laura Edelson
  • More on Damon McCoy
  • Jim Jordan bringing in Nigel Farage from the UK to legitimise his attacks on EU tech regulations — Politico
  • Ted Cruz on preventing jawboning & government censorship of social media — Bloomberg
  • Judge dismisses ‘vapid’ Elon Musk lawsuit against group that cataloged racist content on X — The Guardian
  • See the CCDH’s blog post on getting the case thrown out
  • Platforms are blocking independent researchers from investigating deepfakes by Ariella Steinhorn

Disclosure: This guest is a PR client of our consultancy team. As always, the conversation reflects our genuine interest in their work and ideas.

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

1 month ago
48 minutes

Très Publique: Algorithms in the French Welfare State w/ Soizic Pénicaud

Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.

More like this: Algorithmically Cutting Benefits w/ Kevin De Liban

Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside of government, and this week she shares her journey from incrementally improving these systems (boring, ineffective, hard) — to escaping the slow pace of government and looking at the bigger picture of algorithmic governance, and how it can build better public benefit in France (fun, transformative, and a good challenge).

Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they’re really about: the marginalised communities whose lives are most affected by these systems.

Further reading & resources:

  • The Observatory of Public Algorithms and their Inventory
  • The ongoing court case against the French welfare agency's risk-scoring algorithm
  • More about Soizic
  • More on the Transparency of Public Algorithms roadmap from Etalab — the task force Soizic was part of
  • La Quadrature du Net
  • France’s Digital Inquisition — co-authored by Soizic in collaboration with Lighthouse Reports, 2023
  • AI prototypes for UK welfare system dropped as officials lament ‘false starts’ — The Guardian Jan 2025
  • Learning from Cancelled Systems by Data Justice Lab
  • The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment — by Nari Johnson et al, featured in FAccT 2024

**Subscribe to our newsletter to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!**

1 month ago
52 minutes

Straight to Video: From Rodney King to Sora w/ Sam Gregory

Seeing is believing. Right? But what happens when we lose trust in the reproductive media put in front of us?

More like this: The Toxic Relationship Between AI and Journalism w/ Nic Dawes

We talked to Sam Gregory, a global expert and leading voice on this issue for the past 20 years, to get his take. We started way back in 1992, when Rodney King was assaulted by four police officers in Los Angeles. Police brutality was (and is) commonplace, but something different happened in this case: someone used a camcorder and caught it on video. It changed our understanding of the role video could play in accountability. And in the past 30 years, we’ve gone from using video for evidence and advocacy to AI slop threatening to seismically reshape our shared realities.

Now apps like Sora provide impersonation-as-entertainment. How did we get here?

Further reading & resources:

  • More on the riots following the Rodney King verdict — NPR
  • More about Sam and Witness
  • ObscuraCam — a privacy-preserving camera app from WITNESS and The Guardian Project
  • C2PA: the Coalition for Content Provenance and Authenticity
  • Deepfakes Rapid Response Force by WITNESS

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Post Production by Sarah Myles

1 month ago
1 hour

The Toxic Relationship Between AI & Journalism w/ Nic Dawes

What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?

More like this: Reanimating Apartheid w/ Nic Dawes

This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom The City. In this conversation we explore journalism’s newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era?

Beyond just the accurate relaying of facts, journalistic organisations also represent an entire backlog of valuable training data for AI companies. If you don’t have the same resources as the NYT, suing for copyright infringement isn’t an option — so what then? Nic says we have to break out of the false binary of ‘if you can’t beat them, join them!’

Further reading & resources:

  • Judge allows ‘New York Times’ copyright case against OpenAI to go forward — NPR
  • Generative AI and news report 2025: How people think about AI’s role in journalism and society — Reuters Institute
  • An example of The City’s investigative reporting: private equity firms buying up property in the Bronx — 2022
  • The Intimacy Dividend — Shuwei Fang
  • Sam Altman on Twitter announcing that they’ve improved ChatGPT to be mindful of the mental health effects — “We realize this made it less useful/enjoyable to many users who had no mental health problems, but…”

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

2 months ago
41 minutes

Unlearning in the AI Era w/ Nabiha Syed at Mozilla Foundation

Mozilla Foundation wants to chart a new path in the AI era. But what is its role now and how can it help reshape the impacts and opportunities of technology for… everyone?

More like this: Defying Datafication w/ Abeba Birhane

Alix sat down with Nabiha Syed to chat through her first year as the new leader of Mozilla Foundation. How does she think about strategy in this moment? What role does she want the foundation to play? And crucially, how is she stewarding a community of human-centered technology builders in a time of hyper-scale and unchecked speculation?

As Nabiha says, “restraint is a design principle too”.

Plug: We’ll be at MozFest this year broadcasting live and connecting with all kinds of folks. If you’re feeling the FOMO, be on the lookout for episodes we produce about our time there.

Further reading & resources:

  • Watch this episode on YouTube
  • Imaginative Intelligences — a programme of artist assemblies run by Mozilla Foundation
  • Nothing Personal — a new counterculture editorial platform from the Mozilla Foundation
  • More about Mozfest
  • Nabiha on the Computer Says Maybe live show at the 2025 AI Action Summit
  • Nabiha Syed remakes Mozilla Foundation in the era of Trump and AI — The Register
  • Nabiha on why she joined MF as executive director — MF Blog

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

2 months ago
45 minutes

You Seem Lonely. Have a Robot w/ Stevie Chancellor

Loneliness and mental illness are rising in the US while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?

More like this: The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew

Why are individuals confiding in chatbots over qualified human therapists? Stevie Chancellor explains why an LLM can’t replace a therapeutic relationship — but often there’s just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT; Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.

Further reading & resources:

  • Stevie’s paper on whether replacing therapists with LLMs is even possible (it’s not)
  • See the research on Github
  • People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies — Rolling Stone (May 2025)
  • Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future
  • Loneliness considered a public health epidemic according to the APA
  • FTC orders online therapy company BetterHelp to pay damages of $7.8m
  • Delta’s plan to use AI in ticket pricing draws fire from US lawmakers — Reuters, July 2025

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

2 months ago
52 minutes

Local Laws for Global Technologies w/ Hillary Ronen

What’s it like working as a local representative when you live next door to Silicon Valley?

More like this: Chasing Away Sidewalk Labs w/ Bianca Wylie

When Hillary Ronen was on the board of supervisors for San Francisco, she had to make lots of decisions about technology. She felt unprepared. Now she sees local policymakers on the frontlines of a battle over resources and governance in an AI era, and is working to upskill them to make better decisions for their constituents. No degree in computer science required.

Further reading & resources:

  • Local Leadership in the Era of Artificial Intelligence and the Tech Oligarchy by Hillary Ronen
  • More on Hillary’s work as a Supervisor for SF
  • Hillary Ronen on progressives, messaging, hard choices, and justice — interview in 48Hills from January 2025
  • More about Local Progress
  • Confronting Preemption — a short briefing by Local Progress
  • What Happens When State and Local Laws Conflict — article on state-level preemption by State Court Report

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

2 months ago
58 minutes

Gotcha! Enshittification w/ Cory Doctorow

Welcome to the final boss of scams in the age of technology: Enshittification.

More like this: Nodestar: The Eternal September w/ Mike Masnick

This is our final episode of Gotcha! — our series on scams, how they work, and how technology both amplifies and obscures them. For this final instalment we have Cory Doctorow on to chat about his new book Enshittification.

Is platformisation essentially just an industrial-level scam? We deep-dive into the enshittification playbook to understand how companies lock users into decaying platforms, and get away with it. Cory shares ideas on what we can do differently to turn the tide. Listen to learn what a ‘chickenised reverse centaur’ is…

Further reading & resources:

  • Buy Enshittification now from Verso Books!
  • Picks and Shovels by Cory Doctorow
  • On The Media series on Enshittification
  • Pluralistic — Daily Links and essays by Cory Doctorow
  • Conservatism Considered as a Movement of Bitter Rubes — Cory on why conservatism creates a friendly environment for scams
  • How I Got Scammed — Cory on his personal experiences of being scammed
  • All of Cory’s books
  • All (Antitrust) Politics Are Local — the entry to Pluralistic that Cory wrote on the day of recording
2 months ago
55 minutes

Gotcha! ScamGPT w/ Lana Swartz & Alice Marwick

Thought we were at peak scam? Well, ScamGPT just entered the chat.

More like this: Gotcha! The Crypto Grift w/ Mark Hays

This is part three of Gotcha! — our series on scams, how they work, and how technology is supercharging them. This week Lana Swartz and Alice Marwick join Alix to discuss their primer on how generative AI is automating fraud.

We dig into the very human, very dark world of the scam industry, where the scammers are often being exploited in highly sophisticated human trafficking operations — and are now using generative AI to scale up and speed up.

We talk about how you probably aren’t going to get a deepfake call from a family member demanding a ransom — but the threats are still evolving in ways that are scary and, until now, largely unregulated. And, as ever, even though the problems are made worse by technology, we explore the limitations of technology and laws to stem the tide.

Further reading & resources:

  • Read the primer here!
  • More about Lana Swartz
  • More about Alice Marwick
  • New Money by Lana Swartz
  • Scam: Inside Southeast Asia's Cybercrime Compounds by Mark Bo, Ivan Franceschini, and Ling Li
  • Revealed: the huge growth of Myanmar scam centres that may hold 100,000 trafficked people
  • Al Jazeera True Crime Report on scamming farms in South East Asia
  • Scam Empire project by the Organized Crime and Corruption Reporting Project

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

3 months ago
54 minutes

NYC Live: Let Them Eat Compute

This just in with data centers: Energy grids are strained, water is scarce, utility costs are through the roof — ah well, let them eat compute, I guess!

More like this: AI Thirst in a Water-Scarce World w/ Julie McCarthy

It was just climate week in NYC and we did a live show on data centers with four amazing guests from around the US…

Thank you to the Luminate Foundation for sponsoring this live show, and to all of our NY-based friends and network from around the world who made it to Brooklyn for a magical evening. You can also watch the live recording on YouTube.

  • KeShaun Pearson (Memphis Community Against Pollution) will break down how Elon Musk’s xAI supercomputer is polluting the air of historically Black neighborhoods in Memphis, and how organizers are fighting back against yet another chapter of corporate extraction in their communities.
  • KD Minor (Alliance for Affordable Energy) will demystify the energy impacts of data centers in Louisiana and share organizing strategies to mobilize community opposition to Big Tech and Big Oil infrastructure.
  • Marisol (No Desert Data Center) will talk about their grassroots coalition’s recent win in Tucson to stop Amazon’s Project Blue data center proposal, which threatened the city’s scarce water supply, and how they’re organizing for future protections.
  • Amba Kak (AI Now Institute) will talk us through the bigger picture: what’s behind Big Tech’s AI data center expansion, who stands to benefit from this boom, and what we sacrifice in return.

Further reading & resources:

  • Amazon Web Services is the company behind Tucson’s Project Blue, according to a 2023 county memo — from Luminaria
  • Tucson to create new policies around NDAs following the council’s regret at not knowing more about Project Blue — from Luminaria
  • How Marana, also in the Tucson area, employed an ordinance to regulate water usage after learning about data center interest in the area
  • xAI has requested an additional 150 MW of power for Colossus in Memphis, bringing the total to 300 MW
  • Time reports on increase in nitrogen dioxide pollution around Memphis due to xAI turbines
  • KeShaun and Justin Pearson on Democracy Now discussing xAI’s human rights violations
  • Meta’s Mega Data Center Could Strain Louisiana’s Grid — and Entergy Isn’t Prepared — report by the Alliance for Affordable Energy
  • 'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community — 404 Media

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

3 months ago
52 minutes