The Rip Current with Jacob Ward
Jacob Ward
52 episodes
1 day ago
The Rip Current covers the big, invisible forces carrying us out to sea, from tech to politics to greed to beauty to culture to human weirdness. The currents are strong, but with a little practice we can learn to spot them from the beach, and get across them safely. Veteran journalist Jacob Ward has covered technology, science, and business for NBC News, CNN, PBS, and Al Jazeera. He's written for The New Yorker, The New York Times Magazine, and Wired, and is the former Editor in Chief of Popular Science magazine.
News
Episodes (20/52)
Every Oil Empire Thinks This Time Will Be Different.

It’s a very weird Monday back from the holidays. While most of us were shaking off jet lag and reminding ourselves who we are when we’re not sleeping late and hanging with family, the world woke up to a piece of news this weekend that showed no one in power learned a goddamn thing in history class: the United States has rendered Venezuela’s president to New York, and powerful people are openly fantasizing about “fixing” a broken country by taking control of its oil.

This isn’t a defense of Nicolás Maduro. He presided over the destruction of a nation sitting on the world’s largest proven oil reserves. Venezuela’s state now barely functions beyond preserving its own power. The Venezuelans I’ve spoken with have a wide variety of feelings about an incompetent dictator being arrested by the United States.

But this much is clear: the history of oil grabs is a history of financial disaster. So when I hear confident talk about oil revenues flowing back to the U.S., I don't hear a plan. I hear the opening chapter of a financial tragedy that has been staged again and again, even in our lifetimes.

Let’s put aside the moral horror of military invasion and colonial brutality, and just focus on whether the money ever actually flows back to the invader. Example after example shows it doesn’t: Iraq was supposed to stabilize energy markets. Instead, it delivered trillions in war costs, higher deficits, and zero leverage over oil prices. Britain’s attempt to hang onto the Suez Canal ended with a humiliating retreat, an IMF bailout, and the end of its time as a superpower. France’s war in Algeria collapsed its government. Dutch oil extraction in Nigeria boomeranged back home as lawsuits, environmental liability, and reputational ruin.

Oil empires all make the same mistake: they think they can nationalize the upside while outsourcing the risk. In reality, profits stay local or corporate. Costs always come home. And we’re about to learn it all over again.


Read more at TheRipCurrent.com.

2 days ago
13 minutes 6 seconds

Why So Many People Hate AI — and Why 2026 Is the Breaking Point

Happy New Year! I’ve been off for the holiday — we cranked through a bake-off, a dance party, a family hot tub visit, and a makeshift ball drop in the living room of a snowy cabin — and I’m feeling recharged for (at least some portion of) 2026. So let’s get to it.

I woke to reports that “safeguard failures” in Elon Musk’s Grok led to the generation of child sexual exploitation material (Reuters) — a euphemism that barely disguises how awful this is. I was on CBS News to talk about it this morning, and I made the point that the real question isn’t how did this happen? It’s how could it not?

AI systems are built by vacuuming up the worst and best of human behavior and recombining it into something that feels intelligent, emotional, and intimate. I explored that dynamic in The Loop — and we’re now seeing it play out in public, at scale.

The New York Times threw a question at all of us this morning: Why Do Americans Hate AI? (NYT). One data point surprised me: as recently as 2022, people in many other countries were more optimistic than Americans when it came to the technology. Huh! But the answer to the overall question suggests we’ve all learned something from the social media era, and from the recent, far more realistic assessment of technology companies’ role in our lives: for most people, the benefits are fuzzy, while the threats — to jobs, dignity, and social stability — are crystal clear.

Layer onto that a dated PR playbook (“we’re working on it”), a federal government openly hostile to regulation, and headlines promising mass job displacement, and the distrust makes a lot of sense.

Of course, this is why states are stepping in. The rise of social media, and the correlated crises in political discord, health misinformation, and depression that came with it, left states holding the bag, and they’re clearly not going to let that happen again. California’s new AI laws — addressing deepfake pornography, AI impersonation of licensed professionals, chatbot safeguards for minors, and transparency in AI-written police reports — are a direct response to the past and the future.

But if you think the distaste for AI’s influence is powerful here, I think we haven’t even gotten started in the rest of the world. Here’s a recent episode that has me more convinced of it than ever: a stadium in India became the scene of a violent protest when Indian football fans who’d paid good money for time with Lionel Messi were kept from seeing the soccer star by a crowd of VIPs clustered around him for selfies. The resulting (and utterly understandable) outpouring of anger made me think hard about what happens when millions of outsourced jobs disappear overnight. I think those fans’ rage at being excluded from a promised reward, bought with the money they work so hard for, is a preview.

So yes — Americans distrust AI. But the real question is how deep those feelings go, and how much unrest this technology is quietly banking up, worldwide. That’s the problem we’ll be reckoning with all year long.

5 days ago
14 minutes 24 seconds

AI Has Us Lying to One Another (and It's Changing How We Think)

Okay, honest admission here: I don’t fully know what I think about this topic yet. A podcast producer (thanks Nancy!) once told me “let them watch you think out loud,” and I’m taking that advice to heart — because the thing I’m worried about is already happening to me.

Lately, I’ve been leaning hard on AI tools, God help me. Not to write for me — a little, sure, but for the most part I still do that myself — but to help me quickly get acclimated to unfamiliar worlds. The latest unfamiliar world is online marketing, which I do not understand AT ALL but now need to master to survive as an independent journalist. And here’s the problem: the advice these systems give isn’t neutral, because first of all it’s not really “advice,” it’s just statistically relevant language regurgitated as advice, and second, because it just vacuums up the language wherever it can find it, its suggestions come with online values baked in. I know this — I wrote a whole fucking book about it — but I lose track of it in my desperation to learn quickly.

I’m currently trying to analyze who it is that follows me on TikTok, and why, so I can try to port some of those people (or at least those types of people) over to Substack and YouTube, where one can actually make a living filing analysis like this. One of the metrics I was told to prioritize? Disagreement in the comments. Not understanding, learning, clarity, the stuff I’m after in my everyday work. Fighting. Comments in which people want to argue with me are “good,” according to ChatGPT. Thoughtful consensus? Statistically irrelevant.

Here’s the added trouble. It’s one thing to read that and filter out what’s unhelpful. It’s another thing to do so in a world where all of us are supposed to pretend we had this thought ourselves.

AI isn’t just helping us work faster. It’s quietly training us to behave differently — and to hide how that training happens. We’re all pretending this output is “ours,” because the unspoken promise of AI right now is that you can get help and still take the credit. (I believe this is a fundamental piece of the marketing that no one’s saying out loud, but everyone is implying.) And the danger isn’t just dishonesty toward others. It’s that we start believing our own act.

There’s a huge canon of scientific literature showing that lying about a thing causes us to internalize the lie over time. The Harvard psychologist Daniel Schacter wrote a sweeping 1999 review of the science, “The Seven Sins of Memory,” synthesizing studies showing that memory is reconstructive, each recollection built on prior beliefs rather than drawn from a perfect replay of reality, and that repetition and suggestion can implant or strengthen false beliefs that feel subjectively true. Throw us enough ideas and culturally condition us to hide where we got them, and eventually we’ll come to believe they were our own. (And to be clear, I knew a little about the reconstructive nature of memory, but ChatGPT brought me Schacter’s paper. So there you go.)

What am I suggesting here? I know we’re creating a culture where machine advice is passed off as human judgment. I don’t know whether the answer is transparency, labeling, norms, regulation, or something else entirely. So I guess I’m starting with transparency.

In any event, I do know this: lying about how we did or learned something makes us less discerning thinkers. And AI’s current role in our lives is built on that lie.

Thinking out loud. Feedback welcome. Thanks!

5 days ago
12 minutes 36 seconds

Did Weed Just Escape the Culture War?

Here’s one I truly didn’t see coming: the Trump administration just made the most scientifically meaningful shift in U.S. marijuana policy in years.

No, weed isn’t suddenly legal everywhere. But moving marijuana from Schedule I — alongside heroin — to Schedule III is a very big deal. That single bureaucratic change cracks open something that’s been locked shut for half a century: real research.

For years, I’ve covered the strange absurdities of marijuana science in America. If you were a federally funded researcher — which almost every serious scientist is — you weren’t allowed to study the weed people actually use. Instead, you had to rely on a single government-approved grow operation producing products that didn’t resemble what’s sold in dispensaries. As a result, commercialization raced ahead while our understanding lagged far behind.

That’s how we ended up with confident opinions, big business, and weak data. We know marijuana can trigger severe psychological effects in a meaningful number of people. We know it can cause real physical distress for others. What we don’t know — because we’ve blocked ourselves from knowing — is who’s at risk, why, and how to use it safely at scale.

Meanwhile, the argument that weed belongs in the same category as drugs linked to violence and mass death has always collapsed under scrutiny. Alcohol, linked to more than 178,000 deaths per year in the United States alone, does far more damage, both socially and physically, yet sits comfortably in legal daylight.

If this reclassification sticks, the excuse phase is over. States making billions from legal cannabis now need to fund serious, independent research.

I didn’t expect this administration to make a science-forward move like this — but here we are. Here’s hoping we can finish the job and finally understand what we’ve been pretending to regulate for decades.

Covering earlier regulatory changes for Al Jazeera in 2016...

2 weeks ago
13 minutes 2 seconds

AI Data Centers Are Draining Our Power — and Making Strange Political Allies

The United States has a split personality when it comes to AI data centers. On one side, tech leaders (and the White House) celebrate artificial intelligence as a symbol of national power and economic growth. On the other, politicians from Bernie Sanders to Ron DeSantis point out that when a data center shows up in our towns, it drains water, drives up electricity prices, and demands round-the-clock power like an always-awake city.

Every AI prompt—whether it’s wedding vows or a goofy image—fires up racks of servers that require enormous amounts of electricity and water to stay cool. The result is rising pressure on local water supplies and power grids, and a wave of protests and political resistance across the country. I’m covering that in today’s episode, and you can read the whole report over at Hard Reset.

3 weeks ago
10 minutes 5 seconds

AI Isn’t Just a Money Risk Anymore — It’s Bigger than That

For most of modern history, regulation in Western democracies has focused on two kinds of harm: people dying and people losing money. But with AI, that’s beginning to change.

This week, the headlines point toward a new understanding that more is at stake than our physical health and our wallets: governments are starting to treat our psychological relationship with technology as a real risk. Not a side effect, not a moral panic, not a punchline to jokes about frivolous lawsuits. Increasingly, I’m seeing lawmakers understand that it’s a core threat.

There is, for instance, the extraordinary speech from the new head of MI6, Britain’s foreign intelligence agency. Instead of focusing only on missiles, spies, or nation-state enemies, she warned that AI and hyper-personalized technologies are rewriting the nature of conflict itself — blurring peace and war, state action and private influence, reality and manipulation. When the person responsible for assessing existential threats starts talking about perception and persuasion, those concerns have moved from academic hand-wringing to real danger.

Then there’s the growing evidence that militant groups are using AI to recruit, radicalize, and persuade — often more effectively than humans can. Researchers have now shown that AI-generated political messaging can outperform human persuasion. That matters, because most of us still believe we’re immune to manipulation. We’re not. Our brains are programmable, and AI is getting very good at learning our instruction set.

That same playbook is showing up in the behavior of our own government. Federal agencies are now mimicking the president’s incendiary online style, deploying AI-generated images and rage-bait tactics that look disturbingly similar to extremist propaganda. It’s no coincidence that the Oxford University Press crowned “rage bait” its word of the year. Outrage is no longer a side effect of the internet — it’s a design strategy.

What’s different now is the regulatory response. A coalition of 42 U.S. attorneys general has formally warned AI companies about psychologically harmful interactions, including emotional dependency and delusional attachment to chatbots and “companions.” This isn’t about fraud or physical injury. It’s about damage to people’s inner lives — something American law has traditionally been reluctant to touch.

At the same time, the Trump administration is trying to strip states of their power to regulate AI at all, even as states are the only ones meaningfully responding to these risks. That tension — between lived harm and promised utopia — is going to define the next few years.

We can all feel that something is wrong. Not just economically, but cognitively. Trust, truth, childhood development, shared reality — all of it feels under pressure. The question now is whether regulation catches up before those harms harden into the new normal.


Mentioned in This Article:

Britain caught in ‘space between peace and war’, says new head of MI6 | UK security and counter-terrorism | The Guardian

https://www.theguardian.com/uk-news/2025/dec/15/britain-caught-in-space-between-peace-and-war-new-head-of-mi6-warns

Islamic State group and other extremists are turning to AI | AP News

https://apnews.com/article/islamic-state-group-artificial-intelligence-deepfakes-ba201d23b91dbab95f6a8e7ad8b778d5

‘Virality, rumors and lies’: US federal agencies mimic Trump on social media | Donald Trump | The Guardian

https://www.theguardian.com/us-news/2025/dec/15/trump-agencies-style-social-media

US state attorneys-general demand better AI safeguards

https://www.ft.com/content/4f3161cc-b97a-496e-b74e-4d6d2467d59c


3 weeks ago
10 minutes 47 seconds

The President Just Moved to Kill State AI Laws. Here's What Happens Next.

President Trump has signed a sweeping executive order aimed at blocking U.S. states from regulating artificial intelligence — arguing that a “patchwork” of laws threatens innovation and America’s global competitiveness. But there’s a catch: there is no federal AI law to replace what states have been doing.


In this episode, I break down what the executive order actually does, why states stepped in to regulate AI in the first place, how this move conflicts with public opinion, and why legal experts believe the fight is headed straight to the courts.


This isn’t just a tech story. It’s a constitutional one.


Read the full analysis in my weekly column at HardResetMedia.com.

3 weeks ago
11 minutes 29 seconds

AI Is Even More Biased Than We Are: Mahzarin Banaji on the Disturbing Truth Behind LLMs

This week I sat down with the woman who permanently rewired my understanding of human nature — and now she’s turning her attention to the nature of the machines we’ve gone crazy for.

Harvard psychologist Mahzarin Banaji coined the term “implicit bias” and has conducted research for decades into the blind spots we don’t admit even to ourselves. The work that blew my hair back shows how prejudice has and hasn’t changed since 2007. Take one of the tests here — I was deeply disappointed by my results.

More recently, she’s been running new experiments on today’s large language models.

What has she learned?

They’re far more biased than humans.

Sometimes twice or three times as biased.

They show shocking behavior — like a model declaring “I am a white male” or demonstrating literal self-love toward its own company. And as their rawest, most objectionable responses are papered over, she says, we lose our ability to see just how prejudiced they really are.

In this conversation, Banaji explains:

  • Why LLMs amplify bias instead of neutralizing it

  • How guardrails and “alignment” may hide what the model really thinks

  • Why kids, judges, doctors, and lonely users are uniquely exposed

  • How these systems form a narrowing “artificial hive mind”

  • And why we may not be mature enough to automate judgment at all

Banaji is working at the very cutting edge of the science, and delivers a clear and unsettling picture of what AI is amplifying in our minds.
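
If you want to see the mechanics, here's a minimal sketch of the kind of word-embedding association test (WEAT) that the early bias research discussed below used to surface prejudice in internet-trained language. The vectors here are random placeholders rather than real pretrained embeddings, so the word sets and the printed number are illustrative only: a sketch of the technique, not a replication of Banaji's work.

```python
# Minimal WEAT sketch. Real studies load pretrained vectors (e.g., GloVe)
# for word sets like flowers/insects vs. pleasant/unpleasant words; the
# random vectors here are placeholders so the script runs standalone.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
X = rng.normal(size=(4, dim))  # target set 1 (e.g., flower words)
Y = rng.normal(size=(4, dim))  # target set 2 (e.g., insect words)
A = rng.normal(size=(4, dim))  # attribute set 1 (e.g., pleasant words)
B = rng.normal(size=(4, dim))  # attribute set 2 (e.g., unpleasant words)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    # How much closer vector w sits to attribute set A than to set B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

scores = [assoc(w, A, B) for w in np.vstack([X, Y])]
# Effect size: difference in mean association between the two target
# sets, scaled by the spread of all the association scores.
d = (np.mean(scores[:len(X)]) - np.mean(scores[len(X):])) / np.std(scores)
print(f"WEAT effect size d = {d:.2f}")
```

Her striking claim, covered in the chapters below, is that analogous measurements run on LLMs produce effect sizes far beyond what the human versions of these tests ever showed.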


00:00 — AI Will Warp Our Decisions

Banaji on why future decision-making may “suck” if we trust biased systems.


01:20 — The Woman Who Changed How We Think About Bias

Jake introduces Banaji’s life’s work charting the hidden prejudices wired into all of us.


03:00 — When Internet Language Revealed Human Bias

How early word-embedding research mirrored decades of psychological findings.


05:30 — AI Learns the One-Drop Rule

CLIP models absorb racial logic humans barely admit.


07:00 — The Moment GPT Said “I Am a White Male”

Banaji recounts the shocking early answer that launched her LLM research.


10:00 — The Rise of Guardrails… and the Disappearance of Honesty

Why the cleaned-up versions of models may tell us less about their true thinking.


12:00 — What “Alignment” Gets Fatally Wrong

The Silicon Valley fantasy of “universal human values” collides with actual psychology.


15:00 — When AI Corrects Itself in Stupid Ways

The Gemini fiasco, and why “fixing” bias often produces fresh distortions.


17:00 — Should We Even Build AGI?

Banaji on why specialized models may be safer than one general mind.


19:00 — Can We Automate Judgment When We Don’t Know Ourselves?

The paradox at the heart of AI development.


21:00 — Machines Can Be Manipulated Just Like Humans

Cialdini’s persuasion principles work frighteningly well on LLMs.


23:00 — Why AI Seems So Trustworthy (and Why That’s Dangerous)

The credibility illusion baked into every polished chatbot.


25:00 — The Discovery of Machine “Self-Love”

How models prefer themselves, their creators, and their own CEOs.


28:00 — The Hidden Line of Code That Made It All Make Sense

What changes when a model is told its own name.


31:00 — Artificial Hive Mind: What 70 LLMs Have in Common

The narrowing of creativity across models and why it matters.


34:00 — Why LLM Bias Is More Extreme Than Human Bias

Banaji explains effect sizes that blow past anything seen in psychology.


37:00 — A Global Problem: From U.S. Race Bias to India’s Caste Bias

How Western-built models export prejudice worldwide.


40:00 — The Loan Officer Problem: When “Truth to the Data” Is Immoral

A real-world example of why bias-blind AI is dangerous.


43:00 — Bayesian Hypocrisy: Humans Do It… and AI Does It More

Models replicate our irrational judgments — just with sharper edges.


48:00 — Are We Mature Enough to Hand Off Our Thinking?

Banaji on the risks of relying on a mind we didn’t design and barely understand.


50:00 — The Big Question: Can AI Ever Make Us More Rational?

4 weeks ago
1 hour 6 minutes 30 seconds

Australia Just Rebooted Childhood — And the World Is Watching

Australia just imposed a blanket ban on social media for kids under the age of 16. It’s not just the strictest tech policy of any democracy — it’s stricter than China’s laws. No TikTok, no Instagram, no Snapchat, that’s it. And while Washington dithers behind a 1998 law written before Google existed, other countries are gearing up to copy Australia’s homework (Malaysia imposes a similar ban on January 1st). What happens now — the enforcement mess, the global backlash, the accidental creation of the largest clean “control group” in tech history — could reshape how we think about childhood, mental health, and what governments owe the developing brain.

00:00 — Australia’s historic under-16 social-media ban

01:10 — What counts as “social media” under the law?

02:04 — Why platforms — not kids — get fined

03:01 — How the U.S. is still stuck with COPPA (from 1998!)

04:28 — Why age 13 was always a fiction

05:15 — Psychologists on the teenage brain: “all gas, no brakes”

07:02 — Malaysia and the EU consider following Australia’s lead

08:00 — Nighttime curfews and other global experiments

09:11 — Albanese’s pitch: reclaiming “a real childhood”

10:20 — Could isolation leave Aussie teens behind socially?

11:22 — Why Australia is suddenly stricter than China

12:40 — Age-verification chaos: the AI that thinks my uncle is 12

13:40 — The enforcement black box

14:10 — Australia as the first real longitudinal control group

15:18 — If mental-health outcomes improve, everything changes

16:05 — The end of the “wild west” era of social platforms?

4 weeks ago
9 minutes 30 seconds

AI is Creating a ‘Hive Mind' — Scientists Just Proved It

The big AI conference NeurIPS is under way in San Diego this week, and nearly 6,000 papers presented there will set the technical, intellectual, and ethical course for AI for the year.

NeurIPS is a strange pseudo-academic gathering, where researchers from universities show up to present their findings alongside employees of Apple and Nvidia, part of the strange public-private revolving door of the tech industry. Sometimes they’re the same person: Increasingly, academic researchers are allowed to also hold a job at a big company. I can’t blame them for taking opportunities where they arise—I’m sure I would, in their position—but it’s particularly bothersome to me as a journalist, because it limits their ability to speak publicly.

The papers cover robotics, alignment, and how to deliver kitty cat pictures more efficiently, but one paper in particular—awarded a top prize at the conference—grabbed me by the throat.

A coalition from Stanford, the Allen Institute, Carnegie Mellon, and the University of Washington presented “Artificial Hive Mind: The Open-Ended Homogeneity of Language Models (and Beyond),” which shows that the average large language model converges toward a narrow set of responses when asked big, brainstormy, open-ended questions. Worse, different models tend to produce similar answers, meaning that when you switch from ChatGPT to Gemini or Claude for a “new perspective,” you’re not getting one. I’ve warned for years that AI could shrink our menu of choices while making us believe we have more of them. This paper shows just how real that risk is. Today I walk through the NeurIPS landscape, the other trends emerging at the conference, and why “creative assistance” may actually be the crushing of creativity in disguise. Yay!
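
To make “homogeneity” concrete, here's a minimal sketch of one way you could measure it yourself. This is not the paper's actual method, and the model names and responses are hypothetical placeholders; the idea is simply to embed each model's answer to the same open-ended prompt and average the pairwise similarity.

```python
# Minimal homogeneity sketch (not the paper's method): embed each model's
# answer to one open-ended prompt, then average pairwise cosine similarity.
# Scores near 1.0 mean the "different" models gave essentially one answer.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Hypothetical placeholder responses to "Suggest a community project."
responses = {
    "model_a": "Start a community garden to bring neighbors together.",
    "model_b": "Organize a neighborhood garden so people can connect.",
    "model_c": "Launch a shared garden project to build local ties.",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = encoder.encode(list(responses.values()))
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize rows

sims = emb @ emb.T  # pairwise cosine similarities
n = len(responses)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
print(f"mean cross-model similarity: {np.mean([sims[i, j] for i, j in pairs]):.2f}")
```

A number close to 1.0 across supposedly different models is the hive mind in action; swapping chatbots for a fresh perspective buys you almost nothing.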

1 month ago
11 minutes 54 seconds

OpenAI Declares "Code Red" — And Takes Aim at Your Brain

According to the Wall Street Journal, Sam Altman sent an internal memo on Monday declaring a company-wide emergency and presumably ruining the holiday wind-down hopes of his faithful employees. OpenAI is hitting pause on advertising plans, delaying AI agents for health and shopping, and shelving a personal assistant called “Pulse.” All hands are being pulled back to one mission: making ChatGPT feel more personal, more intuitive, and more essential to your daily life.

The company says it wants the general quality, intelligence, and flexibility to improve, but I’d argue this is less about making the chatbot smarter, and more about making it stickier.

Google’s Gemini has been surging — monthly active users jumped from 450 million in July to 650 million in October. Industry leaders like Salesforce CEO Marc Benioff are calling it the best LLM on the market. OpenAI seems to feel the heat, and also seems to feel it doesn’t have the resources to keep building everything it wants all at once — it has to prioritize. Consider that when Altman was recently asked on a podcast how he plans to get to profitability, he grew exasperated. “Enough,” he said.

But here’s what struck me about the Code Red. While Gemini is supposedly surpassing ChatGPT in industry benchmarks, I don’t think Altman is chasing benchmarks. He’s chasing the “toothbrush rule” — the Google standard for greenlighting new products that says a product needs to become an essential habit used at least three times a day. The memo specifically emphasizes “personalization features.” They want ChatGPT to feel like it knows you, so that you feel known, and can’t stop coming back to it.

I’ve been talking about AI distortion — the strange way these systems make us feel a genuine connection to what is, ultimately, a statistical pattern generator. That feeling isn’t a bug. It’s becoming the business model.

Facebook did this. Google did this. Now OpenAI is doing it: delaying monetization until the product is so woven into your life that you can’t imagine pulling away. Only then do the ads come.

Meanwhile, we’re living in a world where journalists have to call experts to verify whether a photo of Trump fellating Bill Clinton is real or AI-generated.

The image generators keep getting better, the user numbers keep climbing, and the guardrails remain an afterthought.

This is the AI industry in December 2025: a race to become indispensable.


1 month ago
12 minutes 21 seconds

Trump’s New Big Tech Era, TSMC’s Shift, and the A.I. Conferences Steering 2026

It’s Monday, December 1st. I’m not a turkey guy, and I’m of the opinion that we’ve all made a terrible habit of subjecting ourselves to the one and only time anyone cooks the damn thing each year. So I hope you had an excellent alternative protein in addition to that one. Ours was the Nobu miso-marinated black cod. Unreal.

Okay, after the food comes the A.I. hangover. This week I’m looking at three fronts where the future of technology just lurched in a very particular direction: politics, geopolitics, and the weird church council that is the A.I. conference circuit.

First, the politics. Trump’s leaked executive order to wipe out state A.I. laws seems to have stalled — not because he’s suddenly discovered restraint, but maybe because the polling suggests that killing A.I. regulation is radioactive. Instead, the effort is being shoved into Congress via the National Defense Authorization Act, the “must-pass” budget bill where bad ideas go to hide. Pair that with the Federal Trade Commission getting its teeth kicked in by Meta in court, and you can feel the end of the Biden-era regulatory moment and the start of a very different chapter: a government that treats Big Tech less as something to govern and more as something to protect.

Second, the geopolitics. TSMC’s CEO is now openly talking about expanding chip manufacturing outside Taiwan. That sounds like a business strategy, but it’s really a tectonic shift. For years, America’s commitment to Taiwan has been tied directly to that island’s role as our chip lifeline. If TSMC starts building more of that capacity in Arizona and elsewhere, the risk calculus around a Chinese move on Taiwan changes — and so does the fragility of the supply chain that A.I. sits on top of.

Finally, the quiet councils of the faithful: AWS re:Invent and NeurIPS. Amazon is under pressure to prove that all this spending on compute actually makes money. NeurIPS, meanwhile, is where the people who build the models go to decide what counts as progress: more efficient inference, new architectures, new “alignment” tricks. A single talk or paper at that conference can set the tone for years of insanely expensive work. So between Trump’s maneuvers, the FTC’s loss, TSMC’s hedging, and the A.I. priesthood gathering in one place, the past week and this one are a pretty good snapshot of who really steers the current we’re all in.

1 month ago
15 minutes 24 seconds

AI Distortion Is Here — And It’s Already Warping Us All

It’s a warning siren: people watching AI amplify delusions they never knew they had, a wave of lawsuits alleging emotional manipulation and even suicide coaching, a major company banning minors from talking freely with chatbots for fear of excessive attachment, and a top mental-health safety expert at OpenAI quietly heading for the exit.

For years I’ve argued that AI would distort our thinking the same way GPS distorted our sense of direction. But I didn’t grasp how severe that distortion could get—how quickly it would slide from harmless late-night confiding to full-blown psychosis in some users.

OpenAI’s own data suggests millions of people each week show signs of suicidal ideation, emotional dependence, mania, or delusion inside their chats. Independent investigations and a growing legal record back that up. And all of this is happening while companies roll out “AI therapists” and push the fantasy that synthetic friends might be good for us.

As with most of what I’ve covered over the years, this isn’t a tech story. It’s a psychological one. A biological one. And a story about mixed incentives. A story about ancient circuitry overwhelmed by software, and by the companies who can’t help but market it as sentient. I’m calling it AI Distortion—a spectrum running from mild misunderstanding all the way to dependency, delusion, isolation, and crisis.

It’s becoming clear that we’re not just dealing with a tool that organizes our thoughts. We’re dealing with a system that can warp them, in all of us, every time.


1 month ago
11 minutes 49 seconds

Insurers Are Backing Away From A.I. — and That Should Scare You More Than Any Sci-Fi Prediction

Today I dug into the one corner of the economy that’s supposed to keep its head when everyone else is drunk on hype: the insurance industry. Three of the biggest carriers in the country—AIG, Great American, and W.R. Berkley—are now begging regulators not to force them to cover A.I.-related losses, according to the Financial Times. These are the people who price hurricanes, wildfires, and war zones… and they look at A.I. and say, “No thanks.” That tells you something about where we really are in the cycle.

I also walked through the Trump administration’s latest maneuver, which looks a lot like carrying water for Big Tech in Brussels: trading lower steel tariffs for weaker European tech rules. (The Europeans said “no thank you.”) Meanwhile, we’re still waiting on the rumored executive order that would bulldoze state A.I. laws—the only guardrails we have in this country.

On the infrastructure front, reporting out of Mumbai shows how A.I. demand is forcing cities back toward coal just to keep data centers running. And if that wasn’t dystopian enough, I close with a bleak little nugget from Business Insider advising Gen Z to “focus on tasks, not job titles” in the A.I. economy. Translation: don’t expect a career—expect a series of gigs glued together by hope.

It’s a full Monday’s worth of contradictions: the fragile hype economy, the political favoritism behind it, and the physical reality—pollution, burnout, precarity—that always shows up eventually.

1 month ago
9 minutes 18 seconds

The Executive Order That Would End AI Regulation in America

The only laws protecting you from the worst excesses of A.I. might be wiped out — and fast. A leaked Trump executive order would ban states from regulating A.I. at all, rolling over the only meaningful protections any of us currently have. There is no federal A.I. law, no federal data-privacy law, nothing. States like California, Illinois, and Colorado are the only line of defense against discriminatory algorithms, unsafe model deployment, and the use of A.I. as a quasi-therapist for millions of vulnerable people.


This isn’t just bad policy — it’s wildly unpopular. The last time Republicans tried this maneuver, the Senate killed it 99–1. And Americans across the political spectrum overwhelmingly want A.I. regulated, even if it slows the industry down. But the tech sector wants a frictionless, regulation-free environment, and the Trump administration seems eager to give it to them — from crypto dinners and gilded ballrooms to billion-dollar Saudi co-investment plans.


There’s another layer here: state laws also slow down the federal government’s attempt to build a massive surveillance apparatus using private data brokers and companies like Palantir. State privacy protections cut off that flow of data. Removing those laws clears the pipe.


The White House argues this is about national security, China, and “woke A.I.” But legal experts say the order is a misreading of commerce authority and won’t survive in court. And state leaders like California’s Scott Wiener are already preparing to sue. For now, the takeaway is simple: states are the only governments in America protecting you from A.I. — and the administration is trying to take that away.

1 month ago
8 minutes 26 seconds

When AI Stops Being About Jobs — and Starts Being About Us

In today’s episode, I’m following the money, the infrastructure, and the politics:
Nvidia just posted another monster quarter and showed that it’s still the caffeine in the US economy. Investors briefly relaxed, even as they warned that an AI bubble is still the top fear in markets. Google jammed Gemini 3 deeper into Search in a bid to regain narrative control. Cloudflare broke down and reminded us that the “smart” future still runs on pretty fragile plumbing. The EU blinked on AI regulation. And here in the U.S., the White House rolled out the red carpet for Saudi Arabia as part of a multibillion-dollar AI infrastructure deal, one apparently shiny enough to have President Trump openly chastising a journalist for asking the Crown Prince about his personal responsibility for the murder of an American journalist.

But the deeper story I’m looking at today is social, not financial. Politicians like Bernie Sanders are beginning to voice the fear that AI won’t just destroy jobs — it might quietly corrode our ability to relate to one another. If you’ve been following me, you know this is more or less all I’m thinking about at the moment. So I looked at the history of this kind of concern, and while we’re generally moved only by death and financial loss in this country, we do snap awake from time to time when a new technology threatens our social fabric. Roll your eyes if you want to, but we’ve seen this moment before with telegraphs, movies, radio demagogues, television, video games, and social media, and there’s a lot to learn from that history.

This episode explores that lineage, what it means for AI, and why regulation might arrive faster than companies expect.

1 month ago
16 minutes 5 seconds

Are We Overbuilding AI? Tulsa, Rail Mania, Space Data Centers & the Billion-Dollar Reality Check

Today’s Deep Cut asks a simple question: Is the AI industry building way more capacity than the world actually needs?


To answer it, I look at three historical warnings:


• Tulsa, Oklahoma, a city built for millions who never came after early oil wealth exploded and then evaporated.

• Britain’s “Railway Mania” of the 1840s, when investors poured money into duplicate train lines that bankrupted entire companies.

• And today’s AI giants, spending trillions on data centers, energy infrastructure, and even floating ideas about putting compute facilities in space.


We’ll talk about why companies like OpenAI, Amazon, Meta, and others believe this infrastructure binge is justified, and where the logic breaks down. I also dig into the Kardashev Scale, the ecological cost of rocket launches, and the mismatch between AI’s lofty energy dreams and the reality of using all that power to generate wedding vows and knock-knock jokes.


History is full of moments when industries overbuilt themselves into crisis. Are we repeating the pattern with AI?


If you enjoy the show, you can subscribe to the newsletter at TheRipCurrent.com.

1 month ago
10 minutes 18 seconds

AI Money Is Reshaping Global Power: Buffett Buys Big, Thiel Bails, Robots Faceplant, and Saudi Arabia Arrives

Today’s “Map” tracks the forces shaping tech, money, and global power on Monday, November 17th.


We start with a rare move: Warren Buffett’s Berkshire Hathaway quietly taking a $4.9B stake in Alphabet — one of the most surprising bets of his career, and a clear signal about where long-term AI value is concentrating.


Meanwhile, Peter Thiel just sold his entire stake in Nvidia (~$100M). For a man who’s made a career out of contrarian timing, this exit raises the question: what does he see (or not see) in AI’s hardware boom?


I also recap a discussion I moderated with consular officials and regulators from across Asia, where the loudest concern wasn’t about safety or innovation — it was about AI’s failure to work in languages other than English. Meta is now pushing its new Omnilingual ASR model, supporting 1,600+ languages, to become a global “voice layer.” Whether it actually works is an open question.


And then there’s Moscow’s big humanoid robot debut — where the machine walked onstage looking drunk, staggered around, and face-planted so hard its panels came off. It’s funny, but it’s also a reality check: the dream of a general-purpose home robot is still nowhere near ready.


Finally, we look ahead: Saudi Crown Prince Mohammed bin Salman is visiting the White House with a massive investment and technology package — including AI access and a civilian nuclear deal — at the exact moment AI energy demand is exploding past U.S. grid capacity.


The throughline:

AI money — not AI models — is steering the world right now. A third of U.S. GDP growth last year came from AI infrastructure spending, and this week’s Nvidia earnings call will reveal where the next wave is headed.


If you want more breakdowns like this every weekday, you can subscribe at TheRipCurrent.com.

1 month ago
7 minutes 55 seconds

Is America Ready to Fight Big Tech? (with Sacha Haworth)

Are we ready to take on the tech titans? Sacha Haworth thinks maybe—just maybe—we finally are. The head of the Tech Oversight Project joins me this week to talk about the pervasive influence of Big Tech on our lives, and why recognizing a growing allergy to that influence is becoming a centerpiece of political strategy. We discuss the public’s growing concerns over privacy, children’s addiction to technology, and the economic and environmental effects of tech companies’ big AI plans on local communities. Sacha shares insights on political will and the bipartisan potential to regulate and hold big tech accountable, and the court cases and regulatory moves she’ll be watching most closely in 2026 and beyond.


00:00 Introduction: The Growing Influence of Tech

00:22 The Rip Current: Exploring Big Tech’s Impact

01:05 Guest Introduction: Sacha Haworth

01:38 Election Insights: Tech’s Role in Political Wins

02:43 Tech and Economic Issues in Elections

03:35 The Rise of Data Centers and Their Impact

06:29 Personal Journey: From Policy School to Tech Oversight

10:41 The Tech Oversight Project: Mission and Goals

11:46 Shaping the Narrative: Tech in Politics

17:22 The Politics of Tech: Power and Influence

22:03 Economic Speculation and the Tech Bubble

28:36 Future Vision: The Impact of AI and Tech

31:22 The Impact of Job Loss and Tax Incentives

32:39 AI’s Influence on Young Minds

34:49 Parental Concerns and Legislative Efforts

40:28 The Dark Side of Chatbots

49:03 Section 230 and Legal Protections

01:00:56 Political Will and Bipartisan Efforts

01:03:43 Conclusion and Call to Action

1 month ago
1 hour 4 minutes 26 seconds

Can Journalism Survive? (with UC Berkeley Journalism Dean Michael Bolden)

We can all agree that a free press is a cornerstone of American democracy, and that we want journalism in our lives. But agreeing on that is different from making it possible to earn a living as a journalist, and it's not enough to protect the power of journalism against the libertarian worldview and AI slop being pushed on us all by the world's biggest companies. How will journalism survive? Jake talks with Michael Bolden, the new Dean of the Berkeley Journalism School, about his personal journey from Mobile, Alabama, to leading one of the country's top journalism schools. They dive deep into the philosophical importance of journalism, the complications brought by AI and media technology, and the crucial role of local news. Bolden emphasizes the necessity of adapting journalism education to future demands, including the incorporation of AI and influencer collaborations, and together they try to sort out how to combine the best of this new, open world of information with the old world of true expertise and editorial rigor.


00:00 Introduction: The Impact of Personal Background on Journalism

00:29 The State of Journalism Today

01:07 Challenges Facing Modern Journalism

02:27 Introducing Michael Bolden: A Career in Journalism

03:56 Michael Bolden's Early Life and Influences

07:17 The Importance of Representation in Journalism

14:04 Navigating Professional Challenges

19:53 The Future of Journalism Education

27:31 The Evolving Role of Journalists

28:53 The Decline of Traditional Media

33:38 The Rise of Influencers and Independent Journalists

38:32 Political Influence and Media Ownership

47:25 AI and the Future of Journalism

57:12 Innovative Journalism Models

59:20 Conclusion and Final Thoughts

2 months ago
1 hour 21 seconds
