By Roger Bate at Brownstone.org.
Before Covid, I would have described myself as a technological optimist. New technologies almost always arrive amid exaggerated fears. Railways were supposed to cause mental breakdowns, bicycles were thought to make women infertile or insane, and early electricity was blamed for everything from moral decay to physical collapse. Over time, these anxieties faded, societies adapted, and living standards rose. The pattern was familiar enough that artificial intelligence seemed likely to follow it: disruptive, sometimes misused, but ultimately manageable.
The Covid years unsettled that confidence—not because technology failed, but because institutions did.
Across much of the world, governments and expert bodies responded to uncertainty with unprecedented social and biomedical interventions, justified by worst-case models and enforced with remarkable certainty. Competing hypotheses were marginalized rather than debated. Emergency measures hardened into long-term policy. When evidence shifted, admissions of error were rare, and accountability rarer still. The experience exposed a deeper problem than any single policy mistake: modern institutions appear poorly equipped to manage uncertainty without overreach.
That lesson now weighs heavily on debates over artificial intelligence.
The AI Risk Divide
Broadly speaking, concern about advanced AI falls into two camps. One group, associated with thinkers like Eliezer Yudkowsky and Nate Soares, argues that sufficiently advanced AI is catastrophically dangerous by default. In their deliberately stark formulation, "If Anyone Builds It, Everyone Dies," the problem is not bad intentions but incentives: competition ensures someone will cut corners, and once a system escapes meaningful control, intentions no longer matter.
A second camp, including figures such as Stuart Russell, Nick Bostrom, and Max Tegmark, also takes AI risk seriously but is more optimistic that alignment, careful governance, and gradual deployment can keep systems under human control.
Despite their differences, both camps converge on one conclusion: unconstrained AI development is dangerous, and some form of oversight, coordination, or restraint is necessary. Where they diverge is on feasibility and urgency. What is rarely examined, however, is whether the institutions expected to provide that restraint are themselves fit for the role.
Covid offers reason for doubt.
Covid was not merely a public-health crisis; it was a live experiment in expert-driven governance under uncertainty. Faced with incomplete data, authorities repeatedly chose maximal interventions justified by speculative harms. Dissent was often treated as a moral failing rather than a scientific necessity. Policies were defended not through transparent cost-benefit analysis but through appeals to authority and fear of hypothetical futures.
This pattern matters because it reveals how modern institutions behave when stakes are framed as existential. Incentives shift toward decisiveness, narrative control, and moral certainty. Error correction becomes reputationally costly. Precaution stops being a tool and becomes a doctrine.
The lesson is not that experts are uniquely flawed. It is that institutions reward overconfidence far more reliably than humility, especially when politics, funding, and public fear align. Once extraordinary powers are claimed in the name of safety, they are rarely surrendered willingly.
These are precisely the dynamics now visible in discussions of AI oversight.
The "What if" Machine
A recurring justification for expansive state intervention is the hypothetical bad actor: What if a terrorist builds this? What if a rogue state does that? From that premise flows the argument that governments must act pre-emptively, at scale, and often in secrecy, to prevent catastrophe.
During Covid, similar logic justified sweeping biomedical research agendas, emergency authorizations, and social controls. The reasoning was...