Hosted on Acast. See acast.com/privacy for more information.
Connecticut State Senator James Maroney and Neil Chilson, Head of AI Policy at the Abundance Institute, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenstein, Associate Professor at Minnesota Law and Research Director at Lawfare, for a look back at a wild year in AI policy.
Neil provides his expert analysis of all that did (and did not) happen at the federal level. Senator Maroney then examines what transpired across the states. The four then offer their predictions for what seems likely to be an even busier 2026.
Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, speaks with Cass Sunstein, the Robert Walmsley University Professor at Harvard University, about his new book, Imperfect Oracle: What AI Can and Cannot Do. They discuss when we should trust algorithms over our own judgment; why AI can eliminate the noise and bias that plague human decision-making but can't predict revolutions, cultural hits, or even a coin flip; and, perhaps most importantly, when it makes sense to delegate our choices to AI and when we should insist on deciding for ourselves.
Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown's McCourt School of Public Policy, and Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, spoke with Jacob Mchangama, research professor of political science at Vanderbilt University and founder of The Future of Free Speech, and Jacob Shapiro, the John Foster Dulles Professor of International Affairs at Princeton University.
The conversation covered the findings of a new report examining how AI models handle contested speech; comparative free speech regulations across six jurisdictions; empirical testing of how major chatbots respond to politically sensitive prompts; and the tension between free expression principles and concerns about manipulation in AI systems.
In this rapid response episode, Lawfare senior editors Alan Rozenshtein and Kevin Frazier and Lawfare Tarbell fellow Jakub Kraus discuss President Trump's new executive order on federal preemption of state AI laws, the politics of AI regulation and the split between Silicon Valley Republicans and MAGA populists, and the administration's decision to allow Nvidia to export H200 chips to China.
Mentioned in this episode:
Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo break down the Association's recent survey of SMEs, which gathered the views of more than 1,000 enterprises on AI regulation and adoption.
Follow Graham: @GDufault and ACT | The App Association: @actonline
Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss how labs and governments can take steps to address these asymmetries favoring attackers, and the future of cyber warfare driven by AI agents.
Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode.
Check out Caleb’s recent research here.
Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective.
Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle talk about at town halls and in press releases. They’re creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals?
That’s the core question of the episode. Giddy up for a great chat!
Learn more about the guests and their companies here:
Andrew's LinkedIn, Vesta's LinkedIn
Thomas’s LinkedIn, August’s LinkedIn
Jeff Bleich, General Counsel at Anthropic, former Chief Legal Officer at Cruise, and former Ambassador to Australia during the Obama administration, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to get a sense of how the practice of law looks at the edge of the AI frontier.
The two also review how Jeff’s prior work in the autonomous vehicle space prepared him for the challenges and opportunities posed by navigating legal uncertainties in AI governance.
Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council, Nathan Goldschlag, Director of Research at the Economic Innovation Group, and Bharat Chandar, Economist at the Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs.
We discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity.
These three are prolific researchers. Give them a follow to find their latest works.
Anton: @akorinek on X
Nathan: @ngoldschlag and @InnovateEconomy on X
Bharat: X: @BharatKChandar, LinkedIn: @bharatchandar, Substack: @bharatchandar
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. Defined as AI tools capable of autonomously completing tasks on your behalf, it’s widely expected that AI agents will soon become ubiquitous. The integration of AI agents into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.”
Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy.
The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades.
Mentioned in this episode:
California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI.
The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53’s key provisions, and forecast what may be coming next in Sacramento and D.C.
Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy Lab, and Dan Zhao, an AI researcher at MIT, Google X, and Microsoft focused on AI for science and sustainable, energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI.
They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI’s growing energy and environmental costs.
Leo Wu provided excellent research assistance on this podcast.
Read more from Mosharaf:
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Read more from Dan:
https://arxiv.org/abs/2310.03003
https://arxiv.org/abs/2301.11581
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC’s Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI.
They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals.
You’ll “like” (bad pun intended) this one.
Leo Wu provided excellent research assistance to prepare for this podcast.
Read more from David:
https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/
https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/
Read more from Ravi:
Read more from Kevin:
https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance.
The trio recorded this podcast live at the Institute for Humane Studies’ Technology, Liberalism, and Abundance Conference in Arlington, Virginia.
Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/
On today's Scaling Laws episode, Alan Rozenshtein sat down with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, School of Law, to discuss the rapidly evolving legal landscape at the intersection of generative AI and copyright law. They dove into the recent district court rulings in lawsuits brought by authors against AI companies, including Bartz v. Anthropic and Kadrey v. Meta. They explored how different courts are treating the core questions of whether training AI models on copyrighted data is a transformative fair use and whether AI outputs create a “market dilution” effect that harms creators. They also touched on other key cases to watch and the role of the U.S. Copyright Office in shaping the debate.
Mentioned in this episode:
Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.
Select works by Gans include:
A Quest for AI Knowledge (https://www.nber.org/papers/w33566)
Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)
How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.
You can read Steven’s Substack here: https://stevenadler.substack.com/
Thanks to Leo Wu for research assistance!