Into AI Safety
Jacob Haimes
25 episodes
1 month ago
The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI". For better-formatted show notes, additional resources, and more, go to https://kairos.fm/intoaisafety/
Technology, Science, Mathematics
INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
Into AI Safety
2 hours 58 minutes
1 year ago

The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more?

If you're interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn't published yet).

Because the full show notes have a whopping 115 additional links, I'll highlight some that I think are particularly worthwhile here:

  • The best article you'll ever read on Open Source AI
  • The best article you'll ever read on emergence in ML
  • Kate Crawford's Atlas of AI (Wikipedia)
  • On the Measure of Intelligence
  • Thomas Piketty's Capital in the Twenty-First Century (Wikipedia)
  • Yurii Nesterov's Introductory Lectures on Convex Optimization

Chapters

  • (02:32) - Introducing Igor
  • (10:11) - Aside on EY, LW, EA, etc., a.k.a. lettersoup
  • (18:30) - Igor on AI alignment
  • (33:06) - "Open Source" in AI
  • (41:20) - The story of infinite riches and suffering
  • (59:11) - On AI threat models
  • (01:09:25) - Representation in AI
  • (01:15:00) - Hazard fishing
  • (01:18:52) - Intelligence and eugenics
  • (01:34:38) - Emergence
  • (01:48:19) - Considering externalities
  • (01:53:33) - The shape of an argument
  • (02:01:39) - More eugenics
  • (02:06:09) - I'm convinced, what now?
  • (02:18:03) - AIxBio (round ??)
  • (02:29:09) - On open release of models
  • (02:40:28) - Data and copyright
  • (02:44:09) - Scientific accessibility and bullshit
  • (02:53:04) - Igor's point of view
  • (02:57:20) - Outro


Links

Links to all articles and papers mentioned throughout the episode can be found below, in order of appearance. All references are included, even those only mentioned in the extended version of this episode.

  • Suspicious Machines Methodology, referred to as the "Rotterdam Lighthouse Report" in the episode
  • LIONS Lab at EPFL
  • The meme that Igor references
  • On the Hardness of Learning Under Symmetries
  • Course on the concept of equivariant deep learning
  • Aside on EY/EA/etc.
    • Sources on Eliezer Yudkowsky
      • Scholarly Community Encyclopedia
      • TIME100 AI
      • Yudkowsky's personal website
      • EY Wikipedia
      • A Very Literary Wiki - TIME article: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, documenting EY's ruminations on bombing datacenters; this comes up later in the episode but is included here because it is about EY.
    • LessWrong
      • LW Wikipedia
    • MIRI
    • Coverage on Nick Bostrom (being a racist)
      • The Guardian article: ‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute
      • The Guardian article: Oxford shuts down institute run by Elon Musk-backed philosopher
    • Investigative piece on Émile Torres
    • On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
    • NY Times article: We Teach A.I. Systems Everything, Including Our Biases
    • NY Times article: Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.
    • Timnit Gebru's Wikipedia
    • The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence
    • Sources on the environmental impact of LLMs
      • The Environmental Impact of LLMs
      • The Cost of Inference: Running the Models
      • Energy and Policy Considerations for Deep Learning in NLP
      • The Carbon Impact of AI vs Search Engines
  • Filling Gaps in Trustworthy Development of AI (Igor is an author on this one)
  • A Computational Turn in Policy Process Studies: Coevolving Network Dynamics of Policy Change
  • The Smoothed Possibility of Social Choice, an intro to social choice theory and how it overlaps with ML
  • Relating to Dan Hendrycks
    • Natural Selection Favors AIs over Humans
      • "One easy-to-digest source to highlight what he gets wrong [is] Social and Biopolitical Dimensions of Evolutionary Thinking" -Igor
    • Introduction to AI Safety, Ethics, and Society, a recently published textbook
    • "Source to the section [of this paper] that makes Dan one of my favs from that crowd." -Igor
    • Twitter post referenced in the episode …