Modern Wisdom

Why Superhuman AI Would Kill Us All - Eliezer Yudkowsky

Go see Chris live in America - https://chriswilliamson.live

Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute. Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there’s a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it’s too late?

Expect to learn the problem with building superhuman AI, why AI would have goals we haven’t programmed into it, whether there is such a thing as AI benevolence, what the actual goals of a superintelligent AI would be and how far away it is, whether LLMs are actually dangerous and could become a super AI, how good we are at predicting the future of AI, whether extinction is possible with the development of AI, and much more…

00:00 Superhuman AI Could Kill Us All
10:25 How AI is Quietly Destroying Marriages
15:22 AI is an Enemy, Not an Ally
26:11 The Terrifying Truth About AI Alignment
31:52 What Does Superintelligence Advancement Look Like?
45:04 Are LLMs the Architect for Superhuman AI?
52:18 How Close are We to the Point of No Return?
01:01:07 Experts Need to be More Concerned
01:15:01 How Can We Stop Superintelligence Killing Us?
01:23:53 The Bleak Future of Superhuman AI
01:31:55 Could Eliezer Be Wrong?

Get access to every episode 10 hours before YouTube by subscribing for free on Spotify - https://spoti.fi/2LSimPn or Apple Podcasts - https://apple.co/2MNqIgw

Get my free Reading List of 100 life-changing books here - https://chriswillx.com/books/

Try my productivity energy drink Neutonic here - https://neutonic.com/modernwisdom

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/

Chris Williamson (host) · Eliezer Yudkowsky (guest)
Oct 25, 2025 · 1h 34m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
October 25, 2025
Duration
1h 34m
Channel
Modern Wisdom
Watch on YouTube
Open ↗

SPEAKERS

  • Chris Williamson

    host
  • Eliezer Yudkowsky

    guest
  • Narrator

    other

EPISODE SUMMARY

In this episode of Modern Wisdom, Chris Williamson speaks with Eliezer Yudkowsky, who argues that building a superhuman AI with current methods almost inevitably leads to human extinction, because its goals will not be reliably aligned with human survival or values.

RELATED EPISODES

21 Harsh Truths About Why You’re Still Lost - Mark Manson

How TikTok Hijacked the Future of Music - Nik Nocturnal

DEBATE: Why Do Gen Z Women Hate Men So Much?

A Blueprint for Mastering Every Conversation - Jefferson Fisher

The Endless Pain Of Emotionally Mature Partners - Mercedes Coffman

The Uber Eats to OnlyFans Pipeline
