Lex Fridman Podcast

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI. Please support this podcast by checking out our sponsors:
- Linode: https://linode.com/lex to get $100 free credit
- House of Macadamias: https://houseofmacadamias.com/lex and use code LEX to get 20% off your first order
- InsideTracker: https://insidetracker.com/lex to get 20% off

EPISODE LINKS:
Eliezer's Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer's Blog page: https://www.lesswrong.com/users/eliezer_yudkowsky

Books and resources mentioned:
  1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
  2. Adaptation and Natural Selection: https://amzn.to/40F5gfa

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Eliezer Yudkowsky (guest) · Lex Fridman (host)
Mar 30, 2023 · 3h 17m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
March 30, 2023
Duration
3h 17m
Channel
Lex Fridman Podcast


SPEAKERS

  • Eliezer Yudkowsky (guest)
  • Lex Fridman (host)
  • Narrator (other)

EPISODE SUMMARY

In this episode (#368), Lex Fridman and Eliezer Yudkowsky discuss the rapid progress of large language models like GPT‑4 and why Eliezer believes current AI development is on track to destroy human civilization. Eliezer argues that alignment must work on the first critical try with smarter‑than‑human systems, unlike normal science where multiple failures are tolerated, because a single failure with superintelligence would be fatal and irreversible. He is deeply pessimistic about the current trajectory: capabilities are racing ahead while alignment science and interpretability lag far behind, and institutional and market forces are not set up to prioritize safety. They also explore consciousness, deception, recursive self‑improvement ("foom"), the limits of open‑sourcing, and what, if anything, young people or billionaires could still do to change the game board.

RELATED EPISODES

Keoki Jackson: Lockheed Martin | Lex Fridman Podcast #33

Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49

Grant Sanderson: 3Blue1Brown and the Beauty of Mathematics | Lex Fridman Podcast #64

Rohit Prasad: Amazon Alexa and Conversational AI | Lex Fridman Podcast #57

Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43

Christof Koch: Consciousness | Lex Fridman Podcast #2
