Dwarkesh Podcast

Eliezer Yudkowsky — Why AI will kill us, aligning LLMs, nature of intelligence, SciFi, & rationality

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

EPISODE LINKS

  * Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
  * Apple Podcasts: https://apple.co/3mcPjON
  * Spotify: https://spoti.fi/3KDFzX9
  * Follow me on Twitter: https://twitter.com/dwarkesh_sp

TIMESTAMPS

  00:00:00 - TIME article
  00:09:06 - Are humans aligned?
  00:37:35 - Large language models
  01:07:15 - Can AIs help with alignment?
  01:30:17 - Society’s response to AI
  01:44:42 - Predictions (or lack thereof)
  01:56:55 - Being Eliezer
  02:13:06 - Orthogonality
  02:35:00 - Could alignment be easier than we think?
  03:02:15 - What will AIs want?
  03:43:54 - Writing fiction & whether rationality helps you win

Dwarkesh Patel (host), Eliezer Yudkowsky (guest)
Apr 6, 2023 · 4h 3m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 9:06

    TIME article

  2. 9:06 – 37:35

    Are humans aligned?

  3. 37:35 – 1:07:15

    Large language models

  4. 1:07:15 – 1:30:17

    Can AIs help with alignment?

  5. 1:30:17 – 1:44:42

    Society’s response to AI

  6. 1:44:42 – 1:56:55

    Predictions (or lack thereof)

  7. 1:56:55 – 2:13:06

    Being Eliezer

  8. 2:13:06 – 2:35:00

    Orthogonality

  9. 2:35:00 – 3:02:15

    Could alignment be easier than we think?

  10. 3:02:15 – 3:43:54

    What will AIs want?

  11. 3:43:54 – 4:03:24

    Writing fiction & whether rationality helps you win
