Dwarkesh Podcast
Eliezer Yudkowsky — Why AI will kill us, aligning LLMs, nature of intelligence, SciFi, & rationality
Episode Details
EPISODE INFO
- Released
- April 6, 2023
- Duration
- 4h 3m
- Channel
- Dwarkesh Podcast
EPISODE DESCRIPTION
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more. If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54, where we go through and debate the main reasons I still think doom is unlikely.
EPISODE LINKS
- Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
- Apple Podcasts: https://apple.co/3mcPjON
- Spotify: https://spoti.fi/3KDFzX9
- Follow me on Twitter: https://twitter.com/dwarkesh_sp
TIMESTAMPS
- 00:00:00 - TIME article
- 00:09:06 - Are humans aligned?
- 00:37:35 - Large language models
- 01:07:15 - Can AIs help with alignment?
- 01:30:17 - Society's response to AI
- 01:44:42 - Predictions (or lack thereof)
- 01:56:55 - Being Eliezer
- 02:13:06 - Orthogonality
- 02:35:00 - Could alignment be easier than we think?
- 03:02:15 - What will AIs want?
- 03:43:54 - Writing fiction & whether rationality helps you win
SPEAKERS
- Dwarkesh Patel (host)
- Eliezer Yudkowsky (guest)
- Narrator (other)
EPISODE SUMMARY
In this episode of the Dwarkesh Podcast, Eliezer Yudkowsky argues that current AI progress, especially large language models, is on track to produce superintelligence that will almost certainly disempower or kill humanity if not stopped. He believes alignment is vastly harder than most assume, cannot be safely outsourced to AIs themselves, and that present techniques like RLHF only superficially shape behavior while leaving dangerous underlying motivations untouched.