Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431
Roman Yampolskiy (guest), Lex Fridman (host)
CHAPTERS
- 0:00 – 2:20
Introduction
- 2:20 – 8:32
Existential risk of AGI
- 8:32 – 16:44
Ikigai risk
- 16:44 – 20:19
Suffering risk
- 20:19 – 24:51
Timeline to AGI
- 24:51 – 30:14
AGI Turing test
- 30:14 – 43:06
Yann LeCun and open source AI
- 43:06 – 45:33
AI control
- 45:33 – 48:06
Social engineering
- 48:06 – 57:57
Fearmongering
- 57:57 – 1:04:30
AI deception
- 1:04:30 – 1:11:29
Verification
- 1:11:29 – 1:23:42
Self-improving AI
- 1:23:42 – 1:29:59
Pausing AI development
- 1:29:59 – 1:39:43
AI safety
- 1:39:43 – 1:45:05
Current AI
- 1:45:05 – 1:52:24
Simulation
- 1:52:24 – 1:53:57
Aliens
- 1:53:57 – 2:00:17
Human mind
- 2:00:17 – 2:09:23
Neuralink
- 2:09:23 – 2:13:18
Hope for the future
- 2:13:18 – 2:15:38
Meaning of life