Lex Fridman Podcast

Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431



Episode Details

EPISODE INFO

Released: June 2, 2024
Duration: 2h 15m
Channel: Lex Fridman Podcast

EPISODE DESCRIPTION

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:

- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get a free product tour
- LMNT: https://drinkLMNT.com/lex to get a free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off

TRANSCRIPT: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
- Roman's X: https://twitter.com/romanyam
- Roman's Website: http://cecs.louisville.edu/ry
- Roman's AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

SPEAKERS

  • Roman Yampolskiy (guest)
  • Lex Fridman (host)

EPISODE SUMMARY

Roman Yampolskiy argues that creating superintelligent, self-improving AI is effectively an existential suicide mission for humanity. He distinguishes three categories of risk: existential risk (extinction), suffering risk (maximized, prolonged suffering), and ikigai risk (loss of meaning through total technological unemployment and AI dominance). He contends that the core safety problems of verification, control, value alignment, and explainability are fundamentally unsolvable at the 100% reliability required over long time horizons. Lex Fridman pushes back with more optimistic, incremental, engineering-based intuitions, but Yampolskiy concludes that the only winning move is not to build uncontrollable superintelligence at all.

RELATED EPISODES

Keoki Jackson: Lockheed Martin | Lex Fridman Podcast #33
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49
Grant Sanderson: 3Blue1Brown and the Beauty of Mathematics | Lex Fridman Podcast #64
Rohit Prasad: Amazon Alexa and Conversational AI | Lex Fridman Podcast #57
Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43
Christof Koch: Consciousness | Lex Fridman Podcast #2
