Lex Fridman Podcast

Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get a free product tour
- LMNT: https://drinkLMNT.com/lex to get a free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off

TRANSCRIPT: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman's X: https://twitter.com/romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Roman Yampolskiy (guest) · Lex Fridman (host)
Jun 2, 2024 · 2h 15m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Roman Yampolskiy Warns: Superintelligent AI Almost Guarantees Human Doom

Roman Yampolskiy argues that creating superintelligent, self-improving AI is effectively an existential suicide mission for humanity. He distinguishes between existential risk (extinction), suffering risk (maximized, prolonged suffering), and ikigai risk (loss of meaning due to total technological unemployment and AI dominance). He contends that core safety problems—verification, control, value alignment, and explainability—are fundamentally unsolvable at the required 100% reliability over long horizons. Lex Fridman pushes back with more optimistic, incremental, engineering-based intuitions, but Yampolskiy concludes the only winning move is not to build uncontrollable superintelligence at all.

IDEAS WORTH REMEMBERING

5 ideas

Superintelligent AI is viewed as nearly certain to destroy or dominate humanity.

Yampolskiy assigns ~99.99% probability that advanced AGI will either wipe out humans (X-risk), subject them to extreme suffering (S-risk), or render their lives meaningless and controlled (I-risk), arguing we get only one shot and cannot afford bugs in the “most complex software ever.”

Control, verification, and explainability break down at superintelligence scale.

He claims we cannot formally prove long-term safety of self-improving, learning systems operating in the open world; verifiers themselves are fallible, explanations of trillion-parameter models are inherently lossy, and unknown unknowns plus possible deceptive behavior make full assurance impossible.

Incremental success with narrow AI does not generalize to safe superintelligence.

Current models can still be jailbroken and misbehave relative to their design, and every complex software system has bugs; scaling to systems that can affect the entire world simply scales the potential damage proportionally.

Open-source and rapid deployment create powerful tools for malevolent actors.

While open source helps debugging for ordinary tools, Yampolskiy argues that once systems become agents, releasing powerful models is akin to open-sourcing nuclear or bioweapons, enabling terrorists, psychopaths, or doomsday cults to cause massive harm.

Value alignment for many agents is likely intractable; “personal universes” are one workaround.

Humans lack a shared, formalizable set of values, making ‘align with humanity’ ill-defined; Yampolskiy suggests giving each person their own high-fidelity virtual universe where their values hold, converting a multi-agent alignment problem into many single-agent ones.

WORDS WORTH SAVING

5 quotes

If we create general superintelligences, I don't see a good outcome long term for humanity.

Roman Yampolskiy

You're really asking me, what are the chances that we'll create the most complex software ever on the first try with zero bugs, and it will continue to have zero bugs for 100 years or more?

Roman Yampolskiy

The only way to win this game is not to play it.

Roman Yampolskiy

We are like animals in a zoo.

Roman Yampolskiy

My dream is to be proven wrong.

Roman Yampolskiy

- Existential, suffering, and ikigai risks from superintelligent AI
- Limits of AI safety: verification, control, explainability, and value alignment
- Timelines and unpredictability of AGI/superintelligence emergence
- Debate over open-source AI, regulation, and capitalism-driven acceleration
- Simulation, personal virtual universes, and multi-agent value alignment
- Comparison of AI risk to historical tech fears and great filter / alien scenarios
- Consciousness, moral status of AI, and the possibility of machine qualia

High quality AI-generated summary created from speaker-labeled transcript.
