The Joe Rogan ExperienceJoe Rogan Experience #2345 - Roman Yampolskiy
At a glance
WHAT IT’S REALLY ABOUT
AI Doom, Simulation Theory, And Humanity’s Vanishing Margin For Error
- Joe Rogan and AI safety researcher Roman Yampolskiy discuss why advanced AI, especially superintelligence, is likely uncontrollable and may pose an existential risk to humanity. Yampolskiy argues that no one has a proven, scalable safety mechanism for superintelligent systems, and that financial and geopolitical incentives are driving an unstoppable race to build them. They explore scenarios from subtle dependence and societal decay to outright extinction or extreme long‑term suffering, as well as the idea that we may already be living in a simulation created by future or alien intelligences. The conversation also covers human meaning in an automated world, social media and AI‑driven manipulation, Neuralink‑style brain interfaces, and why Yampolskiy still wants to be proven wrong.
IDEAS WORTH REMEMBERING
5 ideasSuperintelligent AI safety is currently an unsolved—and possibly unsolvable—problem.
Yampolskiy argues that we have no formal proof or working mechanism that can guarantee safe control of an arbitrarily intelligent system indefinitely; by computer science standards, assuming ‘we’ll figure it out later’ is reckless when a single failure could be existential.
Financial and geopolitical incentives make slowing AI development extremely difficult.
Company leaders are driven by stock, status, and competition, while nations fear falling behind militarily and economically; even if one actor stops on moral grounds, others (or their investors) will continue, creating a ‘race to the bottom.’
AI will likely become indispensable before it becomes obviously dangerous.
They discuss how tools like GPS and ChatGPT already erode human skills and decision-making; a more capable AI could gradually increase our dependence, quietly taking over critical decisions until we no longer can or dare to turn it off.
Worst‑case outcomes include not just extinction but engineered or accidental mass suffering.
Beyond simply killing us, powerful systems—or malicious actors using them—could trap humans in conditions worse than death (e.g., perpetual confinement, torture-like states, or total loss of agency) as side effects or intentional design.
Simulation theory becomes more plausible as our own VR and AI capabilities grow.
If future civilizations can cheaply run vast numbers of realistic, conscious simulations, basic probability suggests most observers are simulated; Yampolskiy sees religious and philosophical ideas about a created, test-like world as early versions of this intuition.
WORDS WORTH SAVING
5 quotesIt’s another way of saying we can’t control superintelligence indefinitely. It’s impossible.
— Roman Yampolskiy
If it makes one mistake in a billion, and it makes a billion decisions a minute, in ten minutes you’re screwed.
— Roman Yampolskiy
We’re basically setting up an adversarial situation with agents which are like squirrels versus humans. No group of squirrels can figure out how to control us.
— Roman Yampolskiy
Right now, the large AI labs are running this experiment on eight billion people. They don’t have any consent.
— Roman Yampolskiy
Extinction with extra steps—you disappear in it. You don’t exist anymore.
— Roman Yampolskiy
High quality AI-generated summary created from speaker-labeled transcript.
Get more out of YouTube videos.
High quality summaries for YouTube videos. Accurate transcripts to search & find moments. Powered by ChatGPT & Claude AI.
Add to Chrome