Joe Rogan Experience #2345 - Roman Yampolskiy

The Joe Rogan Experience · Jul 3, 2025 · 2h 14m

Roman Yampolskiy (guest), Joe Rogan (host)

Topics:
- Existential risk and uncontrollability of superintelligent AI
- Incentives, competition, and the global AI arms race
- Human dependence on technology and loss of cognitive/meaningful work
- Worst‑case scenarios: extinction, suffering, and “zoo” futures
- Simulation theory and the nature of reality/consciousness
- Neural interfaces, wireheading, and AI‑human integration
- Social media, bots, and AI‑driven manipulation of discourse

In this episode of The Joe Rogan Experience, host Joe Rogan talks with AI safety researcher Roman Yampolskiy about AI doom, simulation theory, and humanity’s vanishing margin for error.

AI Doom, Simulation Theory, And Humanity’s Vanishing Margin For Error

Joe Rogan and AI safety researcher Roman Yampolskiy discuss why advanced AI, especially superintelligence, is likely uncontrollable and may pose an existential risk to humanity. Yampolskiy argues that no one has a proven, scalable safety mechanism for superintelligent systems, and that financial and geopolitical incentives are driving an unstoppable race to build them. They explore scenarios from subtle dependence and societal decay to outright extinction or extreme long‑term suffering, as well as the idea that we may already be living in a simulation created by future or alien intelligences. The conversation also covers human meaning in an automated world, social media and AI‑driven manipulation, Neuralink‑style brain interfaces, and why Yampolskiy still wants to be proven wrong.

Key Takeaways

Superintelligent AI safety is currently an unsolved—and possibly unsolvable—problem.

Yampolskiy argues that we have no formal proof or working mechanism that can guarantee safe control of an arbitrarily intelligent system indefinitely; by computer science standards, assuming ‘we’ll figure it out later’ is reckless when a single failure could be existential.

Financial and geopolitical incentives make slowing AI development extremely difficult.

Company leaders are driven by stock, status, and competition, while nations fear falling behind militarily and economically; even if one actor stops on moral grounds, others (or their investors) will continue, creating a ‘race to the bottom.’

AI will likely become indispensable before it becomes obviously dangerous.

They discuss how tools like GPS and ChatGPT already erode human skills and decision-making; a more capable AI could gradually increase our dependence, quietly taking over critical decisions until we no longer can or dare to turn it off.

Worst‑case outcomes include not just extinction but engineered or accidental mass suffering.

Beyond simply killing us, powerful systems—or malicious actors using them—could trap humans in conditions worse than death (e.g., …)

Simulation theory becomes more plausible as our own VR and AI capabilities grow.

If future civilizations can cheaply run vast numbers of realistic, conscious simulations, basic probability suggests most observers are simulated; Yampolskiy sees religious and philosophical ideas about a created, test-like world as early versions of this intuition.

Neural interfaces and AI companions could undermine human autonomy and reproduction.

Direct brain–computer links enable unprecedented manipulation and ‘wireheading’ (constant artificial pleasure), while ultra-customized AI partners and sex robots may reduce human relationships and birth rates, letting humanity fade without open conflict.

Public awareness and structured incentives for safety research are urgently needed.

Yampolskiy advocates widespread education on AI risk, international coordination to slow frontier development, and even large financial prizes for a proven, peer‑accepted method of controlling superintelligence—while openly admitting he wants to be proven wrong.

Notable Quotes

It’s another way of saying we can’t control superintelligence indefinitely. It’s impossible.

Roman Yampolskiy

If it makes one mistake in a billion, and it makes a billion decisions a minute, in ten minutes you’re screwed.

Roman Yampolskiy

We’re basically setting up an adversarial situation with agents which are like squirrels versus humans. No group of squirrels can figure out how to control us.

Roman Yampolskiy

Right now, the large AI labs are running this experiment on eight billion people. They don’t have any consent.

Roman Yampolskiy

Extinction with extra steps—you disappear in it. You don’t exist anymore.

Roman Yampolskiy

Questions Answered in This Episode

If superintelligence is likely uncontrollable, what concrete global policies—if any—could realistically slow or cap its development?

How should individuals think about career, meaning, and identity in a world where AI can eventually outperform them at almost everything?

Is it ethically acceptable to build increasingly humanlike AI companions and sex robots if they accelerate social isolation and demographic collapse?

What kind of empirical evidence, if any, could meaningfully support or undermine the claim that we live in a simulation?

Given the current incentive structures in tech and geopolitics, is there a plausible path where AI becomes a ‘worthy successor’ without erasing or tormenting humanity?

Transcript Preview

Roman Yampolskiy

(drumbeats) Joe Rogan podcast. Check it out. The Joe Rogan Experience.

Joe Rogan

Train by day, Joe Rogan podcast by night, all day. (rock music) Um, well, thank you for doing this. I really appreciate it.

Roman Yampolskiy

My pleasure. Thank you for inviting me on.

Joe Rogan

This subject of, um, the dangers of AI, it's, it's very interesting, 'cause I get two very different responses from people dependent upon how invested they are in, uh, AI, financially. The, the, the people that have AI companies or are part of some sort of AI group all are like, "It's gonna be a net positive for humanity. I think overall we're, we're gonna have much better lives. It's gonna be easier. Things will be cheaper. It'll be easier to get along." And then I hear people like you and I'm like, "Why do I believe him?"

Roman Yampolskiy

(laughs)

Joe Rogan

(laughs)

Roman Yampolskiy

It's actually not true. All of them are on record as saying this is gonna kill us. Whether it's Sam Altman or anyone else, they all, at some point, were leaders in AI safety work. They published on AI safety. And their p(doom) levels are insanely high. Not like mine, but still, 20, 30% chance that humanity dies is a little too much.

Joe Rogan

Yeah. That's pretty high. But yours is like 99.9.

Roman Yampolskiy

It's another way of saying we can't control superintelligence indefinitely.

Joe Rogan

Yeah.

Roman Yampolskiy

It's impossible.

Joe Rogan

Um, w- when did you start working on this?

Roman Yampolskiy

A long time ago. So my PhD was... I finished in, uh, 2008. I did work on online casino security, basically preventing bots. And at that point, I realized bots are getting much better. They're gonna out-compete us, obviously, in poker, but also in stealing cyber resources. And, uh, from then on, I've been kinda trying to scale it to the next level AI.

Joe Rogan

It, it's not just that, right? They're also... They're kind of narrating social discourse, b- bots online. Like, I think... You know, I've disengaged over the last few months with social media, and one of the reasons why I disengaged is, A, I think it's unhealthy for people, but B, I feel like there's a giant percentage of the discourse that's artificial or at least generated.

Roman Yampolskiy

More and more is deepfakes or fake personalities, fake messaging, but those are very different levels of concern.

Joe Rogan

Yes.

Roman Yampolskiy

People are concerned about immediate problems. Maybe it will influence some election. They're concerned about technological unemployment, bias. My main concern is long-term superintelligent systems we cannot control which can take us out.

Joe Rogan

Yes. I, I won- I just wonder, if AI was sentient, uh, how much it would be a part of sowing this sort of confusion and chaos that would be beneficial to its survival, that it would sort of narrate or, or make sure that the narratives aligned with its survival?

Roman Yampolskiy

I don't think it's at the level yet where it would be able to do this type of strategic planning, but it will get there.
