Joe Rogan Experience #1350 - Nick Bostrom

The Joe Rogan Experience · Sep 12, 2019 · 2h 32m

Joe Rogan (host), Nick Bostrom (guest), Narrator

Superintelligent AI: potential benefits, existential risks, and alignment with human values
Technological acceleration, historical context, and how unprecedented our era is
Human obsolescence, bio-enhancement, and genetic selection (including ethical concerns)
Existential risks: nuclear weapons, autonomous weapons, and global coordination failures
The simulation argument: three core options and anthropic reasoning
Philosophical implications of living in a simulation versus base reality
Human future scenarios: superintelligence, space colonization, and technological maturity

In this episode of The Joe Rogan Experience, Joe Rogan sits down with philosopher Nick Bostrom for a wide-ranging conversation about artificial intelligence, human enhancement, existential risk, and the simulation argument.

Nick Bostrom and Joe Rogan Confront AI, Extinction, and Simulation Reality

Nick Bostrom and Joe Rogan explore the potential of artificial intelligence as both humanity’s greatest hope and its most serious existential risk. They discuss AI alignment, technological acceleration, and how superintelligence could either solve global problems or render humans obsolete. The conversation then moves into genetic enhancement, nuclear weapons, robot warfare, and humanity’s chronic inability to coordinate on dangerous technologies. In the final third, Bostrom lays out his famous simulation argument, debating with Rogan whether we are more likely in base reality or in a computer-generated universe.

Key Takeaways

Treat superintelligence as an inevitable gate, not an optional gadget.

Bostrom argues that almost any path to a truly great future likely runs through developing machine intelligence beyond human level, so the priority should be preparing—doing technical AI safety research and improving global political maturity before we get there.

Alignment and misuse are both core AI dangers.

He distinguishes two challenges: ensuring advanced AI is aligned with human values, and ensuring humans don’t then use that power for war, oppression, or shortsighted goals—echoing historical misuse of other powerful technologies.

Technological progress is historically extreme and deeply deceptive from the inside.

Viewed over 10,000 years, world GDP and innovation look like a flat line followed by a vertical spike; because we’re born into the spike, it feels normal, but Bostrom suggests we’re in the middle of an ongoing “explosion,” not a steady state.
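
The shape of that curve comes straight from compounding. As a rough back-of-the-envelope sketch (the growth rates below are illustrative assumptions, not figures quoted in the episode), an economy growing at 0.05% a year takes well over a millennium to double, while one growing at 3% a year doubles in about a generation:

    # Toy calculation, not from the episode: why a 10,000-year growth series
    # looks like a flat line followed by a vertical spike.
    # The growth rates used here are rough illustrative assumptions.
    from math import log

    def doubling_time(annual_growth_rate: float) -> float:
        """Years for output to double at a constant annual growth rate."""
        return log(2) / log(1 + annual_growth_rate)

    print(round(doubling_time(0.0005)))  # ~1387 years at 0.05%/yr (pre-industrial scale)
    print(round(doubling_time(0.03)))    # ~23 years at 3%/yr (modern scale)

Any economy that moves from the first regime to the second will look flat for almost all of its history and vertical at the end, regardless of exactly when the transition happens.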

Genetic selection will likely precede brain implants as a practical enhancement path.

Bostrom is skeptical about near-term cognitive enhancement via implants like Neuralink, arguing that embryo selection and future genetic technologies are technically closer and less constrained by messy biology than invasive neural hardware.

Global coordination failures drive existential risks as much as the technologies themselves.

From nuclear weapons to potential bioweapons and autonomous killer robots, the recurring problem is that no single country can unilaterally “opt out” without fearing others will forge ahead, making treaties and coordination crucial but fragile.

The simulation argument narrows the possibilities, even if it doesn’t answer them.

Bostrom’s trilemma says that at least one must be true: (1) almost all civilizations like ours die before technological maturity; (2) advanced civilizations almost never run ancestor simulations; or (3) we are almost certainly in a simulation—forcing us to take at least one unsettling conclusion seriously.
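
The trilemma can be made quantitative. Bostrom's 2003 paper "Are You Living in a Computer Simulation?" (the notation below comes from that paper, not from the episode) writes f_P for the fraction of human-level civilizations that survive to a posthuman stage, f_I for the fraction of posthuman civilizations interested in running ancestor simulations, and N_I for the average number of such simulations an interested civilization runs. The expected fraction of observers with human-type experiences who live in simulations is then:

    \[
      f_{\mathrm{sim}} \;=\; \frac{f_P \, f_I \, N_I}{f_P \, f_I \, N_I + 1}
    \]

Because a civilization that runs ancestor simulations at all would plausibly run a very large number of them, the numerator is either negligible or enormous: at least one of f_P ≈ 0, f_I ≈ 0, or f_sim ≈ 1 must hold, which is the trilemma stated above.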

We are probably still missing crucial considerations that would reorder our priorities.

Bostrom suspects future insights—like recognizing AI alignment as central, or something yet undiscovered—could radically change what we think most matters, implying our current attempts to “optimize” the future are made from a position of deep ignorance.

Notable Quotes

I see [AI] not as something that should be avoided, nor something we should be completely gung-ho about, but more like a kind of gate through which we will have to pass at some point.

Nick Bostrom

What I’m worried about more than anything is that human beings are gonna become obsolete, that we're going to invent something that's the next stage of evolution.

Joe Rogan

If you look at world GDP over 10,000 years, what you see is just a flat line and then a vertical line… it doesn’t look like we are in a static period right now. It looks like we’re in the middle of some kind of explosion.

Nick Bostrom

With all of these powerful technologies we are developing, the ideal course would be that we would first gain a bit more wisdom, and then we would get all of these powerful tools. But it looks like we're getting the powerful tools before we have really achieved a very high level of wisdom.

Nick Bostrom

By the time they had the technology to do [simulations], they would also have enhanced themselves in many different ways… I’d imagine they’d be post-human creatures.

Nick Bostrom

Questions Answered in This Episode

If superintelligent AI is effectively inevitable, what concrete governance structures or institutions should we start building now to handle it safely?

Nick Bostrom and Joe Rogan explore the potential of artificial intelligence as both humanity’s greatest hope and its most serious existential risk. ...

How much AI risk is truly about misaligned machines versus dangerous human motivations amplified by those machines?

Are the ethical concerns around embryo selection and genetic enhancement fundamentally different from the inequality and bias problems we already tolerate today?

Does taking the simulation argument seriously change how we should live—morally, politically, or personally—or is it mainly an intellectual curiosity?

Given Bostrom’s claim that we’re probably missing crucial considerations, how should policymakers and technologists design strategies that remain robust under deep uncertainty about the future?

Transcript Preview

Joe Rogan

And here we go. All right, Nick. This is, uh, one of the things that scares people more than anything, is the idea that we're creating something, or someone's gonna create something, that's gonna be smarter than us, that's gonna replace us. Is that something we should really be concerned about?

Nick Bostrom

I presume you're referring to babies.

Joe Rogan

(laughs)

Narrator

(laughs)

Joe Rogan

I'm referring to artificial intelligence.

Nick Bostrom

Ah, yes.

Joe Rogan

Ugh.

Nick Bostrom

Well, it's the, the big fear and the big hope, I think.

Joe Rogan

Both?

Nick Bostrom

At the same time, yeah.

Joe Rogan

How is it the big hope?

Nick Bostrom

Well, there are a lot of things wrong with the world as it is now.

Joe Rogan

I'm trying to pull this up to your face, if you would.

Nick Bostrom

Um, all, all the problems we have, uh, most of them could be solved if we were smarter or if we had somebody on our side who are a lot smarter with better technology and so forth. Um, also, I think if we wanna imagine some really grand future where humanity or our descendants one day go out and colonize the universe, I think that's likely to happen, if it's gonna happen at all, after we have superintelligence that then develops the technology to make that possible.

Joe Rogan

The real question is whether or not we would be able to harness this intelligence, or whether it would dominate.

Nick Bostrom

Yeah, that certainly is one question. Um, not the only. You could imagine that we harness it, but then use it for bad purposes as we have a lot of other technologies through history.

Joe Rogan

Yeah.

Nick Bostrom

So I think there are really two challenges we need to meet. One, one is to make sure we can align it with human values, and then make sure that we, together, do something better with it than fighting wars or oppressing one another.

Joe Rogan

I think... Well, what I'm worried about more than anything is that human beings are gonna become obsolete, that we're going to invent something that's the next stage of evolution. I'm, I'm really concerned with that. I'm really concerned with if we look back on ancient hominids, uh, Australopithecus, just think of some primitive ancestor of man, we don't wanna go back to that. Like, that, that's a terrible way to live. I'm worried that what we're creating is the next thing.

Nick Bostrom

I think we don't necessarily want, or at least I wouldn't be totally thrilled with, with a future where humanity as it is now was, was the last and final word, the pa- like, ultimate version beyond.

Joe Rogan

Right.

Nick Bostrom

I, I think there's a lot of room for improvement.

Joe Rogan

Sure.

Nick Bostrom

But not anything that is different is an improvement.

Joe Rogan

Right.

Nick Bostrom

So, so the key would be, I think, to find some path forward where the best in us, uh, can continue to exist and develop, uh, to even greater levels. And maybe at the end of that path, it looks nothing like we do now. Maybe it's not two-legged, two-armed creatures running around with three pounds of thinking matter, right? It might be something quite different. But as long as it... what, what we value is, is present there, and ideally in a much higher degree than in the current world, then that could count as a success.
