Joe Rogan Experience #2076 - Tristan Harris & Aza Raskin

The Joe Rogan Experience · Jun 27, 2024 · 2h 31m

Joe Rogan (host), Tristan Harris (guest), Aza Raskin (guest), Narrator

Social media as ‘first contact’ with AI and the lesson of misaligned incentives
Transformers, emergent AI capabilities, and runaway scaling dynamics
AI’s dual‑use potential: from tutoring and medicine to bioweapons and cyberattacks
Open models, security risks, and the difficulty of containing powerful AI systems
Biological and information security: DNA printers, bio‑risk, and deepfakes
Geopolitical AI arms race: U.S., China, UAE, and game‑theoretic pressures
Governance, incentives, and possible paths to humane, defense‑dominant AI deployment

In this episode of The Joe Rogan Experience, Tristan Harris and Aza Raskin examine the runaway race to build and deploy AI, the forces of power and profit driving it, and what it would take to steer that race toward humanity's benefit rather than its harm.

AI’s Runaway Race: Power, Profit, and Humanity’s Survival Crossroads

Tristan Harris and Aza Raskin argue that modern AI is repeating and massively amplifying the harms of social media, driven by a profit-and-power race among tech giants and nations. They explain how transformer-based AI systems gain unpredictable, emergent capabilities as they scale, collapsing the distance between dangerous intent and real-world action in areas like bioweapons, cyberattacks, persuasion, and mass deception.

Using examples from social media addiction, infinite scroll, beauty filters, and political polarization, they frame social platforms as “first contact” with AI, a warning of what happens when incentives are misaligned with human well‑being. They warn that open, powerful models and DNA printers could enable small groups or individuals to cause catastrophic harm faster than governments and institutions can respond.

Despite the risks, they emphasize AI’s potential for breakthroughs in medicine, climate, and governance, and argue the core issue is incentives and coordination, not the technology itself. They call for a global shift from a race to deploy offensive AI capabilities to a race to build secure, defense‑dominant, humane systems that strengthen democracy and shared reality.

They close by urging public awareness and political pressure to change AI’s incentive structures, likening this era to a civilizational rite of passage where humanity must mature—embracing our cognitive limits, upgrading our institutions, and taking responsibility for the ‘shadow’ side of our technologies—or face collapse or permanent authoritarian control.

Key Takeaways

Incentives, not features, determine whether technologies help or harm societies.

Social media shows that when systems are optimized for attention and profit, they naturally evolve toward addiction, outrage, polarization, and mental health crises—even if their creators originally intended connection and empowerment.

Modern AI systems gain unpredictable, emergent abilities as they scale.

Transformers trained simply to predict the next word or character spontaneously learn sentiment analysis, chemistry, theory of mind, and code exploitation, meaning creators cannot fully enumerate or anticipate a model’s capabilities before deployment.

AI collapses the distance between dangerous intent and dangerous action.

Unlike static Google search results, interactive AI tutors can iteratively guide users through complex tasks—like building explosives, designing bioweapons, or evading controls—making sophisticated harm accessible to less skilled actors.

Open, powerful AI models are not just insecure—they become insecurable.

Once model weights for a capable system are released (like LLaMA or Falcon), anyone can fine‑tune away safety constraints, proliferate the model globally, and use it as a “teaching tool” to jailbreak more powerful closed systems, with no practical way to revoke or recall it.

AI is amplifying an information and reality crisis that threatens democracy.

As generative models begin producing most online content—voices, faces, text, music, and video—people, institutions, and law enforcement will struggle to distinguish truth from fabrication, overwhelming governance and eroding shared reality.

The AI race is currently offense‑dominant and strategically irrational.

Labs and nations are driven by fear of falling behind and by potential economic gains, pushing them to scale models faster than they can make them secure, even though powerful models are cheaper to steal than to build—undermining the very strategic advantages they’re racing for.

Humanity must coordinate globally to change AI’s ‘race destination,’ not just its speed.

Harris and Raskin argue for shifting from a race to deploy ever‑more powerful general models toward a race to secure, defense‑dominant, and democratically aligned AI that strengthens institutions, detects threats, and helps build consensus—requiring new governance, public pressure, and shared understanding of the risks.

Notable Quotes

If the technology confers power, you're going to start a race. If you do not coordinate, that race will end in tragedy.

Aza Raskin

Social media is kind of a baby AI... It was first contact between humanity and AI, and humanity lost.

Tristan Harris

As fast as everything is moving now, unless we do something, this is the slowest it will move in our lifetimes.

Aza Raskin

We are heading into the largest election cycle the world has ever seen at the exact moment we are deploying the biggest, baddest new technology.

Aza Raskin

It's not about being optimistic or pessimistic. It's about opening your eyes as wide as possible so you can show up and do something about it.

Aza Raskin

Questions Answered in This Episode

If emergent AI capabilities are fundamentally unpredictable, what minimum safety and security standards should be mandatory before any large model is deployed to the public?

How realistic is global coordination on AI, given current geopolitical tensions, and what concrete first steps could the U.S., EU, and key chip‑making countries take together right now?

Where should we draw the line between open and closed AI models—what capabilities, if any, should never be open‑sourced?

What new forms of democratic governance and public participation could AI itself help build to ensure AI development serves broad human interests rather than narrow corporate or national ones?

How should we weigh AI’s potential to solve existential risks like climate change and disease against its potential to create new, even faster‑moving existential threats such as engineered pandemics or systemic information collapse?

Transcript Preview

Tristan Harris

(drumbeats) Joe Rogan podcast, check it out.

Aza Raskin

The Joe Rogan Experience.

Narrator

Train by day, Joe Rogan podcast by night. All day. (rock music)

Joe Rogan

Joe, what's going on, man? How are you guys?

Narrator

All right.

Aza Raskin

Doing okay.

Joe Rogan

A little apprehensive. There's a little tension in the air. (laughs)

Narrator

(laughs)

Tristan Harris

(laughs)

Aza Raskin

(laughs) No, I don't think so.

Joe Rogan

Well, this subject is... Uh, so let's get into it. Um, what's the latest?

Narrator

(laughs)

Joe Rogan

(laughs)

Aza Raskin

(laughs)

Tristan Harris

Uh, let's see. The first time I saw you, Joe, uh, was in 2020, uh, like a month after The Social Dilemma-

Joe Rogan

Yeah.

Tristan Harris

... came out. And, um, so that was, you know, w- we think of that as kind of first contact between humanity and AI. Before I say that, I should introduce, uh, Aza, uh, is the co-founder of the Center for Humane Technology. We did The Social Dilemma together.

Joe Rogan

Mm-hmm.

Tristan Harris

We're both in The Social Dilemma, um, and, uh, Aza also has a project that is using AI to translate animal communication, uh, called Earth Species Project.

Joe Rogan

I was just reading something about whales yesterday.

Narrator

Mm-hmm.

Joe Rogan

Is that re- regarding that?

Aza Raskin

Yeah, we, I mean, we work across a number of different species, dolphins, whales, orangutans, crows. And, uh, I think the reason why Tristan is bringing it up is because we're... Like, this conversation, uh, we're just gonna sort of dive into, like, which way is AI taking us as a species, as a civilization? Um, and it can be easy to hear just critiques as coming from critics, but we've both been builders, and I've been working on AI, uh, since, you know, really thinking about it, since 2013, but, like, building since 2017.

Joe Rogan

Hmm. So this thing that I was reading about with whales, that there's some-

Aza Raskin

Mm-hmm.

Joe Rogan

... new scientific breakthrough-

Aza Raskin

Mm-hmm.

Joe Rogan

... where they're understanding patterns in the whale's language.

Aza Raskin

Mm-hmm.

Joe Rogan

And what they were saying was the next step would be to have AI work on this and try to break it down, and break it down into pronouns, nouns, verbs, or whatever they're using-

Aza Raskin

Mm-hmm.

Narrator

Mm-hmm.

Joe Rogan

... and deciphers some sort of language out of it.

Narrator

Mm-hmm.

Aza Raskin

Yeah, that, that's exactly right. And what most people don't realize is the amount that we actually already know. So dolphins, for instance, have names that they call each other by.

Joe Rogan

Wow.

Aza Raskin

Parrots, turns out, also have names that their... Like, the mother will, like, whisper in each different child's ear and, like, teach them their name. They go back and forth until the child gets it.

Joe Rogan

Oh.

Aza Raskin

Um, d- one of my favorite examples is actually off the coast of Norway every year. There's a group of false killer whales that speak one way and a group of dolphins that speak another way, and they come together in a super pod and hunt. And when they do, they speak a third different thing.
