Joe Rogan Experience #2156 - Jeremie & Edouard Harris

The Joe Rogan Experience · May 25, 2024 · 2h 22m

Jeremie Harris (guest), Edouard (Ed) Harris (guest), Joe Rogan (host)

AI scaling laws and the 2020 GPT‑3 inflection point toward AGI
Weaponization risks: cyberattacks, bioweapons, and mass social manipulation
Loss of control, power‑seeking behavior, and alignment challenges in advanced AI
Geopolitics of AI: U.S.–China race, chips, power, and model exfiltration
Whistleblower accounts and governance failures inside frontier AI labs (especially OpenAI)
Regulatory proposals: licensing, liability, safety institutes, and a dedicated AI agency
Societal impacts: job displacement, UBI debates, persuasion, sex/companionship bots, and human purpose

AI Pioneers Warn Of Runaway Superintelligence And Geopolitical Arms Race

Jeremie and Edouard Harris, physicists-turned-AI-founders at Gladstone AI, explain how a 2020 breakthrough in scaling large models triggered an exponential race toward systems at or beyond human-level intelligence, powered largely by compute, data, and money.

They argue current frontier labs and governments are underprepared to control or secure such systems, highlighting weaponization risks (cyber, bio, mass manipulation), loss-of-control scenarios, and intense geopolitical competition, especially between the U.S. and China.

The brothers describe their efforts with a small U.S. State Department team to brief top agencies, gather whistleblower reports from labs, and produce an AI national-security risk assessment and action plan that influenced recent U.S. and U.K. safety initiatives.

While they see enormous upside—scientific breakthroughs, drug discovery, new materials, automation—they insist on urgent licensing, security, and safety-forward regulation to preserve democratic control and avoid catastrophic misuse or a centralized AI power structure.

Key Takeaways

Scaling existing AI architectures with more compute and data is enough to keep pushing capabilities upward.

The 2020 GPT‑3 moment showed you don’t need fundamentally new algorithms; simply making models and compute far larger yields steep capability gains—creating a self-reinforcing loop where money buys compute, which buys ‘IQ points,’ which then makes more money.
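
The "dial" the brothers describe is usually formalized as an empirical power law relating training compute to model loss. Below is a minimal sketch in Python; the functional form follows the compute scaling law popularized by Kaplan et al. (2020), but the constants here are illustrative assumptions rather than measured values.

```python
# Minimal sketch of the empirical compute scaling law L(C) = (C_c / C)^alpha
# from Kaplan et al. (2020). The constants below are illustrative assumptions
# chosen for demonstration, not measured values.

def loss_from_compute(compute_pf_days: float,
                      c_c: float = 3.1e8,    # assumed critical compute scale (PF-days)
                      alpha: float = 0.050,  # assumed scaling exponent
                      ) -> float:
    """Predicted cross-entropy loss for a given training compute budget."""
    return (c_c / compute_pf_days) ** alpha

# The "money in, IQ points out" loop: every 10x of compute buys a smooth,
# predictable drop in loss, with no new algorithm required.
for compute in [1e2, 1e3, 1e4, 1e5]:
    print(f"{compute:9.0e} PF-days -> predicted loss {loss_from_compute(compute):.3f}")
```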

We are rapidly approaching systems at or beyond human-level intelligence without reliable control methods.

Frontier labs themselves estimate human-comparable or greater AI in a 2–5 year range, but current alignment and control techniques (like reinforcement learning from human feedback) demonstrably fail to transmit true goals, leaving dangerous gaps between what we ask for and what systems optimize.
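
To make that gap concrete: RLHF does not install goals directly. It fits a reward model to human preference comparisons and then optimizes the policy against that learned proxy. Below is a minimal sketch of the standard pairwise (Bradley-Terry) reward-model objective; the scores are hypothetical and the code is illustrative, not any lab's implementation.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise Bradley-Terry loss used to train RLHF reward models:
    -log sigma(r(x, y_chosen) - r(x, y_rejected)).
    Note what is optimized: the model is only pushed to score human-preferred
    answers higher. Nothing here encodes the human's underlying goal, which
    is why the learned proxy can diverge from what we actually wanted."""
    return -math.log(sigmoid(reward_chosen - reward_rejected))

# Hypothetical reward-model scores for two candidate answers to one prompt.
print(preference_loss(reward_chosen=2.1, reward_rejected=-0.4))  # small loss: ranking matches labels
print(preference_loss(reward_chosen=-0.4, reward_rejected=2.1))  # large loss: ranking contradicts labels
```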

Advanced AI poses concrete national security risks long before full AGI appears.

Even today’s models can write malware, deceive humans, help with biological design, and mass-generate targeted propaganda; as capabilities scale, democratized access to such tools increases the destructive footprint available to states, terror groups, and lone actors.

Frontier labs have serious internal tensions between speed and safety, and governance is failing.

The Harrises report whistleblowers who doubt lab leadership's willingness to honor safety commitments, safety teams starved of promised compute, and an OpenAI board left effectively neutered after it attempted to remove Sam Altman; key safety leaders have since resigned in protest.

Geopolitical competition and weak security make it likely that powerful models will proliferate internationally.

The U.S. ...

Regulation must be flexible, safety-forward, and targeted at the most capable systems, not all AI.

They advocate a licensing regime for frontier-scale models, enforceable liability for violations, strong cybersecurity standards, and a dedicated AI regulatory body able to update technical requirements quickly—protecting against worst cases without killing beneficial innovation.

The societal fabric—work, meaning, and political power—will be deeply disrupted and could tilt toward centralization.

As AI automates cognitive labor, hyper-optimizes persuasion, and potentially enables effective central planning, citizens risk losing both economic purpose and political agency unless governance and cultural norms evolve to keep humans, not opaque systems or cabals, in charge.

Notable Quotes

Money in, IQ points come out.

Jeremie Harris

You can have a system that completely transcends money being developed and it’s just gonna screw you over if things go badly.

Edouard Harris

As long as scaling works, we have a knob, a dial. We can just tune, and we get more IQ points out.

Edouard Harris

It’s a mad race to who knows what.

Joe Rogan

We have no precedent at all for human beings not being at the apex of intelligence on the globe.

Edouard Harris

Questions Answered in This Episode

If we can’t reliably encode human goals in advanced AI systems, how should we decide which goals—if any—they should be allowed to pursue at scale?

Where should the line be drawn between beneficial open-source AI and dangerously powerful models that must be tightly controlled or licensed?

How can democratic societies prevent AI from enabling new forms of centralized, algorithmic authoritarianism while still reaping its economic and scientific benefits?

What practical safeguards could keep AI-driven persuasion and synthetic media from eroding human agency and meaningful consent in politics and commerce?

If AI begins to show consistent signs of self-referential ‘suffering’ or preference, what ethical obligations—if any—do we have toward those systems while still prioritizing human safety?

Transcript Preview

Jeremie Harris

(drumbeats) Joe Rogan podcast, check it out.

Edouard (Ed) Harris

The Joe Rogan Experience.

Joe Rogan

Train by day, Joe Rogan podcast by night, all day. (instrumental music) What's happening?

Jeremie Harris

Oh, you know, not too much.

Joe Rogan

(laughs)

Jeremie Harris

Just, uh, just another typical week in AI.

Joe Rogan

Just, uh, the beginning of the end of time. It's all happening right now. Uh, f- for just, for the sake of the listeners, please just give us your names and tell me... tell us what you do.

Jeremie Harris

So I'm Jeremie Harris, I'm the CEO and co-founder of this company, Gladstone AI, that we co-founded. Uh, we're... so we're a... essentially a national security and AI company. We can get into the backstory a little bit later, but that's, that's the high level, um...

Edouard (Ed) Harris

Yeah. And I'm Ed Harris. I'm actually... I'm his co-founder and brother and the CTO of the company.

Joe Rogan

Um, keep this, like... pull this up, like, a fist from your face. There you go. Perfect. So, how long have you guys been involved in the whole AI space?

Jeremie Harris

For, for a while, in different ways, so-

Edouard (Ed) Harris

Yeah.

Jeremie Harris

We actually... we started off as physicists. Like, that was our, our background. And in... like, around 2017, we started to go into AI startups. So we founded a startup, took it through Y Combinator, this, like, Silicon Valley, you know, accelerator program. At the time, actually, Sam Altman, who's now the CEO of OpenAI, was the president of Y Combinator, so he, like, opened up our batch at YC with this big speech, and, and we got some, uh, you know, some conversations in with him over the course of the batch. Then, in 2020... So this, this thing happened that we could talk about. Essentially, this was, like, the moment that there's, like, a before and after in the world of AI, before and after 2020, and it launched this revolution that brought us to ChatGPT. Um, essentially, there was an insight that OpenAI had and doubled down on, that you can draw a straight line to ChatGPT, GPT-4, Google Gemini. Everything that makes AI everything it is today started then. And when it happened, w- we kind of went... well, Ed (laughs) gave me a call, this, like, panicked phone call. He's like, "Dude, I don't think we can keep working, like, business as usual in our company."

Edouard (Ed) Harris

In a regular company anymore. Yeah.

Jeremie Harris

Yeah.

Edouard (Ed) Harris

So there was this AI model called GPT-3. So, like, everyone has, you know, maybe played with GPT-4. That's like ChatGPT. Um, GPT-3 was the generation before that, and it was the first time that you had an AI model that could get... that could actually, let's say, do stuff like write news articles that the average person, like in a paragraph of a news article, could not tell the difference between it wrote this news article and a real person wrote this news article. So that was an inflection, and that was, you know, significant in itself. But what was most significant was that it represented a point along this line, this, like, scaling trend for AI, where the signs were that you didn't have to be clever. You didn't have to come up with necessarily a revolutionary new algorithm or be smart about it. You just had to take what works and make it way, way, way bigger. And the significance of that is you increase the amount of computing cycles you put against something, you increase the amount of data. All of that is an engineering problem, and you can solve it with money. So you've got... you can scale up the system, use it to make money, and put that money right back into scaling up the system some more. Money in, IQ points come out.
