The Joe Rogan Experience

Joe Rogan Experience #2311 - Jeremie & Edouard Harris

Jeremie Harris is the CEO and Edouard Harris the CTO of Gladstone AI, a company dedicated to promoting the responsible development and adoption of artificial intelligence. https://superintelligence.gladstone.ai/

Joe Rogan (host) · Jeremie Harris (guest) · Edouard Harris (guest)
Apr 25, 2025 · 2h 47m


  1. 0:00–15:00


    1. JR

      (drum music) Joe Rogan podcast, check it out.

    2. NA

      The Joe Rogan Experience.

    3. JR

      Train by day, Joe Rogan podcast by night, all day. (rock music) All right, so if there's a Doomsday Clock for AI, and we're- we're-we're fucked, what- what- what time is it?

    4. JH

      Oh, wow. We're-

    5. JR

      If midnight is, we're fucked.

    6. JH

      We're getting-

    7. EH

      Getting right into it.

    8. JH

      You're- you're not even gonna ask us what we had for breakfast, like what-

    9. JR

      No, no, no, no, no, no, no. (laughs)

    10. JH

      (laughs) Jesus. Okay.

    11. EH

      Let's- (laughs)

    12. JH

      (laughs)

    13. EH

      ... let's get freaked out.

    14. JH

      Well, okay, so- so there's one, um, without speaking, like, (laughs) the- the fucking Doomsday dimension right out the gate-

    15. JR

      (laughs)

    16. JH

      ... there's a question about, like, where are we at in terms of AI capabilities right now, and what do those timelines look like?

    17. JR

      Right.

    18. JH

      There's a bunch of disagreement. Um, one of the most concrete pieces of evidence that we have recently came out of a- a lab, an- an AI kind of evaluation lab called METR. And they put together this- this test. Basically, it's like, you ask the question, um, pick a task that takes a certain amount of time, like an hour, that takes, like, a human a certain amount of time. And then see, like, how likely, uh, the best AI system is to solve for that task. Then try a longer task. See, like, a 10-hour task, can it do that one? And so right now, what they're finding is, um, when it comes to AI research itself, so basically, like, automate the work of an AI researcher, you're hitting 50% success rates for these AI systems for tasks that take an hour long. And that is doubling every, right now, it's, like, every four months.

    19. EH

      S- so-

    20. JR

      Hm.

    21. EH

      ... like, you had tasks that you could do, you know, a person does in five minutes, like, you know, uh, ordering an Uber Eats or, like, something that takes, like, 15 minutes, like maybe booking a flight or something like that. And it's a question of, like, how much can these AI agents do, right? Like from five minutes to 15 minutes to 30 minutes. And in some of these spaces, like research, software engineering.

    22. JR

      Mm-hmm.

    23. EH

      And it's getting further and further and further and doubling, it looks like, every four months. So it's like-

    24. JH

      If you- if you-

    25. EH

      Yeah.

    26. JH

      ... extrapolate that, uh, you basically get to tasks that take a month to complete, like by 2027, tasks that take an AI researcher a month to complete, these systems will be completing with like a 50% success rate. That's where this goes.

    27. EH

      So you'll be able to have an AI on your show and ask it what the Doomsday Clock is like by then.

    28. JR

      Uh, it probably won't laugh. (laughs)

    29. JH

      (laughs)

    30. EH

      (laughs) That's gonna be part of the problem.
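The extrapolation above can be sanity-checked with a quick sketch. The one-hour starting horizon and the four-month doubling period come from the conversation; treating a "one-month" research task as roughly 160 working hours (4 weeks × 40 hours) is an assumption for illustration, not METR's exact figure:

```python
import math

# Back-of-the-envelope check of the "month-long tasks by 2027" claim.
# Assumptions: horizon at 50% success starts at 1 hour and doubles
# every 4 months; a "one-month" task is ~160 working hours.
START_HOURS = 1.0
TARGET_HOURS = 160.0           # ~one month of work (assumed)
DOUBLING_PERIOD_MONTHS = 4.0   # figure quoted in the conversation

doublings = math.log2(TARGET_HOURS / START_HOURS)  # ~7.3 doublings
months = doublings * DOUBLING_PERIOD_MONTHS        # ~29 months

print(f"{doublings:.1f} doublings -> ~{months:.0f} months")
```

Roughly 29 months from early 2025 lands in mid-2027, which is consistent with the timeline the guests describe, under these assumptions.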

  2. 15:00–30:00


    1. EH

      kinda hard. Like, you can- you can outthink a toddler pretty much like any day of the week. And so, uh, superintelligence gets us at these levels where you can potentially do things that are completely different, and basically, you know, new scientific theories. And we ... Last time we talked about, um, you know, new- new stable forms of matter that were being discovered by these kind of narrow systems. But now you're talking about a system that is like ... has that intuition combined with the ability to talk to you as a human, and to just have really good like rapport with you, but can also do math. It can also write code. It can also like solve quantum mechanics and has that all kinda wrapped up in the same package.

    2. JH

      And so one- one of the things too that, uh, by definition, if you build a human-level AI, one of the things it must be able to do as well as humans is AI research itself.

    3. EH

      Yeah.

    4. JH

      Or at least the- the parts of AI research that you can do in just like software, like i- i- you know, in- in- by coding or whatever these- these systems are designed to do. Um, and so- so one implication of that is you now have automated AI researchers. And if you have automated AI researchers, uh, that means you have AI systems that can automate the development of the next level of their own capabilities.

    5. JR

      Right. Right.

    6. JH

      And now you're getting into that whole, you know, singularity thing, where it's an exponential that just builds on itself and builds on itself, which is kind of why, um, you know, i- i- a lot of people argue that, like, if you build human-level AI, superintelligence can't be that far away. You've basically unlocked everything, 'cause-

    7. JR

      And we kind of have gotten very close, right? Like, it's- it's past the- the Fermi ... not the Fermi paradox. The, um, t- uh, what- what is it? The-

    8. JH

      Oh. Uh, yeah, yeah. The- the, um-

    9. JR

      God dammit. I was just talking about him the other day.

    10. JH

      Yeah, the test. The, um ...

    11. EH

      Oh, the Turing test?

    12. JH

      Turing test.

    13. JR

      Turing test.

    14. EH

      Yeah.

    15. JR

      Thank you. We were just talking about how horrible (clears throat) what happened to him was, you know, they, uh-

    16. JH

      Yeah.

    17. JR

      ... chemically castrated-

    18. EH

      Castration.

    19. JR

      ... him because he was gay.

    20. EH

      Yeah.

    21. JR

      Horr- horrific. Winds up kill- killing himself. The- the- the guy who figures out what's the test to figure out whether or not AIs become sentient, and by the way, does this in like, what, 1950s, I think?

    22. JH

      Oh, yeah. Yeah.

    23. JR

      (laughs)

    24. JH

      Alan Turing is a ... Like, he- the guy was a beast, right? I mean-

    25. JR

      But how did he think that through? Like, how did he even know?

    26. EH

      He invented computers. He invented basically the concept that underlies all computers. Like, he was like an absolute beast. He was a code breaker, that he- he broke-

    27. JR

      Yes.

    28. EH

      ... the Nazi codes, right? And the ... Yeah.

    29. JH

      He also wasn't even the first person to come up with this idea of machines building machines and there being implications like human disempowerment. So if you go back to ... I think it was like the late 1800s, and I- I don't remember the guy's name. But he sort of like came up with this ... He was observing the Industrial Revolution and the mechanization of labor, and kind of starting to see ... More and more, like if you zoom out, it- it's almost like you have an- humans are an ant colony, and the artifacts that that colony is producing that are really interesting are these machines.

    30. JR

      Yeah.

  3. 30:00–45:00


    1. JR

      That's an automatic... That's how an automatic mechanical watch works. They figured out a way to, just by the subtle swaying of the building in the wind, that was what was powering this listening device.

    2. JH

      So th- this is the thing, right? Like, the-

    3. JR

      I mean, what the fuck?

    4. EH

      Well, and it was... The things that nation states-

    5. JR

      What's up, Jamie?

    6. Google says that's- that's what was powering this thing, the Great Seal Bug, which I think is the thing.

    7. Really?

    8. So, I don't-

    9. There's another one?

    10. No.

    11. JH

      Oh, this is... So you can actually see in that video, I think there was a YouTube. Yeah, so-

    12. JR

      Same kind of thing, Jamie?

    13. I, look, I was just... I typed in "Russia spy bug building sway"... The Thing is what pops up.

    14. The thing.

    15. Which is what we were just talking about.

    16. Oh, that thing. It- it... So, that's powered the same way by the sway of the building?

    17. I think that's-

    18. EH

      I- I don't see. I don't- I don't...

    19. JH

      I- I think it was powered by radio frequency, um, emission. Uh, so I- there may be another thing related to it. Not- not sure, but-

    20. JR

      Huh.

    21. JH

      ... yeah. The- the only-

    22. JR

      Maybe- maybe Google's a little confused.

    23. Yeah, I don't have it.

    24. Maybe it's the word sway is what's throwing it off.

    25. Sorry, sorry.

    26. JH

      But it- it's, um... No, but it's- it's a great catch and-

    27. JR

      Uh-huh.

    28. JH

      ... the only reason we even know that too is that the- when the U-2s were flying over Russia, they had a U-2 that got shot down in 1960. The Russians go like, "Oh, um, like friggin' Americans like spying on us. What the fuck? I thought we were buddies or..." well, it's the '60s, they obviously didn't think that. But, um, and then the Americans were like, "Uh, okay, bitch. Look at this." And they brought out the- the seal, um, and that's how it became public. It was basically like the response to the Russians saying like, uh, you know-

    29. JR

      Wow.

    30. EH

      Yeah.

  4. 45:00–1:00:00


    1. EH

      you're gonna, like, you're gonna catch and stop, like, levels one through 10.

    2. JR

      Mm-hmm.

    3. EH

      And then you're gonna be like, you're gonna be aware of, like, level 11, 12, 13, like, you're working against it. And you're, you know, may- maybe you're, you're starting to think about level 16. And you, you imagine, like, you know about level 18 or whatever. But they're, like, they're above you, below you, all around you. They're, they're incredibly, incredibly resourced. And this is something that came, like, came, came through very strongly for us.

    4. JR

      W- you guys have seen the Yuri Bezmenov video from 1984, where he's talking about how the, uh, all our educational institutions have been captured by Soviet propaganda.

    5. JH

      By the Long March thing?

    6. JR

      It was talking about Marxism has been injected into school systems, and how you have e- essentially two decades before you're completely-

    7. EH

      Yeah.

    8. JR

      ... captured by these ideologies. And it's gonna permeate and destroy all of your confidence in democracy. And, and he was 100% correct. And this is before these kind of tools.

    9. EH

      Yeah.

    10. JR

      Before... 'Cause, like, the vast majority of those ... The exchanges of information right now are taking place on social media.

    11. EH

      Yeah.

    12. JR

      The vast majority of debating about things, arguing, all taking place on social media. And if that FBI analyst is correct, 80% of it's bullshit.

    13. EH

      Yeah.

    14. JR

      Which is really wild.

    15. JH

      Well, and you look at, like, some of the, the documents that have come out. I think it was, like, um, the, uh ... I think it was the CIA game plan, right? For regime change, or, like, undermining. Like, how do you do it?

    16. JR

      Yeah.

    17. JH

      Right? Have multiple decision-makers at every level.

    18. JR

      Right, right, right.

    19. JH

      Have, you know, all these things. And, like, what a surprise. That's exactly what, like, the US bureaucracy looks like today.

    20. JR

      Exactly.

    21. JH

      It's like-

    22. JR

      Slow everything down, make change impossible.

    23. JH

      Yeah.

    24. JR

      Make it so that everybody gets frustrated with it and they give up hope. Th- they decided to do that to other countries.

    25. JH

      Yeah.

    26. JR

      Like, for sure they do that here.

    27. JH

      Open society, right? I mean, that's-

    28. JR

      Yeah.

    29. JH

      ... that's part of the trade-off. And that's actually a big, big part of the challenge, too. So when, when we're working on this, right? Like, one of the things Ed was talking about, these, like, th- 30 different layers of security access or whatever. One of the consequences is you bump into a team at ... So, so the, like, the teams we ended up working with on this project were folks that we bumped into after the end of our, our last investigation, who kind of were like, "Oh, uh ..."

    30. EH

      We talked about last year, yeah.

  5. 1:00:00–1:08:02


    1. EH

      The- the- really, that's the issue.

    2. JH

      No, 100-

    3. EH

      It's like data nerds get really involved in jujitsu.

    4. JH

      That's true.

    5. EH

      And jujitsu is data.

    6. JH

      Uh, but here's the thing. So- so that's exactly it, right? So if- if I told you, "I bet I can tap you out," right? And-

    7. EH

      I'd be like, "Where have you been training?"

    8. JH

      Well, right. But- and- and you're- if you're like, "Oh, im-" if my answer was, "Oh, I've just read a bunch of books."

    9. EH

      Oh, okay.

    10. JH

      You'd be like, "Oh, cool. Let's go."

    11. EH

      (laughs) Yeah.

    12. JH

      Like, right? 'Cause making contact with reality-

    13. EH

      Right.

    14. JH

      ... is where the fucking learning happens.

    15. EH

      Exactly.

    16. JH

      You can sit there and think all you want-

    17. EH

      Right.

    18. JH

      But unless you've actually played the chess match, unless you've reached out, touched, seen what the re- reaction is, and all this stuff, you don't actually know what you think you know. And that's actually extra dangerous. If you're sitting on a bunch of capabilities and you have this like unearned sense of superiority-

    19. EH

      Right.

    20. JH

      ... 'cause you haven't used those exquisite tools-

    21. EH

      Right.

    22. JH

      ... like it's a challenge.

    23. EH

      And then you've got people that are head of departments, CEOs of corporations, everyone has an ego. We've got it.

    24. JH

      Yeah. Yep.

    25. EH

      And- and this ties into like how exactly how basically the international order and quasi-stability actually gets maintained. So there's like above threshold stuff, which is like you actually do wars for borders, and, you know, there's the potential for nuclear exchange or whatever, like that's like all stuff that can't be hidden, right?

    26. JH

      War games.

    27. EH

      Exactly, like all the war games type shit. But then there's below threshold stuff, the stuff that's like you're- it's- it's always like the stuff that's like, "Hey, I'm gonna try to like poke you. Are you gonna react? What- what are you gonna do?" And then if- if you do nothing here, then I go like, "Okay, what's the next level? I can poke you. I can poke you." 'Cause like one of the things that- that we almost have an intuition for that's- that's mistaken, that comes from kind of historical experience, is like this idea that, you know, that countries can actually really defend their citizens in a meaningful way. So, like if you think back to World War I, the most sophisticated advanced nation states on the planet-... could not get past a line of dudes in a trench. Like, that was like, that was the m- and they tried, like, thing after thing. "Let's try tanks, let's try aircraft, let's try fucking hot air balloons, infiltration t-" And it literally, like, one side pretty much just ran out of dudes in that end of the war to put in their trench. And so we have this thought that like, oh, you know, countries can actually put, put boundaries around themselves and actually ... But the reality is, you can, you c- there's so many surfaces. The surface area for attacks is just too great. And so there's, there's stuff like, you can actually, like, um, there's the, the Havana Syndrome stuff, where-

    28. JR

      Hmm.

    29. EH

      ... you look at this, like, ratcheting escalation, like, "Oh, let's, like, fry a couple of, uh, embassy staff's brains in Havana, Cuba." What are they gonna do about it? Nothing? Okay. Let's move on to Vienna, Austria, something a little bit more Western, a little bit more orderly, let's see what they do there. Still nothing. Okay. What if we move on to frying, like, Americans' brains on US soil, baby? And they, and they went and did that. And so this is one of these things where, like, stability in reality in the world is not maintained through defense, but it's literally like, you have, like, the Crips and the Bloods with different territories, and it, it's stable, and it looks quiet. But the reason is that if you, like, beat the shit out of one of my, one of my guys for no good reason, I'm just gonna find one of your guys and I'll blow his fucking head off. And that keeps peace and stability on the surface, but that's the reality of sub-threshold competition between nation states. It's like, you come in and, like, fuck with my boys, I'm gonna fuck with your boys right back.

    30. JR

      And y-



Transcript of episode Wwp1BFEw3cA
