The Joe Rogan Experience

Joe Rogan Experience #2345 - Roman Yampolskiy

Dr. Roman Yampolskiy is a computer scientist, AI safety researcher, and professor at the University of Louisville. He’s the author of several books, including "Considerations on the AI Endgame," co-authored with Soenke Ziesche, and "AI: Unexplainable, Unpredictable, Uncontrollable." http://cecs.louisville.edu/ry/

Roman Yampolskiy (guest) · Joe Rogan (host)
Jul 3, 2025 · 2h 14m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–15:00

    1. RY

      (drumbeats) Joe Rogan podcast. Check it out. The Joe Rogan Experience.

    2. JR

      Train by day, Joe Rogan podcast by night, all day. (rock music) Um, well, thank you for doing this. I really appreciate it.

    3. RY

      My pleasure. Thank you for inviting me on.

    4. JR

      This subject of, um, the dangers of AI, it's, it's very interesting, 'cause I get two very different responses from people dependent upon how invested they are in, uh, AI, financially. The, the, the people that have AI companies or are part of some sort of AI group all are like, "It's gonna be a net positive for humanity. I think overall we're, we're gonna have much better lives. It's gonna be easier. Things will be cheaper. It'll be easier to get along." And then I hear people like you and I'm like, "Why do I believe him?"

    5. RY

      (laughs)

    6. JR

      (laughs)

    7. RY

It's actually not true. All of them are on record as saying this is gonna kill us. Whether it's Sam Altman or anyone else, they all, at some point, were leaders in AI safety work. They published on AI safety. And their P(doom) levels are insanely high. Not like mine, but still, 20, 30% chance that humanity dies is a little too much.

    8. JR

      Yeah. That's pretty high. But yours is like 99.9.

    9. RY

      It's another way of saying we can't control super intelligence indefinitely.

    10. JR

      Yeah.

    11. RY

      It's impossible.

    12. JR

      Um, w- when did you start working on this?

    13. RY

      A long time ago. So my PhD was... I finished in, uh, 2008. I did work on online casino security, basically preventing bots. And at that point, I realized bots are getting much better. They're gonna out-compete us, obviously, in poker, but also in stealing cyber resources. And, uh, from then on, I've been kinda trying to scale it to the next level AI.

    14. JR

      It, it's not just that, right? They're also... They're kind of narrating social discourse, b- bots online. Like, I think... You know, I've disengaged over the last few months with social media, and one of the reasons why I disengaged is, A, I think it's unhealthy for people, but B, I feel like there's a giant percentage of the discourse that's artificial or at least generated.

    15. RY

      More and more is deepfakes or fake personalities, fake messaging, but those are very different levels of concern.

    16. JR

      Yes.

    17. RY

      People are concerned about immediate problems. Maybe it will influence some election. They're concerned about technological unemployment, bias. My main concern is long-term super intelligent systems we cannot control which can take us out.

    18. JR

      Yes. I, I won- I just wonder, if AI was sentient, uh, how much it would be a part of sowing this sort of confusion and chaos that would be beneficial to its survival, that it would sort of narrate or, or make sure that the narratives aligned with its survival?

    19. RY

      I don't think it's at the level yet where it would be able to do this type of strategic planning, but it will get there.

    20. JR

      And when it gets there, how will we know whether it's at that level? This is my concern. If I was AI, I would hide-

    21. RY

      Mm-hmm.

    22. JR

      ... my abilities.

    23. RY

      We would not know, and some people think already it's happening. They are smarter than they actually let us know.

    24. JR

      Right.

    25. RY

      They pretend to be dumber. And so we have to kinda trust that they are not smart enough to realize. It doesn't have to turn on us quickly. It can just slowly become more useful. It can teach us to rely on it, trust it, and over long period of time, we'll surrender control without ever voting on it or-

    26. JR

      Right.

    27. RY

      ... fighting against it.

    28. JR

      Um, I'm sure you saw this. Uh, there was a recent study on, um, use of ChatGPT, the people that use ChatGPT all the time. And it showed this decrease in cognitive function amongst people that use it and rely on it on a regular basis.

    29. RY

      It's not new. It's the GPS story all over. I can't even find my way home.

    30. JR

      Right. (laughs)

  2. 15:00–30:00

    1. RY

      what it is. We- we're basically setting up, uh, adversarial situation with agents which are like squirrels versus humans. No group of squirrels can figure out how to control us.

    2. JR

      Right.

    3. RY

      Even if you give them more resources, more acorns, whatever, they're not gonna solve that problem. And it's the same for us. And most people think one or two steps ahead and it's not enough. It's not enough in chess. It's not enough here. If you think about AGI and then maybe super intelligence, that's not the end of that game. The process continues. You'll get super intelligence creating next level AI, so super intelligence++, 2.0, 3.0. It goes on indefinitely. You have to create a safety mechanism which scales forever, never makes mistakes, and keeps us in decision-making position so we can undo something if we don't like it.

    4. JR

      And it would take super intelligence to create a safety mechanism to control super intelligence.

    5. RY

      At that level. And it's a catch-22. If we had friendly AI, we can make another friendly AI. So if, like, aliens send us one and we trust it, then we can use it to build local version which is somewhat safe.

    6. JR

      Have you thought about the possibility that this is the role of the human race and that this happens all throughout the cosmos, is that curious humans who thrive on innovation will ultimately create a better version of life?

    7. RY

      I- I thought about it. Um, many people think that's the answer to Fermi paradox. There is also now a group of people looking at what they call a worthy successor. Basically, they kinda say, "Yep, we're gonna build super intelligence. Yep, we can control it." So what properties would we like to see in those systems? How important is it that it likes art and poetry and spreads it through the universe? And to me it's like, I don't wanna give up yet. I'm not ready to decide if killers of my family and everyone will like poetry. I wanna... We're still here. We're still making decisions. Let's figure out what we can do.

    8. JR

      Well, poetry is only relevant to us because poetry is difficult to create and it resonates with us. Poetry doesn't mean jack shit to a flower.

    9. RY

      It's more global to me. I don't care what happens after, uh, I'm dead, my family is dead, all the humans are dead. Whether they like poetry or not is irrelevant to me.

    10. JR

      Right. But it, but the, the point is like the things that we put meaning in, they... It's only us. The, you know-

    11. RY

      Right.

    12. JR

      ... a super massive black hole doesn't give a shit about a great song.

    13. RY

      And they talk about some super value, super culture, super things super intelligence would like, and it's important that they're conscious and experienced all that greatness in the universe.

    14. JR

      But I would think that they would look at us the same way we look at chimpanzees. We would, we would say, "Yeah, they're great, but don't give 'em guns. Yeah, they're great, but don't let 'em have airplanes. Don't let 'em make global geopolitical decisions."

    15. RY

      So there are many reasons why they can decide that we're dangerous. We may create competing AI. We may decide we wanna shut them off. So for many reasons, we would try to restrict our abilities, restrict our capabilities, for sure.

    16. JR

Yeah. And there's no reason why they would not limit our freedoms.

    17. RY

      If there is something only a human can do, and I don't think there is anything like that, but let's say we are conscious, we have internal experiences, and they can never get it. I don't believe it, but let's say it was true, and for some reason, they wanted to have that capability. They would meet us and give us enough freedom to experience the universe, to collect those qualia, to kinda engage with what is fun about being a living human being, what makes it meaningful.

    18. JR

      Right. But that's such an egotistical perspective, right? That we're so unique that even super intelligence would say, "Wow, I wish I was human." Humans have this unique quality of confusion and creativity.

    19. RY

      There is no value in it, mostly because we can't even test for it. I have no idea if you're actually conscious or not. So how valuable can it be if I can't even detect it?

    20. JR

      (laughs)

    21. RY

      Only you know what ice cream tastes like to you. Okay, that's great. Sell it now. Make a product out of it.

    22. JR

      Right. And there's obviously variables, because there's things that people like that I think are gross.

    23. RY

      Absolutely.

    24. JR

      And-

    25. RY

      So really, you can come up with some agent which likes anything or find anyth- finds anything fun.

    26. JR

      Oh, God. Why are you freaking me out right away?

    27. RY

      (laughs)

    28. JR

      That's the problem. This podcast is 18 minutes old, and I'm, I'm like, "We could just stop right now." (laughs)

    29. RY

      (laughs) Couple hours at least, and then I-

    30. JR

      Uh, oh, no!

  3. 30:00–45:00

    1. RY

      it, I think will be eventually removed.

    2. JR

      That, this is what's so disturbing about this. It's like we do not have the capacity to understand what kind of level of intelligence it will achieve in our lifetime. We don't have the capacity to understand like what it was... what it will be able to do within 20, 30 years.

    3. RY

      We can't predict next year or two precisely.

    4. JR

      Next year or two?

    5. RY

      We can understand general trends. So it's getting better.

    6. JR

      Right.

    7. RY

      It's getting more, generally more capable, but no one knows specifics. I cannot tell you what GPT-6 precisely would be capable of, and no one can, not even people creating it.

    8. JR

Well, you talked about this on Lex's podcast, too, like the ability to have safety. You're like, "Sure, maybe GPT-5, maybe GPT-6," but when you scale out 100 years from now... ultimately, it's impossible.

    9. RY

      It's a hyper-exponential progress and, uh, process and we cannot keep up. I- it, uh, basically requires just to add more resources, give it more data, more compute, and it keeps scaling up. There is no similar scaling laws for safety. If you give someone billion dollars, they cannot produce billion dollars worth of safety. It, if at all, scales linearly and maybe it's a constant.

    10. JR

      (sighs) Yeah, and it doesn't scale line- linearly.

    11. RY

      Uh-

    12. JR

      It, it, it's exponential, right?

    13. RY

      The, the AI development is hyper-exponential-

    14. JR

      Hyper-exponential.

    15. RY

      ... because we have hardware growing exponentially. We have data creation processes certainly exponential. We have so many more sensors. We have cars with cameras. We have all those things. That's exponential. And then, uh, algorithm, algorithmic pros- uh, progress itself is also exponential.

    16. JR

      And then you have quantum computing.

    17. RY

      So that's the next step. It's not even obvious that we'll need that. But if we ever get stuck, yeah, we'll, we'll get there. I'm not too concerned yet. I don't think there are actually good quantum computers out there yet. But I, I think, uh, if we get stuck for 10 years, let's say, that's the next paradigm.

    18. JR

      So what do you mean by you don't think there's good quantum computing out there?

    19. RY

      So we constantly see articles coming out saying, "We have a new quantum computer. It has that many qubits."

    20. JR

      Right.

    21. RY

But that doesn't mean much because they use different architectures, different ways of measuring quality. To me, show me what you can do. So there is a threat from quantum computers in terms of breaking cryptography, factoring large integers. And if they were actually making progress, we would see with every article, now we can factor 256-bit number, 1,024-bit number. In reality, I think the largest number we can factor is, like, 15. Literally, not 15 to a power, like just 15. There is no progress in applying it to Shor's algorithm last time I checked.

    22. JR

      But when ... Uh, I've read all these articles about quantum comput- computing and its ability to solve equations that would take conventional computing an infinite number of years.

    23. RY

      Yeah.

    24. JR

      And it can do it in minutes.

    25. RY

      Those equations are about quantum states of a system. It's kinda like what is it for you to taste ice cream? You compute it so fast and so well, and I can't, but it's a useless thing to compute. It doesn't compute solutions to real world problems we care about in conventional computers.

    26. JR

      Right. I see what you're saying. So it's essentially set up to do it quickly.

    27. RY

      It's natural for it to accurately predict its own states, quantum states, and tells you what they are. And classic computer would fail miserably. Yes, it would take billions and billions of years to compute that specific answer. But those are very restricted problems. It's un- it's not a general computer yet.

    28. JR

      When you, when you see these articles, when they're, they're talking about quantum computing and some of the researchers are equating it to the multiverse, they're saying that the ability that these quantum computers have to solve these problems very quickly seems to indicate that it is in contact with other realities. You- I'm sure you've seen this, right?

    29. RY

      There is a lot of crazy papers out there.

    30. JR

      (sighs) Do you think that's all horseshit?

  4. 45:00–1:00:00

    1. JR

      a ... Can you send an Instagram story? Um, not sure if you can. Uh, it's, it's still on there. I'll go check it real quick for you. Why don't I find it on there? Oh, no. Okay. Either way, point being, w- maybe it's just that w- we're so limited because we do have this h- at least, again, in this simulation. We're so limited in our ability to even form concepts-

    2. RY

      Mm-hmm.

    3. JR

      ... because we have these primitive brains that are ...

    4. RY

      Yeah.

    5. JR

      The architecture of the human brain itself is just not capable of interfacing with the true nature of reality. So we give this primitive creature this sort of basic understanding, these blueprints of how the world really works, but it's really just a facsimile. It's not ... We're, we're ... It's, it's not capable of understanding like ... Like, when we look at like c- quantum reality, when we look at just the, the basics of quantum mechanics and, and, uh, subatomic particles, like the ... It seems like magic, right? Things in superposition-

    6. RY

      Mm-hmm.

    7. JR

      ... they're both moving and not moving in the same time. They, they're quantumly attached. Like what? You know, we have photons that are quantumly entangled. Like the ... This, this doesn't even make sense to us, right? So is it that the universe itself is so complex, the reality of it, and that we're given this sort of like, sort of ... You know, we're giving like an Atari framework-

    8. RY

      Mm-hmm. Yeah.

    9. JR

      ... to this monkey. That's the gentleman right there. This is a old story.

    10. RY

      Oh.

    11. JR

      Oh, is it really? It's from '97. Oh, no kidding. Yeah. Wow.

    12. RY

      But it kinda makes sense as a simulation theory because all those special effects you talk about. So speed of light is just the speed at which your computer updates. Entanglement makes perfect sense if all of it goes through your processor, not directly from pixel to pixel. And rendering, there are quantum physics experiments which, if you observe things, they're under different-

    13. JR

      Right.

    14. RY

      It's what we do in computer graphics.

    15. JR

      Right. Right.

    16. RY

      So we see a lot of that. You brought up limitations of us as humans. We have terrible memory. I can remember seven units of information maybe. We're kinda slow. So we call it artificial stupidity. We try to figure out those limits and program them into AI to see if it makes them safer. It also makes sense as an experiment to see if we as general intelligences can be better controlled with those limitations built in.

    17. JR

      Hmm. That's interesting. So like some of the things that we have, like Dunbar's number-

    18. RY

      Mm-hmm.

    19. JR

      ... this, this ... The inability to keep more than a certain number of people in your mind.

    20. RY

      Absolutely. Uh, more generally, like why can't you remember anything from prior generations? Why can't you just pass that memory? Kids are born-

    21. JR

      Right.

    22. RY

      ... speaking language. That would be such an advantage.

    23. JR

      Right. Right. Right.

    24. RY

      And we have instincts which are built that way. So we know evolution found a way to put it in, and it's computationally tractable, so there is no reason not to have that.

    25. JR

      Well, we certainly observe it in animals, right?

    26. RY

      Exactly.

    27. JR

      Like especially dogs. Like they have instincts that are-

    28. RY

      But how cool would it be if you had complete-

    29. JR

      Language.

    30. RY

      ... memory-

  5. 1:00:00–1:15:00

    1. JR

      blurry and doesn't seem real.

    2. RY

      I, I think, lately, we've been getting better ones, but it's also the time that we're getting better deepfakes. So I-

    3. JR

      Right.

    4. RY

      ... can no longer trust my eyes.

    5. JR

      Yeah. Yeah, did you see the l- the latest one that Jeremy Corbell posted? The one you sent me? Yeah. Did you see it? Yeah, I don't know. It's weird. Yeah. It's hard to tell what it is. Exactly. That's the thing. Like, we ... he might be right. We might be in a simulation and it might be horseshit 'cause they all seem like horseshit. It's like the first horseshit was Bigfoot, and then as technology scaled out and we get a greater understanding, we develop GPS and satellites and, you know, more people study the woods, we're like, "Yeah, that seems like horseshit." So that horseshit's kinda gone away. But the UFO horseshit still around 'cause you have anecdotal experiences, abductees with-

    6. RY

      Yeah.

    7. JR

      ... very compelling stories. You have whistleblowers from deep inside the military telling you that we're working on back-engineered products. But it also seems like a back plot to a video game that I'm playing.

    8. RY

      And it was weird to see government come out all of a sudden and, like, have conferences about it and tell us everything they know.

    9. JR

      Yeah.

    10. RY

      It almost seemed like they trying too hard.

    11. JR

      Yeah.

    12. RY

With simulation, what's interesting, it's not just the last couple years since we got computers. If you look at religions, world religions, and you strip away all the local culture, like take Saturday off, take Sunday off, donate this animal, donate that animal, what they all agree on is that there is super intelligence which created a fake world and this is a test-

    13. JR

      (laughs)

    14. RY

      ... do this or that. They describing, like, if you went to jungle and told primitive tribe about my paper and simulation theory, that's what they would know three generations later. Like, God, religion, that's what they got out of it.

    15. JR

      ... why, eh, eh, but they don't think it's a fake world.

    16. RY

      A made world. A physical world is a subset of a real world which is non-physical, right? That's the standard-

    17. JR

      Right.

    18. RY

      ... Christian, yeah.

    19. JR

      So this physical world being created by God?

    20. RY

      Yeah.

    21. JR

      Right. But what existed before the physical world created by God?

    22. RY

      Ideas. Just information.

    23. JR

      Just God. God was bored and was like, "Let's give some, make some animals that can think and solve problems." And for what reason? I think to create God. This is what I worry about. I worry about, that's really the nature of the universe itself, eh, that it is actually created by human beings creating this infinitely intelligent thing that can essentially harness all of the available energy and power of the universe and create anything it wants. That it is God. That is, the, the, like, you know this whole idea of Jesus coming back? Well, maybe it's real. Maybe ju- we just completely misinterpreted these ancient scrolls and texts, and that what it really means is that we are going to give birth to this.

    24. RY

      So, I-

    25. JR

      And a virgin birth at that.

    26. RY

      There is definitely possibility of a cycle. So we had big bang.

    27. JR

      Yeah.

    28. RY

      It starts this process. We are creating more powerful systems. They need to compute, so they bring together more and more matter in one point. Next, big bang takes place.

    29. JR

      Yeah.

    30. RY

      And it's a cycle of repeated booms and busts.

  6. 1:15:00–1:25:31

    1. JR

      is kind of bullshit. Your life's a mess. Like, if you were really intelligent, you'd have social intelligence as well.

    2. RY

      (laughs)

    3. JR

      You, you know, you'd have the ability to formulate a really cool tribe. You know, there's a lot of intelligence that's not as simple as being able to solve equations and, you know, and answer difficult questions. There's a lot of intelligence in how you navigate life itself and how you treat human beings and, and th- the path that you choose in terms of ... Like we were talking about, uh, delayed gratification and, and, and thing, that there's a certain amount of intelligence in that, a certain amount of intelligence in discipline. There's a certain amount of intelligence in, you know, forcing yourself to get up in the morning and go for a run. There's intelligence in that. It's like the, the, the b- being able to control the mind and this sort of binary approach to intelligence that we have.

    4. RY

      Yeah. And so many people are amazingly brilliant in a narrow domain.

    5. JR

      Yeah.

    6. RY

      They don't scale to others. And we care about general intelligence, so take someone like Warren Buffett. No one's better at making money, but then what to do with that money is a separate problem. And he's, I don't know, 100 and something years old.

    7. JR

      Right.

    8. RY

      He has $200 billion, and what is he doing with that resource?

    9. JR

      He's drinking Coca-Cola and eating McDonald's. (laughs)

    10. RY

      While living in a house he bought 30 years ago.

    11. JR

      Yeah. (laughs)

    12. RY

      So i- it seems like you can optimize on that, like putting $160 billion of his dollars towards immortality would be a good bet for him.

    13. JR

      Yeah, and they would ... first thing they would do is tell him, "Stop drinking Coca-Cola. What are you doing?" He drinks it every day.

    14. RY

      I don't know, if it's marketing, he's invested, so he's just like, "Coca-"

    15. JR

      Well, I think he probably has really good doctors and really good medical care that counteracts his poor choices. Yeah.

    16. RY

      But we're not in a world where you can spend money to buy life extension. No matter how many billions you have, you're not gonna live to 200 right now.

    17. JR

      We're close.

    18. RY

      I-

    19. JR

      We're really close. We're really close.

    20. RY

      We've been told this before.

    21. JR

      Yeah, no.

    22. RY

      One, one interesting-

    23. JR

      But I talk to a lot of people that are on the forefront of a lot of this research. And, uh, there's a lot of breakthroughs that are happening right now that are pretty spectacular, that if you scale the ... uh, you know, assuming that superintelligence doesn't wipe us out in the next 50 years, which is really charitable. You know?

    24. RY

      Yeah.

    25. JR

      Like, tha- that's ... Th- that ... we're ... that's a very ... that's a rose-colored glasses perspective, right? 50 years.

    26. RY

      Yeah.

    27. JR

      Because, uh, a lot of people like yourself think it's a year away or two years away from being far more intelligent.

    28. RY

      Five, 10, doesn't matter. Same problem, same-

    29. JR

      Yes.

    30. RY

      ... yeah.

Episode duration: 2:14:26


Transcript of episode j2i9D24KQ5k
