The Joe Rogan Experience

Joe Rogan Experience #2117 - Ray Kurzweil

Ray Kurzweil is a scientist, futurist, and Principal Researcher and AI Visionary at Google. He's the author of numerous books, including the forthcoming title "The Singularity is Nearer." Look for it on June 25, 2024. www.thekurzweillibrary.com

Ray Kurzweil (guest) · Joe Rogan (host)
Mar 12, 2024 · 2h 3m

EVERY SPOKEN WORD

  1. 0:00–15:00

    1. NA

      (drumbeats) Joe Rogan podcast, check it out.

    2. RK

      The Joe Rogan Experience.

    3. JR

      Train by day, Joe Rogan podcast by night, all day. (instrumental music) Good to see you, sir.

    4. RK

      Great to see you.

    5. JR

      I was sta- telling you before, I'm admiring your suspenders, and you told me you have how many pairs of these things?

    6. RK

      30 of them, yeah.

    7. JR

      How did you-

    8. RK

      I wear them every day.

    9. JR

      Do you really?

    10. RK

      Yeah.

    11. JR

      Every day?

    12. RK

      Yeah.

    13. JR

      Why, why do you like suspenders?

    14. RK

      Um...

    15. JR

      Practicality thing?

    16. RK

      No, it's, uh... expresses my personality.

    17. JR

      Mm.

    18. RK

      And different ones have different, uh... different personalities that express how I feel that day, so.

    19. JR

      I see. So, it's just another style point.

    20. RK

      Yeah.

    21. JR

      See, the reason why I was asking-

    22. RK

      But, but you don't see any, uh, hand-painted suspenders. Have you ever seen one?

    23. JR

      Uh, I don't know and I would've not noticed. I only noticed-

    24. RK

      Hm.

    25. JR

      ... 'cause you were here (laughs). I'm not really a suspender aficionado.

    26. RK

      Yeah, well-

    27. JR

      But the reason why I'm asking is 'cause you're, you know, basically a technologist. I mean, you know a lot about technology. And you would think that suspenders are kinda outdated tech. (laughs)

    28. RK

      Uh... Well, people like them.

    29. JR

      Clearly.

    30. RK

      Yeah. And I'm surprised they haven't caught on.

  2. 15:00–30:00

    1. RK

      to happen, like, five years ago.

    2. JR

      Right.

    3. RK

      And we had them two years ago, but they didn't work very well. So it began little less than two years ago that we could actually do large language models. Uh, and, and that was very much a surprise to everybody. Uh, so that, th- that's probably the primary example of exponential growth.

    4. JR

      We had Sam Altman on. One of the things that he and I were talking about was that AI figured out a way to lie, that they used AI to go through a CAPTCHA system, and the AI told the system that it was vision-impaired, which is not technically a lie, but it used it to bypass-

    5. RK

      Well-

    6. JR

      ... are you a robot?

    7. RK

      What we don't know now is f- for large language models to say they don't know something. So you ask it a question, and if that, the answer to that question is not in the system, it still comes up with an answer. So it'll look at everything and give you its best answer. And if the, the best answer is not there, it still gives you an answer, but that's, uh, considered a h- hallucination. And we know-

    8. JR

      A hallucination?

    9. RK

      Yeah, that's what it's called.

    10. JR

      Really?

    11. RK

      So-

    12. JR

      A AI hallucination? So they cannot be wrong. They have to be able to answer things.

    13. RK

      So far, we're, we're actually working on being able to tell if it doesn't know something. So if you ask it something, it says, "Oh, I, I don't know that." Right now, it can't do that.

    14. JR

      Oh, wow. That's interesting.

    15. RK

      So it, it gives you some answer. Um, and if the answer's not there, it just, like, makes something up. It's the best answer, but the best answer isn't very good-

    16. JR

      Mm-hmm.

    17. RK

      ... 'cause it doesn't know the answer. And the way to fix hallucinations is to actually give it more capabilities to memorize things and, uh, and give it more information so it knows the answer to it. If you, if you tell, uh, an answer to a question, it will remember that and give you that correct answer. Um, but these models are not... we don't know everything. And it, it has to... we have to be able to scan an answer to every single question, uh, which we can't quite do. And it'd be actually better if it could actually answer, "Well, gee, I don't know that."

    18. JR

      Right. Like, uh, and particularly, like, say when it comes to, um, exploration of the universe, if there's a certain amount of, I mean, vast amount of the universe we have not explored. So if it has to answer questions about that, it would just come up with an answer.

    19. RK

      Right. And it, and it's, right, it'll just come up with an answer-

    20. JR

      Interesting.

    21. RK

      ... which will likely be wrong.

    22. JR

      Hmm, that's interesting. But that, that would be a real problem if someone was counting on the AI to have a solution for something too soon, right?

    23. RK

      Right. They, they don't know everything. Uh, search engines actually know, are pretty well vetted. And if it actually answers something, it'll, i- it's usually correct. Um...

    24. JR

      Unless it's curated.

    25. RK

      But large language models don't have that capability. Uh, so it'd be good actually if they knew that, that they were wrong. That also tells...... what we have to fix.

    26. JR

      What about the, the idea that A- AI models are influenced by ideology, that AI models have been programmed with certain ideologies?

    27. RK

      I mean, they do learn from people.

    28. JR

      Yeah.

    29. RK

      And people have ideologies.

    30. JR

      Right.

  3. 30:00–45:00

    1. RK

      that, that's something that's positive, really. Um, I mean, if, if we were like mice today, um, and we had the opportunity to become like humans, w- we wouldn't object to that. In fact, we are humans, and we don't object to that.

    2. JR

      We used to be shrews.

    3. RK

      (laughs) Um, and this is gonna basically make us smarter. Uh, eventually, we'll be much smarter than we are today. And, uh, and that's a positive thing. We'll be able to do things that are t- today that we find bothersome, uh, in a way that's much more palatable.

    4. JR

      The idea of us getting smarter sounds great. Great. It'd be great to be smarter. But-

    5. RK

      Right. But people object to that-

    6. JR

      ... the concerns-

    7. RK

      ... because it's, uh, it's like competition.

    8. JR

      Hmm? In what way?

    9. RK

      Well, I mean, Google has, I don't know, 60,000, 70,000 programmers? And how many programmers, uh, exist in the world? How, how much longer is that gonna be a viable career?

    10. JR

      Hmm.

    11. RK

      Uh, because, uh, large-

    12. JR

      The AI program.

    13. RK

      ... language models already can code.

    14. JR

      Yeah.

    15. RK

      Not quite as good as, uh, a real expert coder, uh, but how, how long is that gonna be?

    16. JR

      Right.

    17. RK

      It's not g- it's not gonna be 100 years. It's gonna be a, a few years. Um, so people see it as competition. I have a slightly different view of that. I see these things, uh, as actually adding to our own intelligence, and we're merging with these kinds of computers and making ourselves smarter by merging with it, and eventually, it'll go inside our brain and be able to make us smarter i- instantly, uh, just like we had more connections inside our own brain.

    18. JR

      Well, I think people have reservations always when it comes to great change, and this is probably the greatest change. The, the greatest change we've ever experienced in our lifetimes, for sure, has been the internet. And this will make that look like nothing. It'll, it'll, it'll change everything, and it seems inevitable. Um, I understand that people are upset about it, but it just seems like what human beings were sort of designed to do.

    19. RK

      Right. We're the only animal that actually creates technology.

    20. JR

      Yeah.

    21. RK

      And it's a combination of our brain and something else, which is our thumb. So I c- I can imagine something. Oh, if I take that-

    22. JR

      You can manipulate it, yeah.

    23. RK

      ... leaf from the tree, I could create a, uh, a tool with it. Uh, other animals have actually a bigger brain, like the whale.

    24. JR

      Dolphins.

    25. RK

      Uh, dolphins, um, elephants. They have a larger brain than we do, but they don't have something equivalent to the thumb.

    26. JR

      Right.

    27. RK

      Monkey has a thing that looks like the thumb, but it's actually an inch down, and it doesn't actually work very well. So they can actually create a tool, but they don't create a tool that's powerful enough to create the next tool.

    28. JR

      Hmm.

    29. RK

      So we're ac- actually able to cr- use our tools and create something that's that much more significant. Um, so we can create tools, and that's really part of who we are. Um, it, it makes us that much more intelligent, and that's a good thing. Uh, I mean, here's... So here's US personal income per capita. So this is the average amount that we make, uh, per person in constant dollars, and it's just-

    30. JR

      There it is right here. It's on the screen.

  4. 45:00–1:00:00

    2. RK

      Uh, I mean, we'll be able to create, I mean, the singularity is when we multiply our intelligence a million-fold, and that's 2045. So that's not that long from now. That's like 20 years from now.

    3. JR

      Right.

    4. RK

      Um, uh, and therefore most of your int- intelligence will be, uh, handled by the computer part of ourselves. Um, the only thing that won't be c- captured is what comes with our body originally. We'll ultimately be able to do that as well. It'll take a little longer, but we'll be able to actually capture what comes with our normal body, uh, and be able to re- recreate that. So, that also has to do with, uh, h- how long we live because if, if everything is backed up... I mean, right now, any time you put anything into a phone or any kind of electronics, it's backed up. So, I mean, I could loo- this has a lot of data. I could flip it a- and it ends up in, uh, a river and we can't capture it anymore. I can recreate it 'cause it's all backed up.

    5. JR

      Right. And you think that's gonna be the case with consciousness?

    6. RK

      Th- that's gonna be the case of our normal, uh, biological body as well.

    7. JR

      What's to stop someone like Donald Trump from just making 100,000 versions of himself? Like, if you can back someone up, could you duplicate it? Couldn't you have three or four of them? Couldn't you have a bunch of them? Couldn't you live multiple lives?

    8. RK

      Yes, um, uh-

    9. JR

      Would you be interacting with each other while you're living multiple lives, having consultations about, "What is St. Louis Ray doing? Oh, I don't know, let's talk to San Francisco Ray. San Francisco Ray is talking to Florida Ray."

    10. RK

      Uh, it, it's basically a matter of increasing our intelligence and being able to multiply Donald Trump, for example. That, that comes with that.

    11. JR

      Do you think there'll be regulations on that to stop people from making 100,000 versions of themselves that operate a city?

    12. RK

      Th- there'll be lots of regulations. There's lots of regulations we have already. You can't just, like, create a medication and sell it to people that it cures this disease.

    13. JR

      Right.

    14. RK

      We have tremendous nu- amount of regulation on that.

    15. JR

      Sure, but we don't, really, with phones.

    16. RK

      Yeah.

    17. JR

      Like, with your phone, you could, uh, essentially, if you had the money, you could make as many copies of that as you wanted.

    18. RK

      Yes. Uh, um, there are some regulations. We, we have, we regulate everything, but...

    19. JR

      Yeah.

    20. RK

      But you're right. Generally, electronics is, uh, doesn't have as much regulation as-

    21. JR

      Right. And when you get to a certain point, we will be electronics.

    22. RK

      Yes. Yes, I mean, certainly if we multiply our intelligence a million-fold, everything of that additional million-fold of yours is, is not regulated.

    23. JR

      Right. When you think about the, the concept of integration and technological integration, when do you think that will start taking place, and what will be the initial usage of it? Like, what will be the first versions, and, and what would, what would they provide that-

    24. RK

      Well, we, we have it now. Large language models are pretty impressive. And if you look at what they can do-

    25. JR

      But I mean, I mean, I'm talking about physical integration with the human body, like a Neuralink type thing.

    26. RK

      Right. Some people feel that we could actually understand what's going on in your brain and actually put things into your brain without actually going into the brain, uh, with something like Neuralink.

    27. JR

      So something that, like, sits on the outside of your head?

    28. RK

      Yeah. Uh, it's not clear to me tha- if that's feasible or not. I've, I've been assuming that you ac- have to actually go in. Now, Neuralink isn't exactly where we want because it's too slow, uh, and it actually will do what it's advertised to do, like if... I actually know some people like this who were active people and they completely lost the ability, uh, to speak and to understand language and so on, um, and so they can't actually say anything to you. Um, and we can use something like Neuralink to actually, uh, have them express something. They could think something and then have it be expressed to you.

    29. JR

      Right. And they're doing that, right? They had the first patient, the first patient that was-

    30. RK

      Yeah.

  5. 1:00:00–1:15:00

    1. RK

      two atomic weapons within a week, uh, 80 years ago, what, what's the likelihood that we're gonna go another 80 years, uh, and not, and not have that happen again? Everybody would say zero.

    2. JR

      Right. Right.

    3. RK

      But it actually has happened.

    4. JR

      Shockingly.

    5. RK

      Yeah.

    6. JR

      Yeah.

    7. RK

      Uh, and I think there's actually some message there. Um...

    8. JR

      Mutually assured destruction. But the thing is, would-

    9. RK

      But, but, but-

    10. JR

      ... Artificial General Intelligence...

    11. RK

      But that, but that has not happened.

    12. JR

      Right. It has not happened yet. But would artificial general intelligence, in the control of the wrong people, negate that mutually assured destruction that keeps people from doing things? Obviously, we did drop bombs on Hiroshima and Nagasaki.

    13. RK

      Right.

    14. JR

      We did. We did indiscriminately kill, who knows how many hundreds of thousands of people with those weapons. We did it. And if human beings were capable of doing it because no one else had it, if artificial general intelligence reaches that sentient level and is in control of the wrong people, what's to stop them from doing... Th- th- there's no mutually assured destruction if you're the one who's got it. You're the only one who's got it, and you possibly... My concern is that whoever gets it could possibly stop it from being spread everywhere else, and, and control it completely, and then you're looking at a completely dystopian world.

    15. RK

      Right. So that's... If you ask me what I'm concerned about, it, it, it's along those lines.

    16. JR

      It's that. Along those lines.

    17. RK

      (laughs)

    18. JR

      Yeah. That's what... 'Cause that's what I always want to get out of you guys, because there's so many people that are, um, rightfully so, so high on this technology and the possibilities for enhancing our lives. But, uh, the concern that a lot of people have is that at what cost and what are we signing up for?

    19. RK

      Right. But, uh, I mean, if we wanna, for example, live indefinitely, this is what we need to do. We, we can't do... We can't-

    20. JR

      What if you're denying yourself heaven? Have you ever thought of that possibility? I know that's a ridiculous abstract concept, but if heaven is real, if the idea of the afterlife is real, and it's, uh, the next level of existence, and you're constantly going through these cycles of life, what if you're stepping in, artificially denying that?

    21. RK

      That's hard to imagine. I mean-

    22. JR

      It is hard to imagine, but so is life.

    23. RK

      I-

    24. JR

      So is the universe itself. So is the-

    25. RK

      Right.

    26. JR

      ... Big Bang.

    27. RK

      My, my father-

    28. JR

      So is black holes.

    29. RK

      My father died when I was 22, uh, so it's more than 50, 60 years ago. Um, and, uh, it's hard f- And he was actually a great musician, and he great, created, uh, fantastic music, but he hasn't done that since he died. Um, and there's nothing that exists, uh, that is at all creative, um, based on him. We have his memories. Uh, I actually created a large language model that represented him. I can actually talk to him.

    30. JR

      You do that now?

  6. 1:15:00–1:30:00

    1. RK

      Can they actually... I mean, they can talk.

    2. JR

      They certainly do, but would you wanna be one?

    3. RK

      Uh, are we different than that, than that?

    4. JR

      Yeah, we're people. We shake hands. I give you a hug. You pet my dog. You listen to music. You have, you have-

    5. RK

      Yeah, we'll be able to do all of that as well.

    6. JR

      Right, but will you want to? Will you even care? The thing is, like, a lot of what gives us joy in life is biological motivations. There's human reward systems that are put in place that allow us to-

    7. RK

      Well, it's gonna be part of who we are.

    8. JR

      Right.

    9. RK

      It'll be just like-

    10. JR

      But that-

    11. RK

      ... a person, and we'll also have our physical bodies as well, and that'll also be able to be backed up. And we'll be doing the things that we do now, except we'll be able to have them continue.

    12. JR

      So if you get hit by a car and you die, there's another Ray that just pops up. "Oh, we got the back-up Ray." And the back-up Ray will have no feelings at all about how it had died and come back to life?

    13. RK

      Well, that's a question. Uh-

    14. JR

      Yeah.

    15. RK

      ... I mean, uh, I mean, why wouldn't it be just like Ray is now?

    16. JR

      It... Why wouldn't it? If it gets to a certain p- if we figure out that if, if biological life is essentially some... a kind of technology that the universe has created, and we can manipulate that to the point where we understand it, we get it, we, we've, uh, we've optimized it and then replicate it. Physically replicate it. Not just, not just replicate it in form of, you know, uh, in a, uh, computer, but an actual physical being.

    17. RK

      Right. Well, that's where we're headed.

    18. JR

      Do you anticipate that people will be happy with whatever they have? 'Cause, uh, if you decide, "I don't like being 5'6". I wish I was 6'6". I don't like being a woman. I like... I wanna be a man. I don't wanna be, uh, Asian. I wanna be," you know, whatever. "I wanna be a Black person. I wanna be..."

    19. RK

      Uh, we'll actually be able to do all of those things, uh, simultaneously and so on. We're not gonna be limited by those kinds of-

    20. JR

      Right.

    21. RK

      ... happen- happenstance.

    22. JR

      Which is gonna be very strange. Like, what will human beings look like if you give people the ability to manipulate your physical form?

    23. RK

      Well, we do things now that, uh, were impossible even 10 years ago.

    24. JR

      We certainly do, but we don't change races, size, sex, gender, height. We don't, we don't do all those... And the, the radical increase in just your intelligence, like, what is that going to look like? What, what kind of an interaction is it gonna be between two human beings when you have a completely new form? You know, you're, you're much different physically than you ever were when you were alive. You're, you're taller, you're stronger, you're smarter, you're faster, you're-

    25. RK

      Well-

    26. JR

      ... you're basically not really a human anymore. You're a new thing.

    27. RK

      Uh, I mean, we're expanding who we are. We already expanded who we are from, you know...

    28. JR

      Sure. Right.

    29. RK

      Uh...

    30. JR

      Over a course of hundreds of thousands of years-

  7. 1:30:00–1:31:40

    1. JR

      by mere human creations, creativity, all of these different things, why would it even have the ambition to do any sort of galaxy-wide engineering? Why would it want to?

    2. RK

      Um, because it's based on us, I mean.

    3. JR

      It is based on us until it decides it's not based on us anymore. That's my point. If it realized that like if we're based on a w- a very violent chimpanzee and we say, "You know what? There's a lot of what we are because of our genetics that really are a problem and this is what's causing all of our violence, all of our crime, all of our war," if we just step in and put a stop to all that, will we also-

    4. RK

      Well, I- I would-

    5. JR

      ... put a stop to our ambition?

    6. RK

      I would maintain that we're actually moving away from that and the s- s-

    7. JR

      We are moving away from that.

    8. RK

      Yeah.

    9. JR

      But- but that's just natural, right? That's natural with our more- uh, our understanding and our-... mitigations of these social problems.

    10. RK

      Right. So if we expand that even more, we'll be even more in that direction.

    11. JR

      As long as we're still we. But as soon as you become something different, why would it even have the desire to expand? If it was infinitely intelligent, why would it even wanna physically go anywhere? Why would it want to? What's the reason for our reason- uh, our, our, our motivation to expand? What is it? It's human. These are like, the same humans that were tribal creatures that roamed, the same humans that stole resources from neighboring villages. This is our genes, right? This is what made us, that got us to this point. If we create a sentient artificial intelligence that's far superior to us, and it can create its own version of artificial intelligence, the first thing it's gonna engineer out is all these stupid emotions that get us in trouble.

Episode duration: 2:03:02

Transcript of episode w4vrOUau2iY
