The Diary of a CEO | Dr. Roman Yampolskiy: Why AGI safety has no clean fix
How AI capability is racing past safety research while labs keep scaling; Yampolskiy on AGI by 2027, humanoid robots soon after, and 99% unemployment.
EVERY SPOKEN WORD
150 min read · 30,120 words
- 0:00 – 2:28
Intro
- Steven Bartlett
You've been working on AI safety for two decades at least.
- Dr. Roman Yampolskiy
Yeah. I was convinced we can make safe AI, but the more I looked at it, the more I realized it's not something we can actually do.
- Steven Bartlett
You have made a series of predictions about a variety of different dates. So, what is your prediction for 2027?
- Dr. Roman Yampolskiy
(sighs)
- Steven Bartlett
Dr. Roman Yampolskiy is a globally recognized voice on AI safety, and associate professor of computer science. He educates people on the terrifying truth of AI... And what we need to do to save humanity.
- Dr. Roman Yampolskiy
In two years, the capability to replace most humans and most occupations will come very quickly. And then in five years, we're looking at a world where we have levels of unemployment we've never seen before. Not talking about 10%, but 99%.
- Narrator
(beep)
- Dr. Roman Yampolskiy
And that's without super intelligence, a system smarter than all humans in all domains. So, it would be better than us at making new AI, but it's worse than that. We don't know how to make them safe, and yet we still have the smartest people in the world competing to win the race to super intelligence.
- Steven Bartlett
But what do you make of people like Sam Altman's journey with AI?
- Dr. Roman Yampolskiy
So, a decade ago we published guardrails for how to do AI right. They violated every single one, and he's gambling eight billion lives on getting richer and more powerful. So, I guess some people want to go to Mars, others want to control the universe. But it doesn't matter who builds it. The moment you switch to super intelligence, we will most likely regret it terribly.
- Steven Bartlett
And then by 2045...
- Dr. Roman Yampolskiy
Now, this is where it gets interesting.
- Steven Bartlett
Dr. Roman Yampolskiy. Let's talk about simulation theory.
- Dr. Roman Yampolskiy
I think we are in one, and there is a lot of agreement on this. And this is what you should be doing in it, so we don't shut it down. First...
- Steven Bartlett
I see messages all the time in the comments section that some of you didn't realize you didn't subscribe, so if you could do me a favor and double-check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, it's the free thing that anybody that watches this show frequently can do to help us here to keep everything going in this show and the trajectory it's on. So, please do double-check if you've subscribed, and, uh, thank you so much. Because in a strange way, you are- you're part of our history, and you're on this journey with us, and I appreciate you for that. So, yeah, thank you. Dr. Roman Yampolskiy. What is the mission that you're currently on, 'cause it's quite clear to me that you are on a bit of a mission, and you've been on this mission for, I think, the best part of two decades at least.
- Dr. Roman Yampolskiy
I'm hoping to make
- 2:28 – 4:35
How to Stop AI From Killing Everyone
- Dr. Roman Yampolskiy
sure that super intelligence we're creating right now does not kill everyone.
- Steven Bartlett
Give me some, give me some context on that statement, 'cause it's quite a shocking statement.
- Dr. Roman Yampolskiy
Sure. So, in the last decade, we actually figured out how to make artificial intelligence better. Turns out if you add more compute, more data, it just kind of becomes smarter. And so now, smartest people in the world, billions of dollars, all going to create the best possible super intelligence we can. Unfortunately, while we know how to make those systems much more capable, we don't know how to make them safe. How to make sure they don't do something we will regret. And that's the state of the art right now. When we look at just prediction markets, how soon will we get to advanced AI? The timelines are very short, couple of years, two, three years, according to prediction markets, according to CEOs of top labs. And at the same time, we don't know how to make sure that those systems are aligned with our preferences. So, we're creating this alien intelligence. If aliens were coming to Earth and you, you had three years to prepare, you would be panicking right now. But most people don't, don't even realize this is happening.
- Steven Bartlett
So, some of the counterarguments might be, "Well, these are very, very smart people. These are very big companies with lots of money. They have a obligation and, a moral obligation, but also just, uh, uh, a legal obligation to make sure they do no harm, so I'm sure it'll be fine."
- Dr. Roman Yampolskiy
The only obligation they have is to make money for their investors. That's the legal obligation they have. They have no moral or ethical obligations. Also, according to them, they don't know how to do it yet. The state of the art answers are, "We'll figure it out when we get there," or, "AI will help us control more advanced AI." That's insane.
- Steven Bartlett
In terms
- 4:35 – 4:57
What's the Probability Something Goes Wrong?
- Steven Bartlett
of probability, what do you think is the probability that something goes catastrophically wrong?
- Dr. Roman Yampolskiy
So, nobody can tell you for sure what's going to happen. But if you're not in charge, you're not controlling it, you will not get outcomes you want. The space of possibilities is almost infinite. The space of outcomes we will like is tiny.
- Steven Bartlett
And
- 4:57 – 8:15
How Long Have You Been Working on AI Safety?
- Steven Bartlett
who are you and how long have you been working on this?
- Dr. Roman Yampolskiy
I'm a computer scientist by training. I have a PhD in computer science and engineering. I probably started work in AI safety, mildly defined as control of bots at the time, uh, 15 years ago.
- Steven Bartlett
15 years ago? So you've been working on AI safety before it was cool?
- Dr. Roman Yampolskiy
Before the term existed. I coined the term "AI safety."
- Steven Bartlett
So you're the founder of the term "AI safety"?
- Dr. Roman Yampolskiy
The term, yes. Not the field. There are other people who did brilliant work before I got there.
- Steven Bartlett
Why were you thinking about this 15 years ago? Because most people have only been talking about the term "AI safety" for the last two or three years.
- Dr. Roman Yampolskiy
Yeah, it started very mildly, just as a security project. I was looking at poker bots, and I realized that the bots are getting better and better. And if you just project this forward enough, they're gonna get better than us. Smarter, more capable. And it happened. They are playing poker way better than average players. But more generally, it will happen with all other domains. All the other cyber resources. I wanted to make sure AI is a technology which was beneficial for everyone, so I started work on making AI safer.
- Steven Bartlett
Was there a particular moment in your career where you thought, "Oh my god."?
- Dr. Roman Yampolskiy
First five years at least, I was working on solving this problem. I was convinced we can make this happen, we can make safe AI, and that was the goal. But the more I looked at it, the more I realized every single component of that equation is not something we can actually do. And the more you zoom in, it's like a fractal. You go in and you find 10 more problems, and then 100 more problems. And all of them are not just difficult, they're impossible to solve. There is no seminal work in this field where like, we solved this, we don't have to worry about this. There are patches, there are little fixes we put in place, and quickly people find ways to work around them, they jailbreak whatever safety mechanisms we have. So, while progress in AI capabilities is exponential or maybe even hyper-exponential, progress in AI safety is linear or constant. The gap is increasing.
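[Editor's note: the exponential-versus-linear claim above can be made concrete with a toy calculation. This is only a sketch; the growth rates and starting values are illustrative assumptions, not figures from the interview.]

```python
# Toy model of Yampolskiy's claim: if capability grows exponentially
# while safety progress grows linearly, the gap between them widens
# over time for essentially any choice of constants.

def capability(year, base=1.0, rate=2.0):
    # Hypothetical capability score that doubles each year
    return base * rate ** year

def safety(year, base=1.0, step=1.0):
    # Hypothetical safety score that improves by a fixed step each year
    return base + step * year

gaps = [capability(y) - safety(y) for y in range(6)]
print(gaps)  # the gap never shrinks, and grows quickly after year 2
```

With these assumed constants the gap goes 0, 0, 1, 4, 11, 26 over six years; only the widening trend, not the numbers, is the point being illustrated.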
- Steven Bartlett
The gap between the rate-
- Dr. Roman Yampolskiy
How capable the systems are and how well we can control them, predict what they're gonna do, explain their decision-making.
- Steven Bartlett
I think this is quite an important point because y- you said that we're basically patching over the issues that we find. So, we're developing this, this core intelligence, and then to stop it doing things or to stop it showing some of its unpredictability or its threats, the companies that are developing this AI are programming in code over the top to say, "Okay, don't swear. Don't say that rude word. Don't do that bad thing."
- Dr. Roman Yampolskiy
Exactly, and you can look at other examples of that. So, HR manuals, right? We have those humans, they're general intelligences, but you want them to behave in a company. So they have a policy, no sexual harassment, no this, no that. But if you're smart enough you always find a workaround, so you're just pushing behavior into a different, not yet restricted subdomain.
- Steven Bartlett
We,
- 8:15 – 9:54
What Is AI?
- Steven Bartlett
we should probably define some terms here. So there's narrow intelligence which can play chess or whatever. There's artificial general intelligence which can operate across domains, and then superintelligence which is smarter than all humans in all domains. And where are we?
- Dr. Roman Yampolskiy
So that's a very fuzzy boundary, right? We definitely have many excellent narrow systems. No question about it, and they are super intelligent in that narrow domain. So, uh, protein folding is a problem which was solved using narrow AI, and it's superior to all humans in that domain. In terms of AGI, again, I said if we showed what we have today to a scientist from 20 years ago, they would be convinced we have full-blown AGI. We have systems which can learn, they can perform in hundreds of domains and be better than human in many of them.
- Steven Bartlett
Mm-hmm.
- Dr. Roman Yampolskiy
So, you can argue we have a weak version of AGI. Now, we don't have superintelligence yet. We still have brilliant humans who are completely dominating AI, especially in science and engineering, but that gap is closing so fast. You can see, especially in the domain of mathematics. Three years ago, large language models couldn't do basic algebra, multiplying three-digit numbers was a challenge. Now, they're helping with mathematical proofs, they're winning mathematics Olympiads, competitions. They are working on solving Millennium Prize Problems, the hardest problems in mathematics. So in three years, we closed the gap from subhuman performance to better than most mathematicians in the world, and we see the same process happening in science and engineering.
- 9:54 – 11:38
Prediction for 2027
- Steven Bartlett
You have made a series of predictions, and they correspond to a variety of different dates, and I have those dates in front of me here. What is your prediction for the year 2027?
- Dr. Roman Yampolskiy
We're probably looking at AGI, as predicted by prediction markets and the heads of top labs.
- Steven Bartlett
So we'd have artificial general intelligence by 2027? And how would that make the world different to how it is now?
- Dr. Roman Yampolskiy
So, if you have this concept of a drop-in employee, you have free labor, physical and cognitive, trillions of dollars of it, it makes no sense to hire humans for most jobs. If I can just get, you know, a $20 subscription or free model to do what an employee does, first anything on a computer will be automated, and next, I think humanoid robots are maybe five years behind. So, in five years, all the physical labor can also be automated. So, we're looking at a world where we have levels of unemployment we've never seen before. Not talking about 10% unemployment, which is scary, but 99%. All you have left is jobs where, for whatever reason, you prefer another human would do it for you. But anything else can be fully automated. It doesn't mean it will be automated in practice. A lot of times technology exists, but it's not deployed. Video phones were invented in the '70s. Nobody had them until iPhones came around. So, we may have a lot more time with jobs and with world which looks like this-
- Steven Bartlett
Mm-hmm.
- Dr. Roman Yampolskiy
... but capability to replace most humans in most occupations will come very quickly.
- 11:38 – 14:27
What Jobs Will Actually Exist?
- Steven Bartlett
Hmm. Okay, so let's try and drill down into that and, and stress test it. So, a podcaster like me, would you need a podcaster like me?
- Dr. Roman Yampolskiy
So let's look at what you do.
- Steven Bartlett
Yeah.
- Dr. Roman Yampolskiy
You prepare.
- Steven Bartlett
Yeah.
- Dr. Roman Yampolskiy
You ask questions.
- Steven Bartlett
Mm-hmm.
- Dr. Roman Yampolskiy
You ask follow-up questions, and you look good on camera.
- Steven Bartlett
Thank you so much.
- Dr. Roman Yampolskiy
Let's see what we can do. Large language model today can easily read everything I wrote-
- Steven Bartlett
Yeah.
- Dr. Roman Yampolskiy
... and have very solid understanding, but I, I assume you haven't read every single one of my books.
- Steven Bartlett
No, yeah.
- Dr. Roman Yampolskiy
That thing would do it. It can train on every podcast you ever did, so it knows exactly your style, the types of questions you ask. It can also-... find correspondence between what worked really well, like this type of question really increased views, this type of topic was very promising, so we can optimize, I think, better than you can-
- Steven Bartlett
Yeah.
- Dr. Roman Yampolskiy
... because you don't have a data set. Of course, visual simulation is trivial at this point.
- Steven Bartlett
So, it can make a video within seconds of me sat here and...
- Dr. Roman Yampolskiy
So, we can generate videos of you interviewing anyone on any topic very efficiently, and we just have to get likeness approval, whatever.
- Steven Bartlett
Are there many jobs that you think would remain in a world of AGI? If you're saying AGI is potentially gonna be here, whether it's deployed or not, by 2027, what ki- a- and- and then, okay, so let's take out of this any physical labor jobs for a second! Are there any jobs that you think a human would be able to do better in a world of AGI... still?
- Dr. Roman Yampolskiy
So, that's the question I often ask people. In a world with AGI... and I think almost immediately we'll get super intelligence as a side effect, so the question really is, in a world of super intelligence, which is defined as better than all humans in all domains, what can you contribute? And so, you know better than anyone what it's like to be you. You know what ice cream tastes to you. Can you get paid for that knowledge? Is someone interested in that? Maybe not, not a big market. There are jobs where you want a human. Maybe you're rich and you want a human accountant for whatever historic reasons. Old people like traditional ways of doing things. Warren Buffett would not switch to AI, he would use his human accountant. But it's a tiny subset of a market. Today, we have products which are man-made in U.S. as opposed to mass-produced in China, and some people pay more to have those. But it's a small subset. It's a, almost a fetish.
- Steven Bartlett
(clears throat)
- Dr. Roman Yampolskiy
There is no practical reason for it, and I think anything you can do on a computer could be automated using that technology.
- 14:27 – 18:49
Can AI Really Take All Jobs?
- Steven Bartlett
You must hear a lot of rebuttals to when- this when you say it, because people experience a huge amount of mental discomfort when they hear that their job, their career, the thing they got a degree in, the thing they-
- Dr. Roman Yampolskiy
Mm-hmm.
- Steven Bartlett
... invested $100,000 into is gonna be taken away from them, so their natural reaction s- for some people is that cognitive dissonance, that, "No, you're wrong. AI can't be creative, it's not this, it's not that. It'll never be interested in my job. I'll be fine because..." You hear these arguments all the time, right?
- Dr. Roman Yampolskiy
It's really funny. I ask people, and I ask people in different occupations. I've asked my Uber driver, "Are you worried about self-driving cars?" And they go, "No. No one can do what I do. I know the streets of New York, I can navigate like no AI. I'm safe." And it's true for any job. Professors are saying this to me, "Oh, nobody can lecture like I do. Like, this is so special." But you understand it's ridiculous, we already have self-driving cars replacing drivers. That is not even a question if it's possible. It's like, how soon before you're fired?
- Steven Bartlett
Yeah, I mean, I've just been in LA y- uh, yesterday, and, uh, my car drives itself. So, I get in the car, I, uh, put in where I want to go, and then I don't touch the steering wheel or the brake pedals. And it takes me from A to B even if it's an hour-long drive without any intervention at all. I actually, um, still park it, but other than that, I'm not, I'm not driving the car at all.
- Dr. Roman Yampolskiy
Yeah.
- Steven Bartlett
I mean, obviously, in LA we also have Waymo now, which means you order on your phone and it shows up with no driver in it and takes you to where you want to go.
- Dr. Roman Yampolskiy
Oh, yeah.
- Steven Bartlett
So, it's quite clear to see how that is potentially a matter of time. Uh, for those people, 'cause we do have some of those people listening to this conversation right now, that their occupation is driving, to offer them a... and I think driving is the biggest emp- occupation in the world, if I'm correct. I f- I'm pretty sure it is the biggest occupation in the world.
- Dr. Roman Yampolskiy
Could be one of the top ones, yeah.
- Steven Bartlett
What would you say to those people? What, what should they be doing with their lives? What- should they, should they be retraining in something, or what time frame?
- Dr. Roman Yampolskiy
So, that's the paradigm shift here. Before we always said this job is going to be automated, retrain to do this other job. But if I'm telling you that all jobs will be automated, then there is no plan B. You cannot retrain. (inhales) Look at computer science. Two years ago, we told people, "Learn to code."
- Steven Bartlett
Hmm.
- Dr. Roman Yampolskiy
"You are an artist, you cannot make money, learn to code." Then we realized, "Oh, AI kinda knows how to code and getting better. Become a prompt engineer. You can engineer prompts for AIs, it's gonna be a great job. Get a four-year degree in it." But then we're like, "AI is way better at designing prompts for other AIs than any human." So, that's gone. So, I can't really tell you, "Right now the hottest thing is design AI agents for practical applications," and guarantee you in a year or two it's gonna be gone just as well. So, I don't think there is a, "This occupation needs to learn to do this instead." I think it's more like, we as a humanity, when we all lose our jobs, what do we do? What do we do financially? Who's paying for us? And what do we do in terms of meaning? What do I do with my extra 60, 80 hours a week?
- Steven Bartlett
You've th- thought around this corner, haven't you?
- Dr. Roman Yampolskiy
A little bit.
- Steven Bartlett
What is around that corner, in your view?
- Dr. Roman Yampolskiy
So, the economic part seems easy. If you create a lot of free labor, you have a lot of free wealth, abundance, things which are right now not very affordable become dirt cheap, and so you can provide for everyone basic needs. Some people say you can provide beyond basic needs, you can provide very good existence for everyone. The hard problem is, what do you do with all that free time? For a lot of people their jobs are what gives them meaning in their life, so they would be kind of lost. We see it with people who, uh, retire or do early retirement. And for so many people who hate their jobs, they'll be very happy not working, but now you have people who are chilling all day-... what happens to society? How does that impact crime rate, pregnancy rate? All sorts of issues nobody thinks about. Governments don't have programs prepared to deal with 99% unemployment.
- Steven Bartlett
Hmm. What do you think that world looks like?
- 18:49 – 20:32
What Happens When All Jobs Are Taken?
- Dr. Roman Yampolskiy
Again, I, I think-
- Steven Bartlett
That you're gonna be doing?
- Dr. Roman Yampolskiy
... the very important part to understand here is the unpredictability of it. We cannot predict what a smarter than us system will do, and the point when we get to that is often called singularity, by analogy with physical singularity. You cannot see beyond the event horizon. I can tell you what I think might happen, but that's my prediction. It is not what actually is going to happen, because I just don't have cognitive ability to predict a much smarter agent impacting this world. When you read science fiction, there is never a super intelligence in it actually doing anything, because nobody can write believable science fiction at that level. They either ban AI, like Dune, because this way you can avoid writing about it. Or it's like Star Wars, you have these really dumb bots, but n- nothing super intelligent, ever. 'Cause by definition, you cannot predict at that level.
- Steven Bartlett
Because by definition of it being super intelligent, it will make its own mind up, or-
- Dr. Roman Yampolskiy
By definition, if it was something you could predict, you would be operating at the same level of intelligence, violating our assumption that it is smarter than you. If I'm playing chess with super intelligence and I can predict every move, I'm playing at that level.
- Steven Bartlett
It's kinda like my French bulldog trying to predict exactly what I'm thinking and what I'm gonna do.
- Dr. Roman Yampolskiy
That's a good cognitive gap, and it's not just, he can predict you going to work, you coming back, but he cannot understand why you're doing a podcast. That is something completely outside of his model of the world.
- Steven Bartlett
(laughs) Yeah, he doesn't even know that I go to work. He just knows that I leave the house and doesn't know where I go.
- Dr. Roman Yampolskiy
Buy food for him.
- Steven Bartlett
What, what's
- 20:32 – 22:04
Is There a Good Argument Against AI Replacing Humans?
- Steven Bartlett
the most persuasive argument against your own perspective here? That you think-
- Dr. Roman Yampolskiy
That we will not have unemployment due to advanced technology?
- Steven Bartlett
That there won't be this French bulldog/human gap in understanding and I guess, like, power and control.
- Dr. Roman Yampolskiy
So, some people think that we can enhance human minds, either through combination with hardware, so something like Neuralink, or through genetic re-engineering to where we make smarter humans.
- Steven Bartlett
Yeah.
- Dr. Roman Yampolskiy
It may give us a little more intelligence. I don't think we're still competitive in biological form with silicon form. Silicon substrate is much more capable for intelligence. It's faster, it's more resilient, more energy efficient in many ways.
- Steven Bartlett
Which is what computers are made out of, versus-
- Dr. Roman Yampolskiy
Exactly.
- Steven Bartlett
... the brain. Yeah.
- Dr. Roman Yampolskiy
So I don't think we can keep up just with improving our biology. Some people think maybe, and this is very speculative, we can upload our minds into computers, so scan your brain, kind of a copy of your brain, and have a simulation running on a computer, and you can speed it up, give it more capabilities. But to me, that feels like you no longer exist. We just created software by different means, and now you have AI based on biology and AI based on some other forms of training. You can have evolutionary algorithms, you can have many paths to reach AGI, but at the end none of them are humans.
- 22:04 – 23:58
Prediction for 2030
- Steven Bartlett
I have another date here, which is 2030. What's your prediction for 2030? What will the world look like?
- Dr. Roman Yampolskiy
So we probably will have, uh, humanoid robots with enough flexibility and dexterity to compete with humans in all domains, including plumbers. We can make artificial plumbers.
- Steven Bartlett
Not the plumbers we're ... Th- that was, that felt like the last bastion of, uh, human employment. So 2030, 5 years from now, humanoid robots, so, so many of the companies, the leading companies including Tesla are developing humanoid robots at light speed and they're getting increasingly more effective. And these humanoid robots will be able to move through physical space, thought- you know, make an omelet, do anything humans can do, but obviously have, be connected to AI as well. So they can think, talk.
- Dr. Roman Yampolskiy
Right. They're controlled by AI, they're always connected to the network, so they are already dominating in many ways.
- Steven Bartlett
Our world will look remarkably different when humanoid robots are functional and effective, because that's really when, you know, I start thinking, Christ, like, the combination of intelligence and physical ability is really, really doesn't leave much, does it, for us, um, human beings.
- Dr. Roman Yampolskiy
Not much. So today, if you have intelligence through internet, you can hire humans to do your bidding for you. You can pay them in Bitcoin, so you can have bodies just not directly controlling them, so it's not a huge game changer to add direct control of physical bodies. Intelligence is where it's at. The important component is definitely higher ability to optimize, to solve problems, to find patterns people cannot see.
- Steven Bartlett
And
- 23:58 – 25:37
What Happens by 2045?
- Steven Bartlett
then by 2045, I guess the world looks even m- even more, um... Which is 20 years from now.
- Dr. Roman Yampolskiy
So if it's still around.
- Steven Bartlett
If it's still around.
- Dr. Roman Yampolskiy
Ray Kurzweil predicts that that's the year for the singularity. That's the year where progress becomes so fast, so this AI doing science and engineering work makes improvements so quickly, we cannot keep up anymore. That's the definition of singularity, point beyond which we cannot see, understand, predict.
- Steven Bartlett
See and understand, predict the intelligence itself? Or-
- Dr. Roman Yampolskiy
What is happening in the world? The technology is being developed. So right now, if I have an iPhone, I can look forward to a new one coming out next year, and I'll understand it has slightly better camera. Imagine now this process of researching and developing this phone is automated. It happens every six months, every three months, every month, week, day, hour, minute, second. You cannot keep up with 30 iterations of iPhone in one day. You don't understand what capabilities it has, what proper controls are. It just escapes you. Right now, it's hard for any researcher in AI to keep up with the state of the art. While I was doing this interview with you, a new model came out, and I'll no longer know what the state of the art is. Every day, as a percentage of total knowledge, I get dumber. I may still know more because I keep reading, but as a percentage of overall knowledge, we're all getting dumber. And then you take it to extreme values, you have zero knowledge, zero understanding of the world around you.
- 25:37 – 28:51
Will We Just Find New Careers and Ways to Live?
- Steven Bartlett
So some of the arguments against this eventuality are that when you look at other technologies, like the Industrial Revolution, people just found new ways to, to work and new careers that we could never have imagined at the time were created. How would you respond to that in a world of superintelligence?
- Dr. Roman Yampolskiy
It's a paradigm shift. We always had tools, new tools which allowed some job to be done more efficiently. So instead of having 10 workers, you could have two workers, and eight workers had to find a new job. And there was another job. Now you can supervise those workers or do something cool. If you're creating a meta invention, you're inventing intelligence, you're inventing a worker, an agent, then you can apply that agent to the new job. There is not a job which cannot be automated. That never happened before. All the inventions we previously had were kinda a tool for doing something. So we invented fire. Huge game-changer, but that's it, it stops with fire. We invent the wheel. Same idea. Huge implications, but wheel itself is not an inventor. Here, we're inventing a replacement for human mind, a new inventor capable of doing new inventions. It's the last invention we ever have to make. At that point, it takes over, and the process of doing science, research, even ethics research, morals, all that is automated at that point.
- Steven Bartlett
Do you sleep well at night?
- Dr. Roman Yampolskiy
Really well.
- Steven Bartlett
Even though you, you spent the last, what, 15, 20 years of your life working on AI safety, and it's suddenly among us in a, in a way that I don't think anyone could've predicted five years ago? When I say among us, I really mean that the amount of funding and talent that is now focused on reaching superintelligence faster has made it feel more inevitable and more soon than a- any of us could've possibly imagined.
- Dr. Roman Yampolskiy
We as humans have this built-in bias about not thinking about really bad outcomes and things we cannot prevent. So all of us are dying. Your kids are dying, your parents are dying, everyone's dying, but you still sleep well, you still go on with your day. Even 95-year-olds are still doing games and playing golf and whatnot, 'cause we have this ability to not think about the worst outcomes, especially if we cannot actually modify the outcome. So that's the same infrastructure being used for this. Yeah, there is humanity-level death-like event. We're happening to be close to it probably, but unless I can do something about it, I, I can just keep enjoying my life. In fact, maybe knowing that you have limited amount of time left gives you more reason to have a better life. You cannot waste any.
- Steven Bartlett
And that's a survival trait of evolution, I guess, because those of my ancestors that spent all their time worrying wouldn't have spent enough time having babies and hunting to survive.
- Dr. Roman Yampolskiy
It's suicidal ideation. People who really start thinking about how horrible the world is usually escape pretty soon.
- Steven Bartlett
Mm.
- 28:51 – 30:07
Is Anything More Important Than AI Safety Right Now?
- Steven Bartlett
One of the... You co-authored this paper, um, analyzing the key arguments people make against the importance of AI safety, and one of the arguments in there is that there's other things that are of bigger importance right now. It might be world wars, it could be nuclear containment, it could be other things. There's other things that the governments and podcasters like me should be talking about that are more important. What's your rebuttal to that argument?
- Dr. Roman Yampolskiy
So superintelligence is the meta solution. If we get superintelligence right, it will help us with climate change. It will help us with wars. It can solve all the other existential risks. If we don't get it right, it, uh, dominates. If climate change will take 100 years to boil us alive, and superintelligence kills everyone in five, I don't have to worry about climate change. So either way, either it solves it for me, or it's not an issue.
- SBSteven Bartlett
So you think it's the most important thing to be working on?
- RYDr. Roman Yampolskiy
Without question, there is nothing more important than getting this right. And I know everyone says if you take any class with... You take an English professor's class and he tells you, "This is the most important class you'll ever take," but, uh, you can see the meta-level differences with this one.
- 30:07 – 31:32
Can't We Just Unplug It?
- SBSteven Bartlett
A- another argument in that paper is that we'll be in control, and that the danger is not AI. Um, this particular argument asserts that AI is just a tool. Humans are the real actors that present danger, and we can always main- maintain control by simply turning it off. Can't we just pull the plug out? I see that every time we have a conversation on this show about AI. Someone says, "Can't we just unplug it?"
- RYDr. Roman Yampolskiy
Yeah. I get those comments on every podcast I make, and I always wanna, like, get in touch with the guy and say, "This is brilliant. I never thought of it. We're gonna write a paper together and get a Nobel Prize for it. Let's do it"... because it's so silly. Like, can you turn off a virus? You have a computer virus you don't like, turn it off. How about Bitcoin? Turn off the Bitcoin network. Go ahead, I'll wait. This is silly. Those are distributed systems, you cannot turn them off. And on top of it, they're smarter than you. They made multiple backups, they predicted what you're going to do. They will turn you off before you can turn them off. The idea that we will be in control applies only to pre-superintelligence levels, basically what we have today. Today, humans with AI tools are dangerous, they can be hackers, malevolent actors, absolutely. But the moment superintelligence becomes smarter, dominates, they're no longer the important part of that equation. It is the higher intelligence I'm concerned about, not the human who may add additional malevolent payload, but at the end, still doesn't control it.
- 31:32 – 37:20
Do We Just Go With It?
- SBSteven Bartlett
It is tempting... to follow your ne- the next argument that I saw in that paper, which basically says, "Listen, this is inevitable, so there's no point fighting against it, because there's really no hope here, so we should probably give up even trying and have faith that it'll work itself out." Because everything you've said sounds really inevitable, and if, with, with China working on it, I'm sure Putin's got some secret division, I'm sure Iran are doing some bits and pieces, every European country's trying to get ahead of AI, the United States is leading the way. So it's, it's inevitable, so we probably should just have faith and pray?
- RYDr. Roman Yampolskiy
Well, prayer is always good, but incentives matter. If you are looking at what drives these people, so yes, money is important, so there is a lot of money in that space, and so everyone's trying to be there and develop this technology. But if they truly understand the argument, they understand that you will be dead, no amount of money will be useful to you, that incentive switches; they would want to not be dead. A lot of them are young people, rich people, they have their whole lives ahead of them. I think they would be better off not building advanced superintelligence, concentrating on narrow AI tools for solving specific problems, okay? My company cures breast cancer. That's all. We make billions of dollars, everyone's happy, everyone benefits, it's a win. We are still in control today. It's not over until it's over. We can decide not to build general superintelligences.
- SBSteven Bartlett
I mean, the United States might be able to conjure up enough enthusiasm for that, but if the United States doesn't build general superintelligences, then China are gonna have the big advantage, right?
- RYDr. Roman Yampolskiy
So right now, at those levels, whoever has more advanced AI has more advanced military, no question. We see it with existing conflicts. But the moment you switch to superintelligence and control superintelligence, it doesn't matter who builds it, us or them. And if they understand this argument, they also would not build it. It's a mutually assured destruction on both ends.
- SBSteven Bartlett
Is this technology different than, say, nuclear weapons, which require a huge amount of investment, and you have to, like, enrich the uranium, and you need billions of dollars potentially to even build a nuclear weapon? But it feels like this technology is much cheaper to get to superintelligence potentially, or at least it will become cheaper. I wonder if it's possible that some, some guy, some startup is gonna be able to build superintelligence in, you know, a couple of years without the need of-
- RYDr. Roman Yampolskiy
No.
- SBSteven Bartlett
... bi- you know, billions of dollars of compute and, or, or electricity power.
- RYDr. Roman Yampolskiy
That's a great point. So every year, it becomes cheaper and cheaper to train a sufficiently large model. If today it would take a trillion dollars to build superintelligence, next year it could be 100 billion, and so on. At some point, a guy and a laptop could do it. But you don't wanna wait four years for it to become affordable, so that's why so much money is pouring in. Somebody wants to get there this year and lock in all the winnings, a light-cone-level reward. So in that regard, they're both very expensive projects, like Manhattan-level projects.
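The cost-decline arithmetic above can be sketched as follows. The numbers here (a $1 trillion starting cost, costs falling roughly 10x per year as in the "trillion dollars... next year it could be 100 billion" example, and a $10,000 "guy and a laptop" budget) are illustrative assumptions drawn from the conversation, not forecasts:

```python
# Illustrative sketch of the cost-decline argument, not a real forecast.
# Assumed: $1T starting cost, 10x cheaper each year, $10k hobbyist budget.

def years_until_affordable(start_cost: float, annual_factor: float, budget: float) -> int:
    """Count whole years until the training cost drops to the given budget."""
    years = 0
    cost = start_cost
    while cost > budget:
        cost /= annual_factor  # cost falls by the assumed factor each year
        years += 1
    return years

# From $1 trillion, dropping 10x per year, to a $10,000 budget:
print(years_until_affordable(1e12, 10, 1e4))  # → 8
```

Under these assumed rates the capability reaches hobbyist budgets in under a decade, which is the point being made: the only question is how much time is bought before it happens.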
- SBSteven Bartlett
Which was the nuclear bomb projects.
- RYDr. Roman Yampolskiy
Right. The difference between the two technologies is that nuclear weapons are still tools. Some dictators, some countries, someone has to decide to use them, deploy them. Whereas superintelligence is not a s- it's not a tool, it's an agent. It makes its own decisions, and no one is controlling it. I cannot take out this dictator, and now superintelligence is safe. So that's a fundamental difference to me.
- SBSteven Bartlett
But if you're saying that it is gonna get inc- incrementally cheaper, like, I think it's Moore's Law, isn't it, that technology gets cheaper? (laughs)
- RYDr. Roman Yampolskiy
It is.
- SBSteven Bartlett
Then there is a future where some guy and his laptop is gonna be able to create superintelligence without oversight or regulation or employees, et cetera.
- RYDr. Roman Yampolskiy
Yeah. That's why a lot of people are suggesting we need to build something like a surveillance planet, where you are monitoring who's doing what, and you're trying to prevent people from doing it. Do I think it's feasible? No. At some point, it becomes so affordable and so trivial that it just will happen. But at this point, we're trying to get more time. We don't want it to happen in five years, we want it to happen in 50 years.
- SBSteven Bartlett
I mean, that's not very hopeful. So you-
- RYDr. Roman Yampolskiy
Depends on how old you are.
- SBSteven Bartlett
(laughs) Depends on how old you are. (laughs) I mean, as, if you're saying that you believe in the future people will be able to make superintelligence without the resources that are required today, then it is just a matter of time.
- RYDr. Roman Yampolskiy
Yeah, but so will be true for many other technologies. We're getting much better in synthetic biology, where today someone with a bachelor's degree in biology can probably create a new virus. This will also become cheaper, other technologies like that. So we are approaching a point where i- it's very difficult to make sure no technological breakthrough is the last one. So essentially, in many directions, we have this, uh, pattern of making it easier, in terms of resources, in terms of intelligence, to destroy the world. If you look at, uh, I don't know, 500 years ago, the worst dictator with all the resources could kill a couple million people. He couldn't destroy the world. Now we know with nuclear weapons, we can blow up the whole planet multiple times over. Synthetic biology, we saw with COVID, you can very easily create a combination virus which impacts billions of people. And all of those things are becoming easier to do.
- SBSteven Bartlett
In the near term, you talk about
- 37:20 – 39:45
What Is Most Likely to Cause Human Extinction?
- SBSteven Bartlett
extinction being a real risk, human extinction being a real risk. Of all the- the pathways to human extinction that you think are most likely, what- what is the leading pathway? 'Cause I know you talk about there being some issue predeployment of these AI tools, like, you know, uh, someone makes a mistake, um, when they're, uh, designing a model, or other issues postdeployment. When I say postdeployment, I mean once a ChatGPT or something, an A- an agent's released into the world and someone hacking into it and changing it and reprogram- reprogramming it to be malicious. Of all these potential paths to human extinction, which one do you think is the highest probability?
- RYDr. Roman Yampolskiy
So I can only talk about the ones I can predict myself. So I can predict even before we get to superintelligence, someone will create a very advanced biological tool, create a novel virus, and that virus gets everyone or most everyone. I can envision it, I can understand the pathway, I can say that.
- SBSteven Bartlett
So just to zoom in on that then, that would be using an AI to make a virus and then releasing it.
- RYDr. Roman Yampolskiy
Yeah.
- SBSteven Bartlett
And would that be intentional or...
- RYDr. Roman Yampolskiy
There is a lot of psychopaths, a lot of terrorists, a lot of doomsday cults. We've seen historically, again, they try to kill as many people as they can. They usually fail. They kill hundreds, thousands. But if they get technology to kill millions or billions, they would do that, gladly. The point I'm trying to emphasize is that it doesn't matter what I can come up with. I am not the malevolent actor you're trying to defeat here. It's the superintelligence which can come up with completely novel ways of doing it. Again, you brought up the example of your dog. Your dog cannot understand all the ways you can take it out. It can maybe think you'll bite it to death or something, but that's all. Whereas you have an infinite supply of resources. So if I asked your dog exactly how you're going to take it out, it would not give you a meaningful answer. It can talk about biting. And this is what we know. We know viruses, we experienced viruses, we can talk about them. But what an AI system capable of doing novel physics research can come up with is beyond me.
- SBSteven Bartlett
One of the things that I think most people don't understand is how little we understand about how these AIs are actually working. 'Cause one would assume,
- 39:45 – 41:30
No One Knows What's Going On Inside AI
- SBSteven Bartlett
you know, with computers, we kind of understand how a computer works. We, we know that it's doing this and then this, and it's running on code. But from reading your work, you describe it as being a black box.
- RYDr. Roman Yampolskiy
Mm-hmm.
- SBSteven Bartlett
We act- so in the context of something like ChatGPT or an AI we know, you're telling me that the people that have built that tool don't actually know what's going on inside there?
- RYDr. Roman Yampolskiy
That's exactly right. So even the people making those systems have to run experiments on their product to learn what it's capable of. So they train it by giving it all of the data, let's say all of the internet's text. They run it on a lot of computers to learn patterns in that text, and then we start experimenting with that model. Oh, do you speak French? Or, can you do mathematics? Or, are you lying to me now? And so maybe it takes a year to train it and then six months to get some fundamentals about what it's capable of, some safety overhead, but we still discover new capabilities in old models. If you ask a question in a different way, it becomes smarter. So it's no longer engineering, the way it was the first 50 years, where someone was a knowledge engineer programming an expert system AI to do specific things. It's a science. We are creating this artifact, growing it, it's like an alien plant, and then we study it to see what it's doing. And just like with plants, we don't have 100% accurate knowledge of biology, we don't have full knowledge here. We kinda know some patterns. We know, okay, if we add more compute, it gets smarter most of the time, but nobody can tell you precisely what the outcome is going to be given a set of inputs.
- 41:30 – 42:32
Ads
- SBSteven Bartlett
I've watched so many entrepreneurs treat sales like a performance problem when it's often down to visibility, because when you can't see what's happening in your pipeline, what stage each conversation is at, what's stalled, what's moving, you can't improve anything and you can't close the deal. Our sponsor, Pipedrive, is the number one CRM tool for small to medium businesses. Not just a contact list, but an actual system that shows your entire sales process end to end, everything that's live, what's lagging, and the steps you need to take next. All of your teams can move smarter and faster. Teams using Pipedrive are on average closing three times more deals than those that aren't. It's the first CRM made by salespeople for salespeople that over 100,000 companies around the world rely on, including my team who absolutely love it. Give Pipedrive a try today by visiting pipedrive.com/ceo and you can get up and running in a couple of minutes with no payment needed. And if you use this link, you'll get a 30 day free trial.
- 42:32 – 46:24
Thoughts on OpenAI and Sam Altman
- SBSteven Bartlett
What do you make of OpenAI and Sam Altman and what they're doing? And obviously you're, you're aware that one of the co-founders... Was it, um, was, was it Ilya Jut- Ilya Sutskever? I- Ilya, yeah. Ilya left and he started a new company called...
- RYDr. Roman Yampolskiy
Superintelligent Safety.
- SBSteven Bartlett
Superintelligent Safety.
- RYDr. Roman Yampolskiy
Because AI safety wasn't challenging enough, he decided to just jump right to the hard problem.
- SBSteven Bartlett
As an onlooker, when you see that people are leaving OpenAI to s- to start superintelligent safety companies, what was your read on that situation?
- RYDr. Roman Yampolskiy
So, a lot of people who worked with Sam said that maybe he's not the most direct person in terms of being honest with them, and they had concerns about his views on safety. That's part of it. So, they wanted more control, they wanted more concentration on safety. But also, it seems that anyone who leaves that company and starts a new one gets a $20 billion valuation just for having it started. You don't have a product, you don't have customers, but if you wanna make many billions of dollars, just do that. So, it seems like a very rational thing to do for anyone who can. So, I'm not surprised that there is a lot of attrition. Meeting him in person, he's super nice, very smart. Absolutely perfect public interface. You see him testify in the Senate, he says the right thing to the senators. You see him talk to the investors, they get the right message. But if you look at what people who know him personally are saying, he's probably not the right person to be controlling a project of that impact.
- SBSteven Bartlett
Why?
- RYDr. Roman Yampolskiy
He puts safety second.
- SBSteven Bartlett
Second to?
- RYDr. Roman Yampolskiy
Winning this race to superintelligence, being the guy who created God and con- controlling light cone of the universe. He's worse.
- SBSteven Bartlett
Do you suspect that's what he's driven by, is by the- the legacy of being an impactful person that did a remarkable thing, versus the consequence that that might have on s- for society? Because it's interesting that his- his other startup is Worldcoin, which is ba- basically a platform to create universal basic income, i.e. a platform to give us income in a world where people don't have jobs anymore. So, on one hand, you're creating an AI company, on the other hand you're creating a company that is preparing for people not to have employment.
- RYDr. Roman Yampolskiy
It also has other properties. It keeps track of everyone's biometrics. It, uh, keeps you in charge of the world's economy, the world's wealth. They are retaining a large portion of Worldcoins. So, I- I think it's kinda a very reasonable part of a plan for world dominance. If you have a superintelligent system and you control money, you're doing well.
- SBSteven Bartlett
Why would someone want world dominance?
- RYDr. Roman Yampolskiy
People have different levels of ambition. When you are a very young person with billions of dollars, fame, you start looking for more ambitious projects. Some people want to go to Mars, others want to control light cone of the universe.
- SBSteven Bartlett
What- what- what did you say, light coin of the universe?
- RYDr. Roman Yampolskiy
Light cone.
- SBSteven Bartlett
Light cone.
- RYDr. Roman Yampolskiy
So, every part of the universe light can reach from this point. Meaning, anything accessible, you wanna grab and bring into your control.
- SBSteven Bartlett
You think Sam Altman wants to control every part of the universe?
- RYDr. Roman Yampolskiy
I- I suspect he might, yes.
- SBSteven Bartlett
Mm-hmm.
- RYDr. Roman Yampolskiy
It doesn't mean he doesn't want a side effect of it being a very beneficial technology which makes all the humans happy. Happy humans are good for control.
- 46:24 – 46:56
What Will the World Look Like in 2100?
- SBSteven Bartlett
If you had to guess what the world looks like in tw- 2100, if you had to guess.
- RYDr. Roman Yampolskiy
It's either free of human existence or it's completely not comprehensible to someone like us. It's one of those extremes.
- SBSteven Bartlett
So, there's either no humans-
- RYDr. Roman Yampolskiy
It's basically the world is destroyed, or it's so different that I cannot envision those predictions.
- 46:56 – 53:55
What Can Be Done About the AI Doom Narrative?
- SBSteven Bartlett
What can be done to turn this ship to a more certain positive outcome at this point? Is- is there still things that we can do, or is it too late?
- RYDr. Roman Yampolskiy
So, I believe in personal self-interest. If people realize that doing this thing is really bad for them personally, they will not do it. So, our job is to convince everyone with any power in this space, creating this technology, working for those companies, that they are doing something very bad for them. Not just... Forget the eight billion people you're experimenting on with no permission, no consent. You will not be happy with the outcome. If we can get everyone to understand that's the default... And it's not just me saying it. You had Geoff Hinton, Nobel Prize winner, a founder of the whole machine learning space. He says the same thing. Bengio, dozens of others, top scholars. We had a statement about the dangers of AI signed by thousands of scholars, computer scientists. This is basically what we think right now, and we need to make it universal. No one should disagree with this. And then we may actually make good decisions about what technology to build. It doesn't guarantee long-term safety for humanity, but it means we're not trying to get there as soon as possible, to the worst possible outcome.
- SBSteven Bartlett
And do you... Are you hopeful that that's even possible?
- RYDr. Roman Yampolskiy
(sighs) I wanna try. We have no choice but to try.
- SBSteven Bartlett
And what would need to happen, and who would need to act? What, is it government legislation? Is it...
- RYDr. Roman Yampolskiy
Unfortunately, I don't think making it illegal is sufficient. There are different jurisdictions. There is, you know, loopholes and... What are you gonna do if somebody does it? Are you gonna fine them for destroying humanity? Like, very steep fines for it? Like, what are you gonna do? It's not enforceable. If they do create it, uh, now the superintelligence is in charge. So the judicial system we have is not impactful, and all the punishments we have are designed for punishing humans. Prisons, capital punishment doesn't apply to AI.
- SBSteven Bartlett
You know, the problem I have is when I have these conversations, I never feel like I walk away with a hope that something's gonna go well. And what I mean by that is I never feel like I walk away with clear, some kinda clear set of actions that can course correct what might happen here. So, what shou- what should I do? What should the person sat at home listening to this do to-
- RYDr. Roman Yampolskiy
You, you talk to a lotta people who are building this technology.
- SBSteven Bartlett
Mm-hmm.
- RYDr. Roman Yampolskiy
Ask them precisely to explain some of those things they claim to be impossible, how they solved it or going to solve it before they get to where they're going.
- SBSteven Bartlett
Do you know, I don't think Sam Altman wants to talk to me. (laughs)
- RYDr. Roman Yampolskiy
I don't know. He seems to go on a lot of podcasts. Maybe he does.
- SBSteven Bartlett
He doesn't want to go on mine. I, I wonder why that is. (laughs) I wonder why that is. I'd love to speak to him, but I don't, I don't think he wants to... I don't think he wants me to, uh, interview him.
- RYDr. Roman Yampolskiy
Have an open challenge. Maybe money is not the incentive, but whatever attracts people like that, whoever can convince you that it's possible to control and make safe superintelligence gets the prize. They come on your show and prove their case. Anyone. If no one claims the prize or even accepts the challenge after a few years, maybe we don't have anyone with solutions. We have companies valued, again, at billions and billions of dollars working on safe superintelligence. We haven't seen their output yet.
- SBSteven Bartlett
Hmm. Yeah, I'd like to speak to Ilya as well 'cause I know he's, he's working on safe superintelligence too. Like-
- RYDr. Roman Yampolskiy
Notice a pattern too. If you look at the history of AI safety organizations or departments within companies, they usually start well, very ambitious, and then they fail and disappear. So OpenAI had a superintelligence alignment team. The day they announced it, I think they said they're gonna solve it in four years. Like half a year later, they canceled the team. And there is dozens of similar examples. Creating a perfect safety for superintelligence, perpetual safety as it keeps improving, modifying, interacting with people, you're never gonna get there. It's impossible. There is a big difference between difficult problems in computer science, NP-complete problems, and impossible problems. And I think control, indefinite control of superintelligence, is such a problem.
- SBSteven Bartlett
So what's the point in trying then if it's impossible?
- RYDr. Roman Yampolskiy
Well, I'm trying to prove that it is specifically that. Once we establish something is impossible, fewer people will waste their time claiming they can do it and, uh, looking for money. So many people going, "Give me a billion dollars and two years, and I'll solve it for you." Well, I don't think you will.
- SBSteven Bartlett
But people aren't gonna stop striving towards it. So if there's no attempts to make it safe and there's more people increasingly striving towards it, then it's inevitable.
- RYDr. Roman Yampolskiy
But it changes what we do. If we know that it's impossible to make it right, to make it safe, then this direct path of just building it as soon as you can becomes a suicide mission. Hopefully fewer people will pursue that. They may go in other directions. Like, again, I'm a scientist and engineer. I love AI. I love technology. I use it all the time. Build useful tools. Stop building agents. Build narrow superintelligence, not a general one. I'm not saying you shouldn't make billions of dollars. I love billions of dollars. But, uh, don't kill everyone, yourself included.
- SBSteven Bartlett
They don't think they're going to, though.
- RYDr. Roman Yampolskiy
Then tell us why. I hear things about intuition. I hear things about, "We'll solve it later." Tell me specifically in scientific terms, publish a peer-reviewed paper explaining how you're going to control superintelligence.
- SBSteven Bartlett
Yeah, strange. It's strange to, it's strange to even bother if there was even a 1% chance of human extinction. It's strange to do something. Like if there was a 1% ch- someone told me there was a 1% chance that if I got in a car, I might not m- I might not be alive. I would not get in the car. If you told me there was a 1% chance that if I drank whatever liquid is in this cup right now I might die, I would not drink the liquid. Even if there was a billion dollars if I survived, so the 99% chance I get a billion dollars, the 1% is I die, I wouldn't drink it. I wouldn't take the chance. But-
- RYDr. Roman Yampolskiy
It's worse than that. Not just you die, everyone dies.
- SBSteven Bartlett
Yeah. Yeah.
- RYDr. Roman Yampolskiy
Now, would we let you drink it at any odds? That's for us to decide. You don't get to make that choice for us. To get consent from human subjects, you need them to comprehend what they are consenting to. If those systems are unexplainable, unpredictable, how can they consent? They don't know what they are consenting to.
- SBSteven Bartlett
Mm-hmm.
- RYDr. Roman Yampolskiy
So it's impossible to get consent by definition. So this experiment can never be run ethically. By definition, they are doing unethical experimentation on human subjects.
- 53:55 – 56:10
Should People Be Protesting?
- SBSteven Bartlett
Do you think people should be protesting?
- RYDr. Roman Yampolskiy
There are people protesting. There is Stop AI, there is Pause AI. They block offices of OpenAI. They do it weekly, monthly, uh, quite a few actions and they're recruiting new people.
- SBSteven Bartlett
You think more people should be protesting? Do you think that's an effective solution?
- RYDr. Roman Yampolskiy
If you can get it to a large enough scale, to where the majority of the population is participating, it would be impactful. I don't know if they can scale from current numbers to that, but, uh, I support everyone trying everything peacefully and legally.
- SBSteven Bartlett
And for the, for the person listening at home, what should they, what should they be doing? What, what, what... 'Cause they, they don't wanna feel powerless. No- none of us wanna feel powerless.
- RYDr. Roman Yampolskiy
So it depends on what scale we're asking about, time scale. Are we saying like this year your kid goes to college, what major to pick? Should they go to college at all?
- SBSteven Bartlett
Yeah.
- RYDr. Roman Yampolskiy
Should you switch jobs? Should you go into certain industries? There's questions we can answer. We can talk about immediate future. What should you do in five years with, uh, this being created? For an average person, not much. Just like they can't influence World War III, nuclear holocaust, anything like that. It's not something anyone's gonna ask them about. Today, if you wanna be a part of this movement, yeah, join Pause AI, join Stop AI. There's, uh, organizations currently trying to build up momentum to bring democratic powers to influence those individuals.
- SBSteven Bartlett
So, in the near term, not a huge amount. I was wondering if there, there are any interesting strategies in the near term. Like, should I be thinking differently about my family? About... I mean, you've got kids, right?
- RYDr. Roman Yampolskiy
I do.
- SBSteven Bartlett
You've got three kids?
- RYDr. Roman Yampolskiy
That I know about, yeah.
- SBSteven Bartlett
(laughs) Three kids. How are you thinking about parenting in this world that you see around the corner? How are you thinking about what to say to them, the advice to give them, what they should be learning?
- RYDr. Roman Yampolskiy
So there is general advice, uh, outside of this domain, that you should live your every day as if it's your last. It's good advice no matter what. If you have three years left or 30 years left, you lived your best life. So, try to not do things you hate for too long. Do interesting things, do impactful things. If you can do all that while helping people, do that.
- 56:10 – 1:01:45
Are We Living in a Simulation?
- SBSteven Bartlett
Simulation theory is an interesting, uh, sort of adjacent subject here, because as computers begin to accelerate and get more intelligent and more able to, you know, do things with AI that we could never have imagined in terms of like... Imagine the worlds that we could create with virtual reality. I think it was Google that recently released... What was it called? Uh, um, like the AI Worlds software.
- RYDr. Roman Yampolskiy
Mm-hmm. You take a picture and it generates a whole world.
- SBSteven Bartlett
Yeah, and you can-
- RYDr. Roman Yampolskiy
Yeah.
- SBSteven Bartlett
... move through the world.
- RYDr. Roman Yampolskiy
Yeah.
- SBSteven Bartlett
I'll put it on the screen for people to see. But Google have released this technology which allows you, I think, with a simple prompt actually, to make a three-dimensional world that you can then navigate through. And in that world, it has memory. So in the world, if you paint on a wall-
- RYDr. Roman Yampolskiy
Mm-hmm.
- SBSteven Bartlett
... and turn away, you look back, the wall-
- RYDr. Roman Yampolskiy
It's persistent.
- SBSteven Bartlett
Yeah, it's persistent. And when I saw that, I thought, "God, Jesus, bloody hell, this is, this is like the foothills of being able to create a simulation that's indistinguishable from everything I see here."
- RYDr. Roman Yampolskiy
Right. That's why I think we are in one. That's exactly the reason. AI is getting to the level of creating human agents, human-level agents. And virtual reality is getting to the level of being indistinguishable from ours.
- SBSteven Bartlett
So you think this is a simulation?
- RYDr. Roman Yampolskiy
I'm pretty sure we are in a simulation, yeah.
- SBSteven Bartlett
For someone that isn't familiar with the simulation arguments, what are, what are the first principles here that convince you that we are currently living in a simulation?
- RYDr. Roman Yampolskiy
So, you need certain technologies to make it happen. If you believe we can create human-level AI-
- SBSteven Bartlett
Yeah.
- RYDr. Roman Yampolskiy
... and you believe we can create virtual reality as good as this in terms of resolution, haptics, whatever properties it has, then I commit right now, the moment this is affordable, I'm gonna run billions of simulations of this exact moment, making sure you are statistically in one.
- SBSteven Bartlett
Say that last part again. You're gonna run, you're gonna run?
- RYDr. Roman Yampolskiy
I'm gonna commit right now, when it's very affordable, it's like 10 bucks a month to run it, I'm gonna run a billion simulations of this interview.
- SBSteven Bartlett
Why?
- RYDr. Roman Yampolskiy
Because statistically, that means you are in one right now. The chances of you being in the real one is one in a billion.
- SBSteven Bartlett
Okay, so to make sure I'm clear on this-
- RYDr. Roman Yampolskiy
It's a retroactive placement.
- SBSteven Bartlett
Yeah, so the, the minute it's affordable, then you can run billions of them, and they would feel and appear to be exactly like this interview right now?
- RYDr. Roman Yampolskiy
Right. So assuming the AI has internal states, experiences, qualia, some people argue that they don't, some say they already have it. That's a separate philosophical question. But if we can simulate this, I will.
- SBSteven Bartlett
Some people might misunderstand. You're not... you're not saying that you will. You're saying that someone will sometime.
- RYDr. Roman Yampolskiy
I, I, I can also do it. I don't mind.
- SBSteven Bartlett
Okay.
- RYDr. Roman Yampolskiy
Of course, others will do it before I get there. If I'm getting it for $10, somebody got it for $1,000. That's not the point. If you have technology, we're definitely running a lot of simulations for research, for entertainment, games, uh, all sorts of reasons. And the number of those greatly exceeds the number of real worlds we're in. Look at all the video games kids are playing. Every kid plays 10 different games. There's, you know, a billion kids in the world, so there is 10 billion simulations and one real world.
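The counting argument Yampolskiy sketches here can be made concrete with a few lines of arithmetic. This is an illustrative calculation only, assuming (as he does) that simulated and real observers are indistinguishable; `probability_real` is a hypothetical helper name, not from any source.

```python
# Sketch of the counting argument: if N indistinguishable simulations of a
# moment are run alongside the single real instance, a randomly chosen
# observer's chance of being in the real one is 1 / (N + 1).

def probability_real(n_simulations: int) -> float:
    """P(this is the one real instance) given n indistinguishable copies."""
    return 1 / (n_simulations + 1)

# A billion simulated interviews: odds of being in the real one are ~1e-9.
print(probability_real(10**9))
# His "10 billion simulations and one real world" game-count estimate:
print(probability_real(10 * 10**9))
```

The force of the argument lives entirely in the assumption that the copies are indistinguishable from the inside; the arithmetic itself is trivial.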
- 1:01:45 – 1:07:45
How Certain Are You We're in a Simulation?
- SBSteven Bartlett
What percentage are you at in terms of believing that we are currently living in a simulation?
- RYDr. Roman Yampolskiy
Mm, very close to certainty.
- SBSteven Bartlett
And what does that mean for the nature of your life? If you're close to 100% certain that we are currently living in a simulation, does that change anything in your life?
- RYDr. Roman Yampolskiy
So, all the things you care about are still the same. Pain still hurts, love's still love, right? Like those things are not different, so it doesn't matter. They're still important, that's what matters. The little 1% difference is that I care about what's outside the simulation. I want to learn about it, I write papers about it. So that's the only impact.
- SBSteven Bartlett
And what do you think is outside of the simulation?
- RYDr. Roman Yampolskiy
I don't know. But we can look at this world and derive some properties of the simulators. So clearly brilliant engineer, brilliant scientist, brilliant artist. Not so good with morals and ethics. Room for improvement.
- SBSteven Bartlett
In our view of what morals and ethics should be?
- RYDr. Roman Yampolskiy
Well, w- we know there is suffering in the world, so unless you think it's ethical to torture children, then I'm questioning your approach.
- SBSteven Bartlett
But in terms of incentives, to create a positive incentive you probably also need to create negative incentives. Suffering seems to be one of the negatives in- incentives built into our design to stop me doing things I shouldn't do. So like put my hand in a fire-
- RYDr. Roman Yampolskiy
No.
- SBSteven Bartlett
... it's going to hurt.
- RYDr. Roman Yampolskiy
But it's all about levels, levels of suffering, right? So unpleasant stimuli, negative feedback doesn't have to be at, like, negative infinity hell levels. You don't want to burn alive and feel it. You want to be like, "Oh, this is uncomfortable. I'm going to stop."
- SBSteven Bartlett
It's interesting because we, we assume that they don't have great mor- morals and ethics, but we too would... We take animals and cook them and eat them for dinner, and we also conduct experiments on mice and rats and-
- RYDr. Roman Yampolskiy
But to get university approval to conduct an experiment, you submit a proposal and there is a panel of ethicists who would say, "You can't experiment on humans, you can't burn babies, you can't eat animals alive." All those things would be banned.
- SBSteven Bartlett
In most parts of the world.
- RYDr. Roman Yampolskiy
Where they have ethical boards.
- SBSteven Bartlett
Yeah.
- RYDr. Roman Yampolskiy
'Cause some places don't bother with it, so they have easier approval process.
- SBSteven Bartlett
It's funny when you talk about the simulation theory. There's a, there's an element of the conversation that makes life feel less meaningful in a weird way. Uh, uh, uh, like, I know it doesn't matter, but whenever I have this conversation with people, not on the podcast, about, "Are we living in a simulation?" You almost see a little bit of meaning come out of their life for a second, and then they forget and then they carry on. But the, the, the thought that this is a simulation almost posits that it's not important, or that it... I think humans want to believe that this is the highest level and we're the most important and we're the... it's all about us. We're quite egotistical by design. And, uh, just an interesting observation I've always had when I have these conversations with people that it seems to strip something out of their life.
- RYDr. Roman Yampolskiy
Do you feel religious people feel that way? They know there is another world and the one that matters is not this one. Do you feel they don't value their lives the same? I guess in some religions.
- SBSteven Bartlett
I think, um, they think that this world is being created for them and that they are going to go to this heaven or, or hell, and that still puts them at the very center of it. But if, but if it's a simulation, you know, we could just be some computer game that a four-year-old alien has, is messing around with while he's got some time to burn.
- RYDr. Roman Yampolskiy
But maybe there is, you know, a test and there is a better simulation you go to than a worse one, maybe. There are different difficulty levels. Maybe you want to play it on a harder setting next time.
- SBSteven Bartlett
I've just invested millions into this and become a co-owner of the company. It's a company called Ketone IQ, and the story is quite interesting. I started talking about ketosis on this podcast and the fact that I'm very low carb, very, very low sugar, and my body produces ketones which have made me incredibly focused, have improved my endurance, have improved my mood, and have made me more capable at doing what I do here. And because I was talking about it on the podcast, a couple of weeks later these showed up on my desk in my HQ in London, these little shots. And, oh my God, the impact it's had on my ability to articulate myself, on my focus, on my workouts, on my mood, on stopping me crashing throughout the day was so profound that I reached out to the founders of the company, and now I'm a co-owner of this business. I highly, highly recommend you look into this. I highly recommend you look at the science behind the product. If you want to try it for yourself visit ketone.com/stephen for 30% off your subscription order, and you'll also get a free gift with your second shipment. That's ketone.com/stephen. And I'm so honored that once again a company I own can sponsor my podcast.
I've built companies from scratch and backed many more, and there's a blind spot that I keep seeing in early-stage founders. They spend very little time thinking about HR. And it's not because they're reckless or they don't care, it's because they're obsessed with building their companies, and I can't fault them for that. At that stage, you're thinking about the product, how to attract new customers, how to grow your team, really how to survive. And HR slips down the list because it doesn't feel urgent. But sooner or later, it is. And when things get messy, tools like our sponsor today, Justworks, go from being a nice-to-have to being a necessity. Something goes sideways, and you find yourself having conversations you did not see coming. 
This is when you learn that HR really is the infrastructure of your company. And without it, things wobble. And Justworks stops you learning this the hard way. It takes care of the stuff that would otherwise drain your energy and your time, automating payroll, health insurance benefits, and it gives your team human support at any hour. It grows with your small business from startup through to growth, even when you start hiring team members abroad. So if you want HR support that's there through the exciting times and the challenging times, head to justworks.com now. That's justworks.com. (paper crinkles) And
- 1:07:45 – 1:12:20
Can We Live Forever?
- SBSteven Bartlett
do you think much about longevity?
- RYDr. Roman Yampolskiy
A lot, yeah. It's probably the second-most important problem because if AI doesn't get us, that will.
- SBSteven Bartlett
What do you mean?
- RYDr. Roman Yampolskiy
You're gonna die of old age.
- SBSteven Bartlett
Which is fine.
- RYDr. Roman Yampolskiy
That's not good. You wanna die?
- SBSteven Bartlett
I mean...
- RYDr. Roman Yampolskiy
You don't have to. It's just a disease. We can cure it. Nothing stops you from living forever, as long as universe exists, unless we escape the simulation.
- SBSteven Bartlett
But we wouldn't want a world where everybody could live forever, right? That would be...
- RYDr. Roman Yampolskiy
Sure, we do. Why? Who do you want to die?
- SBSteven Bartlett
I don't know. (laughs) I mean, I- I say this because it's all I've ever known, that people die, but wouldn't the world become pretty overcrowded if-
- RYDr. Roman Yampolskiy
No, you stop reproducing if you live forever. You have kids because you want a replacement for you. If you live forever, you're like, "I'll have kids in a million years." That's cool. I'll go explore universe first. Plus, if you look at actual population dynamics outside of, like, one continent, we're all shrinking; we're not growing.
- SBSteven Bartlett
Yeah, this is crazy. It's crazy that the more rich people get, the less kids they, they have, which aligns with what you're saying. And I do actually think, I think if... I'm going to be completely honest here. I think if I knew that I was going to live to a thousand years old, there's no way I'd be having kids at 30.
- RYDr. Roman Yampolskiy
No. It's that biological clocks are based on terminal points. Whereas if your biological clock is infinite, you'd be like, "One day."
- SBSteven Bartlett
And you think that's close-
- RYDr. Roman Yampolskiy
Um.
- SBSteven Bartlett
... being able to extend our lives?
- RYDr. Roman Yampolskiy
It's one breakthrough away. I think somewhere in our genome, we have this rejuvenation loop, and it's set to basically give us at most 120. I think we can reset it to something bigger.
- SBSteven Bartlett
AI is probably going to accelerate that.
- RYDr. Roman Yampolskiy
That's one very important application area. Yes, absolutely.
- SBSteven Bartlett
So maybe Bryan Johnson is right when he says, "Don't die now." He keeps saying to me, he's like, "Don't die now."
- RYDr. Roman Yampolskiy
Don't die ever?
- SBSteven Bartlett
But, you know, he's saying like, "Don't die before we get to the technology."
- RYDr. Roman Yampolskiy
Right. Longevity escape velocity. You want to long, uh, live long enough to live forever. If at some point, we, every year of your existence, add two years to your existence through medical breakthroughs, then you'll live forever. You just have to make it to that point of longevity escape velocity.
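The "every year adds two years" definition of longevity escape velocity can be sketched as a toy model. This is a sketch under stated assumptions, not actuarial science; the starting expectancy and gain rates below are illustrative numbers, and `remaining_after` is a hypothetical helper.

```python
# Toy model of longevity escape velocity: each year you spend one year of
# remaining life expectancy, and medical progress adds `gain_per_year` years
# back. If gain_per_year > 1, remaining expectancy grows without bound;
# if it is < 1, you eventually run out.

def remaining_after(years: int, start_remaining: float, gain_per_year: float) -> float:
    remaining = start_remaining
    for _ in range(years):
        remaining -= 1.0            # one year of life spent
        remaining += gain_per_year  # years added by that year's breakthroughs
        if remaining <= 0:
            return 0.0              # died before reaching escape velocity
    return remaining

# Past escape velocity (gain of 2): 40 + 50 * (2 - 1) = 90 years remaining.
print(remaining_after(50, start_remaining=40.0, gain_per_year=2.0))
# Below it (gain of 0.5): 40 + 50 * (0.5 - 1) = 15 years remaining and falling.
print(remaining_after(50, start_remaining=40.0, gain_per_year=0.5))
```

The point of the model is the threshold at a gain of one year per year: above it, the "just have to make it to that point" logic holds; below it, no amount of head start saves you.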
- SBSteven Bartlett
And he thinks that len- longevity escape velocity, especially in a world of AI, is pretty, is pretty... It's decades away, minimum. Which means...
- RYDr. Roman Yampolskiy
W- w-... As soon as we fully understand human genome, I think we'll make amazing breakthroughs very quickly.
- SBSteven Bartlett
We-
- RYDr. Roman Yampolskiy
Because we know some people have genes for living way longer. They have generations of people who are centenarians. So if we can understand that and copy that or copy it from some animals which will live forever, we'll get there.
- SBSteven Bartlett
Would you want to live forever?
- RYDr. Roman Yampolskiy
Of course. Reverse, reverse the question. Let's say we lived forever. And you ask me, "Do you want to die in 40 years?" Why would I say yes?
- 1:12:20 – 1:14:03
Bitcoin
- SBSteven Bartlett
So you're investing in Bitcoin, aren't you?
- RYDr. Roman Yampolskiy
(smacks lips) Yeah.
- SBSteven Bartlett
Because it's, uh-
- RYDr. Roman Yampolskiy
It's the only scarce resource. Nothing else has scarcity. Everything else, if price goes up, we'll make more. I can make as much gold as you want given a proper price point. You cannot make more Bitcoin.
- SBSteven Bartlett
Some people say Bitcoin is just this thing on a computer that we all agreed was valuable.
- RYDr. Roman Yampolskiy
We are a thing on a computer.... your number.
- SBSteven Bartlett
Okay. So, I mean, n- not investment advice, but investment advice.
- RYDr. Roman Yampolskiy
It's hilarious how that's one of those things where they tell you it's not, but you know it is immediately. There is a, "Your call is important to us," that means your call is of zero importance. And investment is like that.
- SBSteven Bartlett
Mm-hmm. Yeah. (laughs) Yeah, when they say, "Not investment advice," (laughs) it's definitely investment advice. Um, but it's not investment advice. Okay, so y- you're bullish on Bitcoin because it's... it can't be messed with.
- RYDr. Roman Yampolskiy
It is the only thing which we know how much there is in the universe. So gold, there could be an asteroid made out of pure gold heading towards us, devaluing it. Well, also killing all of us. But Bitcoin, I know exactly the numbers. And even the 21 million is an upper limit. How many are lost? Passwords forgotten. I don't know what Satoshi is doing with his million. It's getting scarcer every day while more and more people are trying to accumulate it.
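The 21 million figure he cites isn't arbitrary; it falls out of Bitcoin's halving schedule, which can be summed in a few lines. This is a simplified floating-point sketch; the real protocol counts in integer satoshis, so the actual total is a hair lower still.

```python
# Bitcoin's block subsidy starts at 50 BTC and halves every 210,000 blocks.
# Summing that geometric series shows why total supply stays below 21 million.
# Simplified float model; the real protocol truncates to whole satoshis.

def total_btc_supply() -> float:
    subsidy = 50.0  # BTC per block at launch
    total = 0.0
    while subsidy >= 1e-8:  # stop once the subsidy falls below one satoshi
        total += subsidy * 210_000
        subsidy /= 2
    return total

print(total_btc_supply())  # just under 21,000,000 BTC
```

The 21 million is thus an upper limit by construction, which is the basis of the scarcity claim; lost keys only push the effective supply further below it.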
- SBSteven Bartlett
Some people worry that it could be hacked with a supercomputer.
- RYDr. Roman Yampolskiy
A quantum computer can break that algorithm. There is, uh, strategies for switching to a quantum-resistant cryptography for that. And quantum computers are still kinda weak.
- SBSteven Bartlett
Do
- 1:14:03 – 1:15:07
What Should I Do Differently After This Conversation?
- SBSteven Bartlett
you think there's any changes to my life that I should make following this conversation? Is there anything that I should do differently the minute I walk out of this door?
- RYDr. Roman Yampolskiy
I assume you already invest in Bitcoin heavily.
- SBSteven Bartlett
Yes, I'm in- m- an investor in Bitcoin.
- RYDr. Roman Yampolskiy
This is financial advice.
- SBSteven Bartlett
(laughs)
- RYDr. Roman Yampolskiy
Uh, no, just... You seem to be winning. Maybe it's your simulation. You're rich, handsome, you have famous people hang out with you. Like, that's pretty good. Keep it up. Robin Hanson has a paper about how to live in a simulation, what you should be doing in it, and your goal is to do exactly that. You wanna be interesting, you wanna hang out with famous people so they don't shut it down. So you are a part someone's actually watching on pay-per-view or something like that.
- SBSteven Bartlett
Oh, I don't know if you wanna be watched on pay-per-view because then you would be this
- RYDr. Roman Yampolskiy
(overlapped) Then they shut you down.
- SBSteven Bartlett
... intellectual experience.
- RYDr. Roman Yampolskiy
If no one's watching, why would they play it?
- SBSteven Bartlett
I'm saying y- don't you want to fly under the radar? Don't you wanna be the, the guy just living a normal life that the, the masters
- RYDr. Roman Yampolskiy
(overlapped) Those are NPCs. Nobody wants to be an NPC.
- 1:15:07 – 1:17:11
Are You Religious?
- SBSteven Bartlett
Are you religious?
- RYDr. Roman Yampolskiy
Not in any traditional sense, but I believe in simulation hypothesis, which has a superintelligent being, so.
- SBSteven Bartlett
But you don't believe in the, like, you know, the religious books?
- RYDr. Roman Yampolskiy
So different religions. This religion will tell you, "Don't work Saturday." This one, "Don't work Sunday. Don't eat pigs, don't eat cows." They just have local traditions on top of that theory. That's all it is. They're all the same religion. They all worship a superintelligent being, they all think this world is not the main one, and they argue about which animal not to eat. Skip the local flavors, concentrate on, "What do all the religions have in common?" And that's the interesting part. They all think there is something greater than humans, very capable, all-knowing, all-powerful. Then I run a computer game; for those characters in the game, I am that. I can change the whole world, I can shut it down. I know everything in the world.
- SBSteven Bartlett
It's funny, I was thinking earlier on when we started talking about the simulation theory that there's... there might be something innate in us that has been left by the creator. Almost like a clue, like a, like an intuition. 'Cause that's what we, we tend to have through history. Humans have this intuition-
- RYDr. Roman Yampolskiy
Yeah.
- SBSteven Bartlett
... that a- all the things you said are true, that there's this somebody above and that-
- RYDr. Roman Yampolskiy
Well, we have generations of people who were religious, who believed God told them and was there and gave them books, and that has been passed on for many generations. This is probably one of the earliest generations not to have universal religious belief.
- SBSteven Bartlett
I wonder if those people are telling the truth. I wonder if those peop- those people that say God came to them and said something. Imagine that. Imagine if that was part of this
- RYDr. Roman Yampolskiy
(overlapped)
- SBSteven Bartlett
... if that was part of this
- RYDr. Roman Yampolskiy
I'm looking at the news today. Something happened an hour ago, and I'm getting different conflicting results. I can't even get with cameras, with drones, with, like, guy on Twitter there. I still don't know what happened. And you think 3,000 years ago, we have accurate record of translations and... No, of course not.
- SBSteven Bartlett
You know these conversations you have around AI safety, do you think they make people feel good?
- 1:17:11 – 1:20:10
Do These Conversations Make People Feel Good?
- RYDr. Roman Yampolskiy
I don't know if they feel good or bad, but, uh, people find it interesting. It's one of those topics where I can have a conversation about different cures for cancer with an average person, but everyone has opinions about AI, everyone has opinions about simulation. It's interesting that you don't have to be highly educated or a genius to understand those concepts.
- SBSteven Bartlett
'Cause I tend to think that it makes me feel not positive. And I understand that, but I've always been of the opinion that you shouldn't live in a world of delusion where you're just seeking to be positive, have sort of, uh, pos- positive things said, and avoid uncomfortable conversations. Actually, progress often in my life comes from, like, having uncomfortable conversations, becoming aware about something, and then at least being informed about how I can do something about it. And so, I think that's why, that's why I asked the question 'cause I th- I assume most people will, should, if they're, you know, if they're normal h- human beings, listen to these conversations and go, "Gosh, that's scary and this is concerning." And, and then I keep coming back to this point, which is like, what, what do I do with that energy?
- RYDr. Roman Yampolskiy
Yeah. But I'm trying to point out, this is not different than so many conversations. We can talk about, "Oh, there is starvation in this region, genocide in this region, you're all dying, cancer is spreading, autism is up." You can always find something to be very depressed about and nothing you can do about it.... and we're very good at concentrating on what we can change-
- SBSteven Bartlett
Mm-hmm.
- RYDr. Roman Yampolskiy
... what we are good at, and, uh, basically not trying to embrace the whole world as a local environment. So historically, you grew up with a tribe, you had a dozen people around you. If something happened to one of them, it was very rare. It was an accident. Now, if I go on the internet, somebody gets killed everywhere all the time. Somehow, thousands of people are reported to me every day. I don't even have time to notice. It's just too much. So I have to put filters in place. And I think this topic is what people are very good at filtering as, like, "This was this entertaining talk I went to, kind of like a show. And the moment I exit, it ends." So usually, I would go give a keynote at a conference, and I tell them basically, "You're all gonna die. You have two years left. Any questions?" And people be like, "Will I lose my job? How do I lubricate my sex robot?" Like, all sorts of nonsense, clearly not understanding what I'm trying to say there. And those are good questions, interesting questions, but not fully embracing the result. They're still in their bubble of local versus global.
- SBSteven Bartlett
And the people that disagree with you the most, as it relates to AI safety, what is it that they say?
- 1:20:10 – 1:21:36
What Do Your Strongest Critics Say?
- SBSteven Bartlett
What are their counterarguments, typically?
- RYDr. Roman Yampolskiy
So many don't engage at all.
- SBSteven Bartlett
They tweeted.
- RYDr. Roman Yampolskiy
Like, they have no background knowledge in a subject. They never read a single book, single paper, not just by me, by anyone. They may be even working in a field, so they are doing some machine learning work for some company, maximizing ad clicks. And to them, those systems are very narrow, and then they hear that, "Oh, this AI is going to take over the world," they're like, "It has no hands. How would it do that? It- it's nonsense. This guy is crazy. He has a beard. Why would I listen to him," right? That's, uh... Then they start reading a little bit. They go, "Oh, okay, so maybe AI can be dangerous. Yeah, I see that. But we always solved problems in the past. We're going to solve them again. I mean, at some point, we fixed a computer virus or something. So it's the same." And, uh, basically, the more exposure they have, the less likely they are to keep that position. I know many people who went from super careless developer to safety researcher. I don't know anyone who went from, "I worry about AI safety," to, like, "There is nothing to worry about."
- SBSteven Bartlett
Hmm. What are your closing statements?
- RYDr. Roman Yampolskiy
Uh, let's make sure there is not a closing statement we need to give for humanity. Let's make sure we
- 1:21:36 – 1:22:08
Closing Statements
- RYDr. Roman Yampolskiy
stay in charge, in control. Let's make sure we only build things which are beneficial to us. Let's make sure people who are making those decisions are remotely qualified to do it. They are good not just at science, engineering, and business, but also have moral and ethical standards. And, uh, if you're doing something which impacts other people, you should ask their permission before you do that.
- SBSteven Bartlett
If there was one button in front of you, and it would shut down every
- 1:22:08 – 1:23:36
If You Had One Button, What Would You Pick?
- SBSteven Bartlett
AI company in the world right now, permanently, with the inability for anybody to start a new one, would you press the button?
- RYDr. Roman Yampolskiy
Are we losing narrow AI or just superintelligent AGI part?
- SBSteven Bartlett
Losing all of AI.
- RYDr. Roman Yampolskiy
That's a hard question because AI is extremely important. It controls stock market, power plants. It controls hospitals. It would be a devastating accident. Millions of people would lose their lives.
- SBSteven Bartlett
Okay, we can keep narrow AI.
- RYDr. Roman Yampolskiy
Oh yeah, that's what we want. We want narrow AI to do all this for us, but not God we don't control doing things to us.
- SBSteven Bartlett
So you would stop it, you would stop AGI and superintelligence?
- RYDr. Roman Yampolskiy
We have AGI. What we have today is great for almost everything. We can make secretaries out of it. 99% of economic potential of current technology has not been deployed. We make AI so quickly, it doesn't have time to propagate through the industry, through technology. Something like half of all jobs are considered BS jobs. They don't need to be done, bullshit jobs. So those can be not even automated, they can just be gone. But I'm saying we can replace 60% of jobs today with existing models. We've not done that. So if the goal is to grow our economy, to develop, we can do it for decades without having to create superintelligence as soon as possible.
- SBSteven Bartlett
Do you think globally, especially in the Western world, unemployment's only going to go up from here? Do you think rel- relatively this is the low of unemployment?
- 1:23:36 – 1:24:37
Are We Moving Toward Mass Unemployment?
- RYDr. Roman Yampolskiy
I mean, it fluctuates a lot with other factors. There are wars, there's economic cycles, but overall, the more jobs you automate and the higher is the intellectual necessity to start a job, the fewer people qualify.
- SBSteven Bartlett
So if we plotted it on a graph, over the next 20 years, you're assuming unemployment's gradually going to go up over that time?
- RYDr. Roman Yampolskiy
I think so. Fewer and fewer people would be able to contribute. Already, we kind of understand it because we created minimum wage. We understood some people don't contribute enough economic value to get paid anything really. So we had to force employers to pay them more than they're worth.
- SBSteven Bartlett
Mm-hmm.
- RYDr. Roman Yampolskiy
And we haven't updated it. It's, what, 7.25 federally in US? If you keep up with economy, it should be like $25 an hour now, which means all these people making less are not contributing enough economic output to justify what they're getting paid.
- SBSteven Bartlett
We have a closing tradition on this podcast where
- 1:24:37 – 1:26:21
Most Important Characteristics
- SBSteven Bartlett
the last guest leaves a question for the next guest, not knowing who they're leaving it for. And the question left for you is, what are, what are the most important characteristics for a friend, colleague-... or mate?
- RYDr. Roman Yampolskiy
Those are very different types of people.
- SBSteven Bartlett
Mm-hmm.
- RYDr. Roman Yampolskiy
But for all of them, loyalty is number one.
- SBSteven Bartlett
And what does loyalty mean to you?
- RYDr. Roman Yampolskiy
Not betraying you, not screwing you, not cheating on you.
- SBSteven Bartlett
Despite the temptation?
- RYDr. Roman Yampolskiy
Despite the world being as it is, situation, environment.
- SBSteven Bartlett
Dr. Roman, thank you so much. Thank you so much for doing what you do, because you're, you're starting a conversation and pushing forward a conversation and doing research that is incredibly important, and you're doing it in the face of a lot of, um, a lot of skeptics, I'd say. There's a lot of people that have a lot of incentives to discredit what you're saying and what you do, because they have their own incentives and they have billions of dollars on the line, and they have their jobs on the line potentially as well, so it's really important that there are people out there that are willing to, I guess, stick their head above the parapet and come on shows like this and go on big platforms and talk about the unexplainable, unpredictable, uncontrollable future that we're heading towards. So thank you-
- RYDr. Roman Yampolskiy
Thank you.
- SBSteven Bartlett
... for doing that. This book which, which I think everybody should, should check out if they want a continuation of this conversation, I think it was published in 2024, gives a holistic view on many of the things we've talked about today, um, preventing AI failures and much, much more, and I'm gonna link it below for anybody that wants to read it. If people want to learn more from you, if they wanna go further into your work, w- what's the best thing for them to do? Where do they go?
Episode duration: 1:27:37
Transcript of episode UclrVWafRAI