The AI Safety Expert: These Are The Only 5 Jobs That Will Remain In 2030! - Dr. Roman Yampolskiy

The Diary of a CEO · Sep 4, 2025 · 1h 27m

Steven Bartlett (host), Dr. Roman Yampolskiy (guest), Narrator

AI safety, alignment, and the control problem for superintelligence
Timelines for AGI, humanoid robots, and the technological singularity
Mass unemployment, economic disruption, and meaning in a post‑work world
Governance, incentives, and criticism of major AI leaders and labs
Simulation theory and its parallels with religion and meaning
Existential risks from AI‑enabled bioengineering and other technologies
Longevity, Bitcoin, and personal strategy in a radically changing future

In this episode of The Diary of a CEO, Steven Bartlett interviews AI safety researcher Dr. Roman Yampolskiy about the existential risks of superintelligent AI, the timeline to mass automation, and what, if anything, individuals and institutions can do about it.

AI Safety Pioneer Warns: Superintelligence Will Erase Nearly All Jobs

Dr. Roman Yampolskiy, who coined the term “AI safety,” argues that building superintelligent AI we can’t control is an existential risk to humanity, and that current safety progress lags far behind capabilities. He predicts weak AGI by around 2027, humanoid robots handling most physical work by 2030, and a potential technological singularity around 2045, leading to unprecedented unemployment and a world humans can’t meaningfully understand or steer.

He contends that essentially all economically valuable work—cognitive and physical—will be automatable, with perhaps only a thin niche of human‑preferred roles remaining, driven more by sentimental preference than actual advantage. Yampolskiy is deeply skeptical that we can ever make superintelligence reliably “safe,” describing control as an impossible problem rather than a merely hard one.

He criticizes AI lab leaders like Sam Altman for prioritizing winning the race to superintelligence over safety, calls for a global shift away from building general superintelligence toward narrow, domain‑specific AI, and urges individuals and institutions to recognize they are personally gambling with their own survival. Beyond AI risk, he explores simulation theory, longevity, Bitcoin, and how these intersect with meaning, religion, and how we should live now.

Key Takeaways

AI capabilities are accelerating exponentially while safety advances are only linear, widening a dangerous control gap.

Yampolskiy describes AI progress as exponential or even hyper‑exponential—e. ...

Most jobs—cognitive and physical—will be technically automatable within the next decade, leading to extreme structural unemployment.

By roughly 2027, he expects AGI that can serve as a “drop‑in employee,” providing near‑free cognitive labor via software. ...

Building superintelligence is likely irreversible and uncontrollable, making it an existential, not just economic, risk.

Superintelligence is defined as being better than all humans in all domains, including AI research itself. ...

Incentive structures in AI labs and global competition push toward unsafe superintelligence despite the shared risk.

Legally, companies must maximize shareholder value, not global safety, and Yampolskiy notes that major labs openly admit they don’t yet know how to align more advanced systems. ...

The most plausible near‑term extinction pathways likely involve AI‑augmented misuse of other technologies, especially synthetic biology.

Even before true superintelligence, powerful models can help design novel pathogens or optimize biological weapons. ...

Narrow AI can deliver enormous benefits without racing toward general superintelligence, and policy should explicitly favor that path.

He differentiates “narrow superintelligence” (systems that are superhuman in specific domains like protein folding or cancer detection) from general superintelligence. ...

Individuals should accept uncertainty, live meaningfully under risk, and, at the margins, support safety‑focused activism and preparation.

Yampolskiy acknowledges that the average person has limited direct influence on superintelligence trajectories, similar to their influence over nuclear war. ...

Notable Quotes

We’re creating this alien intelligence. If aliens were coming to Earth and you had three years to prepare, you would be panicking right now. But most people don’t even realize this is happening.

Dr. Roman Yampolskiy

First five years at least, I was working on solving this problem. I was convinced we can make safe AI. But the more I looked at it, the more I realized every single component of that equation is not something we can actually do.

Dr. Roman Yampolskiy

I’m not talking about 10% unemployment, which is scary, but 99%. All you have left is jobs where, for whatever reason, you prefer another human would do it for you.

Dr. Roman Yampolskiy

It’s the last invention we ever have to make. At that point, it takes over, and the process of doing science, research, even ethics research, all that is automated.

Dr. Roman Yampolskiy

Without question, there is nothing more important than getting this right.

Dr. Roman Yampolskiy

Questions Answered in This Episode

You argue that indefinite control of superintelligence is impossible, not just hard. What specific theoretical results or impossibility proofs would you like to see to formally settle that claim—and what experiments could falsify your current view?

Dr. ...

If we drew a strict, global policy line around ‘narrow AI only,’ how would you practically define and enforce the boundary between a powerful narrow system and a de facto general agent, especially as capabilities emerge unpredictably?

Your 99% unemployment forecast assumes technical capability to automate almost all jobs; what concrete early warning indicators—economic, social, or technical—should policymakers track over the next five years to know that we’re actually on that trajectory?

You were sharply critical of Sam Altman and major labs’ incentives; if Altman or another frontier‑lab CEO were in front of you now, what single, specific commitment or concession would you ask them to make that could most meaningfully reduce existential risk?

Given your near‑certainty about simulation and your view that suffering here reflects imperfect simulator ethics, what ethical obligations, if any, do we have toward the simulated agents we ourselves will almost certainly create once the technology is cheap and mature?

Transcript Preview

Steven Bartlett

You've been working on AI safety for two decades at least.

Dr. Roman Yampolskiy

Yeah. I was convinced we can make safe AI, but the more I looked at it, the more I realized it's not something we can actually do.

Steven Bartlett

You have made a series of predictions about a variety of different dates. So, what is your prediction for 2027?

Dr. Roman Yampolskiy

(sighs)

Steven Bartlett

Dr. Roman Yampolskiy is a globally recognized voice on AI safety and an associate professor of computer science. He educates people on the terrifying truth of AI... And what we need to do to save humanity.

Dr. Roman Yampolskiy

In two years, the capability to replace most humans in most occupations will come very quickly. And then in five years, we're looking at a world where we have levels of unemployment we've never seen before. Not talking about 10%, but 99%.

Narrator

(beep)

Dr. Roman Yampolskiy

And that's without superintelligence, a system smarter than all humans in all domains. So, it would be better than us at making new AI, but it's worse than that. We don't know how to make them safe, and yet we still have the smartest people in the world competing to win the race to superintelligence.

Steven Bartlett

But what do you make of people like Sam Altman's journey with AI?

Dr. Roman Yampolskiy

So, a decade ago we published guardrails for how to do AI right. They violated every single one, and he's gambling eight billion lives on getting richer and more powerful. So, I guess some people want to go to Mars, others want to control the universe. But it doesn't matter who builds it. The moment you switch to superintelligence, we will most likely regret it terribly.

Steven Bartlett

And then by 2045...

Dr. Roman Yampolskiy

Now, this is where it gets interesting.

Steven Bartlett

Dr. Roman Yampolskiy. Let's talk about simulation theory.

Dr. Roman Yampolskiy

I think we are in one, and there is a lot of agreement on this. And this is what you should be doing in it, so we don't shut it down. First...

Steven Bartlett

I see messages all the time in the comments section that some of you didn't realize you didn't subscribe, so if you could do me a favor and double-check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, it's the free thing that anybody that watches this show frequently can do to help us here to keep everything going in this show and the trajectory it's on. So, please do double-check if you've subscribed, and, uh, thank you so much. Because in a strange way, you are- you're part of our history, and you're on this journey with us, and I appreciate you for that. So, yeah, thank you. Dr. Roman Yampolskiy. What is the mission that you're currently on, 'cause it's quite clear to me that you are on a bit of a mission, and you've been on this mission for, I think, the best part of two decades at least.

Dr. Roman Yampolskiy

I'm hoping to make sure that the superintelligence we're creating right now does not kill everyone.
