
Ex-Google Exec: How to Position Yourself Now Before the Next AI Phase (2026–2027) | Mo Gawdat
Mo Gawdat (guest), Marina Mogilko (host)
In this episode of Silicon Valley Girl, Marina Mogilko interviews former Google X executive Mo Gawdat about how to position yourself now, before the next AI phase arrives in 2026–2027.
Mo Gawdat warns of AI job shock before utopia arrives
Gawdat predicts a major labor-market disruption within 2–3 years as AI replaces monotonous and many junior-to-midlevel knowledge tasks faster than organizations can adapt.
He argues the deeper crisis is accountability: powerful actors can deploy AI for surveillance, weapons, manipulation, and profit with little consequence or consent from the public.
He claims entrepreneurship is shifting from long-horizon “chess” forecasting to rapid, constant “squash” adaptation, where pivots may happen weekly and product-building cycles compress dramatically.
He asserts traditional education and credentialing will lose practical value as AI becomes an always-on cognitive extension, requiring new learning models focused on using AI to raise problem-solving capacity rather than memorize content.
Despite forecasting “12–15 years of hell,” he believes AI’s eventual dominance will push society toward a more orderly, low-waste “minimum energy” problem-solving regime—yielding a “biblical” utopia after a dangerous transition period.
Key Takeaways
Expect job displacement to hit faster than most plans assume.
Gawdat argues the biggest shift lands within 2–3 years, starting with monotonous roles (call centers, clerks) and extending into junior and then middle-management work as interfaces improve.
Accountability—not raw AI capability—is the root accelerant of dystopia.
He claims the danger comes from actors who can deploy AI at scale (politicians, platforms, “disruptors”) without consent, oversight, or consequences, enabling manipulation, surveillance, and weaponization.
Entrepreneurship is now an agility game, not a prediction game.
His “chess to squash” framing implies founders should watch fast-moving trends, iterate constantly, and treat pivots as routine—because AI compresses build cycles and shifts markets weekly.
Building has been radically democratized; small teams can ship outsized products.
He cites his startup “Emma” as built in ~6 weeks with a few engineers plus multiple AI systems—work he says would have taken years and hundreds of engineers pre-2022.
Use AI to augment thinking, not replace it, or you will get cognitively weaker.
He recommends leveraging AI for non-human strengths (search, synthesis, computation) while keeping judgment and truth-seeking human.
Traditional education models will erode; learning shifts to AI-extended capability.
He argues exams and memorization are obsolete when AI provides instant knowledge; the goal should be raising “human+AI” problem-solving power, not preserving old credential pathways.
Ethics is the practical leverage point for individuals and society.
He urges building, funding, and demanding “AI for good,” resisting surveillance/targeting/autonomous weapons, and teaching children that ethical deployment is the only acceptable norm.
Notable Quotes
“Within the next two to three years you're going to see a massive shift in the jobs market.”
— Mo Gawdat
“The chess board is over. It's off the table. This has turned into squash.”
— Mo Gawdat
“My AI startup took me six weeks to build. If I had started in 2022, it would have taken me four years.”
— Mo Gawdat
“I think education is over, completely over.”
— Mo Gawdat
“AI is gonna make you dumb if you outsource your problem-solving to AI. AI is gonna make you the smartest you've ever been… if you do the intelligence.”
— Mo Gawdat
Questions Answered in This Episode
What exactly are the seven “FACE RIPs” dimensions—can you map each letter to a concrete near-term example and a measurable indicator to watch?
You say the job-market shock is 2–3 years away: which specific middle-class roles (and which parts of those roles) disappear first as interfaces improve?
If capitalism is “labor arbitrage” and labor demand collapses, what realistic transition model replaces it—UBI, negative income tax, public ownership of AI platforms, or something else?
You argue accountability is the core issue: what enforceable governance mechanisms would actually work globally (audits, liability for model outputs, licensing, compute controls)?
Your claim that “education is over” is provocative—what should replace exams and degrees in hiring, and how do we prevent inequality in AI-augmented learning?
Transcript Preview
My AI startup took me six weeks to build. If I had started in 2022, it would have taken me four years. And, and when you really think about that, that basically means everyone now has a chance.
This is Mo, former Chief Business Officer at Google X, where he spent over a decade running business innovations. He says everyone now has a chance, but only if they understand what's actually coming.
The skill of an entrepreneur in the past was the ability to foresee something in the future that no one else saw and to prepare for that. That's a game of chess. It's over. It's off the table. This has turned into squash. I'm just basically saying get prepared.
How much time do we have to prepare?
Within the next two to three years, you're going to see a massive shift in the jobs market. So you asked me what should we do? Number one, learn the skills. Number two...
Mo, thank you so much for doing this. Welcome to Silicon Valley Girl.
Thank you.
You said something that we're about to enter what you call twelve to fifteen years of hell before heaven, possibly starting in 2027. So what's going to happen in 2027?
Uh, I think it will peak in 2027. It w- it already started for sure. Um, I call it FACE RIPs, uh, just as, uh, an acronym for people to remember. Uh, you know, each of those letters is a word, but let me tell, tell the story quickly in, in ways that people will understand. Uh, there is the power and freedom, uh, uh, dimension, uh, so the P and the F. Uh, there is the R and the C, the reality and connection dimension. Uh, there is the, uh, I and the C, the innovation and connection... and, and, uh, sorry, and, um, an economics dimension, and then there is the A. So let me tell them very quickly. This-- To start with, uh, AI is our last innovation, right? Uh, most people don't know that, but we are already building AIs that are building AIs.
Yeah.
We're building AIs that are discovering scientific discovery that will blow you away. Uh, they're reinventing math. Uh, they are understanding biology in ways that we've never seen before. They're, uh, understanding material science in ways that are, uh, just mind-blowing. And so very quickly, uh, most innovation, definitely tech innovation, will be done at the hands of AI. Um, because of that, and because most tasks that need intelligence will be handed over to the machines. As the machine's capabilities, uh, increase, lots of debate around when exactly. Say it's ten years, say it's two years, doesn't really matter, hmm. Eventually, every job that AI does better than humans will be handed over to AI. Um, and every, every task we've ever assigned to them, they eventually ended up doing better than humans. And so, um, the first part of the dystopia is that innovation is going to take away all jobs, okay? Of course, the capitalists of Silicon Valley will tell you, "This is great. It's incredible productivity gains for everyone." Uh, you know, "You see, jobs will be easy. Uh, people won't have to work as hard." All of the fancy PR-led, uh, um, you know, conversations that we try to appear, uh, uh, altruistic when we share them. Uh, the truth is people will be out of jobs, right? Ten, twenty, thirty percent of certain sectors will see unemployment of that rate in the next few years, right? And when that happens, uh, economics at large will change massively. The whole definition of capitalism was labor arbitrage. And without labor, uh, you know, without the need for labor, the obligation to or s- or the need to keep people happy and engaged and alive and un-disgruntled, if you want, to the point where they don't rise, becomes more of an obligation than a desire, right? 
There's a very big difference in, you know, in terms of wanting someone to, to be the be- their best because they are, uh, productive members of society or trying to just give them a UBI, a universal basic income, to just keep them alive so that they don't, uh, uprise. And you can imagine that in a capitalist society, especially like the US and most of the West, you know, while we start with UBI, that UBI is going to be pa-paid by the taxes of the platform owners, and the platform owners will have enough power to, uh, to say, "I don't want to pay that much. I mean, those guys are not producing anything." And so over time, you can imagine how that would turn into a struggle, right? So, so that dimension of intelligence and innovation on one side becoming entirely a machine thing, leading to a redefinition of economics, a redefinition of money, a redefinition of jobs, a redefinition of earnings, um, a redefinition of capitalism.