AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI!

The Diary of a CEO · Mar 26, 2026 · 2h 9m

Karen Hao (guest), Steven Bartlett (host)

Topics covered:
- “Empire of AI” metaphor (data, labor, land, knowledge control)
- AGI as a shifting marketing and power tool
- Sam Altman–Musk dynamics and OpenAI origin story
- OpenAI board coup and employee revolt
- Data annotation and the hidden human supply chain
- Job displacement, “jagged frontier,” and career-ladder erosion
- Data centers: energy use, water demand, pollution, environmental justice
- Regulation, lobbying, and democratic accountability
- Alternatives: targeted/efficient AI (e.g., AlphaFold)

In this episode of The Diary of a CEO, host Steven Bartlett speaks with journalist Karen Hao about AI empires, OpenAI turmoil, and the harms behind the AGI myth.

AI empires, OpenAI turmoil, and harms behind the AGI myth

Karen Hao frames Big AI as “empires” that extract data, exploit global labor, and shape public narratives to consolidate power while claiming a benevolent mission.

She argues “AGI” is an elastic, audience-dependent concept used to sell products, raise capital, and deter regulation, rather than a coherent scientific target.

Hao recounts reported inside dynamics at OpenAI—including the board’s brief firing of Sam Altman—portraying a leadership culture of instability, internal distrust, and contested safety priorities.

The conversation explores real-world externalities of AI scale: job ladder erosion via automation and data-annotation work, plus energy/water/air-quality harms from data centers disproportionately sited in vulnerable communities.

Hao advocates breaking up concentrated AI power and redirecting innovation toward “bicycle not rocket” AI—more targeted, efficient systems—while encouraging public resistance through regulation, lawsuits, local organizing, and withholding data/adoption.

Key Takeaways

“AGI” is treated as a flexible slogan, not a stable technical destination.

Hao argues there is no consensus definition of human intelligence, enabling AI leaders to redefine AGI depending on the audience—Congress (societal cures), consumers (assistant), Microsoft (revenue target), or the public (economic outperformance).


The AI race narrative can function as strategic persuasion (“China will win”), not neutral forecasting.

She claims leaders amplify existential stakes to justify centralizing control and accelerating investment, turning fear and utopian promises into leverage for capital, adoption, and regulatory deference.


OpenAI’s leadership crisis is presented as a governance failure under extreme stakes.

Hao reports that concerns about Altman’s “instability,” inconsistent disclosures, and internal chaos contributed to the board’s decision to fire him quickly—then backlash from employees and Microsoft enabled his rapid return.


AI’s “automation benefits” can be downstream of hidden labor and worsening work conditions.

She highlights data-annotation and RLHF-style workflows as essential to model performance, describing an emerging underclass of precarious workers—including laid-off professionals—training systems that may later displace more jobs.


Job displacement is driven by both model capability and executive choice.

Hao notes firms may replace workers because AI is “good enough” and cheaper, or invoke AI as cover for downsizing; she cites Klarna’s headcount shrinkage through attrition alongside heavier AI use as an example of these intertwined pathways.


Compute scale creates environmental and public-health externalities that land unevenly.

She cites data centers’ massive power and water needs and points to cases like methane turbines used to power Musk’s Memphis “Colossus,” arguing these projects can worsen pollution and resource scarcity in vulnerable communities.


Meaningful change is more plausible through democratizing power than finding ‘better’ CEOs.

Hao argues the core issue is structural—too few people decide for billions—so solutions should target monopoly power, transparency, labor rights, and enforceable rules, not personality-based trust in individual leaders.


Notable Quotes

“They profit enormously off of this myth.”

Karen Hao

“We need to break up the empires of AI.”

Karen Hao

“If most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?”

Karen Hao

“It’s not that these technologies don’t have utility, it’s that the production of these technologies right now is exacting a lot of harm on people.”

Karen Hao

“Let’s not make it go flawlessly if we don’t agree with what they are doing.”

Karen Hao

Questions Answered in This Episode

On AGI: What single operational definition of “AGI” would you accept as legitimate across audiences (public, regulators, investors), and how should it be enforced?


On evidence: Which specific internal documents most clearly show intentional “myth-making” or public manipulation, and what do they say verbatim?


On OpenAI governance: What concrete governance design would prevent a repeat of the Altman firing/rehiring chaos—especially with a capped-profit/nonprofit hybrid?


On labor: What minimum labor standards (pay, transparency, dispute process, mental health support) should be mandatory for data-annotation and RLHF supply chains?


On environment: What should be the permitting threshold for new AI data centers (power, water, emissions), and who should have veto power locally?


Transcript Preview

Karen Hao

So much of what's happening today in the AI industry is extremely inhumane

Steven Bartlett

But this is me playing devil's advocate. And logically, it could be the case that the civilization that accelerates its research with AI is going to be the superior civilization.

Karen Hao

No, it's not. This is a prediction that you're making, right? All-

Steven Bartlett

Elon's making, Zuckerberg's making-

Karen Hao

Yes

Steven Bartlett

... Altman's making.

Karen Hao

And do you know what the common feature of all of them is? They profit enormously off of this myth. You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.

Steven Bartlett

So what do we do about it?

Karen Hao

We need to break up the empires of AI. You know, I've been covering the tech industry for over eight years, interviewed over 250 people, including former or current OpenAI employees and executives. And I can tell you that there are many parallels between the empires of AI and the empires of old, right? Like lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models. Second, they exploit an extraordinary amount of labor, which breaks the career ladder because someone gets laid off, and then they work to train the models on the very job that they were just laid off in, which will then perpetuate more layoffs if that model then develops that skill. And when they talk about that there's gonna be some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there. And then there's the environmental and public health crisis that these companies have created, and how they're able to also spend hundreds of millions to try and kill every possible piece of legislation that gets in their way and will censor researchers that are inconvenient to the empire's agenda. But what I'm saying is not that these technologies don't have utility, it's that the production of these technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences. So let's talk about all of that.

Steven Bartlett

This is super interesting to me. My team give me this report to show me how many of you that watch this show subscribe, and some of you have told us, according to this, that you are unsubscribed from the channel randomly. So favor to ask all of you, please could you check right now if you've hit the subscribe button if you are a regular viewer of this show and you like what we do here. We're approaching quite a significant landmark on this show in terms of a subscriber number. So if there was one simple free thing that you could do to help us, my team, everyone here, to keep this show free, to keep it improving year over year and week over week, it is just to hit that subscribe button and to double-check if you've hit it. Only thing I'll ever ask of you. Do we have a deal? If you do it, I'll tell you what I'll do. I'll make sure every single week, every single month, we fight harder and harder and harder and harder to bring you the guests and conversations that you wanna hear. I've stayed true to that promise since the very beginning of The Diary of a CEO, and I will not let you down. Please help us. Really appreciate it. Let's get on with the show. [upbeat music] Karen Hao, you've written this book in front of me here called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. I guess my first question is, what is the research and the journey you went on in order to write this book we're gonna talk about and the subjects within it today?
