The Diary of a CEO
Karen Hao: Why 'AGI' is a slogan, not a destination
Through race-or-die narratives, AI leaders extract resources and legitimacy; hidden labor and worsened conditions sit beneath automation gains.
EVERY SPOKEN WORD
120 min read · 23,585 words
- 0:00 – 2:47
Intro
- KHKaren Hao
So much of what's happening today in the AI industry is extremely inhumane.
- SBSteven Bartlett
But this is me playing devil's advocate. And logically, it could be the case that the civilization that accelerates its research with AI is going to be the superior civilization.
- KHKaren Hao
No, it's not. This is a prediction that you're making, right? All-
- SBSteven Bartlett
Elon's making, Zuckerberg's making-
- KHKaren Hao
Yes
- SBSteven Bartlett
... Altman's making.
- KHKaren Hao
And do you know what the common feature of all of them is? They profit enormously off of this myth. You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.
- SBSteven Bartlett
So what do we do about it?
- KHKaren Hao
We need to break up the empires of AI. You know, I've been covering the tech industry for over eight years, interviewed over 250 people, including former and current OpenAI employees and executives. And I can tell you that there are many parallels between the empires of AI and the empires of old, right? First, they lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models. Second, they exploit an extraordinary amount of labor, which breaks the career ladder: someone gets laid off, then works to train the models on the very job that they were just laid off from, which then perpetuates more layoffs once the model develops that skill. And when they say that there are gonna be some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there. And then there's the environmental and public health crisis that these companies have created, and how they're able to spend hundreds of millions to try and kill every possible piece of legislation that gets in their way, and how they censor researchers who are inconvenient to the empire's agenda. But what I'm saying is not that these technologies don't have utility; it's that the production of these technologies right now is exacting a lot of harm on people. And we have research that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences. So let's talk about all of that.
- SBSteven Bartlett
This is super interesting to me. My team gave me this report to show me how many of you that watch this show subscribe, and some of you have told us, according to this, that you have been unsubscribed from the channel randomly. So, favor to ask of all of you: please could you check right now that you've hit the subscribe button, if you are a regular viewer of this show and you like what we do here. We're approaching quite a significant landmark on this show in terms of subscriber numbers. So if there was one simple, free thing that you could do to help us, my team, everyone here, to keep this show free, to keep it improving year over year and week over week, it is just to hit that subscribe button and to double-check that you've hit it. Only thing I'll ever ask of you. Do we have a deal? If you do it, I'll tell you what I'll do: I'll make sure every single week, every single month, we fight harder and harder and harder to bring you the guests and conversations that you wanna hear. I've stayed true to that promise since the very beginning of The Diary of a CEO, and I will not let you down. Please help us. Really appreciate it. Let's get on with the show. [upbeat music]
- 2:47 – 5:08
Why Some Insiders Say AI Is Driven More By Profit Than Progress
- SBSteven Bartlett
Karen Hao, you've written this book in front of me here called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. I guess my first question is, what is the research and the journey you went on in order to write this book we're gonna talk about and the subjects within it today?
- KHKaren Hao
I took a strange route into journalism. I studied mechanical engineering at MIT. And so when I graduated, I moved to San Francisco, I joined a tech startup, I became part of Silicon Valley. And I basically received an education in what Silicon Valley is about, because a few months into joining a very mission-driven startup that was focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company was not profitable. And this was, in hindsight, a very pivotal moment for me, because I thought, "If this hub is ultimately geared towards building profitable technologies, and many of the problems in the world that I think need to be solved are not profitable problems, like climate change, then what are we actually doing here?" Like, how did we get to a point where innovation is not necessarily working in the public benefit, and is sometimes even undermining the public benefit in pursuit of profit? In that moment, I had a bit of a crisis where I thought, "Well, I just spent four years trying to set myself up for this career that I now don't think I am cut out for." And I thought, "Well, I might as well just try something totally different. I've always liked writing." And that's how, after two years, I landed a role at MIT Technology Review covering AI full-time, and that gave me a space to then explore all of these questions of who gets to decide what technologies we build, how money and ideology drive the production of those technologies, and how we ultimately make sure that we reimagine the innovation ecosystem to work for a broad base of people all around the world. And so that is kind of how I then set off on this journey of ultimately writing a book. I didn't realize that I was working towards writing a book, but starting in 2018, when I took that job, was essentially the moment in which I began researching the story that I document
- 5:08 – 11:07
What 250 OpenAI Insiders Revealed Behind Closed Doors
- KHKaren Hao
in it.
- SBSteven Bartlett
A very timely time to start working in artificial intelligence. For anyone that doesn't know, this is pre the OpenAI ChatGPT launch moment that shook the world. But in writing this book, you interviewed a lot of people and went to a lot of places. Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, et cetera?
- KHKaren Hao
I interviewed over 250 people, so over 300 interviews. Over 90 of those people were former or current OpenAI employees and executives. So the book covers the inside story of OpenAI's first decade and how it ultimately got to where it is today. But I didn't wanna write a corporate book. I felt very strongly that in order to help people understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley. These companies tell us that AI is going to benefit everyone, and that's their mission. But you really start to see that rhetoric break down when you go to the places that look nothing like Silicon Valley, that speak nothing like Silicon Valley, and that have a history and culture that are fundamentally different as well. And that's where you start to really understand the true reality of how this industry is unfolding around us.
- SBSteven Bartlett
Karen, I often try and steer conversations, but in this situation, I feel like it's probably my responsibility to follow. So with that in mind, I'm gonna ask you: where does this journey begin, and where should we be starting if we're talking about the subjects of empires of AI, AI generally, artificial intelligence? And I'd also say there's one thing I'm really keen to do in this conversation, which I often see left out of conversations: let's assume that our viewers know nothing about AI.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
So they don't know what scaling laws are, or GPUs, or compute, or whatever. And let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language, so that we can bring as many people with us as we possibly can.
- KHKaren Hao
Yes.
- SBSteven Bartlett
Where should we start?
- KHKaren Hao
I think we should start with when AI started as a field. So this was back in 1956, and there was a group of scientists that gathered at Dartmouth College to start a new scientific discipline, to try and chase an ambition. And specifically, an assistant professor at Dartmouth, John McCarthy, decided to name this discipline artificial intelligence. This was not the first name that he tried. The previous year he had tried the name automata studies, and the reason why some of his colleagues were concerned about the new name was because it pegged the idea of this discipline to recreating human intelligence. And back then, as is true today, we have no scientific consensus around what human intelligence is. There's no definition from psychology, biology, neurology, and in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives. It's been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people. There are no goalposts for this field, and there are no goalposts for the industry when they say that they are ultimately trying to create AI systems that would be as smart as humans. How do we even define what that means, and when are we going to get there, if we don't know how to define the destination? And what that effectively means is that these companies can just use the term artificial general intelligence, which is now the term for this ambitious goal of recreating human intelligence, however they want to, and they can define and redefine it based on what is convenient for them. So in OpenAI's history, it has defined and redefined it many times. When Sam Altman is talking with Congress, AGI is a system that's gonna cure cancer, solve climate change, cure poverty. When he's talking with consumers that he's trying to sell his products to, it's the most amazing digital assistant that you're ever gonna have. 
When he was talking with Microsoft, you know, in the deal that OpenAI and Microsoft struck, where Microsoft invested in the company, it was defined as a system that will generate a hundred billion dollars of revenue. And on OpenAI's own website, they define it as highly autonomous systems that outperform humans at most economically valuable work. This is, like, not a coherent [chuckles] vision of one technology. These are very different definitions, each spoken out loud to the audience that needs to be mobilized: to ward off regulation, or to get more consumer buy-in into the industry's quest, or to get more capital, more resources, for continuing on this journey with ambiguous definitions.
- SBSteven Bartlett
I mean, speaking about different definitions through time: in 2015, in a blog post that Sam Altman wrote before OpenAI was officially announced, he explicitly outlined the existential risk by saying, "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen, for example, an engineered virus, but AI is probably the most likely way to destroy everything."
- KHKaren Hao
In general, when Altman is writing for the public or speaking for the public, he does not just have the public as the audience in mind. There are other people that he is trying to motivate or mobilize when he says these things. And in that particular moment, Altman was trying to convince Elon Musk to join him on co-founding OpenAI. And Musk, in particular, was spending all of his time sounding the alarm on what he saw as a huge existential threat that AI could pose. And so in that blog post, if you look at the, the language that Altman uses side by side with the language
- 11:07 – 15:06
Did Sam Altman Really Outmaneuver Elon Musk?
- KHKaren Hao
that Musk was using at the time, it mirrors all the things that Musk was saying.
- SBSteven Bartlett
It's identical.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
I mean, 10 years ago, Musk was going on podcasts, saying, tweeting, whatever, that the greatest existential risk to humanity was AI.
- KHKaren Hao
Yeah. And so, you know, his parenthetical, "there are other things that might actually be more likely to happen, like engineered viruses," is because up until then, Altman had been talking just about engineered viruses. And so now that he needs to pivot to speak to an audience of one, to Musk, he needs to resolve the contradiction between what he's now elevating as his new central fear, to match Musk's central fear, and what he had previously been saying. So that's why he's like, "I think this is the threat now, even though before I said something else." [laughs]
- SBSteven Bartlett
And are you saying that Sam Altman manipulated Musk? Because Elon did end up donating a huge amount of money to OpenAI and co-founding it, I believe, with Sam Altman.
- KHKaren Hao
Elon Musk did end up co-founding it with Altman, and certainly from Musk's perspective, he does feel manipulated, because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor. And of course, Musk then leaves, and through some of the documents that came out during the lawsuit that Musk and Altman are engaged in now, it has become clear that there was a degree to which Musk was actually muscled out a little bit. And so that's why he's left with this very intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this.
- SBSteven Bartlett
So in 2015, Sam Altman is writing these blog posts saying, "This is, you know, one of the greatest existential threats." Around the same time, Musk was giving some now-famous speeches, including one at MIT in 2014 where he said that AI was the biggest existential threat and compared developing AI to summoning the demon. And what you're saying here is that Sam Altman was just mirroring the language that Elon was using in order to get Elon involved in OpenAI, and later, it appears (and again, there's a legal case taking place now) that Sam might have muscled Elon out in some capacity.
- KHKaren Hao
Yeah. So we know from the lawsuit, and the documents that have come out in the lawsuit, that when Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, the chief technology officer at the time, were deciding whether or not to maintain OpenAI as a nonprofit, because it was originally founded as a nonprofit, they decided, "Okay, we need to create a for-profit entity." But the question was, who should be the CEO of this for-profit entity? Should it be Musk or should it be Altman? Because they were the two co-chairmen of the nonprofit. And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO. But through my reporting, I discovered that Altman then appealed personally to Greg Brockman, who was a friend of his, they'd known each other for many years through the Silicon Valley scene, and said, "Don't you think that it would be a little bit dangerous to have Musk be the CEO of this company, this new for-profit entity? Because, you know, he's a famous guy. He has a lot of pressures in the world. He could be threatened, he could act erratically, he could be unpredictable. And do we really want a technology that could be super powerful in the future to end up in the hands of this man?" And that convinced Greg, and Greg then convinced Ilya. You know, I think there's a point here: do we really want to give this much power to Musk? And that is why Musk then leaves, because the two switch their allegiances. They say, "Actually, we want Altman to be the CEO." And then Musk was like, "If I'm not CEO, I'm out."
- 15:06 – 17:53
What People Get Wrong About Sam Altman
- SBSteven Bartlett
So it sounds like Sam again managed to persuade someone to do something.
- KHKaren Hao
Mm-hmm.
- SBSteven Bartlett
I guess this begs the question, what do you think of Sam Altman?
- KHKaren Hao
I think he's a very controversial figure.
- SBSteven Bartlett
You did an interesting pause. It's a pause where someone tries to select their words carefully. [chuckles]
- KHKaren Hao
Well, this is what's so interesting about those interviews: people are extremely polarized on Altman. No one has in-between feelings about him. Either they think he's the greatest tech leader of this generation, akin to the Steve Jobs of the modern era, or they think that he's really manipulative, an abuser, and a liar. And what I realized, because I interviewed so many people, is that it really comes down to what that person's vision of the future is and what their goals are. So if you align with Altman's vision of the future, you're gonna think he's the greatest asset you could ever have on your side, because this man is really persuasive. He's incredible at telling stories. He's incredible at mobilizing capital, at recruiting talent, at getting all the inputs that you need to then make that future happen. But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision, even if you fundamentally don't agree with it. And this is the story especially of Dario Amodei, the CEO of Anthropic, who was originally an executive at OpenAI.
- SBSteven Bartlett
So for people that don't know, Dario now runs Anthropic, which is the maker of Claude. A lot of people are probably more familiar with Claude.
- KHKaren Hao
Yeah. And it's one of the biggest competitors to OpenAI. And Amodei, at the time when he was an executive at OpenAI, thought that Altman was on the same page with him, and then over time began to feel that Altman was actually on exactly the opposite page, and felt that Altman had used Amodei's intelligence, capabilities, and skills to build things and bring about a vision of the future that he actually fundamentally didn't agree with. And so that's why people end up with this bad taste in their mouths. And, you know, I've been covering the tech industry for over eight years and covered many companies. I've covered Meta, Google, Microsoft, in addition to OpenAI. And Altman is the only figure with whom I've seen this degree of polarization, where people cannot decide whether he's the greatest or the worst. [chuckles]
- 17:53 – 25:33
The Power Struggle: Who Tried To Oust Sam Altman—And Why
- SBSteven Bartlett
You mentioned Dario there, and what I found really interesting is to look at how people's quotes evolve over time with their incentives. So I was looking at all of the things they've said on the record, on podcasts, in their blog posts, to see how it's evolved over time. And Dario, who was the former VP of research at OpenAI and has now moved on to Anthropic, who are taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI, and this is a quote: "I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen. My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%." And also you mentioned Ilya, who was a co-founder of OpenAI and then left. I guess the first question I'd ask is, why did Ilya leave?
- KHKaren Hao
That's a great question. So he was instrumental in trying to get Sam Altman fired, and he's another one of the people who over time began to feel like he was being manipulated by Altman towards contributing something that he didn't believe in. And for-
- SBSteven Bartlett
How'd you know?
- KHKaren Hao
Because I interviewed a lot of people. Ilya, in particular, had two pillars that he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure that we get to it safely. And he felt that Altman was actively undermining both. He felt that Altman was creating a very chaotic environment within the company, where he was pitting teams against each other, where he was telling different things to different people.
- SBSteven Bartlett
Have you ever spoken to him?
- KHKaren Hao
I have. I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review.
- SBSteven Bartlett
And back in 2019, he has a quote where he says, "The future's gonna be good for AIs regardless. It would be nice if it was also good for humans as well. It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful, and I think a good analogy would be the way that humans treat animals. It's not that we hate animals. I think humans love animals, and I have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important to us. And I think by default, that's the kind of relationship that's going to be between us and AI, which are truly autonomous and operating on their own behalf." And that was in 2019, the year that you interviewed him.
- KHKaren Hao
One of the things that I feel like we should take a step back to examine, going back to this idea of what even is artificial intelligence, is what we mean by intelligence. A huge part of the views of the different people in the quotes that you're reading derives from a specific belief that they each have on this question of what constitutes intelligence. Ilya, throughout his research career, has felt that ultimately our brains are giant statistical models. This is not something that we actually know; this is his own hypothesis. It's also the hypothesis of his mentor, Geoffrey Hinton, who was also on this podcast. This is why they have such a strong conviction in the idea of building AI systems that are statistical models, and in the idea that this particular approach is going to lead to systems that are intelligent as we are intelligent. It's a hypothesis that they have. It's not one that has been proven by science, and some people vehemently disagree with them on this particular thing. But if you step into their shoes and take on that hypothesis, and assume that it's true that our brains are in fact statistical engines, and that these systems that they're building are also statistical engines that they're making bigger and bigger until they become the size of the human brain, that's why they say that making this comparison, where the system will become equal to human intelligence and then maybe exceed human intelligence, is relevant in their framework. And Ilya gave a talk at one point at this really prominent AI research conference that happens every year, called Neural Information Processing Systems. It's a mouthful. But he gave this keynote where he shows a chart of the size of brains against the intelligence of species, and it's roughly linear. The bigger the size of the brain, the more intelligent the species. 
And so for him, he thinks he's building a digital brain, because he thinks brains are just statistical engines. So from that logic, it's like, okay, if we then build a bigger statistical engine than the human brain, then based on this chart, it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to. But it's really important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there's a lot of debate about whether this is in fact the case. And some of the biggest critics say it's very reductive to think of our brains as simply statistical engines.
- SBSteven Bartlett
Why does it matter to know the mechanism? Is it not just important to know the outcome, which is that it's gonna be able to make a video for me, or agents are gonna be able to do the work that I do? Does it really, really matter for us to know the mechanism behind it?
- KHKaren Hao
Yes and no. It matters because these companies are driving their future actions based on this hypothesis. They have decided, "We think that this hypothesis is true, so we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence," and that's then having global consequences. Like, in order to continue doing that, they're hoovering up more and more data. They're building more and more data centers. They're exploiting more and more labor in order to continue on this path. Here's a question that I think is important to ask: why are we trying to build AI systems that are duplicative of humans? We're kind of having this conversation right now where we've just taken the premise of this industry as a good thing. Like, they said that we should be building AGI, so we say that we should be building AGI. But I would like to ask, why are we doing that? Why is it that we are building a technology that is ultimately designed to replace and automate people away? That is not the enterprise of technology. We should be building technology, and the purpose of technology throughout history has been to improve human flourishing, not to replace people. And so a critical part of my critique of these companies, and of these scientists that have just adopted this goal and have relentlessly pursued it and have had enormous capital and enormous resources to pursue it, is: is this the right goal? Like, why are we doing this? Why can't we just build AI systems that do things like accelerate drug discovery and improve people's healthcare outcomes, which are systems that have nothing to do with the statistical engines that they're trying to build to duplicate
- 25:33 – 31:55
The Real Reason Tech Giants Are Racing To Build AI
- KHKaren Hao
the human brain.
- SBSteven Bartlett
So why aren't they doing it? I mean, you've interviewed all these people, what is it, over 300 interviews in total, 90 or so of them with people from OpenAI, the maker of ChatGPT. Why do you think they're doing it?
- KHKaren Hao
I think it's because they're driven by an imperial agenda, and that is why I call these companies empires of AI.
- SBSteven Bartlett
What do you mean by an imperial agenda? What, what does that term mean?
- KHKaren Hao
Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do, the scale that they operate at, and what motivates them to do what they do. And there are many parallels that you see between what I call the empires of AI and the empires of old. First, they lay claim to resources that are not their own in the pursuit of training these models: the data of individuals, the intellectual property of artists, writers, and creators, their land grabbing in order to build these supercomputer facilities for training the next generation of models. Second, they exploit an extraordinary amount of labor. They contract hundreds of thousands of workers all around the world, including in the US, to ultimately make these technologies; we can talk about that more. And they also design their tools to be labor-automating, so that when the technologies are deployed, they erode labor rights, and this is a political choice that they make. Third, they monopolize knowledge production, so they project this idea that they're the only ones that really understand how the technology works, and so if the public doesn't like it, it's because the public doesn't actually know enough about this technology. They do this to the public, they do this to policymakers, and they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI.
- SBSteven Bartlett
You think they're gaslighting the public, in a way?
- KHKaren Hao
They are, yeah. So if most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?
- SBSteven Bartlett
No.
- KHKaren Hao
And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world. So they set the agenda on AI research in soft ways, simply by funneling money to their priorities so that only certain types of AI research are produced. But they will also censor researchers when they do not like what the researcher has found. And so I talk about the case of Dr. Timnit Gebru in my book, who was the ethical AI team co-lead at Google. She was literally hired to critique the types of AI systems that Google was building, and she co-wrote a critical research paper showing how large language models specifically were leading to certain types of harmful outcomes. In an attempt to try and stop this research from being published, Google ended up firing Gebru, and then fired her co-lead, Margaret Mitchell. And so they control and quash the research that is inconvenient to the empire's agenda.
- SBSteven Bartlett
Do you have an example of this happening to journalists as well, people that are asking questions of their team members? I think I was watching a video of yours where a young man was saying he had someone show up at his door, knock on his door, and ask for information, emails, text messages, and this person was from one of the big AI companies.
- KHKaren Hao
This was OpenAI starting to subpoena some of its critics, yeah, as part of what appears to be a campaign of intimidation, but also a campaign of fishing for more information, to map out the network of critics further. This was a man who runs a small watchdog nonprofit, and they had been doing a lot of work during that time to ask questions about OpenAI's attempt to convert from a nonprofit to a for-profit. Ultimately, OpenAI was successful in that conversion, but during the period when completing this conversion was sort of existential for OpenAI, there were a lot of civil society groups and watchdog groups, like The Midas Project, who were trying to prevent the process from happening in the dead of night. They were trying to get more transparency. They were trying to have more public debate about this, because it's unprecedented. And it was then that there was a knock on his door and he was served papers.
- SBSteven Bartlett
What do the papers say?
- KHKaren Hao
The papers asked him to reproduce every single piece of communication that he had had that might have involved Musk. So this was, like, this strange paranoia that OpenAI had that Musk was somehow funding these people to block the conversion. None of them were actually funded by Musk. So in this particular case, he simply answered, "You know, I don't have any documents, because these don't exist."
- SBSteven Bartlett
So going back to this point of empires, you were saying that one of the factors of an empire is a land grab, and then the next one was...
- KHKaren Hao
Was labor exploitation.
- SBSteven Bartlett
Labor exploitation.
- KHKaren Hao
The third one, controlling knowledge-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... production. And one of the other things that's really important to understand about the AI empires in particular is that empires always have this narrative that they tell the public: "We're the good empire, and we need to be an empire in the first place because there are also bad empires in the world. And if you allow us to take all the resources and use all of the labor, then we promise we will bring you progress and modernity for everyone."
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
"We will bring you to this utopic state akin to an AI heaven. But if the evil empire does it first, we will descend into a hell." And-
- SBSteven Bartlett
And the evil empire being, in this case?
- KHKaren Hao
In this case, most often it's China. But actually in the early days, OpenAI evoked Google as-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... the evil empire. So all of their decisions were about, we need to do it first, because otherwise Google will, this evil corporation that's driven by profit, versus us as a benevolent nonprofit. Like, this is a critical contest of who wins.
- SBSteven Bartlett
Do you think
- 31:55 – 33:28
Do AI CEOs Actually Believe This Will Help Humanity?
- SBSteven Bartlett
the people building these AI companies believe that the outcome is gonna be all good now? Do you think they think that it's gonna be, it's gonna serve everyone, it's gonna be the age of abundance, everything's gonna go well?
- KHKaren Hao
So-
- SBSteven Bartlett
What do you think they believe?
- KHKaren Hao
So this is so-
- SBSteven Bartlett
What do you think Sam believes?
- KHKaren Hao
[laughs] So this is so funny. Such a core part of the mythology that they create around the AI industry includes the belief that it could go very badly. It goes hand in hand. Like they need that part of the myth in order to then say, "And that's why we need to be in control of the technology, because that's the only way that it's gonna go really, really well." And Altman has said publicly, you know, the worst case, lights out for everyone. But best case, we cure cancer, we solve climate change, and there's abundance. And Dario Amodei, same kind of rhetoric. He's like, worst case, catastrophic or existential harm for humanity. Best case, mass human flourishing. So this is like two sides of the same coin. Like they have to use both of these narratives in order to continue justifying an extremely anti-democratic approach to AI development where there should not be broad participation in developing this technology. They must be the ones controlling it at every step of the way.
- SBSteven Bartlett
Sam Altman did a tweet saying, "There are some books coming out about OpenAI and me. We only participated in two of them, one by Kesh Hagey-
- KHKaren Hao
Keech Hagey
- SBSteven Bartlett
... Keech Hagey, focused on me,
- 33:28 – 41:27
Why OpenAI Refused To Be Part Of This Book
- SBSteven Bartlett
and one by Ashlee Vance on OpenAI." Um, he went on to say, "No book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to." You quote retweeted that tweet from Sam Altman, and you said, "The unnamed book, Empire of AI, is mine." Do you believe that tweet from Sam Altman was in reference to your book?
- KHKaren Hao
A hundred percent, because there's only three books coming out about him.
- SBSteven Bartlett
And he'd caught wind that your book was coming out and-
- KHKaren Hao
He knew my book was coming out because I had contacted OpenAI from the very beginning of my process and said, "I'm working on a book now. Will you participate in it?" And actually, initially, they said yes, even though... So my history with OpenAI: I profiled the company for MIT Technology Review. I embedded within the office for three days in twenty nineteen. My profile comes out in twenty twenty. The leadership are very unhappy. And in my book, I actually quote an email that I received that Sam Altman sent to the company about my profile saying, "Yeah, this is not great." [laughs] And from then on, the company's stance to me was, "We are not going to participate in anything that you do. We are not going to respond to anything, any questions that you send." And these were, you know, things that they explicitly articulated. It wasn't me inferring. Um, so I had a colleague at MIT Technology Review that also covered AI, and at one point, OpenAI sent him this press release being like, "We would love for you to cover this story." And he was like, "I'm really busy. Will you send it to Karen?" And they were like, "Oh, no, we have a history, you understand." And so-
- SBSteven Bartlett
[laughs]
- KHKaren Hao
So for three years, they refused to talk to me, but then I ended up at The Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication. And so I started having, you know, more dialogue with them. Every time I wrote a piece, I would always send them, "Here's my request for comment." I would always ask them like, "Will you sit for interviews?" And we did get to a more productive relationship. And then I embarked on the book, so I left the Journal to focus on the book full-time. And I told them right away, "I'm working on this book. I want to continue this productive conversation where I make sure I reflect OpenAI's perspective in the book." And so they were like, "We can arrange interviews for you. You can come back to the office. We'll set up some conversations." And then as we were going back and forth on this, the board fires Sam Altman. And that's when things started going kind of south, because the company started becoming very sensitive to scrutiny. And so then they started pushing, kicking the can down the road, down the road, down the road, and I kept saying, "Hey, when are we rescheduling this? What's going on?" And then I get an email saying, "We are not going to participate at all. You are not coming to the office. You're not doing interviews." And I had actually already booked my tickets, so I was already gonna fly to San Francisco to have the interviews. And so then I told them, I was like, "That's fine. I will still engage in the process where I'll give you extensive requests for comment. Through my reporting, I'll keep you updated on all the things that I'm finding so that you can choose to still comment." I gave them 40 pages of requests for comment, and I gave them over a month to respond to all of that. So this was when the tweet came out: we were doing all this back and forth, trying to... And that's when Altman tweeted this.
- SBSteven Bartlett
Hmm.
- KHKaren Hao
And they never responded to a single one of the 40 pages.
- SBSteven Bartlett
Sam Altman does a lot of interviews.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
You know, he's doing a lot of interviews all the time. He's done every podcast. I've seen him on everything from Tucker Carlson to, I think, Theo Von, Joe Rogan, um, podcasts all over the world. I wonder why he won't do mine. [laughs]
- KHKaren Hao
[laughs] Well, maybe. Uh-
- SBSteven Bartlett
I don't know why. I, I, I don't know. I think I'm fair with everyone. I just ask questions I genuinely care about. I don't come in with huge preconceptions, or at least I meet people for the first time. But I've heard through the grapevine, um, that he doesn't wanna do mine.
- KHKaren Hao
I mean, going back to what you were saying earlier, that w- with this, the way that OpenAI and these companies control research, you asked, "Do they also do this with journalists?" I mean, yes, the answer is yes. And apparently they, they also do it with anyone who has, you know, a broad mass communications platform.
- SBSteven Bartlett
Mm.
- KHKaren Hao
It's not just about the conversation that you're going to have with them. It's about who you also choose to platform.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
And there's this huge problem in technology journalism where companies know that a really big carrot that they can give to technology journalists is access.
- SBSteven Bartlett
Yeah, yeah, yeah.
- KHKaren Hao
And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone that they didn't want you to speak to.
- SBSteven Bartlett
This is so true, and I don't think the average person really truly understands this.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
So this kinda sounds like a theory as you say it, but I'm not gonna name names here 'cause I don't think it's important, but there is a particular person in AI who, um, whose team have basically dangled the carrot of them coming here for, like, 18 months. And I'm like, you don't, you don't have to dangle the carrot. I'm gonna speak to whoever I want to regardless of the carrot or not.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
And when this person comes, if they wanna come, I'll s- I'll give them a fair shot. I'll ask them all genuinely curious questions about what they're doing, their incentives. I won't gotcha them. I don't have a history of ever gotchaing anybody.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
Even if I dis- like, even if I have a difference of opinion-
- KHKaren Hao
Yeah
- SBSteven Bartlett
... I'll ask the question.
- KHKaren Hao
Yeah.
- 41:27 – 44:58
Why Sam Altman Was Forced Out
- KHKaren Hao
Yeah. And just-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... report what I see are the facts being presented to me, irrespective of whether [laughs] the company likes it or not. And most often, the company really does not like it.
- SBSteven Bartlett
Mm.
- KHKaren Hao
But I continue to do the work. They don't need to open the front door for me. I, uh, was still able to do more than 300 interviews.
- SBSteven Bartlett
So S- Sam Altman gets kicked off the OpenAI executive team. Did you find out why that happened?
- KHKaren Hao
Yeah. There's a scene-by-scene recounting.
- SBSteven Bartlett
From who?
- KHKaren Hao
I can't remember the exact number of sources, so I don't wanna misquote myself, but it was around six or seven people that were directly involved or had spoken to people directly involved in the decision-making process. So Ilya Sutskever is seeing these serious concerns about the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company. He then approaches a board member, Helen Toner.
- SBSteven Bartlett
Ilya, for anyone that doesn't know, is the co-founder of OpenAI we mentioned earlier.
- KHKaren Hao
Yes. And he kind of does a bit of a sounding board thing to Helen just because Ilya's freaking out. He's been, like, sitting on these concerns for a while, and he's like, "If I tell this to someone, this could also be really bad for me if Altman finds out." And so he asks for a meeting with Toner, and in that first meeting, he barely says a thing. He's just like dancing around trying to figure out, "Hey, is this someone that I can maybe trust to divulge more information?"
- SBSteven Bartlett
And Toner's role and responsibilities at OpenAI were?
- KHKaren Hao
She was a board member at the time.
- SBSteven Bartlett
Just a board member.
- KHKaren Hao
Yeah. And, and specifically an independent board member.
- SBSteven Bartlett
Okay.
- KHKaren Hao
So OpenAI, when it was a nonprofit, the board was split between people who had a stake, financial stake in the company, and then people who were fully independent. And this was meant to be a structure that would balance the decision-making to be in the benefit of the public interest rather than to be in the benefit of the for-profit entity that OpenAI then created.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
And Ilya, as a non-independent board member, was approaching Toner as an independent board member to try and see whether or not she was potentially seeing or hearing the same things that he was about the effect that Altman was having on the company. This then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members. So Mira Murati was at that point the chief technology officer of OpenAI. These two senior leaders, essentially through these conversations and through documentation that they're pulling together, like emails, Slack messages, and so forth, they convey to the three independent board members, "We are very concerned about Altman's leadership. Like he is creating too much instability at the company, and it is like he is the root of the problem." They were trying to say to
- 44:58 – 51:13
The Hidden Instability, What Was Altman Actually Disrupting Internally?
- KHKaren Hao
these independent board members, like, "The problem will not be fixed unless Altman is removed because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore, and they're competing rather than collaborating on what's supposed to be this really, really important technology."
- SBSteven Bartlett
When you say instability, that's a, that's quite a vague term. That could mean lots of things. Like instability could mean pushing people hard to work harder.
- KHKaren Hao
Right.
- SBSteven Bartlett
What, what do you mean by instability in spec- as specific terms as you can possibly say them?
- KHKaren Hao
When ChatGPT came out in the world, OpenAI was wholly unprepared.
- SBSteven Bartlett
Mm-hmm. Yeah.
- KHKaren Hao
They didn't think that they were launching a gangbusters product. [chuckles]
- SBSteven Bartlett
Yeah.
- KHKaren Hao
They thought they were releasing a research preview that would help them get the data flywheel going, collect a bunch of data from users that would then inform what they thought would be the gangbusters product, which was a chatbot using GPT-4, and ChatGPT was using GPT-3.5. And because of that, there were servers crashing all the time, because they had to scale their infrastructure, you know, faster than any company in history, and there were, um, all of these outages. They were trying to also hire faster than any company in history to try and have more personnel there. And they were then sometimes hiring people that they were like, "Actually, we made a mistake. We shouldn't have hired you." So they were firing people left and right, and people were just disappearing off of Slack, and that's how their colleagues would learn that they were no longer at the company. And so it was, yes, like many fast-growing companies, a very chaotic environment, and a particularly chaotic environment because it was extra fast. Like they had to accelerate more than any other startup. And on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse. Like he was not actually effectively ameliorating the circumstances of the chaos. He was actually sowing more chaos, getting these teams to be more divided. And this is where it's important to understand that the executives and the independent board members, they're all operating under this idea that they're building AGI and that AGI could either be devastating or utopic to humanity. And so it's not... Yes, it's like any other company, and no, it's not like any other company. You cannot have, in their view, this degree of chaos as the pressure cooker for creating a technology that, in their conception, could make or break the world. And so that is basically what the independent board members also begin to reflect on.
They have these conversations amongst themselves where they're like, "Well, based on what we're hearing about Altman's behavior, like if this was an Instacart, would that warrant firing him?" And they concluded, "Maybe not, but this is not Instacart." [chuckles] And that's why they were like, "Well, crap, maybe this actually does rise to the bar where we should consider replacing him, because we are ultimately building a technology that we think could have transformative impacts either in the positive or negative direction." And so that is what happens. It's these two executives, and then the independent board members also, they were hearing other feedback as well from their connections within the company, with other people in the industry. At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, uh, which is, you know, a tech startup in the Valley, he is at a party in San Francisco, and he starts to hear some of these rumors that there's something weird about the way that OpenAI has structured its OpenAI startup fund, which was this fund that the company had created to start investing in other startups.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
And he realizes they'd never really seen documentation about how the startup fund had been set up from Altman. And finally, they get the documents, and it turns out that OpenAI's startup fund is not OpenAI's startup fund, it's Altman's startup fund. And this was one of several experiences that the independent board members were also having, where they're like, there's something not right about the fact that there continuously are inconsistencies between the way that Altman is portraying what is being done versus what is actually being done. And so when these two executives approach the independent board members, they're like, "Okay, this lines up with also the experiences that we've been having." And at that point, they then have this series of very intense discussions where they're meeting almost every day talking about, should we actually really consider removing Altman? And in the end, they conclude, yes, we should, and if we're gonna do it, we need to do it quickly, because they were very concerned that the moment that Altman found out, his persuasive abilities would make it impossible to do. And so they end up firing Altman without telling anyone. You know, they don't talk to any stakeholders to get them on the same page. Microsoft gets a call right before they execute the action saying, "We're gonna fire Altman."
- SBSteven Bartlett
And Microsoft, for anyone that doesn't know, are a lead investor in OpenAI at the time.
- KHKaren Hao
Yes. One of the only investors in OpenAI at the time. And that is what then devolves the whole thing because every single person that is affected by this decision is now extremely angry that they were not involved, and that is what then creates this campaign to bring Altman back, and then Altman is reinstalled as CEO days later.
- 51:13 – 54:35
Ad Break
- SBSteven Bartlett
[page flipping] This company that I've just invested in is growing like crazy. I wanna be the one to tell you about it because I think it's gonna create such a huge productivity advantage for you. Wispr Flow is an app that you can get on your computer and on your phone, on all your devices, and it allows you to speak to your technology. So instead of me writing out an email, I click one button on my phone, and I can just speak the email into existence, and it uses AI to clean up what I was saying. And then when I'm done, I just hit this one button here, and the whole email is written for me, and it's saving me so much time in a day because Wispr learns how I write. So on WhatsApp, it knows how I am, a little bit more casual. On email, a little bit more professional. And also, there's this really interesting thing they've just done. I can create little phrases to automatically do the work for me. I can just say, "Jack's LinkedIn," and it copies Jack's LinkedIn profile for me because it knows who Jack is in my life. This is saving me a huge amount of time. This company is growing like absolute crazy, and this is why I invested in the business and why they're now a sponsor of this show. And Wispr Flow is frankly becoming the worst-kept secret in business productivity and entrepreneurship. Check it out now at Wispr Flow, spelled W-I-S-P-R F-L-O-W .ai/stephen. It will be a game changer for you. [page flipping] There's a phase a lot of companies hit where they're no longer doing the most important thing, which is selling, and they get really bogged down with admin, and it's often something that creeps up slowly and you don't really notice until it's happened. Slowly, momentum starts to leak out. This happened to us, and our sponsor, Pipedrive, was a fix I came across ten years ago. And ever since, my teams across my different companies have continued to use it. Pipedrive is a simple but powerful sales CRM that gives you the visibility on any deals in your pipeline. 
It also automates a lot of the tedious, repetitive, and time-consuming parts of the sales process, which in turn saves you so many hours every single month, which means you can get back to selling. Making that early decision to switch to Pipedrive was a real game changer, and it's kept the right things front of mind. My favorite feature is Pipedrive's ability to sync your CRM with multiple email inboxes so your entire team can work together from one platform. And we aren't the only ones benefiting. Over a hundred thousand companies use Pipedrive to grow their business. So if something I've said resonates, head over to pipedrive.com/ceo, where you can get a thirty-day free trial, no credit card or payment required. [page flipping] How does a CEO of a major company get fired by the board? Because board members, th-there's a quote in your book on page three hundred and fifty-seven where you say about Ilya saying, "I don't think Sam is the guy who should have the finger on the button for AGI."
- KHKaren Hao
Mm-hmm.
- SBSteven Bartlett
Now, I, I ask myself this question. You know, I work with lots of people here. We have, uh, a hundred and fifty people that work in this business, and those people know me best.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
They see me on camera, they see me off camera. So if they said that, "We don't think Steven is the right person to host The Diary Of A CEO," for example-
- KHKaren Hao
Yeah
- SBSteven Bartlett
... it would take a lot for them to say that.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
They must have seen some shit off camera for them to go, "We don't think he's the, the right person to be on camera"-
- KHKaren Hao
Yeah
- SBSteven Bartlett
... for whatever reason. And in the case of AI, which is much more consequential than a podcast that is, you know, filmed in my old kitchen, um, it almost sends a chill down one's body to think that the co-founder of a business has gone to the board and said, "This isn't the guy to lead this consequential organization."
- KHKaren Hao
And it wasn't just Ilya. Mira Murati then also said, "I don't think Altman is the right guy."
- SBSteven Bartlett
And then they both left later.
- KHKaren Hao
So then Altman comes back and, lo and behold, Ilya never comes back. So his concern that Altman finding out would be bad for him [chuckles]
- 54:35 – 1:05:10
What Really Happened When Sam Altman Was Fired—And Why Employees Revolted
- KHKaren Hao
manifested. He ended up not coming back, and Mira Murati then left shortly thereafter.
- SBSteven Bartlett
Quite a lot of these people leave, don't they? OpenAI.
- KHKaren Hao
They do. So if you consider one of the origin stories of OpenAI, it's this dinner that happened at the Rosewood Hotel, which is a very swanky hotel, um, right in the heart of Silicon Valley that, uh, was one of Elon Musk's favorites whenever he was coming up from LA to the Bay Area. And there was this dinner there where Altman was intending to recruit the OG team that would start OpenAI. So he's kind of telling everyone, "You might have a chance to meet Musk because Musk is gonna come to this dinner." And he cold emails Ilya and gets Ilya to come... And Ilya specifically wants to come because he [chuckles] wants to meet Musk. And he also emails all these other people, including Greg Brockman, Dario Amodei, and-
- SBSteven Bartlett
These are all people that end up working at OpenAI
- KHKaren Hao
... and almost all of them, not every one of them, but almost all of them end up working at OpenAI.
- SBSteven Bartlett
And leaving other companies.
- KHKaren Hao
Almost all of them end up leaving specifically after they clash with Altman.
- SBSteven Bartlett
And Ilya, he left and launched a company called Safe Superintelligence.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
Which is... I mean, that's an indirect if I've ever heard one.
- KHKaren Hao
[laughs]
- SBSteven Bartlett
Do you know what I mean? Do you know what I mean? If someone, like, uh, co-founded this, uh, podcast with me, and then they left and started a podcast called Safe Podcasting-
- KHKaren Hao
[laughs]
- SBSteven Bartlett
... I, I'd take that as a slight.
- KHKaren Hao
[laughs]
- SBSteven Bartlett
I'd, I'd have people knocking on their door-
- KHKaren Hao
[laughs]
- SBSteven Bartlett
... and asking for their texts.
- KHKaren Hao
One of the things that is happening here is it is not a coincidence that every single tech billionaire has their own AI company.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
They want to create AI in their own image, and that's why they keep not getting along. And in fact, it's not just that they don't get along. They end up hating each other after working together- [chuckles]
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... and then splinter off into their own organizations. So after Musk leaves, he starts xAI. After Dario leaves, he starts Anthropic. After Ilya leaves, he starts Safe Superintelligence. After Mira leaves, she starts Thinking Machines Lab. They want to have control over their own vision of this technology, and the best way that they have derived from their experiences of trying to put their vision into the arena is by creating a competitor and then competing with OpenAI and with all the other companies out there.
- SBSteven Bartlett
Do you think some of these AI CEOs realize that they are quite literally summoning the demon, as Elon said 10 years ago, but they don't really care, because being the person that summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific, even if there's, like, a 20% chance of it being horrific? I remember, I think it was Dario, he's the one that said there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization. 25% is a one in four chance. If you put a bullet in a four-chamber revolver and said, "Steven, the upside is you could become a multi-gazillionaire and be remembered forever. The downside-
- KHKaren Hao
[laughs]
- SBSteven Bartlett
... is that there would be a bullet in your head," there is no chance that I would take that bet with a 25% potential chance of things going catastrophically wrong.
- KHKaren Hao
So I have a very long answer to this because do they know if they're summoning the demon? It really depends on what we define as summoning the demon. And in this particular case, to go back to what we were saying before, there's a mythology that the AI industry uses where summoning the demon is an integral part of convincing everyone that therefore they can be the only ones that are developing this technology.
- SBSteven Bartlett
Ah, I got it. So on one end, you've gotta say, "If we don't, China will."
- KHKaren Hao
Mm-hmm.
- SBSteven Bartlett
"And that's terrible."
- 1:05:10 – 1:12:49
Should You Trust Politicians To Regulate AI—Or Is That Riskier?
- SBSteven Bartlett
Those people, they can go to the polls, right? So if the public are sufficiently educated, they can go to the polls and pick a leader that says they're going to legislate or pass laws or try and pass laws.
- KHKaren Hao
Yes. But at the speed and pace at which these companies operate, and at the sheer scale and size, they're able to also spend extraordinary amounts of money, hundreds of millions in these upcoming midterms, to try and kill every possible piece of legislation that gets in their way and craft legislation that would codify their advantage. And so, to me... I think sometimes as a society, we obsess a little bit over whether these leaders are good or bad people.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
And to me, the bigger question is: is the governance structure that we've created a sound one that allows broad participation, or an anti-democratic one that has consolidated this decision-making power in the hands of the few? Because no person is perfect. I don't care who is at the top of these companies. They're not going to have the ability to make decisions on behalf of so many people around the world, who live and talk and have a culture and history that are fundamentally different from them, without things going wrong. And so that is why throughout history we've moved from empires to democracy. It's because empire as a structure is inherently unsound. It does not actually maximize the chances of most people in the world being able to live dignified lives.
- SBSteven Bartlett
I'm gonna try and take on their point of view, so this is me playing devil's advocate. Okay. But Karen, if w- the US don't continue to accelerate their research with AI, at some point, China's model is gonna become so smart and intelligent that we're basically gonna have to rent it off them, and we're gonna be... You know, they'll get the scientific discoveries. They'll discover the new era of autonomous weapons, and we will be their backyard. And, like, logically, that argument does appear to be pretty true, that-
- KHKaren Hao
No, it's not
- SBSteven Bartlett
... if we scale up, if we just imagine any rate of change with this intelligence, at some point, we're gonna come to a weapon that could theoretically disable, um, all of the United States' electricity, their weapons systems. It would know exactly how to disable the United States from a cyber perspective because it would be that smart. All you've got to imagine is any rate of improvement over any, uh, sort of long period of time. So this is a theory that might be true, and if it's true-
- KHKaren Hao
[laughs] I mean, yeah, any theory might be true. [laughs]
- SBSteven Bartlett
But, but if, but, but, you know, again, going to this point of, like, even if it's a small percentage, it's worth paying attention to on the other side of the foot. This is a theory that people talk about. It could be the case that the most intelligent civilization is going to be the superior civilization. Logically, that's a pretty sound thing to say, no?
- KHKaren Hao
So there are a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument, and let's knock them down one by one. So the first one is that these systems are intelligent and that just scaling them is gonna bring us more intelligence.
- SBSteven Bartlett
So far, so true.
- KHKaren Hao
No, it's actually not. Because first of all, again, we don't actually know if these systems are... Like, intelligence is not, like, the right analogy almost. It's sort of like a calculator: a calculator can do math problems faster than a human. Does that make it intelligent?
- SBSteven Bartlett
It has a narrow intelligence because it's solving a narrow problem, which is, like, one plus one equals two. But-
- KHKaren Hao
And these systems, they actually also are quite narrowly intelligent, in the sense that even though these companies say that they're everything machines that can do anything for anyone, they actually can only do some things for some people. This is like the jagged frontier of these AI models. Like, some of the capabilities are quite good, other capabilities are not that good. You know why that happens? It's because the company can only focus on advancing certain types of capabilities. It can't literally focus on advancing all types of capabilities. They have to actually set their mind to advancing a certain capability by gathering the data that is needed for that capability and, uh, you know, getting a bunch of human contractors to annotate and train the model to do that exact thing. And so scaling these models is actually a perpendicular question to whether we're actually getting more cyber capabilities specifically and more military capabilities specifically.
- SBSteven Bartlett
I would argue that most of the, most of the top people in AI believe that the intelligence is gonna continue to scale for some time. A lot of them do, like Geoffrey Hinton does.
- KHKaren Hao
And again, it's, it's back to his hypothesis about how human intelligence works and what the appropriate model of the brain is. His hypothesis throughout his career has been the brain is a statistical engine.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
But that's his hypothesis, and that is not universally agreed upon, especially among people that are not in the AI world. When you talk with neuroscientists and psychologists, people who actually study human intelligence and the human brain, that is where you start to get a lot of debate and disagreement about this particular view that Hinton has. And so th-this is kind of like one of the, one of the things is, like, AI is already being used in the military and has been used in the military for a long time. But ex-- specifically accelerating large language models isn't just the only path for getting military capab-- Like, the companies would have to choose to specifically pick military capabilities to accelerate, not just, like, general intelli- It's like... You know what I'm saying? Like, they create this myth that they are actually pushing the frontier of all of the capabilities of the model, but that's not what's actually happening internally. And I have, h- I had hundreds of pages of documents on, like, how they were specifically training models. They pick what capabilities they want to advance, and you know how they pick them? It's based on which industries would be able to pay them the most money for their services. So they pick finance, law, medicine, healthcare, commerce. It's not actually intelligent like a, like a, a baby, where you, the, the more that you, that the baby grows up, they start having this, like, general, these general abilities.
- SBSteven Bartlett
I think I have jagged intelligence, honestly.
- KHKaren Hao
[laughs]
- SBSteven Bartlett
I wasn't gonna say it, but [laughs] I think I know a little, I know a little bit about, uh, a f- No, I know a lot about a little bit of things.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
So-
- KHKaren Hao
But it's, but you also have the capability to learn and acquire knowledge by yourself, and you also have the ability to choose what you're gonna learn and acquire by yourself.
- SBSteven Bartlett
It's not easy, and it takes a lot more time than these models, it seems. Less compute, but-
- KHKaren Hao
And you can learn how to drive in one place and then immediately know how to drive in another place. These models cannot do that. Every time a self-driving car is shifted to another location, it has to completely retrain on that location. It's like all the self-driving cars, I mean, we're sitting in Austin right now, and there's all these self-driving cars that are driving through Austin. They-
- SBSteven Bartlett
But when one of them learns, they all learn, which is a, which is-
- KHKaren Hao
Well, the, it's just because it's a s- it's an operating system that is, has an AI model as part of it, and you're training the AI model, and then you deploy the AI model across all the self-driving cars.
- SBSteven Bartlett
Which is a big advantage
- 1:12:49 – 1:15:30
How Robots Updating Themselves Could Change Everything Overnight
- SBSteven Bartlett
because if one Optimus robot learns one thing in one factory, they all learn it. And imagine that, imagine if humans, if we all learnt what all the other humans learnt. That would be, that would give us such an unbelievable competitive advantage. I mean, one of the ways we did that is through communication.
- KHKaren Hao
Or it could not because they could be learning the wrong thing, which has also happened again and again with these technologies, is that all of them then learn the wrong thing, and they all have the same failure mode. I mean, part of the resilience [chuckles] of human society is that we do have different expertises and we also have different failure modes.
- SBSteven Bartlett
I think sometimes we hold AI models to a higher standard than we hold humans to. And in a weird way, 'cause I, I would, I'd hear on stage, we're in, we're in Austin at the moment, and I'd hear people go, "Ah, but, you know, them AI models, they hallucinate sometimes." I'm like, "Have you met a human?" Like, [laughs] I halluc-
- KHKaren Hao
Okay
- SBSteven Bartlett
... I hallucinate all the time. I can barely spell or do math. [laughs] So-
- KHKaren Hao
Yes, but it's, it's once again, like, using this analogy that was specifically picked in the early days of the field as a way to market these technologies. Like, we're repeatedly using the intelligence analogy and relating these machines to human intelligence as a, uh, a way to try and gauge whether or not it is good or worthy or capable in society.
- SBSteven Bartlett
I think the output is the thing that really mat-- is the most consequential, which is like, okay, it might have a different brain and mi-- a different system, but it, does it arrive at the same capability? Like, does it, is it able to do surgery on someone's brain? Is it able to drive a car? Like, my car drives itself in, in Los Angeles. I don't touch the steering wheel, and I can drive for, for many, many hours. And in here in Austin, I just saw the ones the other day where they've removed the steering wheel and the pedals, the new Cybertrucks. So I go, it doesn't really matter if it's using a different system. If it's navigating through the world as a car, it has a better safety record than human beings, um, then as far as I'm concerned, intelligence or not, it's like-
- KHKaren Hao
Yes
- SBSteven Bartlett
... you know?
- KHKaren Hao
But that was not the original argument that you made, which was like, these systems are just generally gonna become more intelligent across different things based on the prediction. This is a prediction that you're making, right? Like that-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... and this is a prediction that all of the AI, um-
- SBSteven Bartlett
Ilya's making, Dara's making-
- KHKaren Hao
All-
- SBSteven Bartlett
... Elon's making, Zuckerberg's making-
- KHKaren Hao
Yes
- SBSteven Bartlett
... Altman's making, Demis is making.
- KHKaren Hao
And do you know what the common feature of all of them is? They profit enormously off of this myth.
- SBSteven Bartlett
Elon has recently spearheaded the construction of Colossus, a massive supercomputer in Memphis housing a hundred thousand GPUs specifically to scale up their Grok AI models faster than their competitors. It appears that they've all converged around this idea that you can brute force your way to greater, more generalized intelligence.
- KHKaren Hao
They've converged around the idea that you can brute force your way into models that they can sell to people for automating certain tasks that are, that are financially lucrative.
- 1:15:30 – 1:18:27
Will AI Surpass The Best Surgeons—And What Happens If It Does?
- SBSteven Bartlett
And I heard Elon say that if you're a surgeon now, there's just no point. He was like, "Don't train to be a surgeon." He says, in a couple of years' time, Optimus and AI generally are gonna be better than any surgeon that's ever lived.
- KHKaren Hao
Yeah. You know-
- SBSteven Bartlett
Do you think these things are true?
- KHKaren Hao
Well, you know, I'm, I'm pretty sure it was Hinton that famously, or infamously, said there would be no need for radiologists anymore.
- SBSteven Bartlett
Oh.
- KHKaren Hao
There would be no need for radiologists anymore in... He set a deadline that we've already passed. I don't remember how many years. Radiology is doing great as a profession. [chuckles]
- SBSteven Bartlett
Do you think it will be in five years?
- KHKaren Hao
Okay. So this, this once again goes back to this question of like, why do we build technology, and why should we specifically be building AI? Okay. And for me, like, the whole project of technology development advancement is not to advance technology for technology's sake.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
It's to help people. And there's been lots of research that has shown that actually the best outcomes for people in a healthcare setting is for the radiologist to have the AI model in their hands and for the, for the human expert to use the AI model as a tool, as an input into their judgment, and it is that combination that leads to the most accurate and early diagnoses of certain types of cancer that then help improve the prognosis of the patient.
- SBSteven Bartlett
Do you believe that in the coming years, all the cars, pretty much all the cars on the road will be driving themselves?
- KHKaren Hao
No.
- SBSteven Bartlett
You don't, you don't think so?
- KHKaren Hao
Mm-mm.
- SBSteven Bartlett
How come?
- KHKaren Hao
Because of the way the technology works.
- SBSteven Bartlett
Well, how do you mean?
- KHKaren Hao
Because, because these are just statistical... Uh, I mean, currently, the way that AI models are primarily developed, they're statistical engines. You have what's called a neural network, which is a piece of software that has a bunch of densely connected nodes-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... and-
- SBSteven Bartlett
Like parameters. Is this what they call parameters?
- KHKaren Hao
Yeah, pretty much. And you're just pumping a bunch of data into it, and then it's analyzing the data and creating this, all of these, finding all these correlations in the data, finding all these patterns. And then it's through those patterns that th-the machine is then able to act autonomously, right?
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
And so the way that they're s- training a self-driving car is they're, they're recording all this footage, and then they have tens of thousands or hundreds of thousands of human contractors that draw literally around every single vehicle in the footage, every single pedestrian, every single traffic light, every single lane marking, and label it exactly as such, so that then it's fed into an AI model that can identify all of these different components, and then it's connected to a-a-another piece of software that is not AI that's saying, "Okay, if you, if the AI model recognizes a pedestrian, we do not run over the pedestrian." [chuckles]
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
If the AI model recognizes a red traffic light, we stop.
- 1:18:27 – 1:24:45
Are Self-Driving Cars Truly Safe?
- KHKaren Hao
And so the, like, the thing about statistical engines is that it's based on probabilities. It's not based on deterministic logic. So systems make errors all the time, and it's impossible, it is technically impossible to get them to stop making errors.
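The pipeline Hao sketches here — a probabilistic perception model feeding a deterministic, non-AI rule layer — can be illustrated with a toy example. This is a hypothetical sketch, not any vendor's actual stack; `perception_model` and `rule_layer` are invented names, and a seeded random number stands in for a trained classifier's output:

```python
import random

def perception_model(frame_id):
    """Stand-in for a trained classifier that outputs label probabilities.

    A real model is a neural network trained on footage that human
    contractors annotated (pedestrians, lights, lane markings); here we
    just fake a probabilistic output."""
    rng = random.Random(frame_id)  # seeded so the demo is repeatable
    p = rng.random()
    return {"pedestrian": p, "clear_road": 1.0 - p}

def rule_layer(probs, threshold=0.5):
    """Deterministic logic layered on probabilistic output: if the model
    'sees' a pedestrian with enough confidence, brake. Any frame where
    the true scene falls on the wrong side of the threshold is an error,
    which is why a statistical engine is never error-free."""
    return "BRAKE" if probs["pedestrian"] >= threshold else "PROCEED"

action = rule_layer(perception_model(frame_id=42))
```

The split matters for her argument: the rule layer itself is deterministic, but everything it decides rests on probabilities coming out of the perception model, so some rate of misclassification is baked in.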
- SBSteven Bartlett
Humans make errors way more than systems in this case.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
'Cause like the safety record is like, isn't it like ten times more safe to be driven in a Tesla with autonomous driving than it is to, for a human to drive you per mile?
- KHKaren Hao
It depends on the place. It depends on whether the Tesla was trained to specifically navigate the place that you're driving.
- SBSteven Bartlett
But humans get drunk. They-
- KHKaren Hao
Because if it's i-in Mumbai-
- SBSteven Bartlett
Mm
- KHKaren Hao
... in some place in Vietnam, no, it would not be safer. [chuckles]
- SBSteven Bartlett
Mm.
- KHKaren Hao
I would much rather be driven-
- SBSteven Bartlett
True, I would drive in Vietnam
- KHKaren Hao
... by someone that has-
- SBSteven Bartlett
But in-
- KHKaren Hao
... been driving in that place their whole life. I'm, I'm not arguing against, like, the fact that in certain places where the car has been explicitly trained to drive in this place, that it has a better safety record than the humans that are driving in that place. But you specifically asked if I think that all of the-
- SBSteven Bartlett
Most cars
- KHKaren Hao
... m-most cars in the world? In the US? In-
- SBSteven Bartlett
Let's say the United States, 'cause we're here.
- KHKaren Hao
I don't actually think that it's, like, imminently on the horizon.
- SBSteven Bartlett
Ten years?
- KHKaren Hao
No, I don't think so.
- SBSteven Bartlett
I sat with Dara from Uber, and he's pretty convinced that his nine, nine million couriers will be replaced by autonomous vehicles.
- KHKaren Hao
I mean, how long have, has self-driving cars been invested in thus far? It's, it's been more than ten years. And what percentage of cars right now are autonomous?... on the US roads. I mean, so part of it is it's actually not a technical problem, right? Like, part of it is also a so-social problem, like do people even trust getting into these vehicles? Part of it's also a legal problem-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... which is if the car- the self-driving car kills someone, which it has happened-
- SBSteven Bartlett
Yeah, it has happened
- KHKaren Hao
... who is responsible?
- SBSteven Bartlett
So in the case in LA, it was both Tesla and the driver because the driver dropped their phone, they looked down, and this was a couple of years ago, I believe. Um, and they went to grab their phone, and they hit someone. And so it went to court, and they were held both responsible, both the driver and Tesla. Um, in terms of Tesla, pretty much everyone that gets the car, it comes with autonomy now for pretty much most people, I believe. Um-
- KHKaren Hao
Partial autonomy.
- SBSteven Bartlett
Yeah, it's called full self-driving at the moment, where it's like-
- 1:24:45 – 1:35:23
Which Jobs Actually Survive AI And Who Gets Left Behind?
- KHKaren Hao
in hiring. It's a slowdown in hiring across especially white collar professional industries.
- SBSteven Bartlett
And you saw Anthropic's report, didn't you, this week? The TLDR is it matches kinda what you were saying, where they-- Anthropic looked at exactly how people were using their models, and they looked at, like, what people are saying.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
And they said that there's been a forty percent reduction in entry-level jobs in particular. And then they made this graph, which has gone viral over the internet. The red shows where we are now in terms of capability.
- KHKaren Hao
Mm-hmm.
- SBSteven Bartlett
And based on how people are currently using the models, they ex-
- KHKaren Hao
That's their prediction for-
- SBSteven Bartlett
... extrapolated out that the blue part will be the disrupted parts. This is the things that they say AI can do right now, but people don't realize it yet. So if you look at it, it's like, it's kind of all the stuff you'd expect.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
It's the physical, real world human stuff, which robots maybe can do someday, like construction-
- KHKaren Hao
Yeah
- SBSteven Bartlett
... or agriculture that are untouched. But, like, office and admin, um, f- like fi- saying finance stuff, math.
- KHKaren Hao
And you notice that these are all the things that I just named that they-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... purposely-
- SBSteven Bartlett
Legal
- KHKaren Hao
... finance, math, law-
- SBSteven Bartlett
Media and arts-
- KHKaren Hao
... healthcare
- SBSteven Bartlett
... that's me cooked.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
So I don't have a job.
- KHKaren Hao
Um, office, office and admin, I mean, they, they do focus a lot on, like, assistant-type and managerial work. But, but the, the other thing that the Klarna CEO said was, but people also want human experiences. So it's not actually just about the capabilities of the models. It's also about what people want. Like, some things they would turn to AI for and some things they wouldn't, irrespective of whether or not AI is capable of doing it, but because of a preference that they want human-to-human interaction.
- SBSteven Bartlett
Mm.
- KHKaren Hao
And so what we're seeing right now is, yeah, the, the thing that happens with every wave of automation, which is that there is a bunch of entry-level work that gets automated away, and there are also new jobs created, but the jobs that are created are one-- in one of two categories. There are people that get even higher skilled jobs, and what he was saying, like, we pay people more for, like, the handcrafted code-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... now. And there's also the people who get way worse jobs. And so there was this amazing article in New York Magazine that was talking about how a lot of people are getting laid off, and then they end up working in data annotation, which is the labor that I've been referring to throughout this conversation that companies need in order to teach their models the next thing that the companies are trying to automate. And so, like, a marketer gets laid off, and then they go and work for a data annotation firm to train the models on the very job that they were just laid off in, which will then perpetuate more layoffs if that model then develops that skill. And the article was talking about how this has become a huge, uh, uh, catch-all for a lot of people that are struggling with finding job opportunities right now, including, like, award-winning directors in Hollywood that are actually secretly doing this data annotation work to put food on the table. And so when they talk about there's going to be un-- mass unemployment and then there's gonna be some new jobs created that we can't even imagine, I think a lot of these narratives rarely talk about, like, first of all, why are some jobs going away? It's not just because of the model capability, it's also because of executive choices and because of the rhetoric that they use if they wanna just downsize. Um, but the other thing that is rarely s- talked about is the jobs, a lot of the jobs that are created are way worse than the jobs that were there.
- SBSteven Bartlett
Yeah.
- KHKaren Hao
And it breaks the career ladder. So it's the entry level and the mid-tier jobs that get gouged out. It's higher order jobs and then way more lower order jobs that get created. And so how do people continue to progress in their careers? There's no more rungs on the ladder.
- SBSteven Bartlett
I actually don't know the answer to this question, and I've been furiously trying to find a good answer to this question because I can-- You know, everything is theory, and for my audience, m- I would say most of my audience don't run businesses. A lot of them do, a lot of them aspire to, but they don't run businesses, so they're kind of-- They're also in the land of theory. They're hearing lots of different things. Jack Dorsey does his tweet saying he's halving his headcount because of AI. They don't know what's true. They don't know the sort of internal economics at Jack's company, and did he bloat the company during the pandemic and he's just using this as an excuse to make the share price spike seven points because his investors now think they're an AI company or whatever.
- 1:35:23 – 1:38:28
What Klarna’s CEO Sees Coming That Others Don’t
- SBSteven Bartlett
the CEO of Klarna, has actually just called me.
- KHKaren Hao
[laughing]
- SBSteven Bartlett
Hello, Sebastian. You all right?
- SPSpeaker
Hey, how are you?
- SBSteven Bartlett
I'm good. How are you? [chuckles]
- SPSpeaker
It's been a while.
- SBSteven Bartlett
It has been a while since you were on the show. I was just saying we do need to get you back on. I, I, I just, I just had a couple of simple questions 'cause, you know, I do a lot of-
- SPSpeaker
Sure
- SBSteven Bartlett
... interviews and, um, Klarna's always mentioned because I think the media has said that you, like, doubled down on AI, then you reversed because it didn't work out. So I know I spoke to you a while ago, and we exchanged a couple of DMs about it, but that was more than a y- it was almost a year ago now. So I just wanted to get an up-
- SPSpeaker
Yep
- SBSteven Bartlett
... an update on Klarna's business, AI agents, and all of that, if possible.
- SPSpeaker
First and foremost, we were early on, uh, released, um, AI, uh, to support our customer service, which had that, uh, initial, uh, benefit of, uh, more calls being dealt with by AI, which customers liked because those calls or chat messages were much, much faster and more qualitative. Then since then, that has actually expanded slightly. Um, what we did, however, try to communicate as well is that we believed, in a world where AI is cheap and available, the value of human interaction will be regarded as higher. So the future of customer service VIP is a human. Um, we have then hence doubled down on providing more of that. But at the same point of time, the efficiency gains within the company have continued. I mean, we used to be about six thousand people, and now we are less than three thousand, which is two, three years since we stopped recruiting. And at the same point of time, our revenue has doubled, right? So you can clearly see that AI has allowed us to do more with less people, but we have avoided layoffs and instead relied on natural attrition when people kind of move on to other jobs. I mean, from my perspective, we will continue to, you know, not really recruit much. I mean, we recruit a little bit here and there, but we expect that kind of natural attrition of ten to fifteen percent per year to continue, on to become fewer. I think the big breakthrough was really in November, December last year, where even the kind of most skeptical, uh, engineers, who are, like, very well renowned and, and appreciated, like the founder of Linux and stuff like that, basically said that coding has now been resolved and hence, you know, uh, you don't need to code anymore. And that was kind of a common sentiment. So I think in, in coding, in engineering work, there has been a tremendous shift in the last six months.
- SBSteven Bartlett
What do all these people go do, Sebastian?
- SPSpeaker
I am optimistic. I mean, I think obviously people will have a lot of opinions about this topic, but I still believe that we are going to move towards a richer society. Now, in the short term, I'd be more worried about what happens if people don't get a job and, and so forth. But I think in the longer term, I s- I am op-optimistic about what it means for society and humanity.
- SBSteven Bartlett
Thank you so much, Seb. I'll chat to you soon. Thank you for taking the time. I appreciate you, mate. Thanks.
- SPSpeaker
All right.
- SBSteven Bartlett
Thanks.
- SPSpeaker
All right. See you.
- SBSteven Bartlett
Yeah. Bye-bye.
- SPSpeaker
Yeah.
- 1:38:28 – 1:42:17
Ad Break
- SPSpeaker
Bye.
- SBSteven Bartlett
[paper flips] You know the little traditional SIM card that goes inside of our phones? They haven't changed at all since they were invented in the '90s. You have this physical piece of plastic that means you're locked into one carrier, one network, and the second you cross a border, that carrier can start charging you whatever they want. But there are alternatives, and today's sponsor, Saily, is one of them. It's an eSIM app that gives you a safe and secure data connection in over 200 destinations. All of their eSIMs have built-in cybersecurity, which is great if you're traveling for work and looking at confidential material. I've been using Saily whenever I travel because the connection is always reliable, and it saves me a ton of roaming fees. It also means I don't have to deal with all of the faff that surrounds sorting out a SIM everywhere I go. If you wanna give it a try, download the Saily app from the App Store now and scan the QR code on screen. And if you want 15% off your first purchase, use my code DOAC when you get to checkout. That's DOAC for 15% off. Keep that to yourself. [paper flips] This is something that I've made for you. I've realized that The Diary of a CEO audience are strivers. Whether it's in business or health, we all have big goals that we wanna accomplish. And one of the things I've learnt is that when you aim at the big, big, big goal, it can feel incredibly psychologically uncomfortable because it's kinda like being stood at the foot of Mount Everest and looking upwards. The way to accomplish your goals is by breaking them down into tiny, small steps, and we call this in our team the 1%. And actually, this philosophy is highly responsible for much of our success here. So what we've done so that you at home can accomplish any big goal that you have is we've made these 1% Diaries, and we released these last year, and they all sold out. 
So I asked my team over and over again to bring the diaries back, but also to introduce some new colors and to make some minor tweaks to the diary. So now we have a better range for you. So if you have a big goal in mind and you need a framework and a process and some motivation, then I highly recommend you get one of these diaries before they all sell out once again. And you can get yours at thediary.com. And if you want the link, the link is in the description below. [paper flips] Any thoughts?
- KHKaren Hao
Well, I actually had thoughts on something that you said before he called.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
Which is you were saying that the Gen Z-ers, like, there's this trend that they're actually disconnecting from technology, so they're becoming more in person, and then there's this other class of workers that are actually leaning into the technology but then becoming more human because they're leaning into the technology because they're realizing that they should actually just be spending more time doing in-person to person interactions rather than-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... staring at a spreadsheet.
- SBSteven Bartlett
Yeah, yeah. Yeah.
- KHKaren Hao
And so they're no longer doing the typing and whatever. I really wanna go back to this New York Magazine piece that just came out because what you're describing is true for a very specific category of people, which is often, like, the business owners and leadership within companies that actually can make these decisions on how they spend their time and what they ultimately do with their time. But what the piece talks about is the working class, like, people, like, people who are not business owners that are then having to experience being laid off and then working for the data annotation industry-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... which is now one of the t-top jobs on LinkedIn, by the way. Um, the, the-
- SBSteven Bartlett
Really?
- KHKaren Hao
Yeah. So LinkedIn had a report that showed the top 10 jobs with the highest growth in the last year, and data annotation is on that list. And-
- SBSteven Bartlett
And for anyone that doesn't know what data annotation is.
- KHKaren Hao
Yeah. So data annotation is the process of teaching these chatbots or, or any AI system to do what they ultimately are able to do. So the fact that ChatGPT can chat is because there were tens of thousands or hundreds
- 1:42:17 – 1:51:12
What AI Could Cost Us: Meaning, Health, And The Environment
- KHKaren Hao
of thousands of people that were literally typing into a large language model and showing it, "This is how you're supposed to then respond when a user types in a prompt like this." Before they did that work, ChatGPT d-didn't exist. Like, it just... It c- it would just... You would prompt the model, and the model would generate some text that was not in dialogue with the person. It would kinda generate something that was adjacently related.
- SBSteven Bartlett
Is this what they call reinforcement learning, where you kind of... You give it, like, a-
- KHKaren Hao
It's a part of the process of reinforcement learning. So you do data annotation, which is literally, um, showing lots of different, um, you know, examples of things that you want the model to know, and then reinforcement learning is getting the model to then train on those examples iteratively-
- SBSteven Bartlett
Okay
- KHKaren Hao
... in a way that then gives the model some of those capabilities. And what the New York Magazine piece highlighted is that many, many of the people that are getting laid off now or, or, or are struggling to find work, and these are highly educated people. They're college graduates, PhD graduates, law degree graduates, doctors, um, and again, like, award-winning directors that are, that are then struggling to find employment in the economy because the economy has been very much restructured by AI. They are then finding themselves being... serving this industry, and the industry is designed in a way that is extremely sh- inhumane because what the companies... The companies that use these data annotation services, like, there's these third-party providers that are data annotation firms. A- an OpenAI, a Groq, um, a Google, they will hire these firms to then find the workers to perform the data annotation tasks that they need. For these firms, these third-party firms, they are incentivized to pit workers against each other because they want this data annotation to happen at speed and as cheaply as possible so that they can also compete with one another in this middle layer to get the, the bi-- the, the contract from the-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... the client. And so all of these workers that were interviewed for this New York Magazine story talk about how they actually no longer have an ability to be human because they are waiting at their laptop to be pinged on Slack for when a project is gonna open up for data annotation because they've tried job hunting, they literally can't find anything else. This is the thing that's gonna help them put food on the table for their kids. And there was this one woman who said, like, "I have so much anxiety about when the project is gonna come, when it's gonna leave, that when the project came, it was right when my kid was coming off of, o-off of school, and I just started tasking furiously because I don't know when it's gonna go, and I need to earn as much money as possible in this window of opportunity. So then my w-- when my kid came home and tried to talk to me, I screamed at my child for t- for distracting me." And then she was like, "I've become a monster, and I'm not even allowed to go to the bathroom or take care of my kids, let alone myself, because this industry that is absorbing more and more of the workers that are being laid off is mechanizing my life, atomizing my work, devaluing my expertise, and then harvesting it for the perpetuation of this machine that all of these AI executives are saying is then gonna come for everyone else's jobs." And so what you were saying about these, this class of workers, the business owners that get to become more human because there are all of these AI models now doing the tasks that they don't have to do anymore, it is at the cost of the vast majority of people who are not business owners that are struggling to find work, getting absorbed into the work of then providing these technologies that the business-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... owners can use. And instead of becoming more human, they feel like their humanity has been squeezed and diminished, and th-they have no ability to have control, agency, and dignity in their lives anymore.
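The annotation-then-tuning loop Hao describes above can be sketched very loosely. This is a toy, not any lab's actual pipeline: a real system fine-tunes neural-network weights on tens of thousands of human-written demonstrations and then iterates with reinforcement learning, whereas here a plain dict stands in for the "model" so the point stays visible — the capability comes from the annotators' examples:

```python
# Hypothetical demonstration data, as an annotator might write it:
# (prompt, ideal response) pairs showing the model how to reply in dialogue.
annotations = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("Thank the user for their email.", "Thanks so much for your email!"),
]

def tune_on_annotations(model, dataset, epochs=3):
    """Stand-in for supervised fine-tuning: replay each human-written
    demonstration until the model reproduces the demonstrated behavior.
    (A real trainer nudges weights each epoch; this toy just records.)"""
    for _ in range(epochs):
        for prompt, ideal_response in dataset:
            model[prompt] = ideal_response
    return model

tuned = tune_on_annotations({}, annotations)

# Before this work, a base model would only emit adjacently related text;
# after it, the model responds in dialogue, exactly as demonstrated.
print(tuned["What is the capital of France?"])
```

The design point the sketch preserves is the one Hao makes: every behavior the tuned "model" exhibits was first performed by a human, and the next capability the companies want automated requires a fresh round of that same human labor.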
- SBSteven Bartlett
I think this is a big, I think this is a big question that kind of pertains to this graph here, which is, you know, all of these people, uh, if we believe Anthropic's prediction of who will be disrupted, these people in these industries like arts and media, legal, um, life and social sciences, architecture and engineering, computer and maths, business and finance, and management, and also office and admin, these people, if we believe this, would have to retrain at something else. And unlike the Industrial Revolution, where you might get ten, twenty years to retrain because factories take a long time to build, the distribution layer that AI sits on top of is the open internet, so this is why ChatGPT can go pop and get hundreds of millions of users in no time at all and become the fastest growing company of all time. Um, one of my fears is that this disruption takes place at a speed where we can't transition.
- KHKaren Hao
And that was, you know, that... I think you, you, you said that sentence in the passive voice, the transition would happen at a speed. But who is driving that speed?
- SBSteven Bartlett
Um.
- KHKaren Hao
It's the companies.
- SBSteven Bartlett
The companies, yeah.
- KHKaren Hao
And their race with one another.
- SBSteven Bartlett
Yeah.
- KHKaren Hao
And so they are driving the transition to happen at a speed at which it would be really hard to take care of all of the people that would be bulldozed over by-
- SBSteven Bartlett
This is one of the-
- KHKaren Hao
... the advancement of technology
- SBSteven Bartlett
... crazy questions that no one can answer for me when I sit with these people that are AI CEOs. I go, "So what happens to the people if this is-- if you agree that this is gonna happen at super speed?" You know, I've spoke to that CEO of, uh, Uber, Dara, who said very similar things to what you're saying is, you know, there'll be lab- data labeling jobs, for example, for the drivers. But, um, they can't all become data labelers, and there's a question around meaning and purpose and fulfillment, and that comes from losing your meaning in life. I sit, also sit here with so many people who talk about how their father lost their job in Iran or some, some other country and came to the United States and had to be a, a toilet cleaner one particular case, was a doctor in Iran, but came to the US and was a toilet cleaner, and had to deal with the sense of shame that that particular person felt and the lack of dignity that that caused and how that made that person's self-esteem feel and the depression and alcoholism that transpired from that. Um, if this happens at a large scale across society, there's gonna be a ton of consequences like that.
- KHKaren Hao
I mean, this is, this is like the core themes of my work, and the reason why I'm critical of these companies is that they are creating technologies in a way that creates the haves and have-nots in, in extreme form that we have... That it's, it's, it's exacerbating the inequality that we already see in the world. Like the people who have things will have way more riches, they'll have way more free time, they'll be allowed to be more human. But the people who don't have things are ev- being squeezed even more. And it's not just from a work perspective. I mean, I talk in my book also about the environmental and public health crisis that these companies have created, where they are building these colossal supercomputer facilities that... And, and in, in commun- like, communities all around the world, and they specifically pick some of the most vulnerable communities. We're sitting in Texas right now. OpenAI's largest, one of its largest data center projects is being built in Abilene, Texas, as part of the Stargate initiative, which was an effort announced at the beginning of Trump's second administration to spend five hundred billion dollars on AI computing infrastructure. This facility, when it's finished, will consume more than a gigawatt of power, which is over twenty percent. Yeah, over twenty percent. So this is actually a little bit inaccurate now. Um, this was something that circulated online for a while, but there's updated numbers.
- SBSteven Bartlett
Just for someone that can't see because they're listening on Spotify or something, it's a picture of the size of this facility.
- KHKaren Hao
So this is not the Abilene, Texas one. This is a Meta facility
- SBSteven Bartlett
Ah.
- KHKaren Hao
So let's first talk about OpenAI's facility in Texas. That one would be the size of Central Park, and it would run a million computer chips, and it would require the power of more than twenty percent of New York City.
- SBSteven Bartlett
Do you know one of the things which I found confusing, so I'd like to, like, alleviate the dissonance, is I thought you were saying earlier that you didn't think the job disruption promises were real.
- KHKaren Hao
No, what I was saying is that when we talk about what these executives predict-
- SBSteven Bartlett
Yeah
- KHKaren Hao
... about the future, we need to understand that they are ultimately trying to influence the public in a way that allows them to continue maintaining control
- 1:51:12 – 1:56:24
How We Can Build AI Safely Before It’s Too Late
- KHKaren Hao
over the technology. So-
- SBSteven Bartlett
But objectively, do you think that the job disruption that they talk about where-
- KHKaren Hao
Yeah, yeah. I mean, I s- I mentioned-
- SBSteven Bartlett
You think this is real?
- KHKaren Hao
Well, I, uh-
- SBSteven Bartlett
Not necessarily-
- KHKaren Hao
I don't wanna comment specifically on, like, this chart, but it's like we've already seen in job reports that there is a restructuring of the economy happening-
- SBSteven Bartlett
Okay
- KHKaren Hao
... right now. Yeah.
- SBSteven Bartlett
Yeah.
- KHKaren Hao
But, but going back to, like, the data center, so this supercomputer facility, it's a Meta supercomputer facility, is being built in Louisiana-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... and it would be four times the size of the Abilene, Texas one and use half of the average power demand of New York City. So it's one-fifth the size of Manhattan. This makes it seem like almost all of Manhattan, but it's-- it would be one-fifth the size of Manhattan. When these facilities go into these communities, what happens? Power utility increases, grid reliability decreases. The facilities also need fresh water to generate the power for powering them, as well as fresh water to cool, and there have been lots of documented stories of communities that are already really constrained in their fresh water resource. They're under a drought when a facility comes in. And then there are people, the community is actually, like, competing [chuckles] with-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... this facility for fresh water. I talk about one of those communities in my book. And also, sometimes these facilities, instead of connecting to the grid, they instead a, a power plant pops up next to it. So in Memphis, Tennessee, where Musk built Colossus, the supercomputer for training Grok, he used thirty-five methane gas turbines to power the facility. This is a working-class community, a Black and brown community, a rural community that was not even told that they would be the hosts of this facility. And they discovered it because they literally smelled what seemed like a gas leak in all of their living rooms, and that's when it, they discovered that these methane gas turbines were taking [chuckles] away their right to clean air. And this is a community that's already been facing a history of environmental racism. They had already had lots of struggles to access their right to clean air, and now there's this huge supercomputer that's landed in their midst that is pumping thousands of tons of toxins into their air, exacerbating the asthmatic symptoms of the children, exacerbating the respiratory illnesses of other people. The-- It's, it's one of the communities that has the highest rates of, um, lung cancer. And so-
- SBSteven Bartlett
And, um, com-- su-supercomputers taking their jobs.
- KHKaren Hao
And then they also have supercomputers taking their jobs. So, so this is what I mean is, like, the haves and have-nots are fundamentally being pulled apart even further. Like, if you, in this version of Silicon Valley's future, are in the m-misfortunate category of being a have-not, we are [chuckles] talking about you now getting a job that is way worse than what you had, because you might be doing data annotation-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... and you might be treated as a machine rather than as a human to extract value, the value of your labor for perpetuating this labor automating machine that these people are building. You might be competing with these [chuckles] facilities for fresh water resources. They're also polluting your air.
- SBSteven Bartlett
So-
- KHKaren Hao
Your bills have increased, so the affordability crisis is getting worse. Like, how is that making people able to be more human?
- SBSteven Bartlett
What do we do about it?
- KHKaren Hao
Yes. Okay. So one of the analogies that I always use is AI is like the word transportation. Transportation can literally refer to everything from a bicycle to a rocket, and we have nuanced conversations about transportation, where we always say we need to transition our transportation towards more, uh, sustainable options. We need to transition towards, you know, public transport, electric vehicles. And we don't, we don't ever say everyone should get a rocket to do every-- to serve all of their transportation needs, right? Like we're in Austin. If you used a rocket to fly from Dallas to Austin, like that would just make not s- no sense. It's just a disproportionate use of resources to get the benefit [chuckles] of getting from point A to point B. That's how we should think about AI. So all of the models that we've been talking about, I like to think of them as the rockets of AI. They use an extraordinary amount of resources, and they provide benefit, some dramatic benefit to some people, but they're also cr- exacting an extraordinary cost on a large swath of people because of the, uh, like, the costs of developing this technology. Why don't we build more bicycles [chuckles] of AI? This is things like DeepMind's AlphaFold, which is a system that predicts how proteins will fold based on amino acid sequences. It's really important for accelerating drug discovery, for understanding human disease, and it won the Nobel Prize in Chemistry in twenty twenty-four. And the reason why it's a bicycle of AI is because you're using small curated datasets.
- 1:56:24 – 2:09:11
Will The AI Race Ever Slow Down Or Are We Past The Point Of Control?
- KHKaren Hao
You're just, you just have data that has amino acid sequences and protein folding. So that means you need significantly less computational resources to develop the system, which means significantly less energy, which means less emissions, so on and so forth, and you're providing enormous benefit to people
- SBSteven Bartlett
It feels like the horse has left the stable in this regard because they've already taken people's IP, they've taken media. They, they train on this podcast. We know they do because it sh- it shows that they do. Um, I think there's a button actually in the back end of YouTube now that allows you just to click it, and it says, "We will train on your YouTube channel." Um, so the ho- the horse has kind of left the stable
- KHKaren Hao
Here's the thing. If the horse truly had left the stables, they wouldn't have to train on anything anymore. Why is it that their appetite for data has actually expanded? It's because in order to build the next generations of their technologies, in order to have the technologies continue to be relevant and continue to update with the pace of new knowledge creation and society's evolvement, they need to train again and again and again and again. And why are they employing actually more and more and more data annotation workers over time? It's because they need [chuckles] more and more of that work over time.
- SBSteven Bartlett
They believe they can brute force-
- KHKaren Hao
I mean, I've been reporting on d- data annotation work for over seven years now, and it's not gone down. It's gone-- It, it's increased.
- SBSteven Bartlett
Do you think there's any chance of it going down? Do you think there's any chance of this sort of brute force scaling approach where you take data, you take com- computational power, energy, and you, you know, you have, um, the data labelers and, you know, building out more and more parameters for the models. Do you think there's any chance it's gonna stop or go in a different direction other than the one it's going in now?
- KHKaren Hao
I would love to reframe the question-
- SBSteven Bartlett
Okay
- KHKaren Hao
... and say, what should we be doing in this moment where it's not going down, where we do recognize that actually these companies, in this moment, need continued resources, inputs, and labor to perpetuate what they are doing?
- SBSteven Bartlett
Yeah.
- KHKaren Hao
This is-
- SBSteven Bartlett
Because this sounds like stop, and I just feel like stop is, like, a hard... It feels like... [sighs] I just think, you know, with the government in place, they're supporting these companies like crazy. Globally, this is happening. So I'm like, stop doesn't feel-
- KHKaren Hao
I always say we need to break up the empire, and we need to develop alternatives. And we are already seeing a flourishing of incredible grassroots movements that are pr- applying an enormous amount of pressure to the way that the empire is trying to unfold its agenda.
- SBSteven Bartlett
Mm-hmm.
- KHKaren Hao
Eighty percent of Americans in the most recent poll think that the AI industry needs to be regulated.
- SBSteven Bartlett
Yeah, I've seen that.
- KHKaren Hao
When was the last time that eighty percent of Americans were on the same side of an issue?
- SBSteven Bartlett
No, yeah. When I have these conversations on the podcast, the comment section are clear.
- KHKaren Hao
Yeah.
- SBSteven Bartlett
There's no, there's no disagreement. There's no one in there going, "Oh, no, I think they should crack on."
- KHKaren Hao
Yeah.
- SBSteven Bartlett
So-
- KHKaren Hao
Dozens, dozens of protests against data centers have broken out all around this country in the US-
- SBSteven Bartlett
Mm-hmm
- KHKaren Hao
... all around the world.
- SBSteven Bartlett
So what do we do about it?
- KHKaren Hao
So these are th- people that are doing something about it. They are actually reasserting their agency and exercising democratic contestation against the ways that the empires are going about their business.
- SBSteven Bartlett
And what goal should we be aiming at? So if I said to my audience, Jan at home, 'cause this is kinda what I see in the comments-
- KHKaren Hao
Yeah
- SBSteven Bartlett
... it's hopelessness. It's like, what can I do? I'm just a-
Episode duration: 2:09:12