
Holden Karnofsky — History's most important century
Holden Karnofsky (guest), Dwarkesh Patel (host)
In this episode of the Dwarkesh Podcast, host Dwarkesh Patel interviews Holden Karnofsky about his “Most Important Century” thesis.
Holden Karnofsky explains why AI could define humanity’s entire future
Holden Karnofsky outlines his “Most Important Century” thesis: if we develop AI that can do essentially all the cognitive work humans do to advance science and technology, this century could determine the long‑run trajectory of civilization.
He argues economic and technological growth have already put us in an unusually “weird” and pivotal era, and that transformative AI would likely trigger explosive progress, possible space expansion, and long-term “lock‑in” of political and moral structures.
Given that, he sees reducing AI risk and shaping AI governance as enormously leveraged ways to do good, while still valuing traditional global health and development work.
Throughout, he discusses uncertainty, the limits of futurism, moral philosophy, and how Open Philanthropy tries to act with high integrity under deep uncertainty rather than embracing ends‑justify‑the‑means reasoning.
Key Takeaways
Treat transformative AI as a serious, not fringe, possibility this century.
Multiple lines of evidence (AI progress, economic history, biological-anchors-style estimates, expert surveys) together make it reasonably likely that AI capable of doing essentially all scientific and technological work could appear within decades, so it warrants real attention and preparation.
Recognize we already live in an unusually “weird” and pivotal time.
Recent centuries have seen unprecedented, accelerating economic and technological growth compared to all prior human and biological history, implying our era is already on a short list of the most consequential periods, even before considering AI.
Focus on preventing specific, identifiable AI failure modes rather than planning utopia.
Karnofsky thinks we can’t reliably design the long‑run future, but we can plausibly avoid clear disasters like misaligned AI systems pursuing unintended goals or extreme concentration of power enabled by AI.
Invest now in fields and people that will matter at crunch time.
Philanthropic leverage comes from seeding neglected fields—such as AI alignment research and AI governance—so that, 10–50 years from now, there are many experts and institutions ready to influence how powerful AI is deployed.
Avoid ends‑justify‑the‑means reasoning, even under huge stakes.
Despite viewing AI risk as enormously important, Karnofsky argues we should adhere to common‑sense ethical norms (e.g., ...)
Use moral uncertainty and “moral parliaments” to guide action.
Given competing plausible moral views (e.g., ...)
Don’t neglect speculative, high‑upside cause areas just because they feel “crazy.”
Many historically transformative developments (e.g., ...)
Notable Quotes
“If we had AI systems that could do everything humans do to advance science and technology, that would be insane.”
— Holden Karnofsky
“We live in a weird time. Growth has been exploding, accelerating over the last blink of an eye.”
— Holden Karnofsky
“This is basically our last chance to shape how this happens.”
— Holden Karnofsky
“My job is to find things that we might do ten things and have nine of them fail embarrassingly and one of them be such a big hit that it makes up for everything else.”
— Holden Karnofsky
“The worst possible rule is all those people should just be like, ‘Nah, this is crazy,’ and forget about it.”
— Holden Karnofsky
Questions Answered in This Episode
How should governments and companies concretely balance AI capabilities research with caution and safety in the next decade?
What practical criteria could we use to decide when AI systems are ‘too dangerous’ to deploy or scale?
If we do achieve transformative AI, what governance structures could realistically prevent long‑term political or value lock‑in by a single group?
How should an individual early in their career decide between working on AI safety, broader progress studies, or traditional global health and development?
What kinds of evidence or developments would most change Karnofsky’s mind about the Most Important Century thesis or AI timelines?
Transcript Preview
If we had AI systems that could do everything humans do to advance science and technology, that would be insane. We live in a weird time. Growth has been exploding, accelerating over the last blink of an eye. We really need to be kind of like nervous and vigilant about what comes next, and thinking about all the things that could radically transform the world. We just imagine a universe where there actually are some people who live in an especially important time, and then there's a bunch of other people who like tell stories to themselves about how, what, you know, whether they do. How would you want all those people to behave? And it's like, to me, the worst possible rule is all those people should just be like, "Nah, this is crazy," and forget about it.
All right, today, I have the pleasure of speaking with Holden Karnofsky, who is the co-CEO of Open Philanthropy. In my opinion, Holden is one of the most interesting intellectuals alive, well, given your role. So Holden, welcome to The Lunar Society.
Thanks for having me.
Okay, so let's start off by talking about The Most Important Century thesis. Do you wanna explain what this is for the audience?
You know, my story is I, uh, originally co-founded an organization called GiveWell that helps people decide where to give as effectively as possible. I'm no longer there, but I, I'm on the board, and it's a website called GiveWell.org that I think makes good recommendations where to give to charity to help, uh, a lot of people. And, uh, you know, as we were working at GiveWell, we met Cari Tuna and Dustin Moskovitz, Dustin is the co-founder of Facebook and Asana, and started a project that became Open Philanthropy to try to help them give away their, uh, large fortune, again, to help as many people as possible. And so I've kinda spent my career looking for ways to do as much good as possible with a dollar or with an hour, with whatever resources you have, and especially with money. And so I've kind of developed this professional specialization in looking for ideas that are underappreciated, underrated, tremendously important, because a lot of the time, uh, that's where I think you can find just kind of outsized, what you might call outsized return on investment, opportunities to spend some money and just get an enormous impact because you're doing something very important that, that is being ignored by others. And so it's through that kind of professional specialization that I've actively looked for interesting ideas that are not getting enough attention, and then I encountered the Effective Altruist community, uh, which is a community of people basically built around the idea of, of doing as much good as you can. And so it's, it's through that community that I encountered the idea of The Most Important Century. It's not my idea at all. I got, got to it from a, a lot of people. And the basic idea is that if we developed, uh, the right kind of AI systems this century, and that looks reasonably likely, that could make this century the most important of all time for humanity. So the, the basic mechanics of why that might be or how you might think about that... So one thing is that if you look back at all of economic history, just the rate at which the world economy has grown, you see acceleration. You see that it's, it's growing a lot faster today than it ever was. And one theory of why that might be, or one way of thinking about it through the lens of basic economic growth theory is that in normal circumstances, you can imagine a kind of feedback loop where you have, uh, people have ideas, and the ideas lead to greater productivity and more resources. And then when you have more resources, you can also have more people, and then those people have more ideas. So you get this feedback loop that goes people, ideas, resources, people, ideas, resources. And starting a couple hundred years ago, you run a feedback loop like that, standard economic theory says you'll get accelerating growth. You'll get a rate of economic growth that goes faster and faster. And basically, if you take the story of our economy to date and you just kind of plot it on a chart and do the kind of simplest thing you can to project it forward, you project that it will go, that our economy will reach like an infinite growth rate, uh, this century. And the reason that I currently don't think that's a great thing to expect by default is that one of the steps of that feedback loop broke a couple hundred years ago. Um, so it goes more people, more ideas, more resources, more people, more ideas, more resources. A couple hundred years ago, people stopped having more children when they had more resources. 
They got just more, they got richer instead of more populous. And this is all discussed in, uh, in The Most Important Century, uh, page on my blog, Cold Takes. And so what happens right now is that when we have more ideas and we have more resources, we don't end up with more people as a result. We don't have that same accelerating feedback loop. And if you had AI systems that could do all the things humans do to advance science and technology, meaning the AI systems could fill in that more ideas part of the loop, um, then you could get that feedback loop back, and then you could get sort of this unbounded, heavily accelerating explosive growth in science and technology. Uh, so that's like, that's the basic dynamic at the heart of it. So that's kind of a, a way of putting it that's trying to use familiar concepts from economic growth theory. A- a- another way of putting it might just be, "Gosh, if we had AI systems that could do everything humans do to advance science and technology, that would be insane." You know, what if we were to take the things that humans do to create new technologies that have transformed the planet so radically, and we were able to completely automate them so that every computer we have is potentially another mind working on advancing technology? So either way you think about it, you can imagine the world changing incredibly quickly and incredibly dramatically. And so I argue in The Most Important Century series that it looks reasonably likely, in my opinion, more than 50/50, that this century will see AI systems that can do all of the key tasks that humans do to advance science and technology, and that if that happens, we'll see explosive progress in science and technology. The world will quickly become extremely different from how it is today. You might think of it as if there was thousands of years of changes, uh, packed into a much shorter time period. And then if that happens, I argue that you, you could end up in a, in a deeply unfamiliar future. I give one example of what that might look like using this hypothetical technology idea called digital people. That would be sort of people that live in virtual environments, uh, that are kind of simulated, but also realistic and exactly like, exactly like us. And when you picture that kind of advanced world, I think there is, there is a decent reason to think that if we did get that rate of scientific and technological advancement, we could basically hit the limits of science and technology. We could basically find most of what there is to find and end up with a civilization that expands well beyond this planet, has a lot of control over the environment, and is very stable for very long periods of time, and basically looks sort of post-human in a lot of relevant ways. And if you think that, then this is, this is basically our last chance to shape how this happens. So that's The Most Important Century hypothesis in a nutshell is that if we develop...... AI that can do all the things humans do to advance science and technology. We could quick, very quickly reach a very futuristic world, very different from today's, could be a very stable, very large world. This is our last chance to shape it.
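The growth-theory argument in the transcript above lends itself to a small worked example. The sketch below is not from the episode: it is a toy simulation with made-up parameters and a deliberately crude functional form, intended only to show the qualitative point Karnofsky makes about the people-ideas-resources feedback loop and what happens when the resources-to-people link is broken.

```python
# Toy illustration (not from the episode) of the feedback loop described above:
# more people -> more ideas -> more resources -> more people.
# Assumptions (mine): each period, new ideas scale with both population and the
# existing stock of ideas; resources track ideas; and, if the loop is closed,
# population tracks resources. The parameters are arbitrary.

def simulate(steps, loop_closed, rate=0.005, cap=1e12):
    """Run the crude people -> ideas -> resources loop for `steps` periods.

    Returns (period reached, resource level), stopping early if resources
    blow past `cap`, which stands in for the "infinite growth rate" projection.
    """
    ideas, pop = 1.0, 1.0
    resources = ideas
    for t in range(1, steps + 1):
        ideas += rate * pop * ideas   # people build on the existing stock of ideas
        resources = ideas             # resources track the stock of ideas
        if loop_closed:
            pop = resources           # pre-modern regime: more resources -> more people
        # else: population stays fixed, the link that broke a couple hundred years ago
        if resources > cap:
            return t, resources
    return steps, resources

if __name__ == "__main__":
    for closed in (True, False):
        t, level = simulate(400, loop_closed=closed)
        label = "closed loop (accelerating)" if closed else "broken loop (steady)"
        print(f"{label}: resources ~ {level:.3g} by period {t}")
```

In this toy model the closed loop blasts past any fixed threshold within a finite number of periods, which is the sense in which the naive extrapolation points to an "infinite growth rate" this century; with the population link broken, the same economy merely compounds at a steady rate. Re-closing the "more ideas" step with AI is the scenario the thesis turns on.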