Dwarkesh Podcast: Holden Karnofsky — History's Most Important Century
At a glance
WHAT IT’S REALLY ABOUT
Holden Karnofsky explains why AI could define humanity’s entire future
- Holden Karnofsky outlines his “Most Important Century” thesis: if we develop AI that can do essentially all the cognitive work humans do to advance science and technology, this century could determine the long‑run trajectory of civilization.
- He argues economic and technological growth have already put us in an unusually “weird” and pivotal era, and that transformative AI would likely trigger explosive progress, possible space expansion, and long-term “lock‑in” of political and moral structures.
- Given that, he sees reducing AI risk and shaping AI governance as enormously leveraged ways to do good, while still valuing traditional global health and development work.
- Throughout, he discusses uncertainty, the limits of futurism, moral philosophy, and how Open Philanthropy tries to act with high integrity under deep uncertainty rather than embracing ends‑justify‑the‑means reasoning.
IDEAS WORTH REMEMBERING
5 ideas
Treat transformative AI as a serious, not fringe, possibility this century.
Multiple lines of evidence (AI progress, economic history, biological-anchors-style estimates, expert surveys) together make it reasonably likely that AI capable of doing essentially all scientific and technological work could appear within decades, so it warrants real attention and preparation.
Recognize we already live in an unusually “weird” and pivotal time.
Recent centuries have seen unprecedented, accelerating economic and technological growth compared to all prior human and biological history, implying our era is already on a short list of the most consequential periods, even before considering AI.
Focus on preventing specific, identifiable AI failure modes rather than planning utopia.
Karnofsky thinks we can’t reliably design the long‑run future, but we can plausibly avoid clear disasters like misaligned AI systems pursuing unintended goals or extreme concentration of power enabled by AI.
Invest now in fields and people that will matter at crunch time.
Philanthropic leverage comes from seeding neglected fields—such as AI alignment research and AI governance—so that, 10–50 years from now, there are many experts and institutions ready to influence how powerful AI is deployed.
Avoid ends‑justify‑the‑means reasoning, even under huge stakes.
Despite viewing AI risk as enormously important, Karnofsky argues we should adhere to common‑sense ethical norms (e.g., no lying or law‑breaking for the cause), because historically ends‑justify‑the‑means thinking has led to severe harms.
WORDS WORTH SAVING
5 quotes
If we had AI systems that could do everything humans do to advance science and technology, that would be insane.
— Holden Karnofsky
We live in a weird time. Growth has been exploding, accelerating over the last blink of an eye.
— Holden Karnofsky
This is basically our last chance to shape how this happens.
— Holden Karnofsky
My job is to find things where we might do ten things and have nine of them fail embarrassingly and one of them be such a big hit that it makes up for everything else.
— Holden Karnofsky
The worst possible rule is all those people should just be like, ‘Nah, this is crazy,’ and forget about it.
— Holden Karnofsky