Dwarkesh Podcast

Holden Karnofsky — History's most important century

Holden Karnofsky is the co-CEO of Open Philanthropy and co-founder of GiveWell. He is also the author of one of the most interesting blogs on the internet, Cold Takes.

We discuss:
* Are we living in the most important century?
* Does he regret OpenPhil’s 30 million dollar grant to OpenAI in 2016?
* How does he think about AI, progress, digital people, & ethics?

Highly recommend!

EPISODE LINKS
* Transcript: https://www.dwarkeshpatel.com/p/holden-karnofsky
* Spotify: https://spoti.fi/3Qi11SY
* Apple Podcasts: https://apple.co/3CmZXav

TIMESTAMPS
00:00:00 - Intro
00:00:58 - The Most Important Century
00:06:44 - The Weirdness of Our Time
00:21:20 - The Industrial Revolution
00:35:40 - AI Success Scenario
00:52:36 - Competition, Innovation, & AGI Bottlenecks
01:00:14 - Lock-in & Weak Points
01:06:04 - Predicting the Future
01:20:40 - Choosing Which Problem To Solve
01:26:56 - $30M OpenAI Investment
01:30:22 - Future Proof Ethics
01:37:28 - Integrity vs Utilitarianism
01:40:46 - Bayesian Mindset & Governance
01:46:56 - Career Advice

Holden Karnofsky (guest) · Dwarkesh Patel (host)
Jan 2, 2023 · 1h 56m

At a glance

WHAT IT’S REALLY ABOUT

Holden Karnofsky explains why AI could define humanity’s entire future

  1. Holden Karnofsky outlines his “Most Important Century” thesis: if we develop AI that can do essentially all the cognitive work humans do to advance science and technology, this century could determine the long‑run trajectory of civilization.
  2. He argues economic and technological growth have already put us in an unusually “weird” and pivotal era, and that transformative AI would likely trigger explosive progress, possible space expansion, and long-term “lock‑in” of political and moral structures.
  3. Given that, he sees reducing AI risk and shaping AI governance as enormously leveraged ways to do good, while still valuing traditional global health and development work.
  4. Throughout, he discusses uncertainty, the limits of futurism, moral philosophy, and how Open Philanthropy tries to act with high integrity under deep uncertainty rather than embracing ends‑justify‑the‑means reasoning.

IDEAS WORTH REMEMBERING

5 ideas

Treat transformative AI as a serious, not fringe, possibility this century.

Multiple lines of evidence (AI progress, economic history, biological-anchors-style estimates, expert surveys) together make it reasonably likely that AI capable of doing essentially all scientific and technological work could appear within decades, so it warrants real attention and preparation.

Recognize we already live in an unusually “weird” and pivotal time.

Recent centuries have seen unprecedented, accelerating economic and technological growth compared to all prior human and biological history, implying our era is already on a short list of the most consequential periods, even before considering AI.

Focus on preventing specific, identifiable AI failure modes rather than planning utopia.

Karnofsky thinks we can’t reliably design the long‑run future, but we can plausibly avoid clear disasters like misaligned AI systems pursuing unintended goals or extreme concentration of power enabled by AI.

Invest now in fields and people that will matter at crunch time.

Philanthropic leverage comes from seeding neglected fields—such as AI alignment research and AI governance—so that, 10–50 years from now, there are many experts and institutions ready to influence how powerful AI is deployed.

Avoid ends‑justify‑the‑means reasoning, even under huge stakes.

Despite viewing AI risk as enormously important, Karnofsky argues we should adhere to common‑sense ethical norms (e.g., no lying or law‑breaking for the cause), because historically ends‑justify‑the‑means thinking has led to severe harms.

WORDS WORTH SAVING

5 quotes

If we had AI systems that could do everything humans do to advance science and technology, that would be insane.

Holden Karnofsky

We live in a weird time. Growth has been exploding, accelerating over the last blink of an eye.

Holden Karnofsky

This is basically our last chance to shape how this happens.

Holden Karnofsky

My job is to find things where we might do ten things and have nine of them fail embarrassingly and one of them be such a big hit that it makes up for everything else.

Holden Karnofsky

The worst possible rule is all those people should just be like, ‘Nah, this is crazy,’ and forget about it.

Holden Karnofsky

TOPICS
* The “Most Important Century” thesis and transformative AI
* Historical economic growth, technological acceleration, and why our era is unusual
* AI alignment risks, misaligned goals, and catastrophic failure modes
* Lock‑in: stable future civilizations and long-term political/moral trajectories
* Philanthropic strategy: global health vs. longtermist AI work
* Forecasting, biological anchors, and how to reason about AI timelines
* Moral philosophy: future‑proof ethics, utilitarianism, and moral uncertainty

High quality AI-generated summary created from speaker-labeled transcript.
