Dwarkesh Podcast

Will MacAskill - Longtermism, Effective Altruism, History, & Technology

Will MacAskill is one of the founders of the effective altruism movement and the author of the upcoming book What We Owe The Future. We talk about improving the future, risks of extinction and collapse, technological and moral change, the problems of academia, who changes history, and much more.

Read the transcript: https://www.dwarkeshpatel.com/p/will-macaskill
Apple Podcasts: https://apple.co/3PCccVo
Spotify: https://spoti.fi/3PONbpq
Buy What We Owe The Future: https://www.amazon.com/dp/1541618629
Follow Will: https://twitter.com/willmacaskill
Follow me: https://twitter.com/dwarkesh_sp

TIMESTAMPS
00:00 Intro
01:18 Effective Altruism and Western values
08:42 The contingency of technology
12:57 Who changes history?
18:55 Longtermist institutional reform
26:51 Are companies longtermist?
29:52 Living in an era of plasticity
35:47 How good can the future be?
40:13 Contra Tyler Cowen on what's most important
46:31 AI and the centralization of power
52:29 The problems with academia

Will MacAskill (guest) · Dwarkesh Patel (host)
Aug 9, 2022 · 57m

At a glance

WHAT IT’S REALLY ABOUT

Will MacAskill on shaping humanity’s future through deliberate moral progress

  1. Will MacAskill discusses longtermism and effective altruism with Dwarkesh Patel, arguing that our current moral values are highly contingent and likely far from moral truth, so we should avoid locking them in prematurely. He distinguishes relatively non‑contingent technological and economic progress from highly contingent moral progress, emphasizing the outsized, lasting impact of value shifts, institutions, and ideologies. The conversation covers the "long reflection," longtermist political institutions, existential risk governance (especially AI and biotechnology), and the tradeoffs between targeted longtermist work and broad economic and technological growth. MacAskill also reflects on historical examples, institutional decay, and why academia and philanthropy underinvest in big‑picture questions about the long‑run future.

IDEAS WORTH REMEMBERING

5 ideas

Treat current moral values as provisional, not final.

MacAskill argues that if history had gone differently—e.g., Nazis winning WWII or the Industrial Revolution occurring elsewhere—our moral views would feel just as self‑evident to us, suggesting we are still far from moral truth and should resist permanently entrenching today’s norms.

Prioritize safeguarding value formation before locking in any ideology.

His “long reflection” proposal is to create a long, relatively safe period where humanity deliberately reasons, debates, and experiments morally before any technology or governance setup can permanently fix a single value system for the future.

Focus longtermist action on tractable, concrete risk-reduction measures.

Given how easily “represent the future” institutions can be gamed, MacAskill currently favors narrower but robust reforms—like liability structures for dangerous bio labs or better catastrophe/war risk indices—over grand future-representation bodies.

Recognize that moral entrepreneurs can have unusually durable influence.

Where most technological advances would likely have happened within decades anyway, shifts driven by religious leaders, political philosophers, and moral activists (e.g., Jesus, Muhammad, Marx, abolitionists) can alter value trajectories for centuries or longer.

Use both first‑principles reasoning and historical perspective.

MacAskill thinks modern high‑change environments require more first‑principles thinking than traditional deference to custom, but he also notes that effective altruists underweight history and should study it more to avoid past mistakes (e.g., Mill’s coal worries).

WORDS WORTH SAVING

5 quotes

We should not think we're at the end of moral progress, and we should not think, "Oh, we should lock in the kind of Western values we have now."

Will MacAskill

One of the lessons I draw in the book is… we want to ensure that we spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than merely whichever happened to win out.

Will MacAskill

In the long run at least I think economic or technological progress is very non‑contingent… but moral progress is not.

Will MacAskill

There are many, many issues that are enormously important but are just not incentivized basically anywhere in the world.

Will MacAskill

You weren't a real philosopher unless you had some grand unifying theory… I’m not saying all of academic inquiry should be like that, but should there be at least some people whose role is to really think about the big picture? I think yes.

Will MacAskill

Contingency of moral values versus robustness of technological and economic progress
Longtermism, effective altruism, and the idea of a “long reflection” period
Historical examples of moral and technological change and their drivers
Institutional design for representing future generations and reducing existential risks
Limitations of existing philanthropy, academia, and global governance on long-run issues
The role of population, R&D, and elite clusters (e.g., Bell Labs) in innovation
Tradeoffs between targeted longtermist interventions and broad growth-focused strategies

High quality AI-generated summary created from speaker-labeled transcript.
