Dwarkesh Podcast
Will MacAskill - Longtermism, Effective Altruism, History, & Technology
At a glance
WHAT IT’S REALLY ABOUT
Will MacAskill on shaping humanity’s future through deliberate moral progress
- Will MacAskill discusses longtermism and effective altruism with Dwarkesh Patel, arguing that our current moral values are highly contingent and likely far from moral truth, so we should avoid locking them in prematurely. He distinguishes relatively non‑contingent technological and economic progress from highly contingent moral progress, emphasizing the outsized, lasting impact of value shifts, institutions, and ideologies. The conversation covers ideas like “long reflection,” longtermist political institutions, existential risk governance (especially AI and bio), and the tradeoffs between targeted longtermist work and broad economic/technological growth. MacAskill also reflects on historical examples, institutional decay, and why academia and philanthropy underinvest in big‑picture questions about the long‑run future.
IDEAS WORTH REMEMBERING
5 ideas
Treat current moral values as provisional, not final.
MacAskill argues that if history had gone differently—for example, if the Nazis had won WWII or the Industrial Revolution had occurred elsewhere—our moral views would feel just as self-evident to us. This suggests we are still far from moral truth and should resist permanently entrenching today's norms.
Prioritize safeguarding value formation before locking in any ideology.
His “long reflection” proposal is to create a long, relatively safe period where humanity deliberately reasons, debates, and experiments morally before any technology or governance setup can permanently fix a single value system for the future.
Focus longtermist action on tractable, concrete risk-reduction measures.
Given how easily “represent the future” institutions can be gamed, MacAskill currently favors narrower but robust reforms—like liability structures for dangerous bio labs or better catastrophe/war risk indices—over grand future-representation bodies.
Recognize that moral entrepreneurs can have unusually durable influence.
Where most technological advances would likely have happened within decades anyway, shifts driven by religious leaders, political philosophers, and moral activists (e.g., Jesus, Muhammad, Marx, abolitionists) can alter value trajectories for centuries or longer.
Use both first‑principles reasoning and historical perspective.
MacAskill thinks modern high‑change environments require more first‑principles thinking than traditional deference to custom, but he also notes that effective altruists underweight history and should study it more to avoid past mistakes (e.g., Mill’s coal worries).
WORDS WORTH SAVING
5 quotes
We should not think we're at the end of moral progress, and we should not think, "Oh, we should lock in the kind of Western values we have now."
— Will MacAskill
One of the lessons I draw in the book is… we want to ensure that we spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than merely whichever happened to win out.
— Will MacAskill
In the long run at least I think economic or technological progress is very non‑contingent… but moral progress is not.
— Will MacAskill
There are many, many issues that are enormously important but are just not incentivized basically anywhere in the world.
— Will MacAskill
You weren't a real philosopher unless you had some grand unifying theory… I’m not saying all of academic inquiry should be like that, but should there be at least some people whose role is to really think about the big picture? I think yes.
— Will MacAskill
High-quality AI-generated summary created from a speaker-labeled transcript.