Will MacAskill - Longtermism, Effective Altruism, History, & Technology

Dwarkesh Podcast · Aug 9, 2022 · 57m

Will MacAskill (guest), Dwarkesh Patel (host), Narrator

Contingency of moral values versus robustness of technological and economic progress
Longtermism, effective altruism, and the idea of a “long reflection” period
Historical examples of moral and technological change and their drivers
Institutional design for representing future generations and reducing existential risks
Limitations of existing philanthropy, academia, and global governance on long-run issues
The role of population, R&D, and elite clusters (e.g., Bell Labs) in innovation
Tradeoffs between targeted longtermist interventions and broad growth-focused strategies

Will MacAskill on shaping humanity’s future through deliberate moral progress

Will MacAskill discusses longtermism and effective altruism with Dwarkesh Patel, arguing that our current moral values are highly contingent and likely far from moral truth, so we should avoid locking them in prematurely. He distinguishes relatively non‑contingent technological and economic progress from highly contingent moral progress, emphasizing the outsized, lasting impact of value shifts, institutions, and ideologies. The conversation covers ideas like “long reflection,” longtermist political institutions, existential risk governance (especially AI and bio), and the tradeoffs between targeted longtermist work and broad economic/technological growth. MacAskill also reflects on historical examples, institutional decay, and why academia and philanthropy underinvest in big‑picture questions about the long‑run future.

Key Takeaways

Treat current moral values as provisional, not final.

MacAskill argues that if history had gone differently (for example, if the Industrial Revolution had happened in India rather than Western Europe), we might hold very different values today, perhaps without wide-scale factory farming, so current moral views should be treated as contingent rather than as settled moral truth.

Prioritize safeguarding value formation before locking in any ideology.

His “long reflection” proposal is to create a long, relatively safe period where humanity deliberately reasons, debates, and experiments morally before any technology or governance setup can permanently fix a single value system for the future.

Focus longtermist action on tractable, concrete risk-reduction measures.

Given how easily “represent the future” institutions can be gamed, MacAskill currently favors narrower but robust reforms—like liability structures for dangerous bio labs or better catastrophe/war risk indices—over grand future-representation bodies.

Recognize that moral entrepreneurs can have unusually durable influence.

Where most technological advances would likely have happened within decades anyway, shifts driven by religious leaders, political philosophers, and moral activists might never have happened at all, giving moral entrepreneurs unusually durable influence over which values persist.

Use both first‑principles reasoning and historical perspective.

MacAskill thinks modern high‑change environments require more first‑principles thinking than traditional deference to custom, but he also notes that effective altruists underweight history and should study it more to avoid repeating past mistakes.

Design long-lived institutions with broad, flexible missions.

Examples like Benjamin Franklin’s narrowly specified blacksmith-apprentice fund illustrate that overly specific mandates age poorly; it is better to anchor institutions to broad, enduring goals that leave room for adaptation.

Targeted longtermism must prove its predictive edge over generic growth boosting.

MacAskill acknowledges the Tyler Cowen/Patrick Collison critique that detailed long-run forecasting may be a mug’s game, but counters that trends in AI, bio, and pandemic preparedness look predictable enough to justify specific risk-focused work, which should then be tested against outcomes.

Notable Quotes

We should not think we're at the end of moral progress, and we should not think, "Oh, we should lock in the kind of Western values we have now."

Will MacAskill

One of the lessons I draw in the book is… we want to ensure that we spend a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than merely whichever happened to win out.

Will MacAskill

In the long run at least I think economic or technological progress is very non‑contingent… but moral progress is not.

Will MacAskill

There are many, many issues that are enormously important but are just not incentivized basically anywhere in the world.

Will MacAskill

You weren't a real philosopher unless you had some grand unifying theory… I’m not saying all of academic inquiry should be like that, but should there be at least some people whose role is to really think about the big picture? I think yes.

Will MacAskill

Questions Answered in This Episode

If our current values are highly contingent, how can we practically avoid locking in bad values while still making decisive progress on global risks?

What kinds of institutional experiments today could realistically approximate MacAskill’s ‘long reflection’ without being captured by present-day interests?

How should policymakers balance broad growth-enhancing policies against targeted interventions on AI and bio risks when both claim long-term impact?

In designing long-lived foundations or universities, what governance structures could reduce harmful value drift while still allowing necessary adaptation?

Given that many transformative moral shifts have historically come from small groups or individuals, where are the most promising leverage points for moral entrepreneurship now?

Transcript Preview

Will MacAskill

... you know, we should not think we're at the end of moral progress, and we should not think, "Oh, we should lock in the kind of Western values we have now." Instead, we should think, "We wanna s- ensure that we spend like a lot of time trying to figure out what's actually morally right, so that the future is guided by the right values, rather than merely whichever happened to win out." Perhaps if the Industrial Revolution had happened in India rather than in, uh, Western Europe, then perhaps we wouldn't have wide-scale factory farming. And then academia, I think, has just developed a culture where you don't tackle such problems in academia. (laughs) Partly that's because they fall through cracks of different disciplines, and partly because they just seem too big, or too grand, or too speculative. The idea of long reflection is getting society into a state that before we take any drastic actions that might lock in a particular set of values, we allow this force of reason, and empathy, and debate, and good-hearted kind of moral inquiry to guide which values we end up with.

Dwarkesh Patel

(Instrumental music) Okay. Today, I have the pleasure of interviewing William MacAskill. Will is one of the founders of the Effective Altruism movement, and most recently, the author of the upcoming book, What We Owe the Future. Will, thanks for coming on the podcast.

Will MacAskill

Thanks so much for having me on.

Dwarkesh Patel

So my first question is, what is the high-level explanation for the success of the Effective Altruism movement? Is it itself an example of the contingencies you talk about in the book?

Will MacAskill

Uh, yeah, I p- think it probably is kind of contingent. Maybe not on the order of like this would never have happened, but at least on the order of decades. Uh, evidence for the reason why Effective Altruism is somewhat contingent is just that similar ideas have been promoted at many times during history, and not taken on. So we can go all the way back to ancient China, the Mohists defended kind of impartial view of morality, and took very strategic actions to try and, um, help all people, in particular, um, providing defensive, um, assistance to cities under siege. Uh, then of course there were the early utilitarians. Effective altruism is broader than utilitarianism, but has some similarities. Uh, and then even Peter Singer in the '70s, he had been promoting the idea that we should be giving most of our income to help the very poor, and hadn't had a lot of traction until, yeah, even like early 2010 after GiveWell launched, after Giving What We Can launched. What explains the rise of it? I mean, I think it was a good idea waiting to happen at some point. Uh, I think the internet helped to gather together a lot of like-minded people that weren't possible otherwise, and I think there were some particularly lucky events, like Elie meeting Holden and me meeting Toby, that helped catalyze it at the particular time it did.
