Modern Wisdom: How Long Could Humanity Continue For? - Will MacAskill
At a glance
WHAT IT’S REALLY ABOUT
Will MacAskill on safeguarding humanity’s vast, fragile long-term future
- Will MacAskill argues that humanity is at the very beginning of an unimaginably long potential future, and that our actions this century could shape millions or even trillions of years to come. He outlines long-termism: taking the interests of future generations seriously and focusing on events that can alter civilization’s overall trajectory, such as engineered pandemics, advanced AI, world war, and value lock-in. MacAskill distinguishes between preventing extinction/collapse and improving the quality of any surviving civilization by influencing its values, institutions, and culture. He stresses that we should buy time, reduce existential risks, and preserve moral flexibility so that much wiser future generations can decide what a truly good future looks like.
IDEAS WORTH REMEMBERING
5 ideas
Treat future generations as real stakeholders in today’s decisions.
Given humanity could last millions to trillions of years, current choices about technology, risk, and governance affect an almost unimaginably large number of future lives. Long-termism suggests we should allocate at least a small but serious share of resources to protecting and improving their prospects.
Prioritize reducing man-made existential risks, especially bio and AI.
Natural risks like asteroids appear relatively low and somewhat managed, but we are creating new, more dangerous risks via engineered pathogens, powerful AI, and potential world wars. Investing in things like advanced biosecurity, AI safety, and peace-stabilizing institutions is unusually high leverage.
Accelerate defensive technologies while slowing or banning offensive ones.
MacAskill endorses ‘differential technological progress’: push hard on tools that protect (e.g., far-UVC lighting, early pathogen detection, better PPE) while regulating, delaying, or forgoing technologies mainly useful for harm (e.g., gain-of-function research that could enable bioweapons). This reduces risk without requiring global degrowth.
Guard against value lock-in and preserve moral flexibility.
Future technologies and political structures could allow a single ideology or value system to dominate the world—and, with AI or global governance, potentially stay dominant for eons. Since our current morals are almost certainly incomplete, we should design institutions and cultures that allow ongoing moral reflection and change rather than permanent lock-in.
Recognize culture as a powerful driver of large-scale outcomes.
Norms around status, consumption, and morality (e.g., slavery abolition, philanthropy vs. conspicuous luxury) have historically been shaped more by cultural and moral shifts than by pure economics or law. Influencing narratives, status markers, and public discourse can meaningfully affect civilization’s trajectory.
WORDS WORTH SAVING
5 quotes
If we don’t go extinct in the near future, then we are at the very beginning of history. Future generations will see us as the ancients living in the distant past.
— Will MacAskill
Long-termism is about taking the interests of future generations seriously and appreciating just how big that future could be if we play our cards right.
— Will MacAskill
Most technologies can be used for good or ill. Fission gave us nuclear reactors; it also gave us the bomb.
— Will MacAskill
We want to ensure that we don’t end moral progress too soon. If anyone came to power and said, ‘I’m going to lock in my values now,’ I think that would be very bad.
— Will MacAskill
It’s not just about making sure that the future is long; it’s also about making sure that it’s good.
— Will MacAskill
High quality AI-generated summary created from speaker-labeled transcript.