Modern Wisdom

How Long Could Humanity Continue For? - Will MacAskill

Will MacAskill is a philosopher, ethicist, and one of the originators of the Effective Altruism movement.

Humans understand that long-term thinking is a good idea and that we need to provide a good place for future generations to live. We try to leave the world better than we found it for this very reason. But what about the world in one hundred thousand years? Or 8 billion? If there are trillions of human lives still to come, how should that change the way we act right now?

Expect to learn why we're living through a particularly crucial time in the history of the future, the dangers of locking in any set of values, how to avoid the future being ruled by a malevolent dictator, whether the world has too many or too few people on it, how likely a global civilisational collapse is, why technological stagnation is a death sentence and much more...

Sponsors:
Get a Free Sample Pack of all LMNT Flavours at https://www.drinklmnt.com/modernwisdom (discount automatically applied)
Get 20% discount on the highest quality CBD Products from Pure Sport at https://bit.ly/cbdwisdom (use code: MW20)
Get 5 Free Travel Packs, Free Liquid Vitamin D and Free Shipping from Athletic Greens at https://athleticgreens.com/modernwisdom (discount automatically applied)

Extra Stuff:
Buy What We Owe The Future - https://amzn.to/3PDqghm
Check out Effective Altruism - https://www.effectivealtruism.org/
Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

#philosophy #longtermism #existentialrisk

Timestamps:
00:00 Intro
00:25 Introduction to Longtermism
07:35 Why Our Present Choices Matter
15:13 Possible Futures of Technology & Morals
23:49 Will's Problem with 'Happy Birthday'
34:35 Preventing Human Extinction
42:50 Is Earth Over-Populated?
48:25 Risks of Technological Stagnation
59:17 The Current Focus on Climate Change
1:07:19 Extinction Vs Collapse
1:22:04 Our Goal for the Future of Humanity
1:31:42 Where to Find Will

Listen to all episodes on audio:
Apple Podcasts: https://apple.co/2MNqIgw
Spotify: https://spoti.fi/2LSimPn

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/

Guest: Will MacAskill · Host: Chris Williamson
Aug 12, 2022 · 1h 34m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Will MacAskill on safeguarding humanity’s vast, fragile long-term future

  1. Will MacAskill argues that humanity is at the very beginning of an unimaginably long potential future, and that our actions this century could shape millions or even trillions of years to come. He outlines longtermism: taking the interests of future generations seriously and focusing on events that can alter civilization’s overall trajectory, such as engineered pandemics, advanced AI, world war, and value lock-in. MacAskill distinguishes between preventing extinction or collapse and improving the quality of any surviving civilization by influencing its values, institutions, and culture. He stresses that we should buy time, reduce existential risks, and preserve moral flexibility so that much wiser future generations can decide what a truly good future looks like.

IDEAS WORTH REMEMBERING

5 ideas

Treat future generations as real stakeholders in today’s decisions.

Given that humanity could last millions to trillions of years, current choices about technology, risk, and governance affect an almost unimaginably large number of future lives. Longtermism suggests we should allocate at least a small but serious share of resources to protecting and improving their prospects.

Prioritize reducing man-made existential risks, especially bio and AI.

Natural risks like asteroids appear relatively low and somewhat managed, but we are creating new, more dangerous risks via engineered pathogens, powerful AI, and potential world wars. Investing in things like advanced biosecurity, AI safety, and peace-stabilizing institutions is unusually high leverage.

Accelerate defensive technologies while slowing or banning offensive ones.

MacAskill endorses ‘differential technological progress’: push hard on tools that protect (e.g., far-UVC lighting, early pathogen detection, better PPE) while regulating, delaying, or forgoing technologies mainly useful for harm (e.g., gain-of-function research that could enable bioweapons). This reduces risk without requiring global de-growth.

Guard against value lock-in and preserve moral flexibility.

Future technologies and political structures could allow a single ideology or value system to dominate the world—and, with AI or global governance, potentially stay dominant for eons. Since our current morals are almost certainly incomplete, we should design institutions and cultures that allow ongoing moral reflection and change rather than permanent lock-in.

Recognize culture as a powerful driver of large-scale outcomes.

Norms around status, consumption, and morality (e.g., slavery abolition, philanthropy vs. conspicuous luxury) have historically been shaped more by cultural and moral shifts than by pure economics or law. Influencing narratives, status markers, and public discourse can meaningfully affect civilization’s trajectory.

WORDS WORTH SAVING

5 quotes

If we don’t go extinct in the near future, then we are at the very beginning of history. Future generations will see us as the ancients living in the distant past.

Will MacAskill

Long-termism is about taking the interests of future generations seriously and appreciating just how big that future could be if we play our cards right.

Will MacAskill

Most technologies can be used for good or ill. Fission gave us nuclear reactors; it also gave us the bomb.

Will MacAskill

We want to ensure that we don’t end moral progress too soon. If anyone came to power and said, ‘I’m going to lock in my values now,’ I think that would be very bad.

Will MacAskill

It’s not just about making sure that the future is long; it’s also about making sure that it’s good.

Will MacAskill

Longtermism and humanity’s potential lifespan in cosmic context
Existential risks: extinction, civilizational collapse, and technological stagnation
Differential technological development and biosecurity (e.g., engineered pandemics, far-UVC)
Value lock-in, culture, and moral progress over time
Governance, surveillance, and the tension between safety and authoritarianism
Civilizational backup strategies and recovery from global collapse
Practical pathways for individuals to positively influence the long-term future

High quality AI-generated summary created from speaker-labeled transcript.
