How Long Could Humanity Continue For? - Will MacAskill

Modern Wisdom · Aug 13, 2022 · 1h 34m

Will MacAskill (guest), Chris Williamson (host), Narrator

Long-termism and humanity’s potential lifespan in cosmic context
Existential risks: extinction, civilizational collapse, and technological stagnation
Differential technological development and biosecurity (e.g., engineered pandemics, far-UVC)
Value lock-in, culture, and moral progress over time
Governance, surveillance, and the tension between safety and authoritarianism
Civilizational backup strategies and recovery from global collapse
Practical pathways for individuals to positively influence the long-term future

In this episode of Modern Wisdom, host Chris Williamson talks with Will MacAskill about how long humanity could continue for, and how to safeguard its vast, fragile long-term future.

Will MacAskill on safeguarding humanity’s vast, fragile long-term future

Will MacAskill argues that humanity is at the very beginning of an unimaginably long potential future, and that our actions this century could shape millions or even trillions of years to come. He outlines long-termism: taking the interests of future generations seriously and focusing on events that can alter civilization’s overall trajectory, such as engineered pandemics, advanced AI, world war, and value lock-in. MacAskill distinguishes between preventing extinction/collapse and improving the quality of any surviving civilization by influencing its values, institutions, and culture. He stresses that we should buy time, reduce existential risks, and preserve moral flexibility so that much wiser future generations can decide what a truly good future looks like.

Key Takeaways

Treat future generations as real stakeholders in today’s decisions.

Given humanity could last millions to trillions of years, current choices about technology, risk, and governance affect an almost unimaginably large number of future lives. ...

Prioritize reducing man-made existential risks, especially bio and AI.

Natural risks like asteroids appear relatively low and somewhat managed, but we are creating new, more dangerous risks via engineered pathogens, powerful AI, and potential world wars. ...

Accelerate defensive technologies while slowing or banning offensive ones.

MacAskill endorses ‘differential technological progress’: push hard on tools that protect (e.g., ...

Guard against value lock-in and preserve moral flexibility.

Future technologies and political structures could allow a single ideology or value system to dominate the world—and, with AI or global governance, potentially stay dominant for eons. ...

Recognize culture as a powerful driver of large-scale outcomes.

Norms around status, consumption, and morality (e.g., ...

Plan for recovery, not just prevention, of civilizational collapse.

MacAskill argues global collapse is survivable and likely recoverable if we preserve knowledge, resources (like accessible fossil fuels), and perhaps maintain refuges with skilled people. ...

Use your career and donations strategically to help the long term.

Individuals can meaningfully influence the far future by donating to highly effective organizations working on existential risk and by shaping their careers toward high-impact paths (e.g., ...

Notable Quotes

If we don’t go extinct in the near future, then we are at the very beginning of history. Future generations will see us as the ancients living in the distant past.

Will MacAskill

Long-termism is about taking the interests of future generations seriously and appreciating just how big that future could be if we play our cards right.

Will MacAskill

Most technologies can be used for good or ill. Fission gave us nuclear reactors; it also gave us the bomb.

Will MacAskill

We want to ensure that we don’t end moral progress too soon. If anyone came to power and said, ‘I’m going to lock in my values now,’ I think that would be very bad.

Will MacAskill

It’s not just about making sure that the future is long; it’s also about making sure that it’s good.

Will MacAskill

Questions Answered in This Episode

How should we decide which existential risks deserve the highest priority, given limited attention and resources?

What concrete institutional safeguards could prevent dangerous value lock-in by a future world government or AI system?

How can ordinary people effectively influence culture and status norms in ways that improve the long-term trajectory of civilization?

Where is the line between beneficial risk-reducing surveillance and an authoritarian safety regime that destroys most of the value of the future?

If moral progress is as important as MacAskill suggests, what would a serious, global effort to accelerate moral reflection and philosophy actually look like in practice?

Transcript Preview

Will MacAskill

We are at the very beginning of history. Future generations will see us as the ancients living in the distant past. What are the events that really could have civilizational trajectory level impacts? And then, finally taking action, trying to figure out, okay, what can we do to ensure that we navigate these challenges and try to bring about a wonderful future for our grandchildren and their grandchildren? (air whooshing)

Chris Williamson

Given the fact that we're seeing James Webb Telescope images all over the place at the moment, kind of seems like a smart time to be thinking about far-flung futures and potentials for civilization and stuff like that.

Will MacAskill

Absolutely. James Webb is making very vivid, um, and in high resolution, uh, an inc- incredibly important fact, which is just that we are at the moment, uh, both very small in the universe, (laughs) um, and also very early in it. So almost all of the universe's development is still to come ahead of us.

Chris Williamson

That's wild to think about the fact, e- especially on our time scales, right? You know, you think about 20 years as being a very long time in a human lifespan, and then you start to scale stuff up to continents, to the size of a planet, to the size of a solar system or a galaxy or the universe, and it, it puts things into perspective.

Will MacAskill

Yeah. Well, we're used to long-term thinking being o- on the order of a couple of decades or maybe a century at most, but really that's being very myopic. Um, I mean, how long has history gone on for? Well, that's a couple of thousand years. Homo sapiens have been around for a few hundred thousand years. The Earth formed 4 billion years ago. First, um, the Big Bang was a little under 14 billion years ago. And if we don't go extinct in the near future, which we might do, and we might cause our own extinction, then we are at the very beginning of history. Uh, future generations will see us as the ancients living in the distant past. And to see that, we can just use some kind of comparisons. So a typical mammal species lives around a million years. We've been going for 300,000 years. That would put 700,000 years to come. So already on that, by that metric, our life expectancy is very large indeed. But we're not a typical species. We can do a lot of things that other mammals can't. That creates grave risks such as, um, from engineered pathogens or AI that could bring our own, um, demise. But it also means that if we survive those challenges, then we could last much longer again, where the Earth will remain habitable for hundreds of millions of years. And if one- we one day take to the stars, well, the sun itself, um, will, uh, only stop burning in about 8 billion years, and the last stars will be shining in 100 trillion years. So on any of these measures, humanity's life expectancy is truly vast. If you give just a 1% chance to us spreading to, you know, the stars and staying as long as, uh, lasting as long as the stars shine, well, we've got a life expectancy of a trillion years. But even if we stay on Earth, the life expectancy is still tens, many tens of millions of years. And that just means that when we look to the future and when we think about events that might occur in our lifetime that could impact that future, that could change humanity's course, well, you know, we should just boggle at the stakes that are involved.
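The trillion-year figure in that last answer falls out of a simple expected-value calculation. The sketch below reproduces it in Python using the numbers MacAskill cites; the 50-million-year earthbound lifespan is an assumed stand-in for his "many tens of millions of years."

```python
# Expected-value sketch of the lifespan figures MacAskill cites.
# Assumption: 50 million years stands in for "many tens of millions
# of years" on Earth; the 1% and 100-trillion figures are his.

p_stars = 0.01               # chance humanity spreads to the stars
years_if_stars = 100e12      # last stars stop shining in ~100 trillion years
years_if_earthbound = 50e6   # assumed earthbound lifespan

expected_lifespan = (
    p_stars * years_if_stars + (1 - p_stars) * years_if_earthbound
)
print(f"Expected lifespan: {expected_lifespan:.2e} years")
# ~1.00e12 years, i.e. roughly the trillion-year life expectancy he quotes
```

Note that the starbound term dominates: even a 1% chance of a 100-trillion-year future swamps any plausible earthbound scenario, which is the point MacAskill is making about the stakes.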
