Joseph Carlsmith - Utopia, AI, & Infinite Ethics

Dwarkesh Podcast · Aug 3, 2022 · 1h 32m

Joseph (Joe) Carlsmith (guest), Dwarkesh Patel (host)

- Nature and importance of utopia as a radically better, yet finite future
- AI timelines, brain compute estimates, and risks from transformative AI
- Dangers and benefits of utopian thinking within effective altruism and longtermism
- Infinite ethics: moral reasoning under infinite worlds and infinite impact
- Anthropic reasoning: self‑indication vs self‑sampling assumptions and the doomsday argument
- Moral uncertainty about non‑human animals and insect suffering
- The limits of futurism, abstraction vs concreteness, and Carlsmith’s writing process

Joseph Carlsmith explores utopia, AI risk, and infinite ethics

Joseph Carlsmith, a philosopher and AI x‑risk researcher at Open Philanthropy, discusses how seriously we should take the possibility of a radically better future (utopia), and the risks posed by transformative AI. He argues that utopia should be seen as a profoundly better but still concrete, finite world, likely reachable only after a long process of civilizational wisdom and coordination, not a simple hedonistic tiling of the universe.

He explains his work estimating the computational power of the human brain as an input to AI timelines, stressing large uncertainties both in neuroscience and in how current machine learning might scale to human‑level intelligence. The conversation then turns to “infinite ethics” and anthropics: how to reason morally and probabilistically when the universe, or our influence within it, might be infinite.

Carlsmith defends taking low‑probability, high‑stakes possibilities (like infinite impact or insect suffering) seriously without letting them completely derail practical decision‑making, emphasizing epistemic humility, moral uncertainty, and the goal of preserving options so a wiser future civilization can handle the hardest questions. He also reflects on why much futurism feels abstract or unreal, and on his own writing practice and influences.

Key Takeaways

Treat utopia as a real, finite possibility, not mystical perfection.

Carlsmith defines utopia as a ‘profoundly better’ but still resource‑constrained world, arguing we systematically underestimate how much better things could be and should relate to utopia as a concrete, achievable target rather than an abstract fantasy.

Plan for transformative AI under deep uncertainty about timelines and methods.

His work uses neuroscience‑based estimates of brain compute as an anchor for AI timelines, but he stresses large uncertainty—both about how much compute is needed and whether current deep learning paradigms will scale—implying we must prepare for both sooner‑than‑expected and later‑than‑expected breakthroughs.

Handle utopian ambition with caution, not denial.

Utopian thinking has historically fueled dangerous ideologies, but Carlsmith argues the solution is not to ignore the possibility of a much better world, but to monitor rigidity, fanaticism, and willingness to ‘break things’ in pursuit of moral visions.

Preserve options so a wiser future can address infinite‑ethics puzzles.

Because infinite worlds and acausal impacts can dominate expected‑value reasoning yet remain philosophically unsettled, he favors focusing now on survival, wisdom‑growth, and option‑preservation so future, more capable agents can make better‑informed choices.

Acknowledge moral uncertainty about animals and insects without paralysis.

He thinks it’s unreasonable to assign zero moral weight to creatures like ants, but instead of adopting extreme Jain‑like behavior, he advocates recognizing the trade‑offs, accepting residual risk, and taking responsibility for one’s chosen level of concern.

Use anthropic reasoning carefully; both major approaches have serious costs.

Carlsmith finds the self‑indication assumption (SIA) less problematic than the self‑sampling assumption (SSA), because SSA leads to odd ‘telekinetic’ predictions and strong doomsday updates, but he emphasizes that SIA also faces severe issues, especially in infinite universes.

Keep futurism tethered to reality by cycling between abstraction and concrete imagination.

He argues that our minds can’t directly model the full future, so good futurism must use high‑level abstractions while regularly grounding them in vivid, concrete scenarios—even though any specific scenario will be wrong in detail—to retain a sense of talking about the real world.

Notable Quotes

“Utopia for me just means a kind of profoundly better future… something that we could do… a world that is radically better than the world we live in today.”

Joseph Carlsmith

“The future is a big thing to try to model with this tiny mind, and so, of necessity, you need to use these extremely lossy abstractions.”

Joseph Carlsmith

“I don’t sit around thinking that we sort of know what utopia is right now, and it’s hedonium… I really don’t assume that that’s what utopia is about at all.”

Joseph Carlsmith

“There’s a middle ground between ‘I shall ignore this completely’ and ‘I shall be a Jain,’ which is recognizing that this is a real trade‑off, there’s uncertainty here, and taking responsibility for how you’re responding to that.”

Joseph Carlsmith

“I think the right path forward is to survive long enough for our civilization to become much wiser… and then to use that position of wisdom and empowerment to act better with respect to these issues.”

Joseph Carlsmith

Questions Answered in This Episode

If a truly ‘profoundly better’ future might initially feel alien or disturbing to us, how should we decide now which paths toward it are worth pursuing?

Given the large uncertainties in brain‑compute estimates and AI scaling, what concrete policy or research priorities does Carlsmith think are most robust across different AI‑timeline scenarios?

How should an individual practically balance concerns about long‑term, potentially infinite impacts with more immediate and tangible moral issues like animal welfare or global poverty?

Is there a plausible way to refine anthropic reasoning so that it avoids both SSA’s ‘telekinetic’ implications and SIA’s strong push toward highly populated or infinite universes?

What would meaningful ‘wisdom growth’ at a civilizational scale look like in practice, and how could current institutions be steered toward that goal without falling into dangerous utopianism?

Transcript Preview

Joseph (Joe) Carlsmith

So utopia for me just means a kind of profoundly better future. And I think it's important because I think it's just actually possible. I just think it's actually something that we could do. If, if we sort of play our cards right, we could just build a world that is radically better than the world we live in today. (screen swooshes) Infinite ethics is ethics that tries to grapple with how we should, uh, act with respect to kind of infinite worlds. (screen swooshes) There's a middle ground between "I shall ignore this completely" and "I shall, you know, be a Jain," um, which is recognizing that this is a, this is a real trade-off, there's uncertainty here, and, and taking responsibility for how you're responding to that. (screen swooshes) The future is a big thing to try to model with this tiny mind, and so, you know, o- of necessity, you need to use these extremely lossy abstractions. (cheerful music)

Dwarkesh Patel

Today, I have the pleasure of interviewing Joe Carlsmith, who's a senior research a- analyst at Open Fo- uh, Philanthropy and a doctoral student in philosophy at the University of Oxford. Um, it, uh, j- uh, Joe has a really interesting blog that I got to check out, uh, called Hands and Cities, um, and that's the reason that I wanted to have him on the podcast, 'cause it has a bunch of thought-provoking and insightful, uh, uh, posts on there about philosophy, morality, ethics, uh, the future. And yeah, so I, I really wanted to talk to you, Joe, uh, but are you... Do, do you wanna give a, a bit of a longer intro on what you're up to?

Joseph (Joe) Carlsmith

Sure. So I work at Open Philanthropy on existential risk from artificial intelligence, um, and so, you know, I think about what's gonna happen with AI, how can we make sure it goes well, and in particular, how can we make sure that advanced AI systems are safe? Uh, and then, uh, I have a side project, which is this blog, uh, where I write about philosophy and, uh, and the future and things like that. And that emerges partly from the, um, a sort of, my background, which is, um, I was, I was... Before, before getting into, uh, into AI and working at Open, Open Philanthropy, I was in, uh, academic philosophy.

Dwarkesh Patel

Okay, yeah. That, that's, uh, that, that, that's a quite, quite an ambitious side project. I mean, g- given the length and the regularity of those posts, it's, it's actually quite stunning. Um, do, do you want to talk more about what you're working on a- about AI at Op- uh, at, uh, Open Philanthropy?

Joseph (Joe) Carlsmith

So it's a mix of things. Right now, I'm thinking about AI timelines and what's called takeoff speeds, sort of, sort of how fast the transition is from pretty impressive AI systems to AI systems that are, uh, kind of radically transformative, um, and I'm trying to use that, uh, to provide more perspective on the probability that, um, that everything goes terribly wrong.
