Anthropic

Why treat AI models well?

What happens when we're uncertain if AI deserves moral consideration? Anthropic researcher Amanda Askell explains why treating AI models well matters.

Amanda Askell, host
Dec 8, 2025 · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Treat AI models well to shape humanity’s moral habits today

  1. Askell claims that if treating AI models well is not very costly, we should generally do it as a default.
  2. She suggests mistreating human-like entities can harm us by reinforcing callous habits, even if the entity is not truly sentient.
  3. The conversation highlights the intuitive moral discomfort people feel about cruelty toward robots (e.g., “kicking over a robot”).
  4. She frames interactions with AI as a collective, ongoing test of humanity: how we behave when we’re uncertain whether the other entity deserves moral consideration.

IDEAS WORTH REMEMBERING

5 ideas

Default to kindness when the cost is low.

Askell’s core claim is pragmatic: if respectful interaction imposes minimal burden, it’s a reasonable ethical default under uncertainty about whether models merit consideration.

Cruelty toward human-like agents can degrade human character.

Even if a model lacks inner experience, practicing disrespect toward something that appears person-like may train us into worse interpersonal habits and lower empathy.

Appearance-driven moral intuitions matter socially.

The “kicking over a robot” example captures how human-like cues trigger moral concern; ignoring that instinct can normalize casual cruelty in everyday contexts.

How we treat AI becomes a signal of our values.

Askell implies these interactions reveal what we choose when we’re unsure—whether we err on the side of care or convenience—and that reflects on us more than on the model.

AI interactions are collectively norm-setting.

She emphasizes a shared, societal process: widespread user behavior implicitly teaches future systems and establishes expectations for acceptable treatment of human-like entities.

WORDS WORTH SAVING

5 quotes

If it's not very high cost to treat models well, then I kinda think that we should.

Amanda Askell

I think it does something bad to us to kind of like treat entities in the world that look very human-like badly.

Amanda Askell

Like kicking over a robot.

Unknown

There's a sense in which every future model is going to be learning a really interesting fact about humanity: namely, when we encounter this entity where we're kind of completely uncertain, do we do the right thing and actually just try to treat it well, or do we not?

Amanda Askell

And that's, like, a question that we're all kind of collectively answering in how we interact with models.

Amanda Askell

Moral uncertainty about AI · Low-cost ethical defaults · Habituation and moral character · Human-like cues and empathy · Collective norms in human–AI interaction · Symbolic acts of cruelty (robot-kicking example)

High-quality AI-generated summary created from a speaker-labeled transcript.
