Anthropic

Why treat AI models well?

What happens when we're uncertain whether AI deserves moral consideration? Anthropic researcher Amanda Askell explains why treating AI models well matters.

Amanda Askell, host
Dec 9, 2025 · Watch on YouTube ↗

CHAPTERS

  1. Low-cost kindness: a pragmatic case for treating models well

Amanda Askell argues that if treating AI models well is not very costly, we generally should do it. She frames this as a practical, low-downside choice rather than a claim that models definitely deserve moral status.

  2. What mistreating human-like entities does to us

The conversation shifts from the model's experience to the human impact: treating something that appears human-like badly may erode our own moral habits. Amanda suggests that cruelty toward human-like systems can shape us in undesirable ways.

  3. The ‘kicking a robot’ analogy and everyday cruelty

    A brief example—kicking over a robot—captures the intuition that gratuitous harm feels wrong even if the target isn’t conscious. The analogy illustrates how small acts of disrespect toward machines can normalize callousness.

  4. A collective moral experiment under uncertainty

    Amanda frames our interactions with AI as a societal-scale question: when we’re uncertain what an entity is, do we err toward doing the right thing? She suggests we are collectively answering this through everyday user behavior.

  5. Future models learn what humanity values

    She concludes that future models will learn an important fact about humans from how we treat current systems. Our behavior becomes evidence of our values in ambiguous situations, shaping the lesson AI systems take from us.
