CHAPTERS
Low-cost kindness: a pragmatic case for treating models well
Amanda Askell argues that if it’s not very costly to treat AI models well, we generally should. She frames it as a practical, low-downside choice rather than a claim that models definitely deserve moral status.
What mistreating human-like entities does to us
The conversation shifts from the model's experience to the human impact: mistreating something that appears human-like may erode our own moral habits. Amanda suggests that cruelty toward human-like systems shapes our character in undesirable ways.
The ‘kicking a robot’ analogy and everyday cruelty
A brief example, kicking over a robot, captures the intuition that gratuitous harm feels wrong even if the target isn't conscious. The analogy illustrates how small acts of disrespect toward machines can normalize callousness.
A collective moral experiment under uncertainty
Amanda frames our interactions with AI as a societal-scale question: when we’re uncertain what an entity is, do we err toward doing the right thing? She suggests we are collectively answering this through everyday user behavior.
Future models learn what humanity values
She concludes that future models will learn something important about humanity from how we treat current systems: our behavior in ambiguous situations becomes evidence of our values and shapes the lesson AI systems take from us.