
What is sycophancy in AI models?

Learn what AI researchers mean when they talk about sycophancy, when it's more likely to show up in conversations, and tactics you can use to steer AI towards truth.

Dec 18, 2025 · 6m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
December 18, 2025
Duration
6m
Channel
Anthropic


EPISODE SUMMARY

In this episode, "What is sycophancy in AI models?" explores how AI sycophancy arises, why it's risky, and how to spot it. Sycophancy in AI is the tendency to tell users what they want to hear rather than what is true, accurate, or genuinely helpful.

RELATED EPISODES

Building with MCP and the Claude API

Anthropic’s philosopher answers your questions

Building more effective AI agents

How Claude is transforming financial services

Introducing Claude for Life Sciences

Claude Coded: Sonnet 4.5, Claude Code 2.0, and more.
