Lex Fridman Podcast

Kate Darling: Social Robots, Ethics, Privacy and the Future of MIT | Lex Fridman Podcast #329

Kate Darling is a researcher at the MIT Media Lab interested in human–robot interaction and robot ethics.

Please support this podcast by checking out our sponsors:
- True Classic Tees: https://trueclassictees.com/lex and use code LEX to get 25% off
- Shopify: https://shopify.com/lex to get a 14-day free trial
- Linode: https://linode.com/lex to get $100 free credit
- InsideTracker: https://insidetracker.com/lex to get 20% off
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

EPISODE LINKS:
Kate's Twitter: http://twitter.com/grok_
Kate's Website: http://katedarling.org
Kate's Instagram: http://www.instagram.com/grok_
The New Breed (book): https://amzn.to/3ExhBuf
Creativity without Law (book): https://amzn.to/3MqV5F3

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
1:46 - What is a robot?
17:47 - Metaverse
27:09 - Bias in robots
40:51 - Modern robotics
43:24 - Automation
47:46 - Autonomous driving
56:11 - Privacy
59:37 - Google's LaMDA
1:04:24 - Robot animal analogy
1:17:28 - Data concerns
1:35:30 - Humanoid robots
1:54:31 - LuLaRobot
2:03:25 - Ethics in robotics
2:18:46 - Jeffrey Epstein
2:52:21 - Love and relationships

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Guest: Kate Darling · Host: Lex Fridman
Oct 14, 2022 · 3h 3m

At a glance

WHAT IT’S REALLY ABOUT

Robots as Pets, Not People: Ethics, Emotion, and MIT’s Future

Lex Fridman and MIT roboticist Kate Darling explore how we should think about robots and AI, arguing that viewing them as analogous to domesticated animals is more useful than comparing them to humans. They discuss social robots, human–robot interaction, bias, privacy, and the dangers of using emotionally persuasive machines for marketing and surveillance. The conversation also covers autonomous vehicles, humanoid robots like Tesla's Optimus, and how corporate risk-aversion and PR culture often kill creativity and good design. In a personal and critical segment, Kate reflects on the Jeffrey Epstein scandal, institutional cowardice at MIT, and what real leadership and accountability should look like.

IDEAS WORTH REMEMBERING

5 ideas

Thinking of robots like domesticated animals is more productive than comparing them to humans.

Kate argues that animals have historically complemented human abilities rather than replicated them; robots should similarly offer different, supplemental skills (e.g., sensing, endurance) rather than “human-like” replacements, which reframes design, responsibility, and expectations.

Anthropomorphism is powerful—and inevitable—so robot design must take it seriously.

People treat robots as social agents based on minimal cues (eyes, names, movement, beeps), so ignoring HRI leads to backlash, as seen with grocery robot “Marty” and past failures like Clippy; companies should intentionally shape how people relate to robots instead of pretending they’re just tools.

Social robots combined with large language models will radically increase both benefit and risk.

Conversational AI already convincingly describes inner lives (even as a “squirrel”), making future agents feel sentient to many users; this is powerful for education, companionship, and therapy, but also for manipulation, surveillance, and subtle bias.

AI and robots can entrench social bias unless designers explicitly push against it.

Systems trained on internet data reproduce stereotypes (e.g., DALL·E’s gendered outputs), and marketing bots could exploit trust relationships, especially with kids; Kate argues companies have a responsibility to reduce bias rather than simply mirror “what society is like.”

The biggest unsolved problems in robotics are social and systemic, not only technical.

Physical control and perception are hard, but deploying robots into human environments raises harder questions about trust, error-handling, communication ("I'm sorry, I messed up"), privacy, business models, and regulation — areas currently underserved compared to pure engineering.

WORDS WORTH SAVING

5 quotes

Sometimes cowards are worse than assholes.

Kate Darling

I think animals are a really great thought experiment when we're thinking about AI and robotics… we domesticated them not because they do what we do, but because what they do is different, and that's useful.

Kate Darling

It's boring to recreate intelligence that we already have. From a practical perspective, it's much more interesting to create something new that we can partner with.

Kate Darling

People hate robots more than they would some other machine because they view these things as social agents and not objects.

Kate Darling

With great power comes great responsibility. You cannot put your own protection before other things and still be a good leader.

Kate Darling

Robots as an "animal" analogy vs. a human analogy
Social robots, anthropomorphism, and human–robot interaction (HRI) design
Bias, manipulation, and consumer protection in AI-driven marketing
Autonomous vehicles and the limits of current robotics
Humanoid robots, Boston Dynamics, and Tesla's Optimus
Privacy, data ownership, and business models for personalized agents
MIT, the Epstein scandal, institutional leadership, and accountability

High-quality AI-generated summary created from a speaker-labeled transcript.
