Kate Darling: Social Robots, Ethics, Privacy and the Future of MIT | Lex Fridman Podcast #329

Lex Fridman Podcast · Oct 15, 2022 · 3h 3m

Kate Darling (guest), Lex Fridman (host)

Topics covered:

Robots as an “animal” analogy vs. a human analogy
Social robots, anthropomorphism, and human–robot interaction (HRI) design
Bias, manipulation, and consumer protection in AI-driven marketing
Autonomous vehicles and the limits of current robotics
Humanoid robots, Boston Dynamics, and Tesla’s Optimus
Privacy, data ownership, and business models for personalized agents
MIT, the Epstein scandal, institutional leadership, and accountability

Robots as Pets, Not People: Ethics, Emotion, and MIT’s Future

Lex Fridman and MIT roboticist Kate Darling explore how we should think about robots and AI, arguing that viewing them as analogous to domesticated animals is more useful than comparing them to humans. They discuss social robots, human–robot interaction, bias, privacy, and the dangers of using emotionally persuasive machines for marketing and surveillance. The conversation also covers autonomous vehicles, humanoid robots like Tesla’s Optimus, and how corporate risk-aversion and PR culture often kill creativity and good design. In a personal and critical segment, Kate reflects on the Jeffrey Epstein scandal, institutional cowardice at MIT, and what real leadership and accountability should look like.

Key Takeaways

Thinking of robots like domesticated animals is more productive than comparing them to humans.

Kate argues that animals have historically complemented human abilities rather than replicated them; robots should similarly offer different, supplemental skills…

Anthropomorphism is powerful—and inevitable—so robot design must take it seriously.

People treat robots as social agents based on minimal cues (eyes, names, movement, beeps), so ignoring HRI leads to backlash, as seen with grocery robot “Marty” and past failures like Clippy; companies should intentionally shape how people relate to robots instead of pretending they’re just tools.

Social robots combined with large language models will radically increase both benefit and risk.

Conversational AI already convincingly describes inner lives (even as a “squirrel”), making future agents feel sentient to many users; this is powerful for education, companionship, and therapy, but also for manipulation, surveillance, and subtle bias.

AI and robots can entrench social bias unless designers explicitly push against it.

Systems trained on internet data reproduce stereotypes…

The biggest unsolved problems in robotics are social and systemic, not only technical.

Physical control and perception are hard, but deploying robots into human environments raises harder questions about trust, error handling, communication (“I’m sorry, I messed up”), privacy, business models, and regulation — areas currently underserved compared to pure engineering.

Autonomy doesn’t mean job extinction; it means disruptive reconfiguration of work.

From mining trucks to Amazon warehouses, robots tend to automate tasks, not entire jobs; they can make dangerous work safer and create better roles…

Institutions often prioritize self-protection over integrity, and cowardice can be worse than malice.

Reflecting on the Epstein scandal at MIT and her own harassment case, Kate describes leaders who hid behind PR and legal risk, allowing one person (like Joi Ito) to absorb blame while systemic issues went unaddressed — arguing that real leadership requires humility, risk-taking, and accepting shared responsibility.

Notable Quotes

Sometimes cowards are worse than assholes.

Kate Darling

I think animals are a really great thought experiment when we're thinking about AI and robotics… we domesticated them not because they do what we do, but because what they do is different, and that's useful.

Kate Darling

It's boring to recreate intelligence that we already have. From a practical perspective, it's much more interesting to create something new that we can partner with.

Kate Darling

People hate robots more than they would some other machine because they view these things as social agents and not objects.

Kate Darling

With great power comes great responsibility. You cannot put your own protection before other things and still be a good leader.

Kate Darling

Questions Answered in This Episode

If we fully embraced the “robots as animals” analogy, how would that concretely change laws, liability, and design standards for robots in public spaces and homes?

What safeguards—technical, legal, and business-model—are realistically needed to prevent social robots from becoming hyper-personalized, manipulative advertising channels?

Where should we draw the line between acceptable anthropomorphism that helps humans bond with robots and deceptive design that misleads people about agency or sentience?

Given the trade-offs, should society invest more in redesigning infrastructure for diverse robot forms (like wheelchairs and drones) instead of pushing toward humanoid robots?

How can universities like MIT structurally incentivize courage and accountability in leadership so that future scandals are handled transparently rather than defensively?

Transcript Preview

Kate Darling

... I think that animals are a really great thought experiment when we're thinking about AI and robotics, because again, there's comparing them to humans, that leads us down the wrong path, both because it's not accurate, but also I think for the future, we don't want that. We want something that's a supplement. But I think animals, because we've used them throughout history for so many different things, we, we domesticated them not because they do what we do, but because what they do is different, and that's useful. And I, it just, like whether we're talking about companionship, whether we're talking about work integration, whether we're talking about responsibility for harm, there are just so many things we can draw on in that history from these entities that can sense, think, make autonomous decisions, and learn, that are applicable to how we should be thinking about robots and AI.

Lex Fridman

The following is a conversation with Kate Darling, her second time on the podcast. She's a research scientist at MIT Media Lab interested in human-robot interaction and robot ethics, which she writes about in her recent book called The New Breed: What Our History with Animals Reveals About Our Future with Robots. Kate is one of my favorite people at MIT. She was a courageous voice of reason and compassion through the time of the Jeffrey Epstein scandal at MIT three years ago. We reflect on this time in this very conversation, including the lessons it revealed about human nature and our optimistic vision for the future of MIT, a university we both love and believe in. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Kate Darling. Last time we talked a few years back, you wore a Justin Bieber shirt for the podcast.

Kate Darling

(laughs)

Lex Fridman

So now looking back, you're a respected, um, researcher, all the amazing accomplishments in robotics, uh, you're an author. Was this one of the proudest moments of your life, uh, proudest decisions you've ever made?

Kate Darling

Definitely. You handled it really well, though. It was cool 'cause I walked in, I didn't know you were gonna be filming. I walked in and you're in a-

Lex Fridman

Right.

Kate Darling

... fucking suit.

Lex Fridman

Yeah.

Kate Darling

And I'm like, "Why are you all dressed up?"

Lex Fridman

Yeah. (laughs)

Kate Darling

And then you were so nice about it. You, like made some excuse. You were like, "Oh, well, I'm interviewing some art..." Didn't you say you were interviewing some military general afterwards to like-

Lex Fridman

Oh yeah, that was a-

Kate Darling

... make me feel better?

Lex Fridman

... CTO of Lockheed Martin, I think.

Kate Darling

Oh, that's what it was.

Lex Fridman

Yeah.

Kate Darling

You didn't tell me, oh, I was dressed like this.

Lex Fridman

(laughs) Are you an actual Bieber fan, or was that like one of those T-shirts that's in the back of the closet that you use for painting?
