Lex Fridman Podcast

Ayanna Howard: Human-Robot Interaction & Ethics of Safety-Critical Systems | Lex Fridman Podcast #66

Ayanna Howard is a roboticist and professor at Georgia Tech, director of the Human-Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.

This episode is presented by Cash App. Download it & use code "LexPodcast":
Cash App (App Store): https://apple.co/2sPrUHe
Cash App (Google Play): https://bit.ly/2MlvP5w

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

EPISODE LINKS:
Ayanna Website: https://howard.ece.gatech.edu/
Ayanna Twitter: https://twitter.com/robotsmarts

OUTLINE:
0:00 - Introduction
2:09 - Favorite robot
5:05 - Autonomous vehicles
8:43 - Tesla Autopilot
20:03 - Ethical responsibility of safety-critical algorithms
28:11 - Bias in robotics
38:20 - AI in politics and law
40:35 - Solutions to bias in algorithms
47:44 - HAL 9000
49:57 - Memories from working at NASA
51:53 - SpotMini and Bionic Woman
54:27 - Future of robots in space
57:11 - Human-robot interaction
1:02:38 - Trust
1:09:26 - AI in education
1:15:06 - Andrew Yang, automation, and job loss
1:17:17 - Love, AI, and the movie Her
1:25:01 - Why do so many robotics companies fail?
1:32:22 - Fear of robots
1:34:17 - Existential threats of AI
1:35:57 - Matrix
1:37:37 - Hang out for a day with a robot

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host) · Ayanna Howard (guest)
Jan 17, 2020 · 1h 39m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Ayanna Howard on Imperfect Robots, Human Trust, and Ethical AI

Ayanna Howard and Lex Fridman explore how real-world robots must be less than theoretically ‘perfect’ to function well alongside messy, unpredictable humans. They discuss autonomous vehicles, bias and fairness in AI, and why every developer inevitably bears ethical responsibility when their code affects human lives. Howard emphasizes that AI is often still less biased than humans, but must be systematically audited, corrected, and designed with adaptation to people at its core. The conversation ranges from medical and educational robots to robot rights, future labor displacement, and whether humans might one day fall in love with AI systems.

IDEAS WORTH REMEMBERING

5 ideas

Robots must adapt to human reality, not just follow rules perfectly.

Theoretical ‘perfection’, defined as 100% rule-following and accuracy, doesn’t work in dynamic human environments such as driving; what people really want are robots that flexibly adapt to context and human behavior, the way Rosie from The Jetsons seemed to.

Trust in robots is behavioral, and over‑trust is a major risk.

Howard defines trust by what people actually do, not what they say on surveys; once early experiences with a system are positive, users can quickly swing from hyper‑vigilance to complacency, which is dangerous for tech like autopilot or medical AI.

Developers cannot avoid ethics; their code can directly cost lives.

From self-driving systems to medical decision support, Howard stresses that programmers hold power akin to that of physicians: their design choices can kill or save people, so ethical reflection and ‘self-testing’ for harms must be part of everyday development.

AI can be biased yet still less biased than humans—and easier to fix.

Historical data encodes social biases (in healthcare, criminal justice, and elsewhere), so models inherit them; but unlike opaque human institutions, AI systems can be audited, measured, and iteratively corrected, provided organizations are transparent and incentivize scrutiny.

Systematic mechanisms for finding ethical ‘bugs’ are missing.

Today, bias problems are caught ad hoc by individual researchers; Howard proposes ‘ethics bug bounties’ in which companies pay outsiders to find unfair outcomes, mirroring security bug-bounty programs and leveraging diverse perspectives on harm.

WORDS WORTH SAVING

5 quotes

We really want perfection with respect to a robot’s ability to adapt to us, not perfection with respect to rules we just made up anyway.

Ayanna Howard

Robotic algorithms don’t kill people; developers of robotic algorithms kill people.

Ayanna Howard

The worst AI is still better than us—at least in terms of these bias decisions.

Ayanna Howard

Great gift, being a developer, great responsibility—and this is how you combine those.

Ayanna Howard

I don’t believe we should have an AI for president, but I do believe that a president should use AI as an advisor.

Ayanna Howard

Perfection vs. practicality in robotics and autonomous vehicles
Human trust, over-trust, and behavior around semi-autonomous systems
Ethics, responsibility, and bias in AI and safety-critical applications
Human-robot interaction in healthcare, education, and assistive domains
Data bias, fairness, and how AI compares to human decision-making
Future of work, automation, and workforce retraining
Long-term questions: robot rights, love, and human–AI symbiosis

High-quality AI-generated summary created from a speaker-labeled transcript.
