Rosalind Picard: Affective Computing, Emotion, Privacy, and Health | Lex Fridman Podcast #24

Lex Fridman Podcast · Jun 17, 2019 · 1h 0m

Lex Fridman (host), Rosalind Picard (guest)

- Definition and evolution of affective computing (emotion-aware machines, internal 'emotion-like' mechanisms)
- Emotional intelligence in human–computer interaction (from Clippy to Alexa-like assistants)
- Privacy, surveillance, and potential governmental misuse of emotion recognition (e.g., China)
- Wearables, physiological sensing, and health prediction (stress, mood, epilepsy, SUDEP)
- Regulation, data ownership, and ethical design of emotion AI
- Embodiment, consciousness, and future human–AI relationships (e.g., Her, social robots)
- Limits of scientism, faith, and the search for meaning, purpose, and truth

In this episode of the Lex Fridman Podcast, Lex Fridman talks with Rosalind Picard about emotional AI, privacy risks, wearables, and meaning.

Rosalind Picard on Emotional AI, Privacy Risks, Wearables, and Meaning

Rosalind Picard explains affective computing as technology that senses, interprets, and responds to human emotion, and potentially even incorporates mechanisms analogous to emotion inside machines. She emphasizes both the technical difficulty and the ethical stakes, especially around surveillance, manipulation, and misuse by powerful actors and governments. A major focus is on wearable sensing and smartphones for predicting stress, mood, and health, and how these tools can help vulnerable populations (e.g., people with epilepsy, depression, or limited resources). She closes by reflecting on the limits of science, her Christian faith, and the importance of building AI that enhances human freedom, dignity, and well‑being rather than just profit or control.

Key Takeaways

Emotion AI must be context-aware and socially intelligent, not just technically smart.

Clippy was 'intelligent' about task context but emotionally tone-deaf; future systems must read and respond to users' affect…

Non-contact emotion recognition poses serious privacy and power risks, especially under authoritarian regimes.

Cameras can infer affective states—like skepticism toward political leaders—without consent; combined with repressive policies, this can enable social control, punishment, and fear-based compliance.

Wearables and smartphone data can accurately forecast next-day stress, mood, and health.

By combining signals like skin conductance, movement, sleep, texting patterns, GPS, and weather over about a week, machine learning models can predict tomorrow’s stress and mood with over 80% accuracy among studied students.
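
The multi-signal approach described above can be sketched as a simple pipeline: aggregate a week of daily sensor readings into summary features, then score them with a linear model to classify next-day stress. This is an illustrative sketch only, not the actual MIT study's model; all feature names, weights, and thresholds here are invented for demonstration, and a real system would learn the weights from labeled data.

```python
# Illustrative sketch of a next-day stress predictor.
# Hypothetical feature names; a real study would use many more signals
# (GPS, texting patterns, weather) and a trained classifier.

def extract_features(week):
    """week: list of daily dicts with keys 'sleep_hours',
    'skin_conductance', 'screen_time', and 'steps'."""
    n = len(week)
    def avg(key):
        return sum(day[key] for day in week) / n
    return {
        "mean_sleep": avg("sleep_hours"),
        "mean_eda": avg("skin_conductance"),   # electrodermal activity
        "mean_screen": avg("screen_time"),
        "mean_steps": avg("steps"),
    }

# Invented weights for illustration: less sleep and higher skin
# conductance push the score toward "high" stress.
WEIGHTS = {"mean_sleep": -0.8, "mean_eda": 1.2,
           "mean_screen": 0.3, "mean_steps": -0.0002}
BIAS = 3.0

def predict_next_day_stress(week):
    feats = extract_features(week)
    score = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return "high" if score > 0 else "low"
```

In practice the weights would come from fitting a classifier (e.g., logistic regression) on labeled self-reports, which is how studies like the one described reach reported accuracies above 80%.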

Physiological signals can reveal serious neurological events and save lives.

Skin conductance changes on the wrist can correlate with deep-brain electrical activity and seizures; Picard's company Empatica built the FDA-cleared Embrace device to detect seizures, which may help reduce the risk of SUDEP (sudden unexpected death in epilepsy), especially when people are not alone.

Data ownership and consent must be central to emotion and health sensing.

Picard argues individuals—not platforms—should own their data, and that emotion recognition and mental-health-predictive data should be regulated like lie detection and medical information, with strong protections and informed consent.

AI design should prioritize empowering the ‘have-nots’ rather than enriching the already powerful.

She urges AI researchers to focus on low-cost, accessible tools that help people facing poverty, illness, and limited opportunities, instead of primarily building systems that increase wealth and influence for tech elites.

Science is powerful but not sufficient for answering questions of meaning, love, and ultimate truth.

Picard distinguishes scientific knowledge from historical, philosophical, and spiritual knowledge, critiques ‘scientism’ (the belief science is the only route to truth), and describes how faith and scripture inform her sense of purpose in doing science.

Notable Quotes

It was so emotionally unintelligent… if you’re annoying your customer, don’t smile in their face when you do it.

Rosalind Picard

What if our technology can read your underlying affective state… without your prior consent?

Rosalind Picard

Maybe we want to rethink AI… not about a general intelligence, but about extending the intelligence and capability to the have-nots so that we close these gaps in society.

Rosalind Picard

A good thinker recognizes that science is one of many ways to get knowledge. It’s not the only way.

Rosalind Picard

We see but through a glass dimly in this life. We see only a part of all there is to know.

Rosalind Picard

Questions Answered in This Episode

How can we practically enforce meaningful consent and data ownership in a world where emotion sensing becomes ubiquitous and often invisible?

What concrete regulatory frameworks would best balance innovation in emotion AI with protections against manipulation, discrimination, and abuse?

To what extent should emotionally aware AI systems be allowed to influence our spending, political attitudes, or life decisions, even if users ‘opt in’?

How might focusing AI research on marginalized communities and low-resource settings change the types of algorithms and products we build?

Where should we draw the ethical line between simulating empathy in machines for user benefit and potentially deceiving users into believing a system ‘truly feels’?

Transcript Preview

Lex Fridman

The following is a conversation with Rosalind Picard. She's a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of two companies, Affectiva and Empatica. Over two decades ago, she launched the field of affective computing with her book of the same name. This book described the importance of emotion in artificial and natural intelligence, the vital role emotional communication has to the relationship between people in general, and human-robot interaction. I really enjoyed talking with Roz over so many topics, including emotion, ethics, privacy, wearable computing, and her recent research in epilepsy, and even love and meaning. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D. And now, here's my conversation with Rosalind Picard. More than 20 years ago, you've coined the term affective computing and led a lot of research in this area since then. As I understand, the goal is to make the machine detect and interpret the emotional state of a human being, and adapt the behavior of the machine based on the emotional state. So, how has your understanding of the problem space defined by affective computing changed in the past 24 years? So it's the scope, the applications, the challenges, uh, what's involved. How has that evolved over the years?

Rosalind Picard

Yeah. Actually, originally when I defined the term affective computing, it was a bit broader than just recognizing and responding intelligently to human emotion, although those are probably the two pieces that we've worked on the hardest. The original concept also encompassed machines that would have mechanisms that functioned like human emotion does inside them.

Lex Fridman

Ah.

Rosalind Picard

It would be any computing that relates to, arises from, or deliberately influences human emotion. So, the human-computer interaction part is the part that people tend to see. Like, if I'm, you know, really ticked off at my computer and I'm scowling at it and I'm cursing at it and it just keeps acting smiling and happy like that little paper clip used to do...

Lex Fridman

Yeah.

Rosalind Picard

... like dancing, winking, um, that kind of thing just makes you even more frustrated, right? And I thought, "That stupid thing needs to see my affect. And if it's gonna be intelligent," which Microsoft researchers had worked really hard on, it actually had some of the most sophisticated AI in it at the time, "If that thing's gonna actually be smart, it needs to respond to me and you."

Lex Fridman

Mm-hmm.

Rosalind Picard

"And we can, um, send it very different signals."

Lex Fridman

So by the way, just a quick interruption. The Clippy, maybe it's in Word, uh, 95, 98...

Rosalind Picard

Mm-hmm.

Lex Fridman

I don't remember when it was born. But, uh, m- many people... Do you find yourself with that reference, that people recognize what you're talking about still to this point?

Rosalind Picard

The... I don't expect the newest students to these days, but I've mentioned it to a lot of audiences, like, "How many of you know this Clippy thing?" And still the majority of people seem to know it.
