Lex Fridman Podcast

Michael Kearns: Algorithmic Fairness, Privacy & Ethics | Lex Fridman Podcast #50

Lex Fridman and Michael Kearns on designing fair, private algorithms without letting machines define morality.

Host: Lex Fridman · Guest: Michael Kearns
Nov 19, 2019 · 1h 48m

At a glance

WHAT IT’S REALLY ABOUT

Designing Fair, Private Algorithms Without Letting Machines Define Morality

Michael Kearns and Lex Fridman discuss how to formalize ethics—especially fairness and privacy—inside algorithms without asking machines to decide what is morally right. Kearns explains the technical progress and limits of algorithmic fairness, contrasting its messiness with the more mature framework of differential privacy. They explore trade‑offs between accuracy and fairness, individual versus group protections, and how social media and financial systems illustrate game‑theoretic dynamics. Throughout, Kearns stresses that society must define the norms; computer scientists should encode and expose trade‑offs, not quietly choose our values for us.

IDEAS WORTH REMEMBERING

5 ideas

Fairness can be quantified, but only after humans make hard moral choices.

Technical fairness metrics (e.g., equal false rejection rates across groups) allow you to say an algorithm is ‘97% fair,’ but only once people specify which groups matter and what counts as harm; those normative choices are not technical.
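A group-fairness metric of this kind can be sketched in a few lines. The data below is entirely invented, and "false rejection rate" is just one of many candidate metrics; the point is that computing the gap is mechanical once humans have chosen the groups and the harm.

```python
# Sketch (hypothetical data): comparing false-rejection rates across two
# groups, one common group-fairness metric of the kind discussed here.

def false_rejection_rate(labels, predictions):
    """Fraction of truly qualified applicants (label 1) the model rejects (prediction 0)."""
    qualified = [(y, p) for y, p in zip(labels, predictions) if y == 1]
    if not qualified:
        return 0.0
    return sum(1 for _, p in qualified if p == 0) / len(qualified)

# Invented loan decisions: 1 = qualified / approved, 0 = not.
group_a = {"labels": [1, 1, 1, 1, 0], "predictions": [1, 1, 1, 0, 0]}
group_b = {"labels": [1, 1, 1, 1, 0], "predictions": [1, 0, 0, 0, 0]}

frr_a = false_rejection_rate(**group_a)  # 1 of 4 qualified rejected -> 0.25
frr_b = false_rejection_rate(**group_b)  # 3 of 4 qualified rejected -> 0.75

# The gap between groups is the unfairness measure; which groups to compare
# and whether false rejections are the relevant harm are normative choices.
gap = abs(frr_a - frr_b)
```

Everything technical here is trivial; the non-trivial part—deciding that these two groups matter and that false rejections are the harm to equalize—happened before the first line of code.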

Group-level fairness is a weak proxy for individual fairness and can hide ‘fairness gerrymandering.’

Even if an algorithm treats each protected group fairly in aggregate, combinations of attributes (e.g., older low-income Hispanic women) can still be systematically disadvantaged; moving toward individual fairness means protecting exponentially many subgroups.

Differential privacy offers strong, mathematically robust privacy guarantees where anonymization fails.

Simply stripping names or coarsening data is fragile because disparate datasets can be joined to re-identify people; differential privacy instead adds carefully calibrated noise so including your data doesn’t materially change what can be inferred about you.
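The canonical way that "carefully calibrated noise" is added is the Laplace mechanism; here is a minimal sketch for a counting query. The dataset and epsilon value are illustrative, not from the episode.

```python
# Sketch of the Laplace mechanism, the standard construction behind
# differential privacy's noise calibration. Data and epsilon are invented.
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF (rejecting the zero endpoint)."""
    u = 0.0
    while u == 0.0:          # avoid log(0) at the distribution's endpoint
        u = random.random()
    u -= 0.5                 # now uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Count matching records with epsilon-differential privacy.

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Invented dataset: ages of survey respondents.
ages = [23, 35, 41, 29, 52, 67, 31]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
# The noisy answer hovers near the true count (3), but no single person's
# inclusion shifts the answer's distribution by more than a factor e^0.5.
```

This is exactly the guarantee quoted below: with or without your record, the distribution of published answers is nearly the same, so any inference drawn about you would have been drawn anyway.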

Ethical algorithms expose trade-offs between accuracy and social goals rather than magically eliminating them.

Techniques like fairness-aware learning produce Pareto curves showing how reducing discrimination typically costs some predictive accuracy (or money, data collection, development effort), and policymakers—not engineers—should choose where to sit on that curve.
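The Pareto-curve idea can be made concrete with a toy example. The (accuracy, unfairness) pairs below are invented stand-ins for a family of trained models; the filter keeps exactly the models a policymaker would have to choose among.

```python
# Sketch: extracting the accuracy-vs-unfairness Pareto frontier from a
# family of candidate models. All numbers are invented for illustration.

# Each tuple is (accuracy, unfairness) for one hypothetical trained model.
models = [(0.95, 0.30), (0.93, 0.18), (0.90, 0.10),
          (0.89, 0.12), (0.84, 0.05), (0.80, 0.06)]

def pareto_frontier(points):
    """Keep models not dominated by another with both higher accuracy and lower unfairness."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

frontier = sorted(pareto_frontier(models), reverse=True)
# -> [(0.95, 0.3), (0.93, 0.18), (0.9, 0.1), (0.84, 0.05)]
# (0.89, 0.12) and (0.80, 0.06) drop out: other models beat them on both axes.
```

The frontier makes the trade-off explicit: moving from (0.95, 0.30) to (0.84, 0.05) buys a large drop in discrimination for 11 points of accuracy, and nothing in the mathematics says which point to pick.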

Algorithms increasingly drive us toward game-theoretic equilibria that may be socially suboptimal.

From navigation apps to news feeds, ML systems optimize for individual objectives (like shortest travel time or engagement), which can yield equilibria that are stable but collectively worse—more traffic, more polarization—than alternative coordinated solutions.
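The "stable but collectively worse" phenomenon is captured by Pigou's classic two-road example, sketched below. This textbook model stands in for the navigation-app case; it is not taken from the episode itself.

```python
# Sketch of Pigou's two-road congestion example, a standard illustration
# of selfish equilibrium vs. coordinated optimum. One unit of traffic
# splits between a fixed road (cost 1) and a congestible road (cost = its load x).

def average_cost(x):
    """Average travel time when fraction x takes the congestible road."""
    return x * x + (1 - x) * 1  # the x drivers pay x each; the rest pay 1

# Nash equilibrium: the congestible road is never worse (x <= 1), so every
# self-interested driver takes it, and everyone pays 1.
equilibrium = average_cost(1.0)   # 1.0

# Social optimum: minimize x^2 + (1 - x); setting 2x - 1 = 0 gives x = 0.5.
optimum = average_cost(0.5)       # 0.25 + 0.5 = 0.75

# "Price of anarchy": how much worse the stable outcome is than the best one.
price_of_anarchy = equilibrium / optimum   # 4/3
```

No driver can improve their own time by switching roads at equilibrium, yet a coordinator splitting traffic evenly would cut average travel time by a quarter; this is precisely the gap between equilibrium and optimum that the quote below points at.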

WORDS WORTH SAVING

5 quotes

We’re not in the business of deciding what’s fair for society; we’re in the business of encoding whatever society decides into algorithms.

Michael Kearns

If police are racist in who they decide to stop and frisk, and that goes into the data, there’s no undoing that downstream by clever algorithmic methods.

Michael Kearns

Differential privacy basically says that any harms that might come to you from including your data are essentially the same harms that would have come to you if your data had never been used.

Michael Kearns

Just because we’re at equilibrium doesn’t mean there isn’t another solution where some or even all of us would be better off.

Michael Kearns

I don’t think we’re notably closer to general artificial intelligence now than when I started my career.

Michael Kearns

TOPICS COVERED

Algorithmic fairness: group vs individual definitions and inherent trade-offs
Differential privacy vs traditional anonymization and data re-identification risks
Human values, social norms, and who should define ‘fair’ for algorithms
Game theory, equilibria, and algorithmic influence on social media and traffic
Machine learning’s limits, interpretability, and the need for human-subject studies
Data economics, user control, and future markets for personal data
Algorithmic trading, short-term vs long-term prediction, and where humans still excel

High-quality AI-generated summary created from a speaker-labeled transcript.
