Michael Kearns: Algorithmic Fairness, Privacy & Ethics | Lex Fridman Podcast #50
Lex Fridman and Michael Kearns on designing fair, private algorithms without letting machines define morality.
At a glance
WHAT IT’S REALLY ABOUT
Designing Fair, Private Algorithms Without Letting Machines Define Morality
- Michael Kearns and Lex Fridman discuss how to formalize ethics—especially fairness and privacy—inside algorithms without asking machines to decide what is morally right. Kearns explains the technical progress and limits of algorithmic fairness, contrasting its messiness with the more mature framework of differential privacy. They explore trade‑offs between accuracy and fairness, individual versus group protections, and how social media and financial systems illustrate game‑theoretic dynamics. Throughout, Kearns stresses that society must define the norms; computer scientists should encode and expose trade‑offs, not quietly choose our values for us.
IDEAS WORTH REMEMBERING
7 ideas
Fairness can be quantified, but only after humans make hard moral choices.
Technical fairness metrics (e.g., equal false rejection rates across groups) allow you to say an algorithm is ‘97% fair,’ but only once people specify which groups matter and what counts as harm; those normative choices are not technical.
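To make the ‘97% fair’ idea concrete, here is a minimal sketch (not from the episode; the data and group labels are hypothetical) of one such metric, the gap in false rejection rates between two groups:

```python
import numpy as np

def false_rejection_rate(y_true, y_pred, mask):
    """Fraction of truly qualified people (y_true == 1) in a group
    whom the model rejects (y_pred == 0)."""
    qualified = (y_true == 1) & mask
    return np.mean(y_pred[qualified] == 0)

# Hypothetical loan decisions: 1 = qualified / approved, 0 = not.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = abs(false_rejection_rate(y_true, y_pred, group == "A")
          - false_rejection_rate(y_true, y_pred, group == "B"))
print(f"FRR gap: {gap:.2f}")  # a '97% fair' claim might mean gap <= 0.03
```

Notice that deciding the groups are A versus B, and that false rejections (rather than, say, false approvals) are the harm, happens before any measurement; that is exactly the normative step Kearns says is not technical.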
Group-level fairness is a weak proxy for individual fairness and can hide ‘fairness gerrymandering.’
Even if an algorithm treats each protected group fairly in aggregate, combinations of attributes (e.g., older low-income Hispanic women) can still be systematically disadvantaged; moving toward individual fairness means protecting exponentially many subgroups.
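A hypothetical sketch of how gerrymandering hides: in the invented data below, both marginal groups see identical false rejection rates, yet one intersection is rejected every time.

```python
from itertools import product
import numpy as np

# Hypothetical toy data: every person is qualified, and both marginal
# groups (by age and by income) see the same 50% false rejection
# rate -- yet specific intersections are rejected 100% of the time.
ages    = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 1 = older
incomes = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # 0 = low income
y_pred  = np.array([1, 1, 0, 0, 0, 0, 1, 1])   # 0 = rejected

def frr(mask):
    """False rejection rate inside a subgroup (all are qualified here)."""
    return np.mean(y_pred[mask] == 0) if mask.any() else float("nan")

print("age=0:", frr(ages == 0), " age=1:", frr(ages == 1))        # both 0.5
print("inc=0:", frr(incomes == 0), " inc=1:", frr(incomes == 1))  # both 0.5
for a, i in product([0, 1], repeat=2):
    print(f"age={a} & income={i}: FRR = {frr((ages == a) & (incomes == i)):.2f}")
# With k binary attributes there are 2**k such intersections to audit,
# which is why protecting all subgroups means protecting exponentially many.
```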
Differential privacy offers strong, mathematically robust privacy guarantees where anonymization fails.
Simply stripping names or coarsening data is fragile because disparate datasets can be joined to re-identify people; differential privacy instead adds carefully calibrated noise so including your data doesn’t materially change what can be inferred about you.
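As a concrete instance (the textbook Laplace mechanism, a standard construction rather than anything specific to the episode), a counting query can be released with noise calibrated to how much one person can change the answer:

```python
import numpy as np

def dp_count(values, predicate, epsilon, rng=None):
    """Release a counting query with epsilon-differential privacy.
    One person joining or leaving changes a count by at most 1 (the
    query's sensitivity), so Laplace noise of scale 1/epsilon suffices."""
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(v) for v in values)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical survey: how many respondents earn over 50k?
incomes = [42_000, 87_000, 55_000, 31_000, 95_000]
print(dp_count(incomes, lambda x: x > 50_000, epsilon=0.5))
# True count is 3; each run returns 3 plus noise, so no output
# materially reveals whether any single respondent participated.
```

Smaller epsilon means more noise and a stronger guarantee, which is the clean, tunable trade-off Kearns contrasts with fairness's messier state.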
Ethical algorithms expose trade-offs between accuracy and social goals rather than magically eliminating them.
Techniques like fairness-aware learning produce Pareto curves showing how reducing discrimination typically costs some predictive accuracy (or money, data collection, development effort), and policymakers—not engineers—should choose where to sit on that curve.
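One simple way to trace such a curve (a hypothetical post-processing sketch on synthetic data, not the episode's method) is to sweep a group-specific decision threshold and record accuracy against the fairness gap at each setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for two groups, with group B's scores shifted
# down so a single threshold rejects more qualified B applicants.
n = 1000
group = rng.integers(0, 2, n)                     # 0 = A, 1 = B
y = rng.integers(0, 2, n)                         # true qualification
score = y + rng.normal(0, 0.8, n) - 0.3 * group   # model score

def evaluate(tA, tB):
    pred = np.where(group == 0, score > tA, score > tB).astype(int)
    acc = np.mean(pred == y)
    frr = lambda g: np.mean(pred[(y == 1) & (group == g)] == 0)
    return acc, abs(frr(0) - frr(1))

# Sweep group-B threshold: each point is one (accuracy, unfairness) option.
for tB in np.linspace(0.5, 0.0, 6):
    acc, gap = evaluate(0.5, tB)
    print(f"tB={tB:.1f}: accuracy={acc:.3f}, FRR gap={gap:.3f}")
# Plotting these pairs traces the menu of options; in Kearns's framing,
# picking the point on the curve is a policy decision, not an engineering one.
```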
Algorithms increasingly drive us toward game-theoretic equilibria that may be socially suboptimal.
From navigation apps to news feeds, ML systems optimize for individual objectives (like shortest travel time or engagement), which can yield equilibria that are stable but collectively worse—more traffic, more polarization—than alternative coordinated solutions.
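The navigation example has a classic worked instance, the Braess paradox (standard textbook numbers, not figures from the episode), in which adding a shortcut makes the selfish equilibrium worse for everyone:

```python
# 4000 drivers go from S to T. Route S->A->T costs (x/100) + 45 minutes,
# where x is its traffic; route S->B->T costs 45 + (x/100). Splitting
# evenly, 2000 per route, is the equilibrium.
drivers = 4000
split = drivers / 2
cost_without_shortcut = split / 100 + 45          # 65 minutes for everyone

# Add a free shortcut A->B. Now S->A->B->T costs (x/100) + 0 + (x/100),
# which is never worse than either old route, so every selfish driver
# switches to it -- and the new, stable equilibrium is slower for all.
cost_with_shortcut = drivers / 100 + drivers / 100  # 80 minutes for everyone

print(cost_without_shortcut, cost_with_shortcut)    # 65.0 80.0
```

This is precisely the sense in which being at equilibrium, as Kearns puts it, does not preclude a better coordinated solution.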
We badly need empirical, psychological work on how people actually perceive fairness and interpretability.
Most fairness and explainability definitions are ‘received wisdom’ from scholars; very few studies ask ordinary users what they experience as fair or understandable, so our technical objectives may be misaligned with human intuitions.
Future data ecosystems may require new economic models where individuals control and potentially sell their data.
Because current platforms treat users as the product in two-sided ad markets, giving people real privacy control and compensation could upend today’s business models and push us toward multi-sided markets where user data has explicit, priced value.
WORDS WORTH SAVING
5 quotes
We’re not in the business of deciding what’s fair for society; we’re in the business of encoding whatever society decides into algorithms.
— Michael Kearns
If police are racist in who they decide to stop and frisk, and that goes into the data, there’s no undoing that downstream by clever algorithmic methods.
— Michael Kearns
Differential privacy basically says that any harms that might come to you from including your data are essentially the same harms that would have come to you if your data had never been used.
— Michael Kearns
Just because we’re at equilibrium doesn’t mean there isn’t another solution where some or even all of us would be better off.
— Michael Kearns
I don’t think we’re notably closer to general artificial intelligence now than when I started my career.
— Michael Kearns
QUESTIONS ANSWERED IN THIS EPISODE
5 questions
How should societies formally decide which fairness definitions and protected groups to prioritize when they inevitably conflict?
What would a practical user interface for meaningful control over one’s data and privacy actually look like on major platforms?
How far can differential privacy be pushed before the added noise makes important scientific or policy analyses unreliable?
In social media recommender systems, what metrics—beyond engagement—could we realistically optimize to reduce polarization without killing the product?
Could markets where people sell selective access to their data truly empower individuals, or would they just create new forms of inequality and exploitation?