Michael Kearns: Algorithmic Fairness, Privacy & Ethics | Lex Fridman Podcast #50

Lex Fridman Podcast · Nov 19, 2019 · 1h 48m

Lex Fridman (host), Michael Kearns (guest)

- Algorithmic fairness: group vs individual definitions and inherent trade-offs
- Differential privacy vs traditional anonymization and data re-identification risks
- Human values, social norms, and who should define ‘fair’ for algorithms
- Game theory, equilibria, and algorithmic influence on social media and traffic
- Machine learning’s limits, interpretability, and the need for human-subject studies
- Data economics, user control, and future markets for personal data
- Algorithmic trading, short-term vs long-term prediction, and where humans still excel

Designing Fair, Private Algorithms Without Letting Machines Define Morality

Michael Kearns and Lex Fridman discuss how to formalize ethics—especially fairness and privacy—inside algorithms without asking machines to decide what is morally right. Kearns explains the technical progress and limits of algorithmic fairness, contrasting its messiness with the more mature framework of differential privacy. They explore trade‑offs between accuracy and fairness, individual versus group protections, and how social media and financial systems illustrate game‑theoretic dynamics. Throughout, Kearns stresses that society must define the norms; computer scientists should encode and expose trade‑offs, not quietly choose our values for us.

Key Takeaways

Fairness can be quantified, but only after humans make hard moral choices.

Technical fairness metrics (e.g., …)

Group-level fairness is a weak proxy for individual fairness and can hide ‘fairness gerrymandering.’

Even if an algorithm treats each protected group fairly in aggregate, combinations of attributes (e.g., …)

Differential privacy offers strong, mathematically robust privacy guarantees where anonymization fails.

Simply stripping names or coarsening data is fragile because disparate datasets can be joined to re-identify people; differential privacy instead adds carefully calibrated noise so including your data doesn’t materially change what can be inferred about you.
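
As a toy illustration of that calibrated-noise idea (a sketch of the standard Laplace mechanism, not code from the episode; the function names and the epsilon parameter are my own choices):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sample from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing any one
    # person's record changes the true answer by at most 1, so Laplace
    # noise with scale 1/epsilon yields an epsilon-differentially-private
    # release of the count.
    true_count = sum(1 for r in records if r)
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means stronger privacy but noisier answers; where to set it is exactly the kind of trade-off Kearns argues engineers should surface rather than decide.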

Ethical algorithms expose trade-offs between accuracy and social goals rather than magically eliminating them.

Techniques like fairness-aware learning produce Pareto curves showing how reducing discrimination typically costs some predictive accuracy (or money, data collection, development effort), and policymakers—not engineers—should choose where to sit on that curve.
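
A hypothetical sketch of how such a curve is assembled (the metric choice and toy numbers are my own, not from the episode): each candidate model is scored on accuracy and on a simple group-fairness gap, and only the undominated points form the Pareto frontier a policymaker would choose from.

```python
def demographic_parity_gap(preds, groups):
    # Absolute difference in positive-prediction rates between
    # group 0 and group 1; 0.0 means statistically identical treatment.
    rates = []
    for g in (0, 1):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates.append(sum(members) / len(members))
    return abs(rates[0] - rates[1])

def pareto_frontier(models):
    # models: list of (accuracy, unfairness) pairs. Keep a point unless
    # some other point is at least as accurate AND at least as fair,
    # and strictly better on at least one of the two.
    def dominated(p, q):
        return q[0] >= p[0] and q[1] <= p[1] and q != p
    return [p for p in models if not any(dominated(p, q) for q in models)]
```

For example, a model at (accuracy 0.70, gap 0.20) is dominated by one at (0.80, 0.10) and drops off the frontier, while (0.90, 0.30) and (0.80, 0.10) both survive as legitimate, differently-weighted choices.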

Algorithms increasingly drive us toward game-theoretic equilibria that may be socially suboptimal.

From navigation apps to news feeds, ML systems optimize for individual objectives (like shortest travel time or engagement), which can yield equilibria that are stable but collectively worse—more traffic, more polarization—than alternative coordinated solutions.
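
The traffic case can be made concrete with the classic Pigou-style congestion example (my toy numbers, not from the episode): each of n drivers chooses between a road whose delay grows with congestion and a road with fixed delay.

```python
def total_delay(k: int, n: int) -> float:
    # k of n drivers take the congestible road (delay k/n each);
    # the remaining n - k take the fixed road (delay 1 each).
    return k * (k / n) + (n - k) * 1.0

n = 1000
# Selfish equilibrium: the congestible road's delay never exceeds the
# fixed road's delay of 1, so every driver takes it.
equilibrium_delay = total_delay(n, n)
# Social optimum: a coordinator splits traffic to minimize total delay.
optimal_delay = min(total_delay(k, n) for k in range(n + 1))
```

With n = 1000, total delay is 1000 at equilibrium but 750 at the optimum (half the drivers on each road): the equilibrium is stable, since no single driver gains by switching, yet everyone collectively spends a third more time, the "stable but collectively worse" pattern described above.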

We badly need empirical, psychological work on how people actually perceive fairness and interpretability.

Most fairness and explainability definitions are ‘received wisdom’ from scholars; very few studies ask ordinary users what they experience as fair or understandable, so our technical objectives may be misaligned with human intuitions.

Future data ecosystems may require new economic models where individuals control and potentially sell their data.

Because current platforms treat users as the product in two-sided ad markets, giving people real privacy control and compensation could upend today’s business models and push us toward multi-sided markets where user data has explicit, priced value.

Notable Quotes

We’re not in the business of deciding what’s fair for society; we’re in the business of encoding whatever society decides into algorithms.

Michael Kearns

If police are racist in who they decide to stop and frisk, and that goes into the data, there’s no undoing that downstream by clever algorithmic methods.

Michael Kearns

Differential privacy basically says that any harms that might come to you from including your data are essentially the same harms that would have come to you if your data had never been used.

Michael Kearns

Just because we’re at equilibrium doesn’t mean there isn’t another solution where some or even all of us would be better off.

Michael Kearns

I don’t think we’re notably closer to general artificial intelligence now than when I started my career.

Michael Kearns

Questions Answered in This Episode

How should societies formally decide which fairness definitions and protected groups to prioritize when they inevitably conflict?

What would a practical user interface for meaningful control over one’s data and privacy actually look like on major platforms?

How far can differential privacy be pushed before the added noise makes important scientific or policy analyses unreliable?

In social media recommender systems, what metrics—beyond engagement—could we realistically optimize to reduce polarization without killing the product?

Could markets where people sell selective access to their data truly empower individuals, or would they just create new forms of inequality and exploitation?

Transcript Preview

Lex Fridman

The following is a conversation with Michael Kearns. He's a professor at the University of Pennsylvania and a co-author of the new book, The Ethical Algorithm, that is the focus of much of this conversation. It includes algorithmic fairness, bias, privacy, and ethics in general. But that is just one of many fields that Michael's a world-class researcher in, some of which we touch on quickly, including learning theory, or the theoretical foundation of machine learning, game theory, quantitative finance, computational social science, and much more. But on a personal note, when I was an undergrad early on, I worked with Michael on an algorithmic trading project and competition that he led. That's when I first fell in love with algorithmic game theory. While most of my research life has been in machine learning and human-robot interaction, the systematic way that game theory reveals the beautiful structure in our competitive and cooperating world of humans has been a continued inspiration to me. So for that, and other things, I'm deeply thankful to Michael, and really enjoyed having this conversation again in person after so many years. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. This episode is supported by an amazing podcast called Pessimists Archive. Jason, the host of the show, reached out to me looking to support this podcast, and so I listened to it to check it out. And by listened, I mean I went through it Netflix binge style, at least five episodes in a row. It's now one of my favorite podcasts, and I think it should be one of the top podcasts in the world, frankly. It's a history show about why people resist new things.
Each episode looks at a moment in history when something new was introduced, something that today we think of as commonplace, like recorded music, umbrellas, bicycles, cars, chess, coffee, the elevator, and the show explores why it freaked everyone out. The latest episode, on mirrors and vanity, still stays with me as I think about vanity in the modern day of the Twitter world. That's the fascinating thing about this show: stuff that happened long ago, especially in terms of our fear of new things, repeats itself in the modern day, and so has many lessons for us to think about in terms of human psychology and the role of technology in our society. Anyway, you should subscribe and listen to Pessimists Archive. I highly recommend it. And now, here's my conversation with Michael Kearns. You mentioned reading Fear and Loathing in Las Vegas in high school, and having a bit more of a literary mind. So what books, non-technical, non-computer science, would you say had the biggest impact on your life, either intellectually or emotionally?

Michael Kearns

You've dug deep into my history, I see. Um-

Lex Fridman

Went deep.
