Lex Fridman Podcast

Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42

Lex Fridman and Peter Norvig on AI’s Past, Present, Ethics, and Human-Centered Future.

Host: Lex Fridman · Guest: Peter Norvig
Sep 30, 2019 · 1h 3m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Peter Norvig on AI’s Past, Present, Ethics, and Human-Centered Future

  1. Peter Norvig discusses how AI has evolved since the 1990s, reflected through successive editions of *Artificial Intelligence: A Modern Approach*, emphasizing shifts from logic and knowledge engineering to probability, machine learning, and now deep learning. He highlights that defining the “utility function” and human values in AI systems is becoming harder than optimization itself, especially around fairness, bias, and societal impact. The conversation also explores explainability versus trust, the adversarial nature of search and the web, and how education and programming have changed with MOOCs, high‑level tools, and ubiquitous data. Norvig closes by stressing useful, human-aligned AI (e.g., conversational assistants, programming tools) over abstract pursuit of “human-level” intelligence, while acknowledging real risks like inequality and misuse of powerful technologies.

IDEAS WORTH REMEMBERING

5 ideas

Defining what we want from AI (utility) is now harder than optimizing it.

Early AI textbooks framed the problem as maximizing expected utility given a clear utility function; Norvig argues the real challenge today is specifying individual and collective values in a form machines can act on, especially when those values are contested or fuzzy.

Fairness metrics in AI can be mutually incompatible, forcing explicit trade-offs.

Work on recidivism prediction shows you cannot simultaneously equalize calibration (scores mean the same risk across groups) and error rates across protected classes; this means policy and ethics, not just math, must guide which fairness criteria to prioritize.
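The incompatibility described above can be sketched numerically. For any binary classifier there is an identity linking false positive rate (FPR), base rate, positive predictive value (PPV), and false negative rate (FNR); if PPV and FNR are held equal across two groups whose base rates differ, the FPRs are forced apart. The base rates and metric values below are hypothetical, chosen only for illustration:

```python
def fpr_from_identity(base_rate: float, ppv: float, fnr: float) -> float:
    """Identity for binary classifiers:
    FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), where p is the base rate."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Hold calibration-style precision (PPV) and FNR equal across groups...
ppv, fnr = 0.7, 0.3

# ...but let the groups have different (hypothetical) base rates:
fpr_a = fpr_from_identity(0.5, ppv, fnr)  # group A, base rate 50%
fpr_b = fpr_from_identity(0.2, ppv, fnr)  # group B, base rate 20%

# fpr_a = 0.3 while fpr_b = 0.075: equal error rates are arithmetically
# impossible here, so one fairness criterion must be traded off.
```

The arithmetic makes the podcast's point concrete: no tuning of the classifier rescues both criteria at once when base rates differ, so the choice of which fairness metric to prioritize is a policy decision, not a modeling one.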

Deep learning must be complemented by better representation and reasoning mechanisms.

While deep learning has excelled at perception and is moving into actions and planning, Norvig believes we still need advances in structured representation, one-shot learning, and guided reasoning—possibly reusing ideas from symbolic AI but in more flexible, data-driven forms.

Trust in AI goes beyond explanations to verification, testing, and dialogue.

Norvig suggests focusing on whether systems are trustworthy—via aggregate audits, adversarial tests, and interactive “conversations” about decisions—because a plausible explanation alone (from a person or a model) does not guarantee the decision was actually fair or valid.

Motivation and community matter more than content quality in online education.

From his MOOC experience, Norvig learned that even excellent materials fail without learner motivation and social structures; completion rates are less important than enabling self-driven learners worldwide, while broader impact requires stronger community and commitment mechanisms.

WORDS WORTH SAVING

5 quotes

Maybe optimization is the easy part, and the hard part is deciding what is my utility function and what do we want as a society.

Peter Norvig

It’s theoretically impossible to achieve both of those fairness goals at once, so you have to trade them off.

Peter Norvig

We’re seduced by our low-dimensional metaphors… but really it’s a million-dimensional space, and if you step a little bit off the path in any direction, you’re in Nowhere’s land.

Peter Norvig

I don’t think human-level intelligence is one thing, and I don’t think it should be the goal.

Peter Norvig

We love to fall in love with things that aren’t necessarily real. My kids fell in love with their teddy bear, and the teddy bear was not very interactive.

Peter Norvig

TOPICS

Evolution of AI as reflected in multiple editions of *Artificial Intelligence: A Modern Approach*
Utility functions, ethics, fairness, and bias in real-world AI systems
Limits of deep learning, symbolic AI, and the need for better representations and reasoning
Explainability versus trust, robustness, and adversarial examples in modern AI
Online education, MOOCs, motivation, and the changing nature of learning
Programming practice, abstraction, and what “mastery” means in modern software development
Search quality at Google, adversarial web dynamics, and the broader impact of the internet

High-quality AI-generated summary created from a speaker-labeled transcript.
