Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42

Lex Fridman Podcast | Sep 30, 2019 | 1h 3m

Lex Fridman (host), Peter Norvig (guest), Narrator

- Evolution of AI as reflected in multiple editions of *Artificial Intelligence: A Modern Approach*
- Utility functions, ethics, fairness, and bias in real-world AI systems
- Limits of deep learning, symbolic AI, and the need for better representations and reasoning
- Explainability versus trust, robustness, and adversarial examples in modern AI
- Online education, MOOCs, motivation, and the changing nature of learning
- Programming practice, abstraction, and what “mastery” means in modern software development
- Search quality at Google, adversarial web dynamics, and the broader impact of the internet

Peter Norvig on AI’s Past, Present, Ethics, and Human-Centered Future

Peter Norvig discusses how AI has evolved since the 1990s, reflected through successive editions of *Artificial Intelligence: A Modern Approach*, emphasizing shifts from logic and knowledge engineering to probability, machine learning, and now deep learning. He highlights that defining the “utility function” and human values in AI systems is becoming harder than optimization itself, especially around fairness, bias, and societal impact. The conversation also explores explainability versus trust, the adversarial nature of search and the web, and how education and programming have changed with MOOCs, high‑level tools, and ubiquitous data. Norvig closes by stressing useful, human-aligned AI (e.g., conversational assistants, programming tools) over abstract pursuit of “human-level” intelligence, while acknowledging real risks like inequality and misuse of powerful technologies.

Key Takeaways

Defining what we want from AI (utility) is now harder than optimizing it.

Early AI textbooks framed the problem as maximizing expected utility given a clear utility function; Norvig argues the real challenge today is specifying individual and collective values in a form machines can act on, especially when those values are contested or fuzzy.
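
To make the point concrete, here is a minimal sketch (our illustration, not from the episode): once a utility function is written down, acting optimally reduces to a mechanical argmax over expected utility. The loan-decision problem and the `utility` function below are hypothetical.

```python
def expected_utility(action, outcomes, utility):
    """Sum of P(outcome | action) * utility(outcome)."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def best_action(actions, outcomes, utility):
    """The 'easy part': a mechanical argmax once utility is specified."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical loan decision: P(outcome | action) tables.
outcomes = {
    "approve": {"repaid": 0.9, "default": 0.1},
    "deny":    {"no_change": 1.0},
}

# The hard part Norvig describes lives here: these numbers encode contested
# values (cost of a default vs. harm of a wrongful denial), not mathematics.
def utility(outcome):
    return {"repaid": 100, "default": -500, "no_change": 0}[outcome]

print(best_action(outcomes.keys(), outcomes, utility))  # -> approve (EU 40 vs. 0)
```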

Fairness metrics in AI can be mutually incompatible, forcing explicit trade-offs.

Work on recidivism prediction shows you cannot simultaneously equalize calibration (scores mean the same risk across groups) and error rates across protected classes; this means policy and ethics, not just math, must guide which fairness criteria to prioritize.
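
The impossibility is a theorem, not an empirical accident. For a calibrated score, Chouldechova's identity ties error rates to each group's base rate p: FPR = p/(1−p) × (1−PPV)/PPV × (1−FNR). The sketch below plugs in hypothetical numbers to show that holding calibration (PPV) and the false-negative rate fixed forces unequal false-positive rates whenever base rates differ.

```python
# Illustration (hypothetical numbers) of Chouldechova's identity:
#   FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), with p the group's base rate.
def false_positive_rate(base_rate, ppv, fnr):
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = 0.7, 0.3  # calibration and miss rate held equal across groups
for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    print(group, round(false_positive_rate(base_rate, ppv, fnr), 3))
# group A 0.3 vs. group B 0.129: equal false-positive rates are unattainable.
```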

Deep learning must be complemented by better representation and reasoning mechanisms.

While deep learning has excelled at perception and is moving into actions and planning, Norvig believes we still need advances in structured representation, one-shot learning, and guided reasoning—possibly reusing ideas from symbolic AI but in more flexible, data-driven forms.
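
As one concrete reading of “one-shot learning” (our sketch; Norvig does not prescribe a method here), classification from a single labeled example per class can be framed as nearest-prototype matching in an embedding space:

```python
# Hedged sketch: one-shot classification as nearest-prototype matching.
# Embeddings and labels below are hypothetical two-dimensional stand-ins.
import numpy as np

def one_shot_classify(query_emb, support):
    """support: {label: one example embedding}. A single labeled example
    per class suffices -- pick the label with highest cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(support, key=lambda label: cos(query_emb, support[label]))

support = {"cat": np.array([0.9, 0.1]), "dog": np.array([0.2, 0.8])}
print(one_shot_classify(np.array([0.85, 0.2]), support))  # -> 'cat'
```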

Trust in AI goes beyond explanations to verification, testing, and dialogue.

Norvig suggests focusing on whether systems are trustworthy—via aggregate audits, adversarial tests, and interactive “conversations” about decisions—because a plausible explanation alone (from a person or a model) does not guarantee the decision was actually fair or valid.
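
A hedged sketch of the “aggregate audit” idea: rather than asking the model why, check its decisions in bulk against a policy threshold. The groups, data, and 80%-of-best rule below are hypothetical.

```python
# Aggregate audit sketch: flag groups whose approval rate falls below a
# fixed fraction of the best-treated group's rate. Data is hypothetical.
from collections import defaultdict

def audit_approval_rates(decisions, min_ratio=0.8):
    """decisions: iterable of (group, approved: bool).
    Returns {group: (rate, passes_threshold)}."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: (r, r >= min_ratio * best) for g, r in rates.items()}

decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45
print(audit_approval_rates(decisions))
# {'A': (0.8, True), 'B': (0.55, False)} -- group B fails the 80% test.
```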

Motivation and community matter more than content quality in online education.

From his MOOC experience, Norvig learned that even excellent materials fail without learner motivation and social structures; completion rates are less important than enabling self-driven learners worldwide, while broader impact requires stronger community and commitment mechanisms.

Modern programming is increasingly about assembling and modeling, not low-level coding.

With powerful libraries and tools (e.g., ...
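
In that spirit, here is a sketch of what assembling-and-modeling looks like (our example; the episode names no particular library): a complete train-and-evaluate pipeline in about a dozen lines of scikit-learn, where each hard part is a pre-built component.

```python
# Illustrative sketch: modern practice as assembling library components.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model are off-the-shelf parts; the "programming" is
# choosing and composing them, not writing the numerics by hand.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```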

AI risks today are more about socio-economic disruption and misuse than “robot apocalypse.”

Norvig is not worried about Terminator-style scenarios, but is concerned about automation exacerbating inequality, and powerful technologies—including but not limited to AI—being weaponized or destabilizing, requiring thoughtful societal adaptation.

Notable Quotes

Maybe optimization is the easy part, and the hard part is deciding what is my utility function and what do we want as a society.

Peter Norvig

It’s theoretically impossible to achieve both of those fairness goals at once, so you have to trade them off.

Peter Norvig

We’re seduced by our low-dimensional metaphors… but really it’s a million-dimensional space, and if you step a little bit off the path in any direction, you’re in Nowhere’s land.

Peter Norvig

I don’t think human-level intelligence is one thing, and I don’t think it should be the goal.

Peter Norvig

We love to fall in love with things that aren’t necessarily real. My kids fell in love with their teddy bear, and the teddy bear was not very interactive.

Peter Norvig

Questions Answered in This Episode

How should societies decide which fairness criteria to prioritize when they mathematically conflict in AI systems used for high-stakes decisions?

What concrete approaches could effectively combine deep learning with symbolic or structured reasoning to achieve more robust, general intelligence?

How can we design AI-powered attention and recommendation systems so they work with users’ long-term interests rather than exploiting short-term engagement?

What kinds of institutional and educational changes are needed to help workers adapt to AI-driven automation without worsening inequality?

In practice, what would a truly trustworthy conversational assistant look like, and how would we test and certify its behavior across diverse real-world scenarios?

Transcript Preview

Lex Fridman

The following is a conversation with Peter Norvig. He's the director of research at Google and the co-author with Stuart Russell of the book Artificial Intelligence: A Modern Approach that educated and inspired a whole generation of researchers, including myself, to get into the field of artificial intelligence. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Peter Norvig. Most researchers in the AI community, including myself, own all three editions, red, green, and blue, of the, uh, Artificial Intelligence: A Modern Approach. It's a field-defining textbook, as many people are aware, that you wrote with Stuart Russell. How has the book changed and how have you changed-

Peter Norvig

(laughs) Yeah.

Lex Fridman

... uh, in relation to it from the first edition to the second to the third and now fourth edition as you work on it?

Peter Norvig

Yeah. So it's, so it's been a lot of years, a lot of changes. One of the things changing from the first to m- m- maybe the second or third was just the rise of, uh, computing power, right? So I think in the, in the first edition we said, uh, "Here's propositional logic, but, uh, that only goes so far 'cause pretty soon you have millions of, uh, short little propositional expressions and they couldn't possibly fit in memory. Uh, so we're gonna use first-order logic that's, uh, more concise." And then we quickly r- realized, "Oh, propositional logic is pretty nice because there are really fast SAT solvers and other things, and look, there's only millions of expressions and that fits easily into memory, or maybe even billions fit into memory now." So, that was a change of, uh, the type of technology we needed just because the hardware expanded.
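
A minimal sketch of the shift Norvig describes (our illustration, not from the episode): propositional clauses are flat sets of literals, so millions fit in memory, and even a bare-bones DPLL-style SAT search takes only a dozen lines; production solvers add far more machinery.

```python
# Bare-bones DPLL-style SAT search (illustrative sketch, not a production
# solver). A clause is a set of integer literals: 1 means "A is true",
# -1 means "A is false".

def dpll(clauses):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    if not clauses:
        return set()                       # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return None                        # empty clause: contradiction
    # Unit propagation: a single-literal clause forces that literal.
    unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    lit = unit if unit is not None else next(iter(clauses[0]))
    for choice in ([lit] if unit is not None else [lit, -lit]):
        # Drop satisfied clauses; remove the falsified literal elsewhere.
        reduced = [c - {-choice} for c in clauses if choice not in c]
        result = dpll(reduced)
        if result is not None:
            return result | {choice}
    return None

# (A or B) and (not A or C) and (not B or not C)
print(dpll([{1, 2}, {-1, 3}, {-2, -3}]))   # e.g. {1, 3, -2}
```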

Lex Fridman

Even to the second edition?

Peter Norvig

Yeah. Yeah.

Lex Fridman

So resource constraints were loosened significantly for the second edition?

Peter Norvig

Yeah. Yeah. And then-

Lex Fridman

And that was early 2000s, second edition?

Peter Norvig

Right. So '95-

Lex Fridman

Yeah.

Peter Norvig

... was the first and then, uh, 2000, 2001 or so. And then, uh, moving on from there, I think we're s- we're starting to see that again with the, uh, GPUs and then, uh, more specific type of, uh, machinery like the TPUs and w- we're seeing custom ASICs and so on, uh, for deep learning. So, we're seeing another advance in terms of the hardware. Then I think another thing that we especially noticed this time around is in all three of the first editions, we kind of said, "Well, we're gonna find AI as maximizing expected utility."

Lex Fridman

Mm-hmm.

Peter Norvig

"And you tell me your utility function and now we've got 27 chapters worth of cool techniques for how to optimize that." I think in this edition, we're saying more, "You know what? Maybe that optimization part is the easy part-
