Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61

Lex Fridman Podcast · Dec 28, 2019 · 1h 52m

Lex Fridman (host), Melanie Mitchell (guest)

Definitions and boundaries of intelligence, artificial intelligence, and general intelligence
Concepts, analogy-making, and the Copycat cognitive architecture
Limits of deep learning, common sense, and mental models
Autonomous driving and the long-tail problem in real-world AI systems
Embodiment, emotion, and social cognition as parts of intelligence
AI safety, superintelligence, and value alignment debates
Complex systems, emergence, and the work of the Santa Fe Institute

Melanie Mitchell Explores Concepts, Analogies, Common Sense, and the Future of AI

Lex Fridman and Melanie Mitchell discuss what intelligence and artificial intelligence really mean, questioning terms like AI, AGI, and superintelligence and how poorly we actually understand human cognition.

Mitchell explains why analogy-making and concept formation are central to thought, drawing on her Copycat cognitive architecture and broader work in complexity and emergent behavior.

They examine the limits of current deep learning, the challenges of common sense and perception, and whether scaling today’s methods can ever reach human-level intelligence without new architectures or innate structure.

The conversation also covers autonomous vehicles, embodied intelligence, AI safety and value alignment, and the role of interdisciplinary research at the Santa Fe Institute.

Key Takeaways

Analogy-making is foundational to human cognition and concept formation.

Mitchell, following Hofstadter, argues that concepts are built and used via analogies—recognizing when different situations are 'essentially the same' underlies perception, language, and reasoning, making analogy-making a central AI challenge.
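
To make the letter-string analogy domain concrete, here is a minimal Python sketch of the kind of puzzle Copycat tackles ("abc changes to abd; what does ijk change to?"). This is not Copycat itself, which discovers rules through a stochastic, emergent process rather than a hand-coded check; the toy solver below handles only one rule family, and its name and structure are illustrative assumptions.

```python
# Toy solver for one family of letter-string analogies ("abc -> abd,
# so ijk -> ?"). An illustrative sketch, NOT the Copycat architecture,
# which arrives at such rules through stochastic, emergent processes.

def solve_analogy(source: str, target: str, probe: str) -> str:
    """Given source -> target, apply the 'same' change to probe.

    Only handles: 'replace the letter at one position with its
    alphabetic successor'.
    """
    # Find which positions changed between source and target.
    changed = [i for i, (a, b) in enumerate(zip(source, target)) if a != b]
    if len(changed) != 1:
        raise ValueError("sketch handles only a single-letter change")
    i = changed[0]
    # Verify the change is 'advance one letter in the alphabet'.
    if ord(target[i]) - ord(source[i]) != 1:
        raise ValueError("sketch handles only successor substitutions")
    # Apply the abstracted rule to the same position in the probe.
    return probe[:i] + chr(ord(probe[i]) + 1) + probe[i + 1:]

print(solve_analogy("abc", "abd", "ijk"))  # prints "ijl"
```

The point of the sketch is its brittleness: a probe like "xyz" (where "z" has no successor) or "iijjkk" (where the "same" change is better described at the level of letter groups) breaks the fixed rule, and fluidly re-describing the situation is exactly what Copycat, and human analogy-making, are about.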

Current deep learning systems lack robust concepts and common sense.

Examples like Atari Breakout show that neural nets can perform impressively but often fail under small changes, suggesting they haven’t learned human-like concepts (e.g., a robust notion of 'paddle' or 'ball').

Understanding human minds may be necessary to build truly intelligent machines.

Mitchell is skeptical that brute-force scaling of today’s architectures will reach human-level intelligence, betting that deeper understanding of human cognition, internal models, and development will be required.

Autonomous driving exposes the gap between narrow AI and full common sense.

Self-driving systems handle routine conditions but struggle with open-ended, rare 'edge cases' and nuanced human behavior, indicating that full autonomy likely demands rich intuitive physics, social reasoning, and common sense knowledge.

Embodiment and emotion may be integral to human-level intelligence.

Mitchell suggests that having a body, self-preservation drives, emotions, and social motivations deeply shape human perception and decision-making, so trying to isolate 'pure rational intelligence' may miss what actually makes us intelligent.

Concerns about superintelligent AI often oversimplify what intelligence is.

She criticizes thought experiments like the paperclip maximizer for assuming a system can be super-intelligent in one narrow dimension yet utterly blind to obvious human values, arguing that real intelligence is more holistic and entangled with values.

Studying complexity and emergence can inform future AI architectures.

From cellular automata to brains and economies, simple interacting elements can yield rich emergent behavior; Mitchell sees this as both humbling and promising for engineering new, more dynamic cognitive architectures beyond today’s feedforward models.
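
As a small illustration of this point, here is a minimal Python sketch of an elementary cellular automaton. Each cell updates from just its own state and its two neighbors, yet rules like Rule 110 (a classic example from the complexity literature Mitchell works in, and one proven Turing-complete) produce strikingly rich global patterns. The episode discusses cellular automata in general terms; this particular code is an assumed illustration, not something from the conversation.

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbors, via an 8-entry lookup table encoded in
# the rule number. Rule 110 is a classic example of complex emergent
# behavior arising from a trivial local rule.

RULE = 110  # try 30 (chaotic) or 90 (fractal) as well

def step(cells):
    """Advance the automaton one step, with wrap-around neighbors."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the (left, center, right) neighborhood as a 3-bit number...
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # ...and look up that bit of the rule number.
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single 'on' cell and print the space-time diagram.
cells = [0] * 64
cells[32] = 1
for _ in range(32):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Running this prints triangles and moving structures that nothing in the three-cell update rule mentions, which is the sense of "emergence" at play in the conversation.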

Notable Quotes

Without concepts, there can be no thought, and without analogies, there can be no concepts.

Douglas Hofstadter (quoted in the episode)

How to form and fluidly use concepts is the most important open problem in AI.

Melanie Mitchell

I don't see any reason why we couldn't, in principle, create something that we would consider intelligent. I just don't think it will look like the machines we have now.

Melanie Mitchell

A super-intelligent AI that solves climate change by killing all the humans doesn't make sense to me. It’s hard to imagine something being that intelligent in one dimension and that stupid in another.

Melanie Mitchell

It's amazing how much stuff we know—not just facts, but all integrated into this thing that we can make analogies with.

Melanie Mitchell

Questions Answered in This Episode

If analogy-making is central to intelligence, what concrete steps could current AI research take to move beyond pattern recognition toward true analogy-based reasoning?

How might we experimentally test whether a system has formed human-like concepts, rather than just learning statistical associations in its training data?

To what extent can common sense be learned purely from raw sensory data at scale, and where do we need innate structure or 'built-in' priors?

What would an embodied AI system that genuinely uses emotion and self-preservation in its decision-making look like, and how would we control or align it?

How can ideas from complexity science and emergent behavior be incorporated into new cognitive architectures that go beyond today’s feedforward deep learning models?

Transcript Preview

Lex Fridman

The following is a conversation with Melanie Mitchell. She's a professor of computer science at Portland State University, and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy-making at the core of human cognition. From her doctoral work, with her advisors Douglas Hofstadter and John Holland, to today, she has contributed a lot of important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter, @LexFridman, spelled F-R-I-D-M-A-N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversation, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Melanie Mitchell. The name of your new book is Artificial Intelligence, subtitle, A Guide for Thinking Humans. The name of this podcast is Artificial Intelligence. So let me take a step back and ask the old Shakespeare question about roses and, uh, what do you think of the term "artificial intelligence" for our big and complicated and interesting field?

Melanie Mitchell

I'm not crazy about the term. (laughs) I think it has a few problems, um, because it means so many different things to different people. And intelligence is one of those words that isn't very clearly defined either. There's so many different kinds of intelligence, degrees of intelligence, approaches to intelligence. John McCarthy was the one who came up with the term "artificial intelligence." And, from what I read, he called it that to differentiate it from cybernetics, which was another related movement at the time. And he later regretted calling it artificial intelligence. Uh, Herbert Simon was pushing for calling it complex information processing. (laughs) Which got nixed, but you know, probably is equally vague, I guess.
