Lex Fridman Podcast

Tomaso Poggio: Brains, Minds, and Machines | Lex Fridman Podcast #13

Lex Fridman and Tomaso Poggio on Intelligence, Brains, and the Limits of AI.

Host: Lex Fridman · Guest: Tomaso Poggio
Jan 19, 2019 · 1h 20m

At a glance

WHAT IT’S REALLY ABOUT

Tomaso Poggio on Intelligence, Brains, and the Limits of AI

  1. Lex Fridman and Tomaso Poggio explore the nature of intelligence by connecting modern AI to neuroscience, evolution, and human cognition. Poggio argues that understanding the brain—especially the visual cortex and cortical architecture—is likely essential for the biggest future breakthroughs in AI, even though current deep learning only loosely mimics biology. They discuss compositionality, overparameterized deep networks, and why today’s systems still lack true scene understanding, common sense, and human-like sample efficiency. The conversation ranges into ethics, consciousness, AGI timelines, and what it takes to do great science, emphasizing curiosity, fun, and collaboration.

IDEAS WORTH REMEMBERING

5 ideas

Biology has strongly shaped modern AI, and will likely drive key future breakthroughs.

Poggio points out that central techniques like deep learning and reinforcement learning were inspired by neuroscience and behavioral science (Hubel & Wiesel, Pavlov, Minsky). He bets that at least some of the next major advances in AI will again come from understanding brain circuits and learning mechanisms.

Deep networks gain power from compositional structure in the world and in our brains.

When problems can be decomposed into hierarchies of simpler sub-problems (vision, language), deep architectures can avoid the curse of dimensionality that plagues shallow models. Poggio argues that either physics is compositional—or our brains are wired as deep networks and we only experience and care about compositional problems we can solve.
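This claim has a quantitative form in Poggio's published theoretical work. Roughly (a simplified paraphrase, not the episode's exact statement; constants and smoothness conditions are omitted): for a target function of $d$ variables with smoothness $m$ and approximation error $\varepsilon$,

```latex
% Approximation-complexity sketch (after Poggio and colleagues, simplified).
% A generic shallow (one-hidden-layer) network needs a number of units
% exponential in the dimension d, while a deep network matched to a
% binary-tree compositional function needs only linearly many in d:
\[
  N_{\text{shallow}} = O\!\left(\varepsilon^{-d/m}\right),
  \qquad
  N_{\text{deep}} = O\!\left((d-1)\,\varepsilon^{-2/m}\right).
\]
```

The exponential-in-$d$ cost on the left is the curse of dimensionality; matching the network's hierarchy to the problem's compositional structure is what removes it.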

Overparameterization makes neural networks easier to optimize than classical intuition suggests.

Modern deep nets often have far more parameters than data points, yet train successfully. Poggio notes that this creates an enormous number of global minima—“probably more minima than atoms in the universe”—so simple algorithms like stochastic gradient descent are surprisingly likely to find good solutions; the harder question is why some of those solutions generalize well.

Human learning is built on weak evolutionary priors and powerful bootstrapping, not millions of labels.

Where current AI needs vast labeled datasets, children learn from very few explicit labels. Poggio suggests evolution supplies weak biases (e.g., motion sensitivity, quickly imprinting common stimuli like faces), and then experience bootstraps richer representations, a process today’s supervised methods only dimly resemble.

The cortex may implement a general learning architecture reused across vision, language, and action.

Despite distinct functions (vision, audition, language), cortical regions share remarkably similar microcircuitry. Poggio sees this as evidence for a common underlying architecture that, via learning and connectivity, gets specialized—an important clue for designing more general AI systems.

WORDS WORTH SAVING

5 quotes

What about solving a problem whose solution allowed me to solve all the problems?

Tomaso Poggio

I think the problem of human intelligence is, for me, the most interesting problem—it’s really asking who we are.

Tomaso Poggio

The biological world is more N going to one. A child can learn from a very small number of labeled examples.

Tomaso Poggio

There are probably more minima for a typical deep network than atoms in the universe.

Tomaso Poggio

In the brain, the algorithms and the circuits are much more intertwined. That’s why the problem is more difficult than for computers.

Tomaso Poggio

TOPICS COVERED

Einstein, thought experiments, and the nature of scientific creativity
AI vs. brain: how far we can go without understanding biology
Deep learning, compositionality, and why overparameterization works
Sample efficiency: children vs. neural networks (N→∞ vs. N→1)
Cortical architecture, modularity, and learning in the visual system
Ethics, consciousness, and the neuroscience underlying moral judgment
AGI timelines, risk, and the role of curiosity and mentorship in science

High-quality AI-generated summary created from a speaker-labeled transcript.
