Lex Fridman Podcast

Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36

Lex Fridman talks with Yann LeCun, who outlines a path to human-level AI through self-supervision.

Lex Fridman (host), Yann LeCun (guest)
Aug 31, 2019 · 1h 15m

At a glance

WHAT IT’S REALLY ABOUT

Yann LeCun outlines a path to human-level AI through self-supervision

  1. Yann LeCun discusses the limitations of current AI, arguing that real progress toward human-level intelligence requires self-supervised learning and rich predictive models of the world rather than just bigger supervised or reinforcement learning systems.
  2. He contrasts symbolic, logic-based AI with gradient-based neural approaches, emphasizing continuous representations, working memory, and planning as keys to enabling reasoning in neural networks.
  3. LeCun explores ethical and societal issues via HAL 9000, legal systems as "objective functions," and the non-generality of human intelligence, stressing that grounding in physical reality and common sense is essential for true language understanding.
  4. He also reflects on deep learning’s history, why neural nets briefly fell out of favor, the role of benchmarks and open-source tools, and why emotions, causality, and model-based reinforcement learning will be central to future autonomous systems.

IDEAS WORTH REMEMBERING

5 ideas

AI safety parallels human lawmaking: objective functions are like legal codes.

LeCun frames AI alignment as an extension of what societies already do with laws—designing objective functions (rules, penalties) that shape behavior toward the common good, suggesting AI ethics will fuse computer science and jurisprudence rather than invent something entirely new.

Deep learning works by violating classical theory, and that’s informative.

Modern neural nets with huge parameter counts and non-convex objectives train successfully on relatively modest data via stochastic gradient descent, contradicting pre–deep learning textbooks; this empirical success implies our theoretical understanding of generalization and optimization was too narrow.
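The pattern described here can be seen even at toy scale. Below is a minimal sketch (my own illustration, not code from the episode): a two-layer network with far more weights than training samples still fits the data under plain stochastic gradient descent, despite the non-convex objective.

```python
# Toy illustration (not from the episode): an overparameterized two-layer
# tanh network trained with plain SGD on just 20 samples. The loss surface
# is non-convex, yet training reduces the error steadily.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))            # 20 samples, 5 features
y = np.sin(X @ rng.normal(size=5))      # arbitrary nonlinear target

# Hidden width 64 -> 5*64 + 64 = 384 weights, far more than 20 samples.
W1 = rng.normal(scale=0.1, size=(5, 64))
W2 = rng.normal(scale=0.1, size=64)

def mse():
    return np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)

initial = mse()
lr = 0.02
for step in range(5000):
    i = rng.integers(20)                # one random sample: "stochastic"
    h = np.tanh(X[i] @ W1)
    err = h @ W2 - y[i]
    # Gradients of the per-sample squared error (constant factor folded
    # into the learning rate).
    W2 -= lr * err * h
    W1 -= lr * err * np.outer(X[i], W2 * (1 - h ** 2))

final = mse()
print(initial, "->", final)             # loss drops despite non-convexity
```

Classical learning theory would flag this regime (parameters ≫ samples, non-convex loss) as hopeless; the point of the sketch is simply that SGD makes progress anyway.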

Reasoning in neural nets requires working memory, recurrence, and world models.

LeCun argues that human-like reasoning emerges from systems with hippocampus-like memory, recurrent access to that memory, and energy-minimization style planning (model predictive control), not from static feed-forward models alone.
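That planning-as-energy-minimization idea can be sketched in a few lines (my illustration only, with a trivial hand-coded world model standing in for a learned one): gradient descent over a candidate action sequence, minimizing the predicted squared distance to a goal state.

```python
# Minimal model-predictive-control sketch (illustration only): a trivial
# forward model predicts the final state from an action sequence, and we
# minimize an "energy" (squared distance to the goal) over the actions.
import numpy as np

def rollout(actions, x0=0.0):
    x = x0
    for a in actions:
        x = x + a                      # toy dynamics: each action shifts the state
    return x

goal = 1.0
actions = np.zeros(10)                 # plan over a 10-step horizon
for _ in range(100):
    # Central-difference gradient of the energy w.r.t. each action.
    grad = np.array([
        ((rollout(actions + 1e-3 * e) - goal) ** 2 -
         (rollout(actions - 1e-3 * e) - goal) ** 2) / 2e-3
        for e in np.eye(10)
    ])
    actions -= 0.04 * grad             # descend toward a low-energy plan

print(rollout(actions))                # ≈ 1.0: the plan reaches the goal
```

In LeCun's framing, the interesting case is when `rollout` is a learned predictive world model rather than this hard-coded toy; the optimization-over-actions loop stays the same.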

Symbolic logic is brittle and hard to learn; continuous representations scale better.

He critiques logic- and graph-based expert systems for their brittleness and manual knowledge acquisition bottleneck, advocating vector-based “symbols” and continuous functions (à la Hinton and Bottou) as a way to make reasoning compatible with gradient-based learning.

Self-supervised learning is crucial for common sense and data efficiency.

LeCun sees self-supervised prediction (e.g., masked word prediction, video/frame prediction) as the primary route to learning rich world models that later make supervised and reinforcement learning vastly more sample-efficient, mirroring how babies learn physics and causality from observation.
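The masked-prediction objective can be shown at miniature scale (my own sketch, nothing like a real BERT-style model): the training signal comes entirely from the text itself, by hiding a token and asking the model to recover it from its neighbors.

```python
# Tiny sketch of the self-supervised masked-prediction idea (illustration
# only): the "labels" are the data itself, with one token hidden.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Self-supervision: for every position, the (left, right) context is the
# input and the hidden middle word is the target -- no human labels needed.
table = defaultdict(Counter)
for left, target, right in zip(corpus, corpus[1:], corpus[2:]):
    table[(left, right)][target] += 1

def predict_masked(left, right):
    # Most frequent filler seen for this context.
    return table[(left, right)].most_common(1)[0][0]

print(predict_masked("cat", "on"))  # -> "sat"
```

A counting table stands in here for what is, in practice, a large neural network; the objective — predict the hidden part of the data from the visible part — is the same.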

WORDS WORTH SAVING

5 quotes

Machine learning is the science of sloppiness.

Yann LeCun

Intelligence is inseparable from learning. The idea you can create an intelligent machine by basically programming was a non-starter for me from the start.

Yann LeCun

We’re not going to have autonomous intelligence without emotions.

Yann LeCun

Human intelligence is nothing like general. It’s very, very specialized.

Yann LeCun

The main problem we need to solve is: how do we learn models of the world? That’s what self-supervised learning is all about.

Yann LeCun

TOPICS

Value alignment, ethics, and legal systems as objective functions for AI
Deep learning history, ConvNets, and why neural nets succeeded despite theory
Neural networks, reasoning, and the need for working memory and planning
Symbolic AI vs. continuous, gradient-based learning and causal inference
Self-supervised learning as the path to world models and common sense
Reinforcement learning limits, model-based RL, and autonomous driving
Embodiment vs. grounding, human intelligence’s limits, and emotions in AI

High quality AI-generated summary created from speaker-labeled transcript.
