
Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36
Lex Fridman (host), Yann LeCun (guest)
In this episode of the Lex Fridman Podcast, Lex Fridman speaks with Yann LeCun about deep learning, convolutional neural networks, and self-supervised learning.
Yann LeCun outlines path to human-level AI through self-supervision
Yann LeCun discusses the limitations of current AI, arguing that real progress toward human-level intelligence requires self-supervised learning and rich predictive models of the world rather than just bigger supervised or reinforcement learning systems.
He contrasts symbolic, logic-based AI with gradient-based neural approaches, emphasizing continuous representations, working memory, and planning as keys to enabling reasoning in neural networks.
LeCun explores ethical and societal issues via HAL 9000, legal systems as "objective functions," and the non-generality of human intelligence, stressing that grounding in physical reality and common sense is essential for true language understanding.
He also reflects on deep learning’s history, why neural nets briefly fell out of favor, the role of benchmarks and open-source tools, and why emotions, causality, and model-based reinforcement learning will be central to future autonomous systems.
Key Takeaways
AI safety parallels human lawmaking: objective functions are like legal codes.
LeCun frames AI alignment as an extension of what societies already do with laws—designing objective functions (rules, penalties) that shape behavior toward the common good, suggesting AI ethics will fuse computer science and jurisprudence rather than invent something entirely new.
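The law-as-objective-function analogy can be made concrete with a toy sketch. Everything below (the rule names, penalty weights, and reward numbers) is hypothetical, invented purely to illustrate the shape of the idea; it is not a proposal from the episode.

```python
# Illustrative sketch of LeCun's analogy: an agent's objective function with
# legal-style penalty terms. Rule names, weights, and rewards are made up.

def shaped_objective(task_reward, violations, penalty_weights):
    """Task reward minus weighted penalties for rule violations,
    mirroring how laws attach costs to harmful behavior."""
    penalty = sum(penalty_weights[rule] * count
                  for rule, count in violations.items())
    return task_reward - penalty

# A mission-completing action that harms people scores worse than a
# slower action that respects the "laws".
rules = {"harm_person": 1000.0, "property_damage": 10.0}
reckless = shaped_objective(100.0, {"harm_person": 1, "property_damage": 2}, rules)
careful = shaped_objective(80.0, {"harm_person": 0, "property_damage": 0}, rules)
print(careful > reckless)  # the constrained behavior wins despite lower raw reward
```

The point of the analogy is that the hard part is not the subtraction but choosing the rules and weights, which is exactly what legislatures already argue about.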
Deep learning works by violating classical theory, and that’s informative.
Modern neural nets with huge parameter counts and non-convex objectives train successfully on relatively modest data via stochastic gradient descent, contradicting pre–deep learning textbooks; this empirical success implies our theoretical understanding of generalization and optimization was too narrow.
Reasoning in neural nets requires working memory, recurrence, and world models.
LeCun argues that human-like reasoning emerges from systems with hippocampus-like memory, recurrent access to that memory, and energy-minimization style planning (model predictive control), not from static feed-forward models alone.
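Model predictive control, the planning style LeCun invokes, can be sketched in a few lines: minimize predicted cost ("energy") under an internal world model, take the first action, then replan. The 1D point-mass dynamics, horizon, and sample counts below are toy assumptions for illustration only.

```python
import numpy as np

# Sketch of model predictive control: plan by minimizing predicted cost
# under an internal predictive model, act, observe, replan.
rng = np.random.default_rng(1)

def model(state, action):
    """Internal world model: 1D point mass (position, velocity), dt = 0.1."""
    pos, vel = state
    return (pos + 0.1 * vel, vel + 0.1 * action)

def plan(state, goal, horizon=10, candidates=200):
    """Random-shooting planner: sample action sequences, roll them out
    in imagination, return the first action of the lowest-cost sequence."""
    best_cost, best_action = float("inf"), 0.0
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, horizon)
        s, cost = state, 0.0
        for a in actions:
            s = model(s, a)
            cost += (s[0] - goal) ** 2 + 0.01 * a ** 2  # distance + effort
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

state, goal = (0.0, 0.0), 1.0
for _ in range(100):                      # closed loop: plan, act, replan
    state = model(state, plan(state, goal))
print(abs(state[0] - goal))  # settles near the goal
```

The key feature is that all the trial-and-error happens inside the model, not in the world, which is also why LeCun ties such planning to sample efficiency.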
Symbolic logic is brittle and hard to learn; continuous representations scale better.
He critiques logic- and graph-based expert systems for their brittleness and manual knowledge acquisition bottleneck, advocating vector-based “symbols” and continuous functions (à la Hinton and Bottou) as a way to make reasoning compatible with gradient-based learning.
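The contrast between brittle symbols and continuous representations can be shown with a toy example. The embeddings below are random vectors invented for illustration; real systems learn them, but the point survives: a smooth similarity, unlike exact symbol matching, admits degrees of relatedness and gradients.

```python
import numpy as np

# Toy illustration of "symbols as vectors": concepts are embedded as
# vectors and compared by a smooth similarity rather than exact match.
rng = np.random.default_rng(2)
embed = {w: rng.normal(size=16) for w in ["cat", "dog", "car"]}
embed["kitten"] = embed["cat"] + 0.1 * rng.normal(size=16)  # near "cat"

def similarity(a, b):
    """Cosine similarity: a differentiable stand-in for symbolic equality."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Exact symbol matching would call "kitten" and "cat" simply unequal;
# the continuous representation sees them as close.
print(similarity(embed["kitten"], embed["cat"]))   # near 1
print(similarity(embed["kitten"], embed["car"]))   # much smaller
```

Because similarity is a continuous function of the vectors, the whole comparison can sit inside a network trained end to end by gradient descent, which is the compatibility LeCun is after.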
Self-supervised learning is crucial for common sense and data efficiency.
LeCun sees self-supervised prediction (e.g., predicting unobserved or masked parts of the input from the parts that are observed) as the route to common sense and data efficiency: the world itself supplies the training signal, so a system can learn how things work without labels or rewards.
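The core mechanic of self-supervised prediction fits in a few lines: the "label" is just a masked piece of the input. The synthetic signal and linear predictor below are illustration-only assumptions, far simpler than the video-prediction setting LeCun has in mind.

```python
import numpy as np

# Minimal self-supervised sketch: no human labels -- the target is a
# hidden part of the input itself. A linear predictor learns to fill in
# the middle sample of a window from its two neighbors.
rng = np.random.default_rng(3)
signal = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.normal(size=500)

# Build (context, masked value) pairs from the raw signal alone.
X = np.stack([signal[:-2], signal[2:]], axis=1)   # left and right neighbors
y = signal[1:-1]                                  # the hidden middle sample

w, *_ = np.linalg.lstsq(X, y, rcond=None)         # fit the predictor
mse = float(np.mean((X @ w - y) ** 2))
print(mse)  # far below the signal's variance: structure was learned
```

Scaling this idea from a toy signal to images and video, where the space of plausible futures is enormous and uncertain, is exactly the open problem the episode dwells on.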
Current RL is far from human learning; model-based approaches are needed.
He notes that deep RL systems need the equivalent of years or centuries of experience to reach human performance in games, whereas humans learn tasks like driving in tens of hours because they rely on internal predictive models of physics, not just trial-and-error reward signals.
Human intelligence is highly specialized and not truly “general.”
Using arguments about the structure of the visual system and the vast space of possible Boolean functions, LeCun contends that humans operate over a tiny subset of possible tasks and stimuli—our sense of “generality” is confined to what we can even conceptualize.
Notable Quotes
“Machine learning is the science of sloppiness.”
— Yann LeCun
“Intelligence is inseparable from learning. The idea you can create an intelligent machine by basically programming was a non-starter for me from the start.”
— Yann LeCun
“We’re not going to have autonomous intelligence without emotions.”
— Yann LeCun
“Human intelligence is nothing like general. It’s very, very specialized.”
— Yann LeCun
“The main problem we need to solve is: how do we learn models of the world? That’s what self-supervised learning is all about.”
— Yann LeCun
Questions Answered in This Episode
If self-supervised world modeling is so central, what specific architectures or objective functions might finally crack uncertainty-aware video and image prediction?
How can we rigorously benchmark “common sense” and grounding in AI systems beyond language tasks like the Winograd schemas?
What would a practical, legally-informed “objective function” for a powerful general-purpose AI actually look like in code or system design?
To what extent can large language models acquire genuine causal understanding from text alone, and where will they fundamentally need non-linguistic grounding?
How might model-based reinforcement learning and self-supervision be combined in real-world domains like autonomous driving to avoid the sample inefficiency of current RL methods?
Transcript Preview
The following is a conversation with Yann LeCun. He's considered to be one of the fathers of deep learning, which, if you've been hiding under a rock, is the recent revolution in AI that's captivated the world with the possibility of what machines can learn from data. He's a professor at New York University, a vice president and chief AI scientist at Facebook, and co-recipient of the Turing Award for his work on deep learning. He's probably best known as the founding father of convolutional neural networks, in particular, their application to optical character recognition and the famed MNIST dataset. He is also an outspoken personality, unafraid to speak his mind in a distinctive French accent and explore provocative ideas, both in the rigorous medium of academic research and the somewhat less rigorous medium of Twitter and Facebook. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter, @lexfridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Yann LeCun. You said that 2001: Space Odyssey is one of your favorite movies. HAL 9000 decides to get rid of the astronauts, for people who haven't seen the movie, spoiler alert, because he, it, she believes that the astronauts, they will interfere with the mission. Do you see HAL as flawed in some fundamental way, or even evil, or did he do the right thing?
Neither. There's no notion of evil in that- in that context, other than the fact that people die. But it was an example of what people call, uh, value misalignment, right? You give an objective to a machine and the machine str- strives to achieve this objective, and if you don't put any constraints on this objective, like, don't kill people and don't do things like this, the machine, given the power, will do stupid things just to achieve this- this objective, or damaging things to achieve this objective. It's a little bit like... I mean, we are used to this in the context of human society. We- we put in place laws to prevent people from doing bad things because
(laughs)
... spontaneously they would do those bad things, right? So, we have to shape their cost function, their objective function, if you want, through laws to kind of correct, and education, obviously, to sort of correct for- for those.
So, maybe just pushing a little further on- on that point, HAL, you know, there's a mission. There's a- there's fuzziness around the- the ambiguity around what the actual mission is, but, you know, d- do you think that there will be a time, from a utilitarian perspective, where an AI system, where it is not misalignment, where it is alignment for the greater good of society, that an AI system will make decisions that are difficult?
Well, that's the trick. I mean, uh, eventually, we'll have to figure out how to do this. And again, th- we're not starting from scratch because we've been doing this with humans for- for millennia.