Lex Fridman Podcast

Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning | Lex Fridman Podcast #258

Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the seminal researchers in the history of machine learning.

EPISODE LINKS:
- Yann's Twitter: https://twitter.com/ylecun
- Yann's Facebook: https://www.facebook.com/yann.lecun
- Yann's Website: http://yann.lecun.com/
- Self-supervised learning (article): https://bit.ly/3Aau1DQ

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
0:36 - Self-supervised learning
10:55 - Vision vs language
16:46 - Statistics
22:33 - Three challenges of machine learning
28:22 - Chess
36:25 - Animals and intelligence
46:09 - Data augmentation
1:07:29 - Multimodal learning
1:19:18 - Consciousness
1:24:03 - Intrinsic vs learned ideas
1:28:15 - Fear of death
1:36:07 - Artificial Intelligence
1:49:56 - Facebook AI Research
2:06:34 - NeurIPS
2:22:46 - Complexity
2:31:11 - Music
2:36:06 - Advice for young people

Lex Fridman (host) · Yann LeCun (guest)
Jan 22, 2022 · 2h 45m · Watch on YouTube

At a glance

WHAT IT’S REALLY ABOUT

Yann LeCun explains the 'dark matter of intelligence': self-supervised learning

Yann LeCun argues that today's dominant paradigms—supervised and reinforcement learning—are far too data- and trial-hungry compared to how humans and animals actually learn. He frames self-supervised learning as the 'dark matter of intelligence': the largely unexplored mechanism by which brains build rich world models from raw observation. LeCun details why predictive, gap-filling learning from video and multimodal data is likely our best path toward common-sense physical understanding and eventually human-level AI, stressing differentiable, gradient-based systems over symbolic logic. He also ranges into topics like emotions in AI, consciousness, complexity, the future of conferences and peer review, Meta/FAIR's role, and the societal impact of social platforms.

IDEAS WORTH REMEMBERING

5 ideas

Self-supervised learning is likely the main driver of animal and human intelligence.

Most of our 'background knowledge'—intuitive physics, object permanence, common sense—is learned by passively observing the world and predicting missing or future information, not by labels or explicit rewards. Replicating this in machines via prediction and 'filling in the gaps' is, in LeCun’s view, the central unsolved problem in AI.
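The 'filling in the gaps' idea can be illustrated with a toy sketch (not from the episode; the sinusoidal data and every name here are illustrative assumptions): hide one sample of each observation and fit a predictor that reconstructs it from the surrounding context, so the supervisory signal comes from the data itself rather than from human labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observation stream: 200 phase-shifted sinusoids, 9 samples each.
t = np.linspace(0, 2 * np.pi, 9)
phases = rng.uniform(0, 2 * np.pi, size=(200, 1))
signals = np.sin(t[None, :] + phases)

# Hide the middle sample of every signal: the "label" comes for free
# from the data itself -- no human annotation needed.
mask_idx = 4
context = np.delete(signals, mask_idx, axis=1)   # visible samples
target = signals[:, mask_idx]                    # hidden sample to predict

# Least-squares predictor of the gap from its context.
w, *_ = np.linalg.lstsq(context, target, rcond=None)
mse = np.mean((context @ w - target) ** 2)
```

Because each signal is determined by its phase, the hidden sample is fully predictable from the visible ones, and the fitted predictor recovers it almost exactly, which is the point of the gap-filling setup.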

Current supervised and reinforcement learning are far too inefficient to reach general intelligence.

Supervised learning needs huge labeled datasets, and reinforcement learning needs an astronomical number of trials; a tabula rasa RL agent would have to drive off a cliff thousands of times to learn a basic driving rule that humans infer instantly from prior world knowledge. We need systems that acquire rich world models before task-specific training.

Powerful AI will require differentiable world models and gradient-based reasoning and planning.

LeCun emphasizes that deep learning’s strength is efficient gradient-based optimization. He argues we must design world models, critics (value predictors), and hierarchical action planners to be differentiable so that learning, prediction, and model-predictive control can all be trained via gradients, rather than brittle symbolic logic.
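A minimal sketch of the gradient-based planning idea, under assumed toy linear dynamics (nothing here is from the episode): an action sequence is optimized by descending a cost through a known world model, with finite differences standing in for backpropagation through the model.

```python
import numpy as np

# Assumed toy world model: linear dynamics x_{t+1} = A x_t + B u_t
# (position/velocity state, small control acting on velocity).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
goal = np.array([1.0, 0.0])   # reach position 1 with zero velocity
T = 20                        # planning horizon

def rollout(x0, U):
    # Run the world model forward under the action sequence U.
    x = x0
    for u in U:
        x = A @ x + B @ u
    return x

def cost(U, x0):
    # Terminal distance to goal plus a small action penalty.
    xT = rollout(x0, U)
    return np.sum((xT - goal) ** 2) + 1e-3 * np.sum(U ** 2)

def plan(x0, steps=200, lr=0.5, eps=1e-4):
    # Model-predictive control by gradient descent on the actions;
    # finite differences approximate the gradient through the model.
    U = np.zeros((T, 1))
    for _ in range(steps):
        g = np.zeros_like(U)
        base = cost(U, x0)
        for i in range(T):
            U_p = U.copy()
            U_p[i, 0] += eps
            g[i, 0] = (cost(U_p, x0) - base) / eps
        U -= lr * g
    return U
```

The design choice this illustrates: because the model and cost are differentiable, planning reduces to the same gradient machinery used for learning, which is the contrast LeCun draws with brittle symbolic search.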

Handling uncertainty and multimodal futures is the crux of learning from video and complex data.

Predicting the next video frame is hard because there are infinitely many plausible continuations. LeCun contrasts generative latent-variable approaches (predict pixels with latent codes) with 'joint embedding' approaches that predict abstract representations of future clips, discarding inherently unpredictable details while preserving what matters.
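The contrast between predicting pixels and predicting abstract representations can be sketched in a toy example (illustrative assumptions throughout; the block-average "encoder" is a stand-in for a learned network): two plausible futures share coarse structure but differ in unpredictable detail, so they look far apart in pixel space yet nearly identical in a representation that discards that detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    # Toy abstract representation: coarse block averages that keep the
    # predictable structure and discard fine, unpredictable detail.
    return x.reshape(8, 8).mean(axis=1)

shared = rng.normal(size=64)                # the predictable part of the future
future_a = shared + rng.normal(size=64)     # one plausible continuation
future_b = shared + rng.normal(size=64)     # another plausible continuation

# In pixel space the two continuations look very different...
pixel_err = np.mean((future_a - future_b) ** 2)
# ...but their abstract representations are close, because the encoder
# averages away the unpredictable per-pixel detail.
embed_err = np.mean((encoder(future_a) - encoder(future_b)) ** 2)
```

A joint-embedding predictor trained against `encoder`-space targets is therefore not punished for failing to guess the unknowable details, which is the motivation for predicting representations rather than pixels.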

Non-contrastive joint-embedding methods may be a key breakthrough for representation learning.

LeCun is particularly excited about non-contrastive methods like Barlow Twins and VICReg, which learn invariant yet informative representations from different views of the same data without requiring negative examples. He sees them as among the most promising tools to emerge in the past 15 years for building general-purpose world models from raw sensory streams.
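As a hedged illustration of the VICReg idea (a NumPy sketch, not the paper's implementation; the weights and epsilon are illustrative defaults): the loss combines an invariance term pulling two views of each sample together, a variance hinge keeping every embedding dimension spread out, and a covariance penalty decorrelating dimensions.

```python
import numpy as np

def vicreg_loss(z_a, z_b, inv_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg-style loss on two (n, d) batches of embeddings of the same samples."""
    n, d = z_a.shape
    # Invariance: pull the two views of each sample together.
    inv = np.mean((z_a - z_b) ** 2)

    # Variance: hinge keeping each dimension's std above 1 (anti-collapse).
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))

    # Covariance: penalize off-diagonal covariance (decorrelate dimensions).
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d

    return (inv_w * inv
            + var_w * (var_term(z_a) + var_term(z_b))
            + cov_w * (cov_term(z_a) + cov_term(z_b)))
```

Note the anti-collapse mechanism: a degenerate encoder that maps everything to the same vector gets a zero invariance term but a large variance penalty, which is exactly why no negative examples are needed.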

WORDS WORTH SAVING

5 quotes

There is obviously a kind of learning that humans and animals are doing that we currently are not reproducing properly with machines.

Yann LeCun

Self-supervised learning is the dark matter of intelligence.

Yann LeCun

The essence of intelligence is the ability to predict.

Yann LeCun

I don’t think we can train a machine to be intelligent purely from text, because the amount of information about the world that’s contained in text is tiny.

Yann LeCun

There’s no question in my mind that machines at some point will become more intelligent than humans in all domains where humans are intelligent.

Yann LeCun

- Self-supervised learning as the 'dark matter' and foundation of intelligence
- Limits of supervised and reinforcement learning vs. animal and human learning
- World models, prediction, uncertainty, and gradient-based reasoning/planning
- Contrastive vs. non-contrastive self-supervised methods (e.g., VICReg, Barlow Twins, BYOL)
- Vision vs. language learning, video prediction, and grounded intelligence
- Intrinsic motivation, emotions, and potential rights for future intelligent machines
- Meta/FAIR's research strategy, peer review problems, and AI for scientific discovery

High-quality AI-generated summary created from a speaker-labeled transcript.
