WTF is Artificial Intelligence Really? | Yann LeCun x Nikhil Kamath | People by WTF Ep #4

Nikhil Kamath · Nov 27, 2024 · 1h 36m

Nikhil Kamath (host), Yann LeCun (guest)

Engineer vs scientist; building to understand
AI as elephant: multiple facets of intelligence
GOFAI: logic, rules, search, planning
Machine learning types: supervised, reinforcement, self-supervised
Backpropagation and multilayer neural networks
ConvNets vs Transformers (inductive biases/equivariance)
LLMs: next-token prediction, strengths and limitations
Memory in AI: parameters vs context vs external memory
World models, planning, Kahneman System 1 vs System 2
JEPA and learning from video
Open-source platforms, distributed training, sovereign AI
India-focused opportunities: data centers, inference cost, vertical apps

Yann LeCun explains AI’s roots, limits, and next architecture frontier

Yann LeCun frames AI as a long-running quest to understand and build intelligence, arguing the field historically split between top-down reasoning/search (GOFAI) and bottom-up learning from data (machine learning/deep learning).

He explains core modern tools—backpropagation, convolutional nets, transformers, and self-supervised learning—and why LLMs excel at language while still lacking robust reasoning, persistent memory, and physical-world understanding.

LeCun argues the next leap requires systems that learn “world models” from video and support planning (System 2), not just token-by-token generation (System 1). He presents JEPA (Joint Embedding Predictive Architecture) as a path to predict in abstract representation space rather than pixel space.
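
To make the System 1/System 2 distinction concrete, here is a minimal Python sketch, not LeCun's actual architecture: a hypothetical one-dimensional world stands in for a learned world model, and the agent deliberates by imagining action sequences before acting.

    import itertools

    def world_model(state, action):
        # Hypothetical 1-D world: the state is a position, actions move it.
        # In LeCun's proposal this transition model would itself be learned.
        return state + {"left": -1, "stay": 0, "right": +1}[action]

    def plan(state, goal, horizon=4):
        # System-2 deliberation: imagine every action sequence up to `horizon`
        # steps and keep the one whose imagined end state lands nearest the goal.
        best_seq, best_dist = None, float("inf")
        for seq in itertools.product(["left", "stay", "right"], repeat=horizon):
            s = state
            for a in seq:
                s = world_model(s, a)  # rollout happens "in the head", not in the world
            if abs(s - goal) < best_dist:
                best_seq, best_dist = seq, abs(s - goal)
        return best_seq

    print(plan(state=0, goal=3))  # a 4-step sequence whose imagined endpoint reaches 3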

He closes with pragmatic advice: build on open-source foundation models (e.g., Llama), fine-tune for vertical use-cases (legal, accounting, enterprise knowledge, health, local-language assistants), invest in local compute/inference infrastructure, and expect open-source dominance within ~5 years.

Key Takeaways

AI is a problem space, not a single technique.

LeCun emphasizes AI as the investigation of intelligence; different eras focused on different “parts of the elephant,” from reasoning/search to learning/perception. ...

Two historical branches shaped AI: search/reasoning and learning/perception.

GOFAI treated intelligence as planning and rule-based inference (dominant until the 1990s), while neural-net learning pursued brain-inspired adaptation. ...
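
Since "rule-based inference" can sound abstract, here is a minimal sketch of the GOFAI flavor of reasoning (the rules and facts are invented for illustration): forward chaining derives new facts by repeatedly applying hand-written if-then rules, with no learning involved.

    rules = [
        ({"has_fur", "says_meow"}, "is_cat"),
        ({"is_cat"}, "is_mammal"),
    ]
    facts = {"has_fur", "says_meow"}

    changed = True
    while changed:                       # keep applying rules until nothing new fires
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # includes "is_cat" and "is_mammal", derived purely by rule application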

Deep learning’s breakthrough was multilayer networks trained by backpropagation.

Single-layer perceptrons were too limited; stacking layers with nonlinearities enabled learning complex functions. ...
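
For readers who want the mechanics, a minimal numpy sketch of that breakthrough: a two-layer network trained by backpropagation on XOR, a function no single-layer perceptron can represent (layer sizes, seed, and learning rate are arbitrary illustration choices).

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)  # hidden layer
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)  # output layer
    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    lr = 0.1

    for step in range(5000):
        # forward pass through both layers
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # backward pass: the chain rule, applied layer by layer
        dp = p - y                      # gradient at the output (sigmoid + cross-entropy)
        dW2 = h.T @ dp; db2 = dp.sum(0)
        dh = (dp @ W2.T) * (1 - h**2)   # tanh'(z) = 1 - tanh(z)^2
        dW1 = X.T @ dh; db1 = dh.sum(0)
        # gradient descent step
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print(p.round(2).ravel())  # should be close to [0, 1, 1, 0]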

Architectures matter because they bake in “biases” that reduce data needs.

ConvNets exploit translation structure in images/audio (nearby pixels correlate), while transformers handle sets/sequences via permutation-equivariant blocks. ...
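
The translation bias is easy to demonstrate. A one-dimensional illustration (the kernel and signal are arbitrary): shifting a convolution's input shifts its output, which is exactly the structure in images and audio that ConvNets exploit.

    import numpy as np

    kernel = np.array([1.0, -2.0, 1.0])                       # any filter will do
    signal = np.array([0, 0, 1, 3, 2, 0, 0, 0], dtype=float)
    shifted = np.roll(signal, 2)                              # translate the input by 2 steps

    out_a = np.convolve(signal, kernel, mode="same")
    out_b = np.convolve(shifted, kernel, mode="same")

    # Away from the borders, the response to the shifted input is the shifted
    # response to the original input: shift in, shift out (equivariance).
    print(np.allclose(np.roll(out_a, 2)[2:-2], out_b[2:-2]))  # True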

LLMs are powerful language manipulators but weak world-modelers.

Because autoregressive LLMs operate in discrete token spaces, they can learn linguistic/statistical regularities and retrieve knowledge, yet still make “stupid mistakes” about physics/causality. ...
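
A toy picture of what "operating in discrete token spaces" means, with character-level bigram counts standing in for a transformer (corpus and prompt are invented): generation is nothing but repeatedly sampling the next token given what has been produced so far.

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat. the cat ran."
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1                 # estimate P(next char | current char)

    def generate(prompt, n=20, seed=0):
        random.seed(seed)
        out = prompt
        for _ in range(n):
            options = bigrams[out[-1]]
            if not options:
                break
            chars, weights = zip(*options.items())
            out += random.choices(chars, weights=weights)[0]  # sample the next token
        return out

    print(generate("the c"))  # locally fluent, with no model of the world behind it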

Next-gen AI needs persistent memory and planning (System 2), not just generation (System 1).

LLMs mainly store information in parameters and short context windows; they lack a hippocampus-like episodic memory and efficient deliberative search. ...
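
One common workaround for the missing episodic memory is external storage plus retrieval into the context window. A deliberately naive sketch, with keyword overlap standing in for embedding search (the facts and question are invented):

    facts = [
        "JEPA predicts in representation space, not pixel space.",
        "Backpropagation trains multilayer networks end to end.",
        "ConvNets exploit translation structure in images.",
    ]

    def retrieve(query, memory):
        # Return the stored fact sharing the most words with the query.
        q = set(query.lower().split())
        return max(memory, key=lambda f: len(q & set(f.lower().split())))

    question = "How does JEPA handle pixel prediction?"
    context = retrieve(question, facts)    # fetched from memory outside the model
    prompt = f"Context: {context}\nQuestion: {question}"
    print(prompt)                          # the model then answers from retrieved context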

JEPA aims to make video prediction tractable by predicting in representation space.

Pixel-level future prediction is intractable in high-dimensional continuous worlds; JEPA encodes video into abstract embeddings and predicts future embeddings, discarding unpredictable details. ...
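
A minimal, hypothetical PyTorch sketch of that core idea (real I-JEPA/V-JEPA systems add masking, EMA target encoders, and anti-collapse machinery omitted here): the predictor is trained to match the embedding of the future, never its pixels.

    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 32))  # toy frame encoder
    predictor = nn.Linear(32, 32)                              # predicts the future embedding
    opt = torch.optim.Adam(list(enc.parameters()) + list(predictor.parameters()), lr=1e-3)

    past = torch.randn(8, 1, 16, 16)         # stand-in "video frames"
    future = past.roll(shifts=1, dims=-1)    # toy deterministic future

    z_past = enc(past)
    with torch.no_grad():                    # stop-gradient on the target branch
        z_future = enc(future)

    loss = ((predictor(z_past) - z_future) ** 2).mean()  # the loss lives in latent space
    opt.zero_grad(); loss.backward(); opt.step()
    print(loss.item())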

Open-source foundation models will dominate; value shifts to fine-tuning and vertical expertise.

LeCun forecasts an ecosystem analogous to Linux: portable, flexible, cheaper, and not controlled by one entity. ...
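
Part of why value shifts to fine-tuning is that adapting an open base model is cheap. A sketch of the low-rank adaptation (LoRA) idea commonly used for such vertical fine-tunes (dimensions and rank are arbitrary): freeze the base weight W and train only a small update BA.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Computes W x + B A x, where W is frozen and only A, B are trained.
        def __init__(self, base: nn.Linear, rank: int = 4):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False          # frozen foundation-model weights
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))

        def forward(self, x):
            return self.base(x) + x @ self.A.T @ self.B.T

    layer = LoRALinear(nn.Linear(1024, 1024), rank=4)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} of {total}")  # 8192 of ~1.06M parameters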

Inference economics (not just training) will determine mass adoption in India.

He argues India needs local compute for both sovereignty and scale, and notes inference costs have dropped ~100× in two years. ...

Notable Quotes

AI is more of a problem than a solution.

Yann LeCun

LLMs are not the path to human-level intelligence.

Yann LeCun

The smartest LLMs are not as smart as your house cat.

Yann LeCun

Reinforcement learning is a situation where you don't tell the system what the correct answer is, you just tell it whether the answer it produced was good or bad.

Yann LeCun

Instead of predicting pixels, we predict abstract representations of those pixels, where all the things that are basically unpredictable have been eliminated.

Yann LeCun
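
The reinforcement-learning quote above is easy to make concrete with a toy two-armed bandit (the reward probabilities are invented for illustration): the learner is never told the correct action, only whether each attempt paid off.

    import random

    random.seed(0)
    values = [0.0, 0.0]   # the learner's running estimate of each action's value
    counts = [0, 0]

    def reward(action):
        # Hidden truth the learner never sees: action 1 pays off more often.
        return 1 if random.random() < (0.7 if action == 1 else 0.3) else 0

    for t in range(1000):
        explore = random.random() < 0.1
        action = random.randrange(2) if explore else values.index(max(values))
        r = reward(action)                 # the only feedback: good or bad
        counts[action] += 1
        values[action] += (r - values[action]) / counts[action]  # running average

    print(values)  # roughly [0.3, 0.7], learned from reward alone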

Questions Answered in This Episode

If AI is a “problem space,” what practical definition of intelligence is most useful for builders—skills, learning speed, or zero-shot problem solving?

What are the clearest examples where GOFAI-style search/planning should be combined with deep learning today (beyond toy demos)?

You say LLM reasoning via generating many candidates and searching is inefficient—what would an efficient System-2 architecture look like operationally (modules, memory, objectives)?

In JEPA, what exactly counts as “unpredictable details” that should be removed from representations, and how do you prevent removing important causal factors?

What benchmarks would convincingly show a model has learned a real “world model” from video rather than shortcut correlations?

Transcript Preview

Nikhil Kamath

[upbeat music] I thought we could use today to figure out, A, what is AI? How did we get here? What likely next is. [upbeat music] As an Indian twenty-year-old who wants to build a business in AI, a career in AI, what do we do?

Yann LeCun

Today?

Nikhil Kamath

Today.

Yann LeCun

Like, right now?

Nikhil Kamath

Yeah. [upbeat music] Hi, Yann. Good morning.

Yann LeCun

And you too.

Nikhil Kamath

Uh, thank you for doing this.

Yann LeCun

Pleasure.

Nikhil Kamath

The very first thing we like to do is get to know you a bit more, uh, how you came to be what you are today. Uh, could you tell us a little bit about where you were born, where you grew up, leading up to today?

Yann LeCun

So, I, I grew up near Paris, uh, in the suburbs. Um, my dad was an engineer, and I learned almost everything from him. [chuckles] Um, and, um, [clears throat] um, always was interested in, in science and technology since I was a little kid. And, [clears throat] and always saw myself as, uh, perhaps becoming an engineer. I had no idea how you became a scientist, uh, but I became interested in this afterwards.

Nikhil Kamath

What is the difference between an engineer and a scientist?

Yann LeCun

Well, um, it's very difficult to, [chuckles] to define, and, uh, very often you have to be a little bit of both.

Nikhil Kamath

Mm-hmm.

Yann LeCun

[clears throat] But, uh, um, scientists, you try to understand the world. Um, engineer, you try to create new things, and very often, if you want to understand the world, you need to create new things. The progress of science very much is linked with progress in technology that allows to collect data. You know, the invention of the telescope allowed the discovery of planets, and that planets are, um, rotating around the sun and things like this, right? The microscope opened the door to all kinds of things. So, um, so technology enables science, and for... The problem that really has been my obsession [chuckles] uh, for a long time, is, uh, discovering the mysteries of, uncovering the mysteries of intelligence. Um, and as, as an engineer, I think the, the only way to do this is to build a machine that is intelligent, right? So there's both an aspect of, a scientific aspect of understanding intelligence, what it is, um, at a theoretical level and more practical, uh, side of things. And then, um, of course, the consequences of building intelligent machines could be, could have, you know, could be really important for humanity.

Nikhil Kamath

And school in Paris, studying what?

Yann LeCun

So I, I studied electrical engineering.

Nikhil Kamath

Mm-hmm.

Yann LeCun

Um, but as I progressed in my studies, I became more and more interested in sort of, uh, more fundamental questions in mathematics, physics, and, and AI.

Nikhil Kamath

Mm-hmm.

Yann LeCun

Um, I, I did not study computer science. [chuckles]

Nikhil Kamath

Right.

Yann LeCun

Uh, of course, there is always computers involved when you studied electrical engineering, even in the 1980s, and sev-- late '70s, actually, when I started. Um, but, um, but I got to do a few independent projects with mathematics professors on, on the questions of AI and, and, and things like that, and, uh, I really got hooked into research. I, I was, uh, um... You know, my, my, uh, m- my, my favorite activity is to, to build new things, invent new things-
