
WTF is Artificial Intelligence Really? | Yann LeCun x Nikhil Kamath | People by WTF Ep #4

A lot of us have heard conjectures around A.I., edge cases on the positive and negative sides of A.I., and a lot of us are trying to predict what’s next. Most of my understanding of A.I. comes only from what is apparent today, with tools like ChatGPT becoming the go-to. In this episode of People by WTF, we uncover the basics of this mystery of artificial intelligence with one of the founding fathers of A.I., Yann LeCun. We spoke about popular AI myths and broke down complex concepts that can perhaps help the next generation of builders build in this space.

To learn more about machine learning, we’ve collated some resources for your benefit and growth in this industry: https://www.notion.so/Machine-Learning-Resource-Document-14ba9e22882a8022a878ee25a3738267?pvs=21

#NikhilKamath: Co-founder of Zerodha, True Beacon and Gruhas; host of #wtfiswithnikhilkamath
Twitter: https://x.com/nikhilkamathcio/
Instagram: https://www.instagram.com/nikhilkamathcio/
LinkedIn: https://www.linkedin.com/in/nikhilkamathcio?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app
Facebook: https://www.facebook.com/nikhilkamathcio/

#YannLeCun: Professor at NYU, VP & Chief AI Scientist at Meta
LinkedIn: https://www.linkedin.com/in/yann-lecun/
Instagram: https://www.instagram.com/yannlecun/
Facebook: https://www.facebook.com/yann.lecun
Twitter: https://x.com/ylecun

Timestamps:
00:00 Yann’s intro
01:50 Difference between an engineer and a scientist
03:15 Yann’s interest in AI and mathematics
04:05 Godfather of AI | Yann’s feelings about it
05:22 Teaching & fame at NYU
06:00 Heroes in science
07:46 Three problems with the world - Yann’s lens
10:18 What is AI and how did we get here?
13:13 What is intelligence? | The elephant analogy
15:00 AI - perception & understanding
16:20 The two branches of AI - solving & learning
17:30 Emergence of classical computer science | Heuristic programming
20:15 Is A.I. inspired by biology?
26:36 Is building authentic models for finance possible through AI?
28:36 Different parts of A.I. | GOFAI, machine learning
30:18 What is GOFAI?
31:14 Different types of machine learning
32:22 What is reinforcement learning?
33:19 What is self-supervised learning? | Up & coming
35:14 Is AI telling you what you want to hear?
38:00 What is a transformer?
40:24 What is the backpropagation algorithm?
42:58 What’s happening in the reinforcement learning space?
48:06 What is a convolutional neural network?
49:08 What is a neuron? | The machine learning perspective
50:00 What is a neural network language model & how does it work?
58:00 The AI tree | LLMs
59:55 The next challenge of AI
01:01:40 Pictures/videos - what’s happening there?
01:03:20 LLMs’ limited memory | Types of memory
01:04:45 AI’s path to human-like learning
01:10:26 What is JEPA?
01:11:58 How far into the future can you predict through JEPA?
01:14:10 AI’s future prediction - utopian or dystopian?
01:16:30 The LLM loop | What needs to change
01:18:50 Building data centers in India - Yann’s thoughts
01:21:09 What should a 25 y/o build in the AI space? | Careers in the AI space
01:26:18 What should an investor invest in? | Yann’s future prediction
01:29:13 What happens to human intelligence in an AI world?
01:32:09 What is intelligence? | Final verdict
01:34:00 The end or just the beginning?

Nikhil Kamath (host) · Yann LeCun (guest)
Nov 26, 2024 · 1h 36m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Yann LeCun explains AI’s roots, limits, and next architecture frontier

  1. Yann LeCun frames AI as a long-running quest to understand and build intelligence, arguing the field historically split between top-down reasoning/search (GOFAI) and bottom-up learning from data (machine learning/deep learning).
  2. He explains core modern tools—backpropagation, convolutional nets, transformers, and self-supervised learning—and why LLMs excel at language while still lacking robust reasoning, persistent memory, and physical-world understanding.
  3. LeCun argues the next leap requires systems that learn “world models” from video and support planning (System 2), not just token-by-token generation (System 1). He presents JEPA (Joint Embedding Predictive Architecture) as a path to predict in abstract representation space rather than pixel space.
  4. He closes with pragmatic advice: build on open-source foundation models (e.g., Llama), fine-tune for vertical use-cases (legal, accounting, enterprise knowledge, health, local-language assistants), invest in local compute/inference infrastructure, and expect open-source dominance within ~5 years.

IDEAS WORTH REMEMBERING

5 ideas

AI is a problem space, not a single technique.

LeCun emphasizes AI as the investigation of intelligence; different eras focused on different “parts of the elephant,” from reasoning/search to learning/perception. This framing helps avoid equating “AI” with today’s LLMs.

Two historical branches shaped AI: search/reasoning and learning/perception.

GOFAI treated intelligence as planning and rule-based inference (dominant until the 1990s), while neural-net learning pursued brain-inspired adaptation. Modern AI largely comes from the learning branch, but planning/search will return via world-models.
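To make the search-and-planning picture concrete, here is a small illustrative sketch (a toy example of my own, not code discussed in the episode) of GOFAI-style planning: finding a sequence of steps from a start state to a goal state by breadth-first search over a hand-written state graph.

```python
from collections import deque

# Planning as search, GOFAI-style: exhaustively explore a hand-coded state
# graph until a path to the goal is found. The graph is a made-up toy example.
graph = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
    "goal": [],
}

def plan(start, goal):
    frontier = deque([[start]])   # paths still to expand
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path           # breadth-first, so this is a shortest plan
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan("start", "goal"))      # ['start', 'b', 'goal']
```

No learning happens here: all knowledge lives in the hand-written rules (the graph), which is exactly the property that made this branch of AI brittle outside well-specified domains.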

Deep learning’s breakthrough was multilayer networks trained by backpropagation.

Single-layer perceptrons were too limited; stacking layers with nonlinearities enabled learning complex functions (e.g., real-world vision). Backprop remains the foundation of most practical AI systems.
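A minimal sketch of that idea, under assumptions of my own (layer sizes, learning rate, and the XOR task are illustrative choices, not from the episode): a two-layer network trained by backpropagation learns XOR, a function no single-layer perceptron can represent.

```python
import numpy as np

# XOR is not linearly separable, so a single-layer perceptron cannot learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: two layers with a nonlinearity in between.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the error gradient layer by layer.
    grad_out = (p - y) / len(X)              # cross-entropy + sigmoid output
    dW2 = h.T @ grad_out
    db2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * h * (1 - h)   # chain rule through the hidden sigmoid
    dW1 = X.T @ grad_h
    db1 = grad_h.sum(axis=0)

    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# Typically approaches [[0], [1], [1], [0]] after training.
```

The same forward/backward/update loop, scaled up in depth, data, and compute, is the workhorse behind the systems discussed in the episode.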

Architectures matter because they bake in “biases” that reduce data needs.

ConvNets exploit translation structure in images/audio (nearby pixels correlate), while transformers handle sets/sequences via permutation-equivariant blocks. Matching architecture to data structure improves sample efficiency and performance.
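Here is a small self-contained check (my own illustration, not from the transcript) of the translation structure ConvNets exploit: shifting a 1-D signal shifts the convolution’s output by the same amount, because the same filter weights are shared across every position.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=32)      # a toy 1-D "image row" or audio snippet
kernel = rng.normal(size=5)       # one filter, shared across all positions

def conv1d(x, k):
    # Correlation with circular padding, so shifts wrap around cleanly.
    n, half = len(x), len(k) // 2
    return np.array([
        sum(k[j] * x[(i + j - half) % n] for j in range(len(k)))
        for i in range(n)
    ])

shift = 7
lhs = conv1d(np.roll(signal, shift), kernel)     # convolve the shifted input
rhs = np.roll(conv1d(signal, kernel), shift)     # shift the convolved output
print(np.allclose(lhs, rhs))                     # True: translation equivariance
```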

LLMs are powerful language manipulators but weak world-modelers.

Because autoregressive LLMs operate in discrete token spaces, they can learn linguistic/statistical regularities and retrieve knowledge, yet still make “stupid mistakes” about physics/causality. LeCun argues scaling alone won’t yield human-level intelligence.
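As a toy illustration of autoregressive, token-by-token generation (my own sketch, not anything from the episode), the character-level bigram model below learns only which symbol tends to follow which, then samples one token at a time conditioned on the previous one. It picks up surface statistics of the text while modeling nothing about the world the text describes.

```python
import numpy as np

text = "the cat sat on the mat. the cat ate. the mat sat."
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

# "Training": count how often each character follows each character.
counts = np.ones((len(chars), len(chars)))           # +1 smoothing
for a, b in zip(text, text[1:]):
    counts[idx[a], idx[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# Autoregressive generation: predict the next token from the previous one,
# append it, repeat. LLMs do the same thing with far longer contexts and
# learned neural predictors instead of a count table.
rng = np.random.default_rng(0)
token = idx["t"]
out = ["t"]
for _ in range(60):
    token = rng.choice(len(chars), p=probs[token])
    out.append(chars[token])
print("".join(out))
```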

WORDS WORTH SAVING

5 quotes

AI is more of a problem than a solution.

Yann LeCun

LLMs are not the path to human-level intelligence.

Yann LeCun

The smartest LLMs are not as smart as your house cat.

Yann LeCun

Reinforcement learning is a situation where you don't tell the system what the correct answer is, you just tell it whether the answer it produced was good or bad.

Yann LeCun
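To ground that definition, here is a small illustrative sketch (my own, with made-up reward probabilities): a bandit learner never sees which action was correct, only a scalar reward for the action it took, and it improves from that signal alone.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hidden reward probabilities of 3 actions
q = np.zeros(3)                          # the agent's running value estimates
n = np.zeros(3)

for t in range(2000):
    # Epsilon-greedy: mostly exploit the current best guess, sometimes explore.
    a = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(q))
    # The environment never reveals the "correct" action; it only returns a
    # good/bad signal (a reward) for the action actually taken.
    r = float(rng.random() < true_means[a])
    n[a] += 1
    q[a] += (r - q[a]) / n[a]            # incremental average of observed rewards

print(np.round(q, 2))  # estimates roughly track [0.2, 0.5, 0.8]; best arm is the third
```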

Instead of predicting pixels, we predict abstract representations of those pixels, where all the things that are basically unpredictable have been eliminated.

Yann LeCun
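A schematic of that idea, as a rough sketch under simplifying assumptions of my own (toy data, tiny fully connected encoders) rather than Meta’s actual JEPA code: encode both context and target, and train a predictor to match the target’s abstract representation instead of reconstructing its raw pixels.

```python
import torch
import torch.nn as nn

dim, latent = 64, 16
encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
target_encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
predictor = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, latent))
target_encoder.load_state_dict(encoder.state_dict())   # start as a copy

opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

for step in range(100):
    x = torch.randn(32, dim)                            # stand-in for observed frames
    y = x.roll(1, dims=1) + 0.1 * torch.randn(32, dim)  # "future" = transformed x + noise

    with torch.no_grad():                 # target representation: no gradient flows here
        sy = target_encoder(y)
    pred = predictor(encoder(x))          # predict in representation space...
    loss = ((pred - sy) ** 2).mean()      # ...not in pixel space

    opt.zero_grad()
    loss.backward()
    opt.step()

    # Slowly move the target encoder toward the online encoder (EMA),
    # a common trick to keep the representations from collapsing.
    with torch.no_grad():
        for p_t, p_o in zip(target_encoder.parameters(), encoder.parameters()):
            p_t.mul_(0.99).add_(0.01 * p_o)
```

The loss is computed between latent vectors, so details that are inherently unpredictable in pixel space never need to be generated, which is the point of the quote above.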

Topics covered:
Engineer vs scientist; building to understand
AI as elephant: multiple facets of intelligence
GOFAI: logic, rules, search, planning
Machine learning types: supervised, reinforcement, self-supervised
Backpropagation and multilayer neural networks
ConvNets vs Transformers (inductive biases/equivariance)
LLMs: next-token prediction, strengths and limitations
Memory in AI: parameters vs context vs external memory
World models, planning, Kahneman System 1 vs System 2
JEPA and learning from video
Open-source platforms, distributed training, sovereign AI
India-focused opportunities: data centers, inference cost, vertical apps

