Nikhil Kamath

WTF is Artificial Intelligence Really? | Yann LeCun x Nikhil Kamath | People by WTF Ep #4

A lot of us have heard conjectures about AI, edge cases on both its positive and negative sides, and many of us are trying to predict what's next. Most of my understanding of AI comes only from what is apparent today: systems like ChatGPT becoming a go-to tool. In this episode of People by WTF, we uncover the basics of the mystery of artificial intelligence with one of the founding fathers of AI, Yann LeCun. We spoke about popular AI myths and broke down complex concepts that can perhaps help the next generation of builders build in this space.

To learn more about machine learning, we've collated some resources for your benefit and growth in this industry: https://www.notion.so/Machine-Learning-Resource-Document-14ba9e22882a8022a878ee25a3738267?pvs=21

#NikhilKamath
Co-founder of Zerodha, True Beacon and Gruhas; host of #wtfiswithnikhilkamath
Twitter: https://x.com/nikhilkamathcio/
Instagram: https://www.instagram.com/nikhilkamathcio/
LinkedIn: https://www.linkedin.com/in/nikhilkamathcio?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app
Facebook: https://www.facebook.com/nikhilkamathcio/

#YannLeCun
Professor at NYU, VP & Chief AI Scientist at Meta
LinkedIn: https://www.linkedin.com/in/yann-lecun/
Instagram: https://www.instagram.com/yannlecun/
Facebook: https://www.facebook.com/yann.lecun
Twitter: https://x.com/ylecun

**Timestamps:**
00:00 Yann's intro
01:50 Difference between an engineer and a scientist
03:15 Yann's interest in AI and mathematics
04:05 Godfather of AI | Yann's feelings about it
05:22 Teaching & fame at NYU
06:00 Heroes in science
07:46 Three problems with the world | Yann's lens
10:18 What is AI and how did we get here?
13:13 What is intelligence? | The elephant analogy
15:00 AI | Perception & understanding
16:20 The two branches of AI | Solving & learning
17:30 Emergence of classical computer science | Heuristic programming
20:15 Is AI inspired by biology?
26:36 Is building authentic models for finance possible through AI?
28:36 Different parts of AI | GOFAI, machine learning
30:18 What is GOFAI?
31:14 Different types of machine learning
32:22 What is reinforcement learning?
33:19 What is self-supervised learning? | Up and coming
35:14 Is AI telling you what you want to hear?
38:00 What is a transformer?
40:24 What is the backpropagation algorithm?
42:58 What's happening in the reinforcement learning space?
48:06 What is a convolutional neural network?
49:08 What is a neuron? | The machine learning perspective
50:00 What is a neural network language model & how does it work?
58:00 The AI tree | LLMs
59:55 The next challenge of AI
01:01:40 Pictures/videos | What's happening there?
01:03:20 LLMs' limited memory | Types of memory
01:04:45 AI's path to human-like learning
01:10:26 What is JEPA?
01:11:58 How far into the future can you predict through JEPA?
01:14:10 AI's future prediction | Utopian or dystopian?
01:16:30 The LLM loop | What needs to change
01:18:50 Building data centers in India | Yann's thoughts
01:21:09 What should a 25 y/o build in the AI space? | Careers in AI
01:26:18 What should an investor invest in? | Yann's future prediction
01:29:13 What happens to human intelligence in an AI world?
01:32:09 What is intelligence? | Final verdict
01:34:00 The end or just the beginning?

Nikhil Kamath (host) · Yann LeCun (guest)
Nov 27, 2024 · 1h 36m

CHAPTERS

  1. Yann LeCun’s origin story: engineering roots and a lifelong obsession with intelligence

    LeCun shares his upbringing near Paris, the influence of his engineer father, and how early interests in science and technology led him toward AI. He frames intelligence as a mystery best studied by both building systems (engineering) and understanding principles (science).

  2. Engineer vs scientist: creating tools to understand the world

    LeCun discusses how science and engineering intertwine: scientists aim to understand reality, engineers aim to create, and progress often requires both. He uses examples like telescopes/microscopes enabling new scientific discovery to show how technology drives knowledge.

  3. Fame, “godfather” labels, and who gets credit in science

    He rejects the “godfather of AI” framing, emphasizing that science advances through communities and collisions of ideas. LeCun also reflects on academic visibility, teaching at NYU, and the role of public engagement in shaping scientific celebrity.

  4. Three problems with the world: knowledge gaps, irrationality, and coordination failures

    LeCun argues many global problems stem from insufficient knowledge and weak mental models—people making poor decisions and failing to coordinate. He connects this directly to AI’s promise: amplifying human intelligence to make better decisions and solve complex issues.

  5. What AI is (and why it’s like the blind men and the elephant)

    LeCun reframes ‘What is AI?’ as inseparable from ‘What is intelligence?’ using the elephant analogy: intelligence has many facets, and AI historically focused on narrow slices. He introduces early AI’s emphasis on reasoning/search as only one piece of the broader picture.

  6. Two branches of AI emerge: symbolic search vs learning from data

    The conversation contrasts the dominant ‘search/logic’ tradition with the alternative ‘learning’ tradition inspired by biology. LeCun positions learning as essential for perception (vision/audio) and introduces how these competing traditions shaped AI’s trajectory.

  7. GOFAI and heuristic programming: rules, search trees, and expert systems

    LeCun explains classical AI as manually programmed systems that use rules and heuristics to search huge spaces efficiently. He notes the explosion of possibilities in domains like chess and how expert systems and logic-based inference dominated parts of the 1980s.
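
The heuristic search described here, using a score to decide which branch of an enormous tree to expand next, can be sketched in a few lines. This is a generic greedy best-first search on an invented toy domain, not anything specific from the episode:

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """Greedy best-first search: always expand the state the heuristic
    scores closest to the goal, so only a sliver of the huge space of
    possibilities is ever examined."""
    frontier = [(heuristic(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None  # goal unreachable

# Toy domain: reach 7 from 0 using the moves +1, -1, or *2.
path = best_first_search(
    0, 7,
    neighbors=lambda s: [s + 1, s - 1, s * 2],
    heuristic=lambda s: abs(7 - s),
)
```

Here the heuristic is an exact distance; in chess-sized spaces it must be a hand-crafted estimate, which is part of what made such systems labor-intensive to build.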

  8. Neural networks begin: perceptrons, supervised learning, and why they stalled

    LeCun walks through the perceptron (1957) as a simple trainable classifier using weighted sums and thresholding. He explains supervised learning as iterative parameter adjustment, why perceptrons were too limited for complex vision, and how criticism (e.g., Minsky/Papert) slowed the field.
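
The perceptron's learning rule, a weighted sum plus threshold with a small correction after each mistake, fits in a few lines. A minimal sketch, with the OR task and all names invented for illustration:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron: predict 1 if w.x + b > 0, else 0,
    and on each mistake nudge the weights toward the correct answer."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:  # y is the supervised target, 0 or 1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# A linearly separable toy task (logical OR) that a perceptron can solve;
# XOR-like tasks are exactly the kind it provably cannot.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The whole of supervised learning is already visible here: compare the prediction to a known target, then adjust parameters to shrink the error.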

  9. AI’s modern taxonomy: AI → machine learning → deep learning (GOFAI still exists)

    LeCun organizes the field: AI is the problem space; GOFAI is rule/search-based; machine learning learns from data; deep learning is multilayer neural nets that fueled the last decade’s breakthroughs. He also situates major application areas like vision, speech, and language under these methods.

  10. Types of machine learning: supervised, reinforcement, and self-supervised (why SSL dominates now)

    LeCun distinguishes supervised learning (known targets), reinforcement learning (good/bad feedback), and self-supervised learning (predict missing parts of the input). He argues self-supervised learning is the key ingredient behind today’s chatbots and language understanding systems.
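
The self-supervised recipe, predicting a missing part of the input from the rest, needs no human labels at all: the hidden word is its own target. A deliberately simple count-based sketch (corpus and names invented for illustration; real systems use neural networks, not lookup tables):

```python
from collections import Counter, defaultdict

def train_fill_in_the_blank(sentences):
    """Self-supervised training data is free: for every position, the
    'label' is simply the word that was there. The 'model' here is just
    a table mapping (left word, right word) -> counts of middle words."""
    table = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for i in range(1, len(words) - 1):
            table[(words[i - 1], words[i + 1])][words[i]] += 1
    return table

def predict_blank(table, left, right):
    counts = table.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

corpus = ["the cat sat on the mat",
          "the dog sat on the rug",
          "a cat sat on a chair"]
model = train_fill_in_the_blank(corpus)
# predict_blank(model, "cat", "on") fills the blank in "cat ___ on"
```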

  11. Inside today’s deep learning: backprop, CNNs, transformers, neurons, and language models

    LeCun provides a guided tour of core mechanisms: backpropagation enabling multilayer learning, CNNs leveraging natural signal structure (translation equivariance), and transformers using attention over tokens (permutation equivariance). He then explains language models from Shannon’s n-grams to modern neural LMs trained on internet-scale text.
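
Backpropagation itself is just the chain rule applied from the output backward through the layers. A hand-derived minimal sketch for an invented two-hidden-unit sigmoid network (the architecture, weight names, and learning rate are all illustrative assumptions, not the networks discussed in the episode):

```python
import math

def backprop_step(w, x, y, lr=0.5):
    """One gradient step on a tiny 2-input, 2-hidden-unit, 1-output
    sigmoid network. The forward pass caches activations; the backward
    pass applies the chain rule layer by layer (backpropagation)."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Forward pass.
    h = [sig(w[f"w{i}0"] * x[0] + w[f"w{i}1"] * x[1] + w[f"b{i}"]) for i in (0, 1)]
    out = sig(w["v0"] * h[0] + w["v1"] * h[1] + w["c"])
    loss = (out - y) ** 2
    # Backward pass: chain rule, reusing the cached activations.
    d_out = 2 * (out - y) * out * (1 - out)  # dloss/d(output pre-activation)
    for i in (0, 1):
        d_h = d_out * w[f"v{i}"] * h[i] * (1 - h[i])  # uses v before its update
        w[f"v{i}"] -= lr * d_out * h[i]
        w[f"w{i}0"] -= lr * d_h * x[0]
        w[f"w{i}1"] -= lr * d_h * x[1]
        w[f"b{i}"] -= lr * d_h
    w["c"] -= lr * d_out
    return loss

# Repeated steps on one example drive its loss toward zero.
w = {k: 0.1 for k in ("w00", "w01", "b0", "w10", "w11", "b1", "v0", "v1", "c")}
for _ in range(1000):
    loss = backprop_step(w, (1.0, 0.0), 1.0)
```

Modern frameworks compute these derivatives automatically, but the mechanics are the same: forward to get activations, backward to get gradients, then a small step downhill.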

  12. Why LLMs hit a ceiling: discrete text, weak world understanding, and limited memory

    LeCun argues LLMs excel at language manipulation but don’t truly understand the physical world because they operate in discrete token spaces. He outlines their memory limitations (weights + context window) and explains why scaling LLMs alone won’t yield human-level intelligence or robust real-world robotics.

  13. The next frontier: learning world models from video + planning (JEPA and ‘system two’ AI)

    LeCun describes the goal of self-supervised learning from video to build predictive world models for planning and reasoning. He introduces JEPA (Joint Embedding Predictive Architecture) as predicting in an abstract representation space rather than pixel space, connecting this to hierarchical prediction and Kahneman’s system-one vs system-two thinking.
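
The difference between predicting in pixel space and predicting in an abstract representation space can be caricatured in a toy simulation. Here a "frame" is one predictable state plus unpredictable noise pixels; everything below (the frame format, the encoder, the motion model) is an invented illustration of the JEPA idea, not JEPA itself:

```python
import random

def make_frame(pos, rng):
    """A toy 'video frame': one predictable value (object position)
    plus pixels of pure noise that no model could ever predict."""
    return [pos] + [rng.random() for _ in range(8)]

def encoder(frame):
    """Abstract representation: keep the predictable state, drop the noise."""
    return frame[0]

rng = random.Random(0)
frames = [make_frame(t * 1.0, rng) for t in range(10)]  # object moves +1 per step

# Pixel-space prediction: even a perfect motion model accumulates error,
# because it is scored on the unpredictable noise pixels too.
pixel_err = sum(
    sum((a - b) ** 2 for a, b in zip(make_frame(encoder(f) + 1.0, rng), nxt))
    for f, nxt in zip(frames, frames[1:])
)

# JEPA-style: predict the *representation* of the next frame instead.
repr_err = sum(
    (encoder(f) + 1.0 - encoder(nxt)) ** 2
    for f, nxt in zip(frames, frames[1:])
)
# repr_err is exactly 0, while pixel_err stays positive.
```

The toy maps onto the argument in this chapter: predicting every pixel forces a model to waste capacity on irrelevant detail, while predicting in a learned abstract space lets it focus on what is actually predictable.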

  14. Practical outlook: data pipelines, sovereign compute, open-source platforms, and what to build in India

    LeCun discusses what changes in the ‘LLM loop’ (data quality, filtering, fine-tuning) and argues for broader, less English-centric datasets representing diverse languages and cultures. He supports local compute/data centers for training and low-cost inference, and advises entrepreneurs to fine-tune open-source foundation models for vertical applications.

  15. Society and intelligence in an AI world: jobs shift upward, and intelligence is redefined

    LeCun predicts AI will shift human work toward higher-level decision-making—more ‘managing’ and defining goals than executing tasks. He closes by defining intelligence as a blend of accumulated skills, fast learning, and zero-shot problem solving, framing AI as an amplifier rather than an endpoint.
