
Why Scale Will Not Solve AGI | Vishal Misra - The a16z Show

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect.

Timestamps:
00:00 — Introduction
02:58 — LLM as Giant Matrix
08:24 — What Is In-Context Learning
13:00 — Bayesian Updating as Evidence
19:13 — Bayesian Wind Tunnel Tests
27:22 — Brains Simulate Causality
36:34 — Manifolds and New Representations
42:17 — Simulation as Short Program

Read the full transcript here: https://www.a16z.news/s/podcast

Resources:
Follow Vishal Misra on X: https://x.com/vishalmisra
Follow Martin Casado on X: https://x.com/martin_casado

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Show on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Show on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

Vishal Misra (guest), Erik Torenberg (host)
Mar 17, 2026 · 46m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
March 17, 2026
Duration
46m
Channel
a16z
Watch on YouTube ↗


SPEAKERS

  • Vishal Misra

    guest

    Computer science professor (networking) and researcher, associated with Columbia University.

  • Erik Torenberg

    host

    Podcast host at a16z who interviews founders and researchers on technology and AI.

EPISODE SUMMARY

In this episode of the a16z Show, Vishal Misra and Erik Torenberg explore the argument that LLMs perform Bayesian inference but lack the plasticity and causal reasoning required for AGI. Misra models an LLM as an astronomically large, sparse matrix mapping every possible prompt to a probability distribution over next tokens; training learns a compressed approximation of this matrix.
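The matrix-plus-Bayesian framing in the summary can be illustrated with a toy sketch. Everything below (the hypothesis "rows", the tokens, and the prior weights) is invented for illustration and is not from Misra's paper or the episode; it only shows the mechanism being described: a posterior over candidate rows of the prompt-to-distribution matrix sharpening as context tokens arrive, which is the Bayesian-updating view of in-context learning.

```python
from collections import defaultdict

# Hypotheses: each is a candidate next-token distribution, i.e. one "row"
# of the giant prompt -> distribution matrix. Values are made up.
hypotheses = {
    "english": {"the": 0.5, "le": 0.1, "der": 0.1, "cat": 0.3},
    "french":  {"the": 0.1, "le": 0.6, "der": 0.1, "cat": 0.2},
    "german":  {"the": 0.1, "le": 0.1, "der": 0.6, "cat": 0.2},
}
# Prior over hypotheses, standing in for what pretraining learned.
prior = {"english": 0.6, "french": 0.2, "german": 0.2}

def bayes_update(belief, observed_token):
    """Posterior over hypotheses after observing one token in context."""
    unnorm = {h: belief[h] * hypotheses[h].get(observed_token, 1e-9)
              for h in belief}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def predictive(belief):
    """Next-token distribution: mixture of rows, weighted by belief."""
    out = defaultdict(float)
    for h, w in belief.items():
        for tok, p in hypotheses[h].items():
            out[tok] += w * p
    return dict(out)

# As French-looking tokens appear in the prompt, belief shifts toward
# the "french" row and the predictive distribution follows.
post = prior
for tok in ["le", "le"]:
    post = bayes_update(post, tok)
```

After two observations of "le", the posterior concentrates on "french" and `predictive(post)` favors "le" over "the", despite the English-leaning prior. The sketch compresses the key claim: no weights change, yet predictions update in a mathematically predictable (Bayesian) way as context accumulates.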

RELATED EPISODES

  • The Golden Age Thesis | Marc Andreessen on MTS

  • The Investor Behind Costco, Starbucks, and Blackstone | Tony James on The a16z Show

  • Digital Freedom, AI Regulation, and the Fight for the Western Internet | The a16z Show

  • Crypto Experts Explain Stablecoins & the Future Financial System w/ Ali Yahya & Arianna Simpson

  • Emil Michael: The Department of War Is Moving Faster Than Silicon Valley on AI | The a16z Show

  • Inside The $100M Bet on the Future of Space | Northwood CEO on a16z
