Episode Details
EPISODE INFO
- Released
- March 17, 2026
- Duration
- 46m
- Channel
- a16z
- Watch on YouTube
- ▶ Open ↗
EPISODE DESCRIPTION
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect.

Timestamps:
- 00:00 — Introduction
- 02:58 — LLM as Giant Matrix
- 08:24 — What Is In-Context Learning
- 13:00 — Bayesian Updating as Evidence
- 19:13 — Bayesian Wind Tunnel Tests
- 27:22 — Brains Simulate Causality
- 36:34 — Manifolds and New Representations
- 42:17 — Simulation as Short Program

Read the full transcript here: https://www.a16z.news/s/podcast

Resources:
- Follow Vishal Misra on X: https://x.com/vishalmisra
- Follow Martin Casado on X: https://x.com/martin_casado

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
- Find a16z on X: https://twitter.com/a16z
- Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
- Listen to the a16z Show on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
- Listen to the a16z Show on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
- Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
SPEAKERS
Vishal Misra
Guest — Computer science professor (networking) and researcher, associated with Columbia University.
Erik Torenberg
Host — Podcast host at a16z who interviews founders and researchers on technology and AI.
EPISODE SUMMARY
In this episode of the a16z Show, "Why Scale Will Not Solve AGI," Vishal Misra joins Erik Torenberg to argue that LLMs perform Bayesian inference but lack the plasticity and causal understanding required for AGI. Misra models an LLM as an astronomically large, sparse matrix mapping every possible prompt to a probability distribution over next tokens, with training learning a compressed approximation of this matrix.
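Misra's two ideas above can be illustrated with a minimal sketch (all prompts, tokens, and numbers here are hypothetical examples, not from the episode): each prompt indexes a row of a huge sparse matrix whose entries are next-token probabilities, and in-context evidence shifts beliefs by Bayes' rule.

```python
# Toy "LLM as matrix": rows are prompts, columns are next tokens,
# entries are probabilities. A real model stores only a compressed
# approximation of this astronomically large, sparse object.
matrix = {
    "the cat sat on the": {"mat": 0.7, "chair": 0.2, "roof": 0.1},
    "the cat sat on the mat": {".": 0.9, "and": 0.1},
}

def next_token_distribution(prompt):
    """Look up the row for a prompt; missing rows are the sparsity."""
    return matrix.get(prompt, {})

def bayes_update(prior, likelihoods):
    """Posterior(h) is proportional to prior(h) * likelihood(obs | h)."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Bayesian updating as new in-context evidence arrives: start with two
# equally likely hypotheses, then observe "heads" (probability 0.5
# under a fair coin, 0.9 under a biased one).
prior = {"fair_coin": 0.5, "biased_coin": 0.5}
posterior = bayes_update(prior, {"fair_coin": 0.5, "biased_coin": 0.9})

print(next_token_distribution("the cat sat on the"))
print(posterior)  # the biased-coin hypothesis is now more probable
```

The dictionary lookup stands in for the prompt-to-distribution mapping, and `bayes_update` stands in for the kind of predictable belief shift the episode's "Bayesian wind tunnel" experiments probe.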