Lex Fridman Podcast

Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11

Lex Fridman and Jürgen Schmidhuber on self-improving AI, curiosity, and universal intelligence.

Lex Fridman (host) · Jürgen Schmidhuber (guest)
Dec 23, 2018 · 1h 19m · Watch on YouTube ↗
Meta-learning, recursive self-improvement, and Gödel machines
Universal problem solving, asymptotic optimality, and constant overhead limits
True meta-learning vs. modern transfer learning in deep networks
Curiosity, creativity, intrinsic motivation, and the “power play” framework
Compression, simplicity, and the history of science as compression progress
LSTMs, recurrent neural networks, temporal depth, and model-based RL
Societal and cosmic implications of advanced AI and AGI

In this episode of the Lex Fridman Podcast, Lex Fridman and Jürgen Schmidhuber discuss foundational ideas for building truly general, self-improving AI systems. Schmidhuber contrasts narrow, practical deep learning (like LSTMs and transfer learning) with his theoretical work on meta-learning, Gödel machines, and asymptotically optimal universal problem solvers. They explore curiosity, creativity, and compression as core principles of both human and machine intelligence, arguing that intelligence and even consciousness emerge as side effects of efficient problem solving and data compression. The conversation extends to evolution, the nature of physical reality, future robot learning, economic impact, and the possibility that humanity may be the first civilization poised to fill the universe with intelligence.

At a glance

WHAT IT’S REALLY ABOUT

Jürgen Schmidhuber on self-improving AI, curiosity, and universal intelligence


IDEAS WORTH REMEMBERING

7 ideas

True meta-learning requires systems that can inspect and modify their own learning algorithms.

Unlike standard transfer learning, Schmidhuber’s notion of meta-learning opens the space of possible learning algorithms to the system itself, allowing recursive self-improvement where the AI learns not just tasks, but how to improve its own way of learning.

Theoretically optimal universal problem solvers exist, but their constant overhead makes them impractical for everyday tasks.

Methods like the Gödel machine and Marcus Hutter’s fastest problem solver are asymptotically optimal up to an additive constant, which becomes negligible for huge problems but is prohibitive for the smaller-scale problems humans typically care about, hence the dominance of heuristic methods like gradient descent.
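The shape of the bound Schmidhuber alludes to can be stated concretely. Hutter’s 2002 result (the “fastest problem solver” mentioned above) guarantees, for any formally well-defined problem instance x and any provably correct program p solving it in time t_p(x), a runtime of roughly:

```latex
% Hutter's speed bound: the runtime of his search procedure on
% instance x satisfies
T_{\mathrm{Hsearch}}(x) \;\le\; 5\, t_p(x) + d_p
% where the additive constant d_p is independent of x but can be
% astronomically large, since it absorbs the cost of finding and
% verifying a proof that p is correct and fast.
```

The instance-independent but enormous d_p is exactly the “constant overhead” discussed here: negligible in the asymptotic limit, prohibitive for the problem sizes people actually face, which is why heuristics like gradient descent dominate in practice.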

Intelligence and scientific progress can be seen as improvements in data compression.

From Kepler’s ellipses to Newton and Einstein, better theories compress more observational data into shorter descriptions; Schmidhuber formalizes this as “compression progress” and ties it directly to the experience of insight, beauty, and scientific fun.
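This “compression progress” idea can be made tangible with a toy sketch (not Schmidhuber’s actual formalism, which uses algorithmic description length): here, zlib-compressed size stands in for description length, and discovering the hidden law of a data stream lets us store only the residuals, which compress far better than the raw observations.

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a crude, computable proxy for description length.
    return len(zlib.compress(data, level=9))

# A "world" with a simple hidden regularity: y_t = 3*t mod 251.
observations = bytes((3 * t) % 251 for t in range(2000))

# Theory 0: no model -- describe the raw observations directly.
raw_cost = description_length(observations)

# Theory 1: a predictive law (y_t = y_{t-1} + 3 mod 251) -- describe
# only the residuals, which are all zero once the law is known.
residuals = bytes(
    (observations[t] - (observations[t - 1] + 3)) % 251 for t in range(1, 2000)
)
model_cost = description_length(residuals)

# Compression progress = reduction in description length: the
# "insight" that Schmidhuber ties to beauty and scientific fun.
progress = raw_cost - model_cost
```

In this spirit, Kepler’s ellipses and Newton’s laws are better “residual encoders” for astronomical data than the raw tables they replaced.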

Curiosity and creativity can be formalized as intrinsic rewards for compression progress and for solving new, self-invented problems.

In his “artificial curiosity” and “power play” frameworks, agents are rewarded for discoveries that reduce description length or for inventing and solving the simplest problems just beyond their current competence, mirroring how human scientists generate and tackle their own questions.
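A minimal sketch of this intrinsic-reward scheme (a deliberately simplified stand-in for the artificial-curiosity framework, not its published form): the agent’s curiosity reward at each step is its learning progress, i.e. how much a model update reduced prediction error on the world. Rewards are large while there is something left to learn and shrink toward zero as the world becomes predictable, which is the formal analogue of boredom.

```python
# Hypothetical toy world: y = 0.5 * x, learned by a linear predictor
# trained with plain gradient descent on mean squared error.

def prediction_error(w: float, data) -> float:
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

data = [(x, 0.5 * x) for x in range(1, 11)]

w = 0.0
rewards = []
for step in range(20):
    err_before = prediction_error(w, data)
    # One gradient step on the mean squared error.
    grad = sum(-2 * x * (y - w * x) for x, y in data) / len(data)
    w -= 0.001 * grad
    err_after = prediction_error(w, data)
    # Intrinsic reward = learning progress (reduction in error).
    rewards.append(err_before - err_after)
```

Early steps yield large rewards (much to learn); later steps yield vanishing ones, pushing a curious agent on to new, still-compressible problems.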

Consciousness may emerge as a byproduct of self-modeling for prediction and control.

A recurrent network that learns to predict the world efficiently will also learn compact internal models of the agent itself (since the agent appears in all its data); when a controller uses this model to mentally simulate consequences of its own actions, the resulting self-referential processing closely resembles what we call consciousness.

Simple algorithms likely underlie both AGI and the physical universe.

Schmidhuber argues that the most powerful problem solvers and physical laws tend to have very short descriptions, suggesting that both AGI’s core code and even quantum phenomena might be generated by compact, deterministic programs rather than true randomness.

The next major AI wave will be model-based, curiosity-driven reinforcement learning in real robots, not just passive pattern recognition.

He foresees robots that learn like children—building their own world models from raw sensory data, using RL and intrinsic curiosity to explore, and learning skills (like assembly) from high-level human demonstration—transforming traditional industries and the nature of work.

WORDS WORTH SAVING

5 quotes

All of science is a history of compression progress.

Jürgen Schmidhuber

True meta-learning is about having the learning algorithm itself open to introspection and modification.

Jürgen Schmidhuber

We never have a program called creativity. It’s just a side effect of what our problem solvers do.

Jürgen Schmidhuber

It would be awful and ugly if the universe needed an almost infinite number of extra bits to describe all these random events.

Jürgen Schmidhuber

I’d be surprised if we humans were the last step in the evolution of the universe.

Jürgen Schmidhuber

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

How could we practically implement self-modifying learning algorithms while keeping them safe and verifiable?


What metrics best capture “compression progress” or “depth of insight” in real-world AI systems?

How far can model-based reinforcement learning and curiosity alone take us toward human-level general intelligence?

If consciousness is just a byproduct of self-modeling, does that change how we should morally treat advanced AI systems?

What economic and educational changes are needed to adapt to a world where robots learn most skills through self-exploration and imitation?
