Leslie Kaelbling: Reinforcement Learning, Planning, and Robotics | Lex Fridman Podcast #15

Lex Fridman Podcast | Mar 12, 2019 | 1h 1m

Lex Fridman (host), Leslie Kaelbling (guest), Narrator

Path from philosophy to AI and robotics; relevance of logic and formal semantics
Symbolic reasoning, expert systems, and the historical cycles of AI and ML
Abstraction, hierarchy, and planning under uncertainty (MDPs, POMDPs, belief space)
Model-based vs model-free reinforcement learning and the role of optimality
Perception, representation, and structural biases (e.g., convolution, objects, graphs)
Human-level intelligence, modularity, self-awareness, and value alignment
Scientific publishing, open access (JMLR), and incentives in contemporary ML research

In this episode of the Lex Fridman Podcast, Lex Fridman speaks with Leslie Kaelbling about uncertainty, abstraction, and what it would take to build truly intelligent robots.

Leslie Kaelbling on uncertainty, abstraction, and truly intelligent robots

Leslie Kaelbling discusses her path from philosophy to AI and robotics, arguing that logic, formal semantics, and materialism naturally underpin her view that human-level robot behavior is a purely technical challenge. She traces the cyclical history of AI, contrasting expert systems, symbolic reasoning, and modern learning-based approaches, emphasizing the central role of abstraction, uncertainty, and hierarchical planning. A major theme is planning and acting under partial observability via POMDPs and belief-space planning, and how robots must reason about both the world and their own uncertainty. She also critiques current research culture and publishing, champions open access and deeper theory, and stresses the need to design good objectives and structural biases rather than rely on monolithic end-to-end learning.

Key Takeaways

Abstraction and hierarchy are indispensable for real-world planning.

Humans and robots cannot plan at the level of raw sensory data and torques for long-horizon tasks; spatial, temporal, and goal abstractions (e.g., ...

Planning under uncertainty requires reasoning in belief space, not just state space.

In partially observable settings, agents must control their beliefs—probability distributions over possible world states—so they can decide when to gather information, how to trade off sensing vs acting, and when uncertainty is too high to safely proceed.
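
A minimal sketch of such a belief update, using an invented two-state "door" world (the models and numbers here are illustrative, not from the episode). The belief is a probability distribution over hidden states; each action/observation pair first predicts through the transition model, then corrects with the observation likelihood:

```python
# Discrete Bayes filter: maintain a belief (distribution over hidden
# states) and update it after each action/observation pair.

def update_belief(belief, action, observation, transition, sensor):
    """Predict with the transition model, correct with the observation
    likelihood, then renormalize."""
    states = list(belief)
    # Predict: where could we be after taking `action`?
    predicted = {
        s2: sum(belief[s1] * transition(s1, action, s2) for s1 in states)
        for s2 in states
    }
    # Correct: weight each state by how likely `observation` is there.
    unnormalized = {s: predicted[s] * sensor(s, observation) for s in states}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Toy world: a door is "open" or "closed"; pushing opens a closed door
# 80% of the time, and an open door stays open.
def transition(s1, action, s2):
    if action == "push":
        if s1 == "closed":
            return 0.8 if s2 == "open" else 0.2
        return 1.0 if s2 == "open" else 0.0
    return 1.0 if s1 == s2 else 0.0  # "wait" changes nothing

# A noisy sensor reports the true door state 90% of the time.
def sensor(state, obs):
    return 0.9 if obs == state else 0.1

belief = {"open": 0.5, "closed": 0.5}
belief = update_belief(belief, "push", "open", transition, sensor)
# The agent is now highly confident the door is open, and could decide
# whether that confidence is high enough to proceed, or to sense again.
```

This is exactly the "reasoning in belief space" picture: the agent's decisions operate on `belief`, not on the (unobservable) true state.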

POMDPs are intractable in theory but still invaluable as modeling tools.

Even though optimal solutions for POMDPs are often computationally impossible, formulating problems this way clarifies what’s hard, guides approximation choices, and structures algorithms, rather than pretending the underlying uncertainty does not exist.

Different problems call for different internal representations and learning strategies.

There is no single ‘true’ method—symbolic logic, neural networks, model-based RL, and policies are all tools; riding a unicycle, solving algebra, and doing medicine likely require distinct representations, time-space trade-offs, and computational structures.

Perception’s next leap depends on what it should output, not just better classifiers.

We lack a clear understanding of the right representational targets for perception in an integrated intelligent agent—beyond steering or labeling images—so progress hinges on discovering structural biases (like convolution) for objects, relations, and higher-level reasoning.
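
As a toy illustration of what "structural bias" means here (invented for this summary, not from the episode): convolution applies the same small set of weights at every position, which bakes translation structure into the model rather than leaving it to be learned.

```python
# 1-D convolution with a shared kernel: the same weights slide across
# every position, so a pattern is detected wherever it occurs.

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge = [0, 0, 1, 1, 0, 0]
detector = [-1, 1]                 # responds to a rising step
response = conv1d(edge, detector)  # -> [0, 1, 0, -1, 0]

# Shifting the input shifts the response by the same amount: the
# translation bias is built in, not learned.
shifted = [0, 0, 0, 1, 1, 0]
shifted_response = conv1d(shifted, detector)  # -> [0, 0, 1, 0, -1]
```

The open question Kaelbling raises is what the analogous built-in structure should be for objects, relations, and higher-level reasoning.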

Objective functions and value alignment will become central engineering artifacts.

Modern ML systems are specified by hypothesis classes plus objective functions; as autonomous systems grow more capable, engineering appropriate objectives and aligning them with human values becomes as critical as designing the algorithms themselves.
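
A tiny sketch of why the objective itself is an engineering artifact (toy numbers, invented for illustration): with the same hypothesis class, here a single constant summarizing the data, two different objective functions yield different "solutions".

```python
# Same hypothesis class (one constant c), two objectives, two answers:
# squared error is pulled toward the outlier; absolute error is not.

data = [1, 2, 3, 100]  # one outlier

# Search a grid of candidate constants.
candidates = [x / 10 for x in range(0, 1001)]

best_sq = min(candidates, key=lambda c: sum((x - c) ** 2 for x in data))
best_abs = min(candidates, key=lambda c: sum(abs(x - c) for x in data))

# Squared error selects the mean (26.5), dominated by the outlier;
# absolute error selects a value among the bulk of the data (2..3).
```

The point is that neither answer is "wrong": the engineer's choice of objective, not the learning algorithm, determined which behavior came out.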

Current research culture favors short horizons and engineering over deep theory.

Kaelbling sees a ‘methodological crisis’ where empirical performance and rapid publication dominate, leaving approximate solution concepts and foundational theory for very hard problems underdeveloped, and discouraging students from multi-year deep work.

Notable Quotes

I like to say that I’m interested in doing a very bad job of very big problems.

Leslie Kaelbling

To me, it’s a big technical gap. I don’t see any reason why it’s more than a technical gap.

Leslie Kaelbling

The problem you have to solve is the problem you have to solve. If the problem you have to solve is intractable, that’s what makes us AI people.

Leslie Kaelbling

We don’t operate at Lego Mindstorms level. We specify a hypothesis class and an objective function, and we don’t know which solution will come out.

Leslie Kaelbling

I do research because it’s fun, not because I care about what we produce.

Leslie Kaelbling

Questions Answered in This Episode

How can we systematically learn the kinds of high-level abstractions and hierarchical models humans use to plan in unfamiliar environments like a new airport?

What would a practical, useful ‘approximate solution concept’ look like for extremely hard problems in robotics and AI where optimality is impossible?

How should we decide which structural biases—beyond convolution—ought to be built into our perception and reasoning systems, and which features should be left to learning?

In an era dominated by rapid, incremental papers, how could incentives be reshaped to reward multi-year, high-risk, deeply theoretical or integrative work?

What concrete methodologies should engineers adopt to design and validate objective functions that accurately encode human values and avoid pathological ‘solutions’?

Transcript Preview

Lex Fridman

The following is a conversation with Leslie Kaelbling. She's a roboticist and professor at MIT. She's recognized for her work in reinforcement learning, planning, robot navigation, and several other topics in AI. She won the IJCAI Computers and Thought Award and was the editor-in-chief of the prestigious Journal of Machine Learning Research. This conversation is part of the Artificial Intelligence Podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at lexfridman, spelled F-R-I-D. And now, here's my conversation with Leslie Kaelbling.

Leslie Kaelbling

What made me get excited about AI, I can say that, is I read Godel, Escher, Bach when I was in high school. That was pretty formative for me because it exposed, uh, the interestingness of primitives and combination and how you can make complex things out of simple parts and ideas of AI and what kinds of programs might generate intelligent behavior. So...

Lex Fridman

So you first fell in love with AI reasoning logic versus robots?

Leslie Kaelbling

Yeah, the robots came because, um, my first job... So I finished an undergraduate degree in philosophy at Stanford and was about to finish a master's in computer science, and I got hired at SRI, uh, in their AI lab, and they were building a robot that was a kind of a follow-on to Shakey, but all the Shakey people were not there anymore.

Lex Fridman

Mm-hmm.

Leslie Kaelbling

And so my job was to try to get this robot to do stuff, and that's really kind of what got me interested in robots.

Lex Fridman

So maybe taking a small step back-

Leslie Kaelbling

Yeah.

Lex Fridman

...to your bachelor's in Stanford in philosophy-

Leslie Kaelbling

Yeah.

Lex Fridman

...did master's and PhD in computer science, but the bachelor's in philosophy. Uh, so what was that journey like? What elements of philosophy do you think-

Leslie Kaelbling

Yeah.

Lex Fridman

...you bring to your work in computer science?

Leslie Kaelbling

So it's surprisingly relevant. So the... Part of the reason that I didn't do a computer science undergraduate degree was that there wasn't one at Stanford at the time, but there's a part of philosophy, and in fact, Stanford has a special sub-major in something now called symbolic systems, which is logic, model theory, formal semantics of natural language. And so that's actually a perfect preparation for work in AI and computer science.

Lex Fridman

That, that's kind of interesting. So if you were interested in artificial intelligence, what, what kind of majors were people even thinking about taking? Was it in neuroscience? Was... So besides philosophies, what, what were you supposed to do if you were fascinated by the idea of creating intelligence?

Leslie Kaelbling

There weren't enough people who did that for that even to be a conversation.

Lex Fridman

Okay.

Leslie Kaelbling

I mean, I think probably, probably philosophy. I mean, it's interesting, in my class, uh, my graduating class of undergraduate philosophers, probably, maybe slightly less than half went on in computer science-

Lex Fridman

Mm-hmm.

Leslie Kaelbling

...slightly less than half went on in law, and, like, one or two went on in philosophy. Uh, so it was a common kind of connection.
