
Chris Urmson: Self-Driving Cars at Aurora, Google, CMU, and DARPA | Lex Fridman Podcast #28
Lex Fridman (host), Chris Urmson (guest)
Chris Urmson on Safely Scaling Real-World Self-Driving Car Technology
Chris Urmson, veteran of CMU, DARPA challenges, Google, and now CEO of Aurora, reflects on the technical and philosophical evolution of autonomous vehicles from early desert races to complex urban environments.
He explains key breakthroughs such as HD mapping, multi-beam lidar, and Bayesian estimation, and why robust perception and prediction are now the core bottlenecks.
Urmson is sharply critical of over-trusted Level 2/3 driver-assist systems, arguing they diverge technically and economically from true self-driving, and pose significant human-factors risks.
He outlines how to prove safety (process rigor plus layered metrics, not a single number), why early deployments will be driverless urban/suburban services, and how public trust will come primarily from everyday experience with the technology.
Key Takeaways
Pursuing “impossible” challenges unlocks both technology and talent.
The DARPA challenges showed that self-driving was actually achievable and taught Urmson the value of tackling extremely hard problems and empowering young engineers to lead beyond their current credentials.
High-definition maps and lidar were pivotal to early autonomy success.
HD mapping bounded environmental complexity in the desert challenges, while multi-beam lidar enabled rich 3D understanding in the Urban Challenge, laying foundations for today’s autonomous stacks.
A diverse sensor suite is more important than the cheapest possible sensors.
Urmson argues lidar, cameras, and radar are all essential for robustness; the goal is an economically viable sensor suite that works reliably, not the absolute lowest-cost configuration that underperforms.
Level 2/3 driver assistance and full autonomy are fundamentally different paths.
Driver-assist systems rely on constant human supervision and can tolerate higher false negatives; this economic and design reality means they will diverge from the technology needed for safe, unsupervised self-driving.
Humans inevitably over-trust semi-autonomous systems over time.
Even if people begin with perfect information, repeated uneventful use gradually erodes vigilance, so over-trust of semi-autonomous systems builds inevitably over time.
Proving safety will require structured evidence, not a single headline metric.
Urmson envisions combining rigorous engineering process (functional safety) with capability-specific performance metrics, simulation and on-road data, and event pyramids akin to aviation, rather than blunt measures like disengagements.
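The event-pyramid idea can be made concrete with a toy tally: count safety events by severity tier and normalize per million miles, rather than collapsing everything into one headline number. This is an illustrative sketch only; the tier names and the `event_pyramid` helper are hypothetical, not Aurora's actual methodology.

```python
# Illustrative aviation-style "event pyramid": many low-severity events
# sit beneath each severe one, so tracking every tier per million miles
# gives a richer safety picture than a single disengagement count.
from collections import Counter

SEVERITIES = ["near_miss", "minor_contact", "injury", "fatality"]  # hypothetical tiers

def event_pyramid(events, miles_driven):
    """Return events per million miles for each severity tier."""
    counts = Counter(e for e in events if e in SEVERITIES)
    scale = 1_000_000 / miles_driven
    return {tier: counts.get(tier, 0) * scale for tier in SEVERITIES}

# Example log: 120 near misses and 3 minor contacts over 2 million miles.
log = ["near_miss"] * 120 + ["minor_contact"] * 3
rates = event_pyramid(log, miles_driven=2_000_000)
print(rates)  # {'near_miss': 60.0, 'minor_contact': 1.5, 'injury': 0.0, 'fatality': 0.0}
```

The point of the layered view is that regressions show up first in the wide base of the pyramid (near misses) long before they appear in the rare, severe tiers.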
Early driverless deployments will likely be in moderate-speed urban/suburban zones.
Urban environments allow more frequent, lower-severity edge cases than high-speed trucking, enabling faster learning with lower risk; freeway autonomy will follow once city-scale systems are robust.
Notable Quotes
“The high-order bit was that it could be done.”
— Chris Urmson
“See people for who they can be, not who they are.”
— Chris Urmson (on a core leadership lesson from Red Whittaker)
“Any technology that we can bring to bear that accelerates self-driving technology coming to market and saving lives is technology we should be using.”
— Chris Urmson
“What you want is a sensor suite that works… not the cheapest sensor suite.”
— Chris Urmson
“I don’t think there exists a world where people don’t over-trust a Level 2 system.”
— Chris Urmson
Questions Answered in This Episode
How can regulators design frameworks that recognize the differences between driver-assist systems and full autonomy without stifling innovation?
What specific perception and prediction benchmarks would meaningfully compare human drivers to autonomous systems on common driving tasks?
How should companies market and name partially automated features to minimize over-trust and misuse by the general public?
In what ways might widespread autonomous mobility reshape urban design, infrastructure investment, and individual car ownership patterns over the next few decades?
If perfect short-horizon prediction around the vehicle were solved tomorrow, what remaining engineering or societal barriers would still delay large-scale driverless deployment?
Transcript Preview
The following is a conversation with Chris Urmson. He was a CTO of the Google self-driving car team, a key engineer and leader behind the Carnegie Mellon University autonomous vehicle entries in the DARPA Grand Challenges, and the winner of the DARPA Urban Challenge. Today, he's the CEO of Aurora Innovation, an autonomous vehicle software company he started with Sterling Anderson, who was the former director of Tesla Autopilot, and Drew Bagnell, Uber's former autonomy and perception lead. Chris is one of the top roboticists and autonomous vehicle experts in the world, and a longtime voice of reason in a space that is shrouded in both mystery and hype. He both acknowledges the incredible challenges involved in solving the problem of autonomous driving and is working hard to solve it. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Chris Urmson. You were part of both the DARPA Grand Challenge and the DARPA Urban Challenge teams at CMU with, uh, Red Whittaker. What technical or philosophical things have you learned from these races?
I think the, the high order bit was that it could be done. I think that was the thing that was incredible about the, first, the, the grand challenges.
Mm-hmm.
That I remember, you know, I was a grad student at Carnegie Mellon and there we, it was kind of this dichotomy of it seemed really hard, so that would be cool and interesting. But, you know, at the time, we were the only robotics institute around. And so, you know, if we went into it and fell on our faces, that would, that would be-
Okay.
... embarrassing. Uh, so I think, you know, just having the will to go do it, to try to do this thing that at the time was marked as, you know, darn near impossible. And, and then after a couple of tries, be able to actually make it happen, I think that was, you know, that was really exciting.
But at which point did you believe it was possible? Did you from the very beginning? Did you personally? 'Cause you were one of the lead engineers, you actually had to do a lot of the work.
Yeah, I was the technical director there and did a lot of the work (laughs) , uh, along with a bunch of other really good people. Did I believe it could be done? Yeah, of course. Right? Like, why would you go do something you thought was impossible, completely impossible? Uh, we thought it was going to be hard. We didn't know how we were gonna be able to do it. We didn't know if we'd be able to do it the first time. (laughs) Turns out we couldn't. That, yeah, I guess you have to. I th- I think there's a certain benefit to naivete, right? That if you don't know how hard something really is, you, you try different things and, you know, it gives you an opportunity that others who are, you know, wiser maybe don't, don't have.