
George Hotz: Comma.ai, OpenPilot, and Autonomous Vehicles | Lex Fridman Podcast #31
Lex Fridman (host), George Hotz (guest), Narrator
George Hotz on hacking, Comma.ai, and the real self‑driving race
George Hotz discusses his journey from iPhone hacker to founder of Comma.ai, outlining how his company builds camera-based, retrofit driver-assistance systems that rival Tesla Autopilot. He argues that the only viable path to full autonomy is incremental Level 2 systems powered by large-scale data and end‑to‑end learning, not HD maps and lidar-heavy stacks like Waymo and Cruise. Hotz emphasizes driver monitoring, clear safety models, and a practical business roadmap that includes becoming a car insurance provider. The conversation ranges widely into hacking culture, programming practice, simulation, AI girlfriends, and the coming “singularity” where machine compute surpasses biological intelligence.
Key Takeaways
Incremental Level 2 systems with strong driver monitoring are the practical path to autonomy today.
Hotz is explicit that Comma. ...
End‑to‑end learning will ultimately beat modular perception‑planning stacks for full self-driving.
He argues there is no complete, human-defined state vector between perception and planning; instead, a neural network must learn an internal representation (e. ...
HD maps and lidar solve only the “static” driving problem and are not a sustainable moat.
Hotz breaks driving into static (road geometry), dynamic (moving actors), and counterfactual (how your actions influence others); maps and lidar mainly address the static piece. ...
Deep data collection at scale is more valuable than hand-engineered features or simulators built from scratch.
Comma. ...
Driver monitoring is non-negotiable as systems get more reliable but remain fallible.
Hotz strongly criticizes Tesla’s lack of driver monitoring, insisting that once automation errors become rare (e. ...
Aftermarket ‘Android of cars’ can be a real business, culminating in insurance.
By selling an inexpensive retrofit kit (phone + CAN interface), Comma. ...
Security and safety must be designed around local sensing, not fragile connectivity or V2V.
He dismisses overreliance on GPS, V2V, and teleoperation as unsafe design assumptions, arguing that any safety-critical behavior must be supported by the car’s own sensors and bounded actuation limits, with connectivity used only for non-critical enhancements.
Notable Quotes
“We are proud right now to be a Level 2 system.”
— George Hotz
“If your perception system output can be written in a spec document, it is incomplete.”
— George Hotz
“Of course lidar’s a crutch. It’s not even a good crutch.”
— George Hotz
“Tesla paved the way. He’s iOS; we’re Android.”
— George Hotz
“Our long-term plan is to be a car insurance company.”
— George Hotz
Questions Answered in This Episode
Is George Hotz underestimating how far lidar and HD maps can be pushed with learning-based methods for dynamic and counterfactual driving?
How would large-scale end‑to‑end driving models be validated and certified for safety when their internal representations are opaque?
What is the realistic upper bound on safety and comfort improvements from Level 2 systems with perfect driver monitoring, compared to true Level 4/5 autonomy?
Could Comma.ai’s Android-style, aftermarket model scale globally without partnerships with major automakers, or is some form of deeper integration inevitable?
How might Hotz’s vision of reinforcement learning on real-world driving behavior be reconciled with regulatory caution and public discomfort about ‘learning on live traffic’?
Transcript Preview
The following is a conversation with George Hotz. He's the founder of Comma.ai, a machine learning based vehicle automation company. He is most certainly an outspoken personality in the field of AI and technology in general. He first gained recognition for being the first person to carrier unlock an iPhone and since then, he's done quite a few interesting things at the intersection of hardware and software. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. And I'd like to give a special thank you to Jennifer from Canada for her support of the podcast on Patreon. Merci beaucoup, Jennifer. She's, uh, been a friend and an engineering colleague for many years since I was in grad school. Your support means a lot and inspires me to keep this series going. And now here's my conversation with George Hotz. Do you think we're living in a simulation?
Y- yes, but it may be unfalsifiable.
What do you mean by unfalsifiable?
So if the simulation is designed in such a way that they did like a formal proof to show that no information can get in and out, and if their hardware is designed to, for the, anything in the simulation to always keep the hardware in spec, it may be impossible to prove whether we're in a simulation or not.
So they've designed it such that it's a closed system, you can't get outside the system?
Well, maybe it's one of three worlds. We're either in a simulation which can be exploited, we're in a simulation which not only can't be exploited but like the same thing's true about VMs. Um, a really well-designed VM, you can't even detect if you're in a VM or not.
(laughs) That's brilliant. So we're, uh, it's, yeah, so the simulation's running on a, on a virtual machine?
Yeah. But now i- in reality all VMs have ways to detect.
That's the point. I mean, is it, uh, y- you've done quite a bit of hacking yourself, uh, and so you should know that, uh, really any complicated system will have ways in and out.
So this isn't necessarily true going forward. I spent my time away from Comma, I learned, uh, Coq.
Mm-hmm.
It's a dependently typed, like, uh, it's a language for writing math proofs in.
Mm-hmm.
And if you write code that compiles in a language like that, it is correct by definition. The, the types check its correctness.
Mm-hmm.
So it's possible that the simulation is written in a language like this, in which case, you know...
Yeah, but that, that can't be sufficiently expressive a language like that.
Oh, it can.
It can be?
Oh, yeah.
Okay. Well so you, uh, hm, all right, so-
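The "correct by definition" idea Hotz describes can be illustrated with a small sketch. This example uses Lean 4 rather than Coq (the language named in the episode), since both are dependently typed proof assistants; the `Vec` type and `append` function below are illustrative, not from the conversation. The point is that the length of the list is part of its type, so an `append` that dropped or duplicated an element simply would not compile:

```lean
-- A length-indexed vector: the length is carried in the type itself.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- Append two vectors. The return type promises the output has length
-- m + n, so the type checker rejects any implementation that loses or
-- repeats elements — "it compiles" already implies this property.
-- (The index is written m + n rather than n + m so that both cases
-- type-check by definitional unfolding of Nat.add, with no casts.)
def Vec.append : Vec α n → Vec α m → Vec α (m + n)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (xs.append ys)
```

This is the sense in which "the types check its correctness": the specification (output length equals the sum of input lengths) lives in the type signature, and the compiler acts as the proof checker.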