
George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God | Lex Fridman Podcast #387
Lex Fridman (host), George Hotz (guest), Narrator
In this episode of the Lex Fridman Podcast, George Hotz joins Lex Fridman to discuss AI freedom, human obsolescence, and decentralized compute futures.
George Hotz on AI Freedom, Human Obsolescence, and Decentralized Compute Futures
George Hotz and Lex Fridman explore how advanced AI may transform and possibly end current human civilization, emphasizing wireheading, super-stimulus media, and AI-accelerated social manipulation. Hotz argues that intelligence itself is dangerous, but centralized control of AI is far worse than open access, making open source and decentralized compute his core mission with TinyCorp and TinyGrad. They discuss why current large language models aren’t AGI, why reproduction and robustness matter more than raw intelligence, and how self-driving via Comma.ai is increasingly an RL-in-simulation problem. The conversation also dives into software engineering culture, Twitter’s codebase, VR, AI girlfriends, God, and what it means for humans to “win” in a world saturated with machine intelligence.
Key Takeaways
Intelligence is dangerous, but centralized intelligence is worse than widely distributed intelligence.
Hotz argues that advanced AI will inevitably be misused, but concentrating powerful models in the hands of a few corporations or governments guarantees a “chicken farm” scenario where the many are dominated by the few. ...
AI risk is more about bad humans using tools than AIs “waking up.”
He dismisses classic sci‑fi “rogue AI” plots, focusing instead on realistic threats: state actors and corporations using AI for mass persuasion, PSYOPs, and hyper-targeted content (e. ...
Reproduction and robustness define real “life” more than raw intelligence.
Hotz notes that all biological life is fundamentally reproductive and robust in nature, while current silicon systems, including robots and fabs, cannot self-replicate. ...
LLMs are impressive but fundamentally mid-tier and not AGI-level intelligence.
He views GPT-style models as powerful compressors trained with cross-entropy loss that produce “YouTube comment raps” and junior-engineer-level code. ...
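The "compressor" framing has a concrete information-theoretic form: a model trained with cross-entropy loss is minimizing the number of bits needed to encode the next token, which is exactly the code length an optimal entropy coder driven by the model would use. A minimal sketch of that equivalence (the tiny vocabulary and probabilities here are hypothetical, not from any real model):

```python
import math

# Hypothetical next-token distribution predicted by a language model
# over a tiny vocabulary, given some context.
predicted = {"cat": 0.7, "dog": 0.2, "car": 0.1}

# Cross-entropy loss for the observed next token is -log p(token).
# Measured in base 2, this is the code length in bits that an
# arithmetic coder using the model's probabilities would spend
# encoding that token: better prediction = better compression.
actual_token = "cat"
loss_nats = -math.log(predicted[actual_token])   # loss as usually reported
bits = -math.log2(predicted[actual_token])       # equivalent code length

print(f"loss: {loss_nats:.3f} nats = code length {bits:.3f} bits")
```

A lower loss on the token means fewer bits to encode it, which is the sense in which minimizing cross-entropy and compressing the training data are the same objective.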
Decentralized compute is a strategic defense against AI monopolies and nationalization.
TinyCorp’s mission is to build a minimal, hardware-agnostic ML stack (TinyGrad) and high-end local boxes (TinyBox/TinyRack) so individuals and small labs can run large models without cloud dependence. ...
Simplicity, refactoring, and strong tests beat complexity and promotion-driven bloat.
Drawing on Twitter’s codebase, he criticizes incentive structures where engineers get promoted for new libraries rather than deletion and simplification. ...
Self-driving progress hinges on realistic simulators and RL tuned to driver happiness.
At Comma.ai ...
Notable Quotes
“Intelligence is scary. When I’m in the woods, the scariest animal to meet is a human.”
— George Hotz
“You’re not gonna get paper‑clipped if everybody has an AI. You only get paper‑clipped if one group controls the AI.”
— George Hotz
“Of course we were created by God. It’s the most obvious thing. I create worlds—how am I supposed to believe no one created ours?”
— George Hotz
“We’ve solved the problem of self-driving… years ago. The question now is how to get a model to output a human driving policy people never want to disengage from.”
— George Hotz
“You have never refactored enough. Your code can get smaller. Your ideas can be more elegant.”
— George Hotz
Questions Answered in This Episode
If intelligence is inherently dangerous, what concrete mechanisms—beyond open sourcing—can ensure that decentralized AI actually benefits individuals rather than just creating many new small tyrannies?
How realistic is Hotz’s belief that human civilization can “reset” after an AI-driven societal collapse, and what would it practically take for a low-tech taboo society to avoid repeating our mistakes?
To what extent can we trust AI systems to serve as personal “firewalls” against propaganda and PSYOPs, and how do we prevent those very filters from becoming new forms of subtle control?
Is Hotz underestimating the degree to which LLMs and code-generating models can evolve from mid-tier tools to genuine creative and reasoning partners, or is there a real ceiling without richer interaction and embodiment?
How should large organizations balance the short-term business risk of stopping to refactor legacy systems (like Twitter’s) against the long-term strategic risk of being unable to move fast due to accumulated complexity?
Transcript Preview
What possible ideas do you have for the w- how human species ends?
Sure. So I think the most obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe, maybe it's even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it's probably hard to get all of humanity.
Yeah. The interesting thing about humanity is the diversity in it.
Oh, yeah.
Organisms in general. There's a lot of weirdos out there.
Well-
Two of them are sitting here.
I mean, diversity in humanity is-
With due respect. (laughs)
(laughs) I wish I was more weird.
The following is a conversation with George Hotz, his third time on this podcast. He's the founder of Comma.ai that seeks to solve autonomous driving, and is the founder of a new company called TinyCorp that created TinyGrad, a neural network framework that is extremely simple, with the goal of making it run on any device by any human easily and efficiently. As you know, George also did a large number of fun and amazing things, from hacking the iPhone to recently joining Twitter for a bit as an "intern" in quotes, making the case for refactoring the Twitter code base. In general, he's a fascinating engineer and human being, and one of my favorite people to talk to. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's George Hotz. You mentioned something in a stream about the philosophical nature of time. So, uh, let's start with a wild question. Do you think time is an illusion?
You know-
(laughs)
... I sell phone calls, uh, to Comma for $1,000. Uh, and some guy called me, and, uh, like, you know, it's $1,000, so you can talk to me for, for half an hour. And he's like, "Uh, yeah, okay. So like, time doesn't exist, and I really wanted to share this with you." I'm like, "Well, what do you mean time doesn't exist," right? Like, I think time is a useful model whether it exists or not, right? Like, does quantum physics exist? Well, it doesn't matter. I- i- it's about whether it's a useful model to describe reality. Is time maybe compressive?
Do you think there is an objective reality or is everything just useful models? Like underneath it all, is there an actual thing that we're constructing models for?
I don't know.
I was hoping you would know.
I don't think it matters.
I mean, this kind of connects to the models we construct of reality with machine learning, right?
Sure.
Like, is it just nice to have useful approximations of the world such that we can do something with it?
So there are things that are real. Kolmogorov complexity is real.