
The 10 Trillion Parameter AI Model With 300 IQ
Garry Tan (host), Diana Hu (host), Harj Taggar (host)
Ten-Trillion-Parameter AI: From 300 IQ Models To Startup Goldrush
The episode explores what ultra-large models (imagined at 10 trillion parameters and ~300 IQ) and OpenAI’s new o1 model mean for founders, enterprises, and the broader economy.
Hosts argue current frontier models already rival typical knowledge workers and that o1-like breakthroughs may unlock previously impossible use cases while making AI behavior more deterministic and reliable.
They discuss the likely role of huge “teacher” models and distillation into cheaper models, the rapidly shifting LLM and coding-tool market (e.g., Cursor vs GitHub Copilot), and early signs of real business impact from automation and AI voice agents.
The conversation closes on a bullish vision where massively superhuman AI can digest humanity’s scientific output, accelerating discovery toward radical technologies, while competitive dynamics keep UX-focused startups central to capturing value.
Key Takeaways
Ultra-large models may act as expensive “teachers,” not everyday workhorses.
10T-parameter models will likely be too slow and costly for routine use, but can train smaller distilled models that deliver most of the capability at a fraction of the price, similar to Meta’s large 405B teacher improving its 70B model.
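The teacher–student setup described above can be sketched concretely. The episode doesn't specify a training recipe, so this is a minimal illustration of the standard distillation objective: the small "student" model is trained to match the large "teacher" model's temperature-softened output distribution, not just the hard labels. All function names here are illustrative, not from the episode.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this trains the student to inherit the teacher's judgment
    about the *relative* likelihood of every answer, which carries far more
    signal than a single correct label.
    """
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that reproduces the teacher's logits exactly incurs zero loss.
assert distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]) < 1e-9
```

In practice this loss is computed per token over the teacher's outputs (as with Meta's 405B teacher improving the 70B model), but the objective is the same shape as this toy version.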
Developer market share is fragmenting; OpenAI no longer has automatic dominance.
Within recent YC batches, usage has diversified significantly—Claude moved from ~5% to ~25% of companies, LLaMA from 0% to 8%, showing founders will freely switch to better or cheaper models and tools.
o1-level reasoning can make previously non-viable AI products suddenly viable.
Startups report big jumps in accuracy (e.g., …)
As AI becomes more deterministic, competitive advantage shifts back to classic software execution.
If prompt wrangling and fragile workflows matter less, winners will be those who excel at UX, domain depth, sales, and integration—AI becomes infrastructure, and moats look more like traditional SaaS moats.
Voice-based AI is reaching a ‘works-in-practice’ inflection and threatens call centers.
With real-time voice APIs priced around $9/hour and much lower latency, AI can now handle debt collection, logistics coordination, and support calls at human-like quality and similar or lower cost.
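The economics behind that claim are easy to check with back-of-envelope arithmetic. The $9/hour figure is the voice-API price cited in the episode; the fully loaded human-agent cost below is an assumed illustrative figure, not from the episode.

```python
VOICE_API_PER_HOUR = 9.00     # real-time voice API price cited in the episode
HUMAN_AGENT_PER_HOUR = 25.00  # assumed fully loaded cost (wage + overhead); illustrative

def annual_cost(per_hour, hours_per_year=2000):
    """Cost of one full-time seat (~2000 working hours/year) at an hourly rate."""
    return per_hour * hours_per_year

ai = annual_cost(VOICE_API_PER_HOUR)       # 18000.0
human = annual_cost(HUMAN_AGENT_PER_HOUR)  # 50000.0
savings = human - ai                       # 32000.0 per seat per year
```

Under these assumptions one AI seat undercuts a human seat by well over half, and unlike a human seat it scales elastically with call volume, which is why call-center work is the first target.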
AI-driven automation can rescue overfunded or margin-constrained companies by slashing costs.
Examples from the YC portfolio include companies automating ~60% of support tickets, moving from needing another funding round to cashflow breakeven while keeping ~50% annual growth.
Superintelligent models could accelerate scientific discovery by digesting humanity’s knowledge corpus.
If 200–300 IQ-level systems can truly reason over millions of papers and vast datasets, they may unlock breakthroughs analogous to nuclear fission or Fourier transforms—potentially yielding room-temperature superconductors, new energy sources, and beyond.
Notable Quotes
“You mean they're going to capture a light cone of all future value?”
— Host (on OpenAI’s potential dominance with o1)
“You could make a strong case that AGI is basically already here.”
— Host (on current state-of-the-art models and knowledge work)
“It took 150 years until the average Joe could feel the Fourier transform.”
— Diana (on long lags between foundational discoveries and everyday impact)
“At this point, AI has passed Turing tests and is solving all of these very menial problems over the phone.”
— Host (on the new generation of AI voice agents)
“What this might be is not merely a bicycle for the mind. It might actually be a self-driving car, or, even crazier, maybe a rocket to Mars.”
— Host (on how powerful AI could transform human capability)
Questions Answered in This Episode
If o1-like models become highly accurate and deterministic, what durable moats can AI startups realistically build beyond UX and distribution?
How should regulators and enterprises treat mission-critical AI systems whose reliability jumps from ~90% to ~99% but still isn’t provably perfect?
What happens to global labor markets—especially call centers and back-office work—when $9/hour voice agents outperform humans at scale?
Could a 10T-parameter, ‘300 IQ’ model meaningfully accelerate hard sciences (e.g., fusion, materials) without introducing catastrophic risks, and who decides acceptable trade-offs?
How long will OpenAI’s breakthrough advantages last before open-source or competitors close the gap, and what technical or economic factors determine whether this time is different?
Transcript Preview
If O1 is this magical, what does it actually mean for founders and builders? One argument is it's bad for builders because maybe O1 is just so powerful that OpenAI will just capture all the value.
You mean they're going to capture a light cone of all future value?
Yeah. (laughs) Yeah. They'll capture a light cone of all present, past and future value. (laughs)
Oh my god.
The alternative, more optimistic scenario is we see ourselves how much time the founders spend, especially during the batch, on getting prompts to work correctly, getting the outputs to be accurate. But if it becomes more deterministic and accurate, then they can just spend their time on bread-and-butter software things. The winners will just be whoever builds the best, like user experience, who gets all these like nitty-gritty details correct.
(instrumental music)
Welcome back to another episode of The Light Cone. We are sort of in this moment where OpenAI has raised the largest venture round ever, $6.6 billion with a B. Here's what Sarah Friar, the CFO of OpenAI said about how they're going to use the money.
It's compute first, and it's not cheap. Uh, it's great talent second. Um, and then, of, uh, course, it's all the normal operating expenses of a more traditional company. But I think there is no denying that you are, we are on a, a scaling law right now where orders of magnitude matter. The next model is going to be an order of magnitude bigger and the next one on and on. And so that does make it very capital intensive.
So it's really about orders of magnitude. Let's live in the future. There's 10 trillion parameters out there, 10 trillion parameter large language models, two orders of magnitude out from the state-of-the-art today. What happens? Like are people actually going to be throwing queries and actually using these 10 trillion parameter models? Uh, seems like you'd be waiting, you know, 10 minutes per token.
Yeah. For a bit of context, right now, the frontier models, I mean it's not public exactly how many para- parameters they have, but they're roughly in the five hundreds of billions-ish, like Llama 3 at 405 billion. Anthropic is speculated to be 500 billion, GPT-4o roughly around that much. Getting to 10 trillion, that's two orders of magnitude, right? I think the level of, uh, potential innovation could be the same leap we saw from GPT-2, which was a- around one billion parameters, that was released with the paper on scaling laws, which was one of the seminal papers where people figured out, okay, this is the transformer architecture that we figured out, what if we just throw a bunch of engineering at it and just do a lot of it? Where does this scale on this logar- logarithmic type of scaling? Then this was proved out when GPT-3 or 3.5 got released, that was about 170-ish billion parameters. So that's like that two orders of magnitude. And we saw what happened with that, that created this new flourishing era of AI companies. And we saw it, we experienced this back in 2023 when we started seeing all these companies building on top of GPT-3.5 that was starting to work, and it created this giant wave. So we could probably expect, if this scaling law continues, the feeling will be similar to what we felt from that year of, uh, transition from 2022 to 2023.