
AGI progress, surprising breakthroughs, and the road ahead — the OpenAI Podcast Ep. 5

How close are we to automating scientific discovery? What do AI competition wins really tell us about progress toward AGI? OpenAI Chief Scientist Jakub Pachocki and researcher Szymon Sidor share inside stories—from gold medals at the International Math Olympiad to surprising leaps in reasoning—that reveal where AI is headed next.

Chapters

1:20 – From high school in Poland to AI research leaders
4:50 – Explaining AGI: technical and everyday perspectives
6:30 – Automating scientific discovery with AI
7:50 – Breakthroughs in medicine, AI safety, and alignment
10:30 – Today is a decade in the making
14:30 – Benchmark saturation and its limits
16:50 – Why math competitions matter for AI
18:15 – How models reason without tools
21:45 – Recognizing when a model can’t solve a problem
23:30 – Storytime: AtCoder competition in Japan
26:50 – How reasoning breakthroughs really happen
28:55 – What’s next for scaling and long-horizon reasoning
30:30 – What AGI will look and feel like
34:00 – Advice to high school students in 2025
36:25 – Balancing trust and personal value

Andrew Mayne (host) · Jakub Pachocki (guest) · Szymon Sidor (guest)
Aug 14, 2025 · 40m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

OpenAI leaders discuss AGI milestones, benchmark limits, and next breakthroughs

  1. OpenAI’s chief scientist Jakub Pachocki and researcher Szymon Sidor outline how the meaning of “AGI” has shifted from abstract goalposts to a bundle of distinct capabilities (conversation, math, long-horizon reasoning, real-world impact).
  2. They argue traditional benchmarks are increasingly unreliable due to saturation and “teaching to the test,” pushing evaluation toward utility, adoption, and the ability to generate novel insights—especially via automating research.
  3. Recent breakthroughs—IMO/IOI-level performance and a strong showing in Japan’s AtCoder long-horizon contest—are presented as evidence that reasoning-focused training is unlocking new capability, including models recognizing when they are stuck.
  4. Looking ahead, they expect progress from compounding scaling with longer persistence (spending far more compute on high-value problems like medicine and AI research), while emphasizing unresolved trust, robustness, and security trade-offs as models access more personal data.

IDEAS WORTH REMEMBERING

5 ideas

AGI is no longer one milestone—it’s a set of separable capabilities.

They note that conversation, math competition performance, and research ability progress at different rates, making a single “human-level” label less informative than before.

Real-world impact, especially automating R&D, is becoming the north-star metric.

Pachocki argues the meaningful bar is automating discovery and technology production—AI that can generate new ideas, run experiments, and build artifacts like codebases and designs.

Benchmarks are breaking down due to saturation and specialization.

As models hit human-level on many standardized tests and training becomes more targeted (e.g., disproportionately strong math), benchmark gains stop reflecting general intelligence.

Math and programming contests are valued because they test deep reasoning with limited memorization.

IMO/IOI problems demand sustained, creative thought over hours with minimal external knowledge, serving as a proxy for “think hard” capability rather than recall.

Metacognition—knowing when you’re stuck—is a concrete safety-and-quality improvement.

They highlight the model correctly identifying it made no progress on IMO problem 6, contrasting with hallucination-like behavior and pointing to better calibration.

WORDS WORTH SAVING

5 quotes

It is possible to have a big computer that is coming up with ideas that fundamentally change our understanding of the world, and I actually think that is not that far away.

Jakub Pachocki

GPT-4 was… my personal AGI moment… because it would sometimes say things that surprised me.

Szymon Sidor

We started asking… ‘Are we ready as an organization for incredibly fast-paced progress?’

Szymon Sidor

The model was able to correctly identify that it didn't make progress on the problem.

Jakub Pachocki

So you should absolutely learn to code… don’t let people tell you that you should not learn to code.

Szymon Sidor

Personal origin story: Poland high school programming culture
Definitions of AGI: capability bundles vs single threshold
Automated scientific discovery and “automated researcher” vision
Medicine, AI research automation, and alignment research
Benchmark saturation, representativeness, and gaming
IMO/IOI/AtCoder as reasoning and long-horizon tests
Scaling + persistence, trust/robustness, and student advice

High quality AI-generated summary created from speaker-labeled transcript.
