
AGI progress, surprising breakthroughs, and the road ahead — the OpenAI Podcast Ep. 5

How close are we to automating scientific discovery? What do AI competition wins really tell us about progress toward AGI? OpenAI Chief Scientist Jakub Pachocki and researcher Szymon Sidor share inside stories—from gold medals at the International Math Olympiad to surprising leaps in reasoning—that reveal where AI is headed next.

Andrew Mayne (host) · Jakub Pachocki (guest) · Szymon Sidor (guest)
Aug 15, 2025 · 40m · Watch on YouTube ↗

CHAPTERS

  1. 1:20 – 4:50

    Meet OpenAI’s chief scientist and an “IC who does random things”

    Andrew Mayne introduces guests Jakub Pachocki (Chief Scientist) and Szymon Sidor, framing the episode around measuring AI progress and identifying the next breakthroughs. Jakub explains what “chief scientist” means in practice: setting the technical roadmap and long-term research bets.

  2. 4:50 – 6:30

    From a Polish high school to the frontier of AI research

    Jakub and Szymon recount meeting at the same high school in Gdynia, Poland, and how an exceptional CS teacher shaped their trajectories. They reflect on the value of deep technical mentorship and how AI tools can help replicate parts of that experience—though not the emotional support.

  3. 6:30 – 7:50

    Defining AGI: milestones vs real-world impact

    Jakub discusses how AGI used to feel abstract, but progress has split “intelligence” into distinct capabilities (conversation, math, research). He argues that pointwise milestones (like Olympiads) matter, yet become less adequate as models approach or exceed human performance on narrow tests.

  4. 7:50 – 10:30

    Automating scientific discovery and technology production

    Jakub frames the most consequential AI impact as automating discovery and invention—AI generating ideas and technology that shift understanding. Rather than building narrow systems per domain, OpenAI prioritizes general intelligence that can transfer and compound across fields.

  5. 10:30 – 14:30

    Breakthrough areas: medicine, alignment, and accelerating the feedback loop

    The conversation turns to where early “automated researcher” value might appear and why automating AI research and alignment is strategically important. Both guests emphasize that if AI can accelerate AI R&D, progress can become self-reinforcing—raising urgency around safety work.

  6. 14:30 – 16:50

A decade in the making: why “only 3% economic impact” misses the arc

    Szymon contrasts today’s capabilities with NLP from ~10 years ago, when even negation broke sentiment models. He recounts the progression from early deep learning through GPT-2/3/4 to “deep research” and competitive programming, arguing that small measured impacts can hide exponential change.

  7. 16:50 – 18:15

    Benchmark saturation: when tests stop telling the whole story

    Jakub explains why many benchmarks are hitting ceilings: models are reaching human-level on standardized measures, and training methods can target specific skills (like math) that distort general capability signals. This forces a move toward evaluating broad utility and insight generation.

  8. 18:15 – 21:45

    Why math and programming competitions still matter

    Jakub defends Olympiads as valuable because they test long-form reasoning under constraints with strong evidence of difficulty. These competitions stress sustained problem solving rather than rote recall, providing meaningful milestones for “thinking hard for hours.”

  9. 21:45 – 23:30

    Reasoning without tools—and knowing when you’re stuck

    They discuss that the IMO gold-level performance was achieved without calculators or external tools, emphasizing internal reasoning. A standout moment is the model recognizing it made no progress on the hardest problem (the famed “problem six”), linking to reducing hallucinations via calibrated self-assessment.

  10. 23:30 – 26:50

    Storytime: the AtCoder marathon contest and long-horizon optimization

Jakub recounts entering a model into Japan’s prestigious AtCoder contest—one hard optimization problem over 10 hours with heuristic, open-ended solutions. He describes watching the model compete live against former OpenAI colleague Psyho, who ultimately won while the model placed second.

  11. 26:50 – 28:55

    How reasoning breakthroughs really happen (and why they felt sudden)

    Szymon pushes back on the idea that “long chain-of-thought” was a simple tweak, describing it as hard-earned engineering and research. When results first clicked, the team took the possibility of rapid progress seriously—prompting late-night conversations about organizational readiness.

  12. 28:55 – 30:30

    What’s next: scaling, compute, persistence, and long-horizon reasoning

    Jakub argues scaling remains foundational and will compound with reasoning methods. He highlights a shift from per-chat compute to spending vastly more compute on high-value problems (medical research, next-gen models), enabling persistent agents that work for long durations on focused goals.

  13. 30:30 – 34:00

    What AGI will look and feel like: an automated ‘company’ plus new interfaces

    Jakub describes AGI as resembling a largely automated organization of researchers and engineers that can build technologies, codebases, and designs—interfacing with humans and running experiments. He also predicts more human-like, persistent interfaces that deepen attachment and reshape interaction norms.

  14. 34:00 – 36:25

    Advice to high school students in 2025: keep coding, think bigger, learn foundations

    Szymon urges students to learn coding as a durable way to build structured problem-solving skills, rejecting claims it’s becoming obsolete. Jakub adds that many perceived constraints are fake—ambition and seeking big opportunities matter—and both reflect on inspirations (Hackers & Painters, Iron Man, AlphaGo) and the value of foundational fields like math/physics.

  15. 36:25 – 40:23

    Balancing trust and personal value: the data-access trade-off

    As assistants integrate with calendars and email, value increases—but so do risks. Jakub emphasizes a tough trade-off: users benefit from broader access, yet robustness against exploitation and misuse isn’t complete, requiring continued iteration across the field.
