OpenAI

How AI is accelerating scientific discovery today and what's ahead — the OpenAI Podcast Ep. 10

AI is beginning to change how science gets done. Head of OpenAI for Science Kevin Weil and OpenAI research scientist Alex Lupsasca talk about the early signs of acceleration researchers are seeing with GPT-5—from surfacing literature across fields and languages, to speeding up complex calculations, to designing follow-up experiments. They unpack what’s possible today, what doesn’t work yet, and why the next few years could reshape the trajectory of scientific progress across physics, math, biology, and beyond.

Chapters:

  - 00:00:40 — OpenAI for Science mission
  - 00:06:00 — Literature search and intersections across fields
  - 00:11:19 — A fusion physicist shows what GPT-5 can do
  - 00:15:08 — GPT-5 Pro and black hole symmetries
  - 00:19:02 — Getting the most out of the models
  - 00:24:33 — OpenAI’s new research paper (https://openai.com/index/accelerating-science-gpt-5/)
  - 00:29:59 — Looking ahead to the next 5 years
  - 00:32:05 — Will predictions outpace experiments?
  - 00:36:43 — The pace of model improvement
  - 00:40:31 — What do scientific benchmarks look like?
  - 00:44:16 — Fusion and the promise of abundant energy
  - 00:48:07 — Closing: Science 2.0 moment

Andrew Mayne (host) · Kevin Weil (guest) · Alex Lupsasca (guest)
Nov 20, 2025 · 48m · Watch on YouTube ↗

CHAPTERS

  1. Why AGI’s biggest impact may be science

    Andrew Mayne frames the episode with Kevin Weil and Alex Lupsasca: how frontier models are already affecting scientific work and what the next few years could look like. Kevin argues that the most profound, tangible effects of AGI may come through accelerated discovery rather than consumer features.

  2. OpenAI for Science: compressing 25 years of research into 5

    Kevin explains the mission and why now is the inflection point. The key change is that frontier models are starting to produce genuinely novel scientific outputs—small ‘existence proofs’ that indicate larger acceleration is near.

  3. Physics case study: a rare-identity lookup and a ‘human-like’ typo

    Alex describes a personal turning point: using a higher-tier model to simplify an infinite-series solution arising in pulsar magnetosphere physics. The model located an obscure 1950s identity in a Norwegian math journal and produced a near-correct derivation that failed only on a minor final error—the kind of mistake a human would typically make.

  4. Beyond proofs: conceptual literature search across fields and languages

    Kevin and Alex emphasize that many research bottlenecks are ‘mundane’ but decisive—especially finding prior art and adjacent results. They describe how GPT can perform conceptual search that beats keyword-based methods, bridging disciplines and even languages.

  5. AI as the always-on collaborator: going deeper and broader

    A recurring theme is collaboration: researchers can use models as tireless partners that help explore neighboring areas without years of retraining. This changes what is feasible for an individual scientist—both in depth (technical derivations) and breadth (cross-field exploration).

  6. Kevin’s turning point: a fusion expert stress-tests the model from undergrad to “20-year lab veteran”

    Kevin recounts meeting a Lawrence Livermore physicist who progressively escalated a fusion-related problem—undergrad to expert-level—while the model continued to answer correctly. The key takeaway wasn’t that the model knew classified tools, but that it could compress days of expert work into minutes and guide next steps realistically.

  7. How scientists actually use GPT today—and the black-hole symmetry ‘warm-up’ lesson

    Alex explains typical uses (especially connecting new results to existing theory), then shares the episode that convinced him to join OpenAI. The model initially failed on a frontier black-hole symmetry problem, succeeded on a simpler ‘flat-space limit’ warm-up, and then solved the harder case after being primed—highlighting the importance of iterative workflow.

  8. Frontier-work reality: low pass rates, iteration, and reducing cognitive load

    Kevin underscores that frontier tasks often have low success probability per attempt. Researchers who get the most value treat the model like a collaborator: iterate, correct, retry, and stitch partial wins—while OpenAI explores ways to make this less labor-intensive.

  9. Jagged edges of knowledge: why AI fails at ‘simple’ questions but nails hard ones

    Alex argues that both human and model capability boundaries are irregular: some basic-sounding questions remain unanswered, while highly technical predictions can be extremely precise. AI’s jagged edge differs from ours, and the overlap—where each complements the other—creates leverage for discovery.

  10. The forthcoming GPT-5 science paper: what works, what doesn’t, and new results

    Kevin outlines a multi-author paper meant to capture the ‘state of the art’ without hype. It compiles real workflows and shareable transcripts across disciplines, ranging from pragmatic accelerations (search, calculations) to new mathematical results that push beyond existing knowledge.

  11. Advice for students and researchers: use AI to explore more paths, faster

    Addressing fears about ‘needing fewer scientists,’ the guests argue AI is closer to the telescope than a replacement: it expands what scientists can do. Alex emphasizes using models to rapidly prototype approaches and generate signposts through uncertain research terrain.

  12. Forecasting 12 months to 5 years: life sciences bottlenecks, pruning search spaces, and compute-as-thinking-time

    Kevin predicts that in the near term, science will change much as coding has changed recently; Alex raises concerns about experimental bottlenecks as predictions multiply. They discuss how AI can prune massive search spaces (e.g., drug discovery) and how longer ‘thinking time’ measurably improves performance on hard problems.

  13. What should science benchmarks become? From GPQA saturation to frontier evaluations

    They argue that as models master existing tests, the field must move to harder, more ‘frontier’ benchmarks—especially in science and math. Kevin notes GPQA scores rose from GPT-4’s roughly 39% to nearly 90% on newer models, motivating new evaluations that better reflect real research difficulty and value.

  14. Dream targets and philosophy: black holes, dark matter, fusion—and a bottom-up “100 Nobel prizes” vision

    Alex highlights black holes and the possibility of integrating disparate dark-matter theories and datasets to rule out hypotheses. Kevin names fusion as a world-changing target, then both stress that the biggest gains will be decentralized: give great general-purpose AI to scientists globally and let breakthroughs emerge unpredictably.
