OpenAI

Episode 16: Building AI for Life Sciences

What does it take to build AI systems that can actually help scientists? Research lead Joy Jiao and product lead Yunyun Wang discuss how OpenAI is developing models for life sciences and what responsible deployment means in a field with real biosecurity stakes. They explore how AI is already improving research workflows and where it could lead in drug discovery and more autonomous labs, including why a future with less pipetting sounds pretty good to most scientists.

Chapters

0:39 Introducing the Life Sciences model series
3:47 Joy’s path into life sciences
5:00 Autonomous lab with Ginkgo Bioworks
7:27 Yunyun’s path into life sciences
8:12 OpenAI’s life sciences work
9:48 Biorisk, access, and safeguards
15:43 What models can do in the lab
17:51 Building scientific infrastructure
20:14 Why compute matters for science
24:54 Where are we in 6–12 months?
29:51 Scientific adoption and skepticism
33:17 Advice for students and researchers
40:27 Where are we in 10 years?

Joy Jiao (guest) · Yunyun Wang (guest)
Apr 16, 2026 · 44m · Watch on YouTube ↗

CHAPTERS

  1. Why OpenAI is launching a Life Sciences model series

    Andrew Main introduces OpenAI’s push into life sciences with Research Lead Joy Jiao and Product Lead Yunyun Wang. They frame the central aim: apply frontier AI to biology and medicine while deploying responsibly given dual‑use risks.

  2. What the Life Sciences model series is built for: workflows, tools, and repeatability

    Yunyun explains the Life Sciences model series as biochemistry-focused models grounded in real research workflows. The team emphasizes mechanistic understanding (genomics/proteins), early discovery bottlenecks, and practical orchestration via products and plugins.

  3. From tool-using assistant to ‘biochemistry expert’ model

    Joy outlines how models can already behave like computational biologists by calling external tools (e.g., protein structure prediction) and iterating on inputs/outputs. The next step is deeper biochemical intuition so the model can use tools more intelligently and converge faster.

  4. Joy Jiao’s journey: systems biology to AI to speed up science

    Joy recounts her path from a Harvard systems biology PhD to software and then OpenAI, motivated by the slow pace and manual nature of lab work. The ‘full circle’ is using AI to accelerate the kind of biology work she once did—without returning to repetitive bench tasks.

  5. Autonomous lab collaboration with Ginkgo Bioworks: proving models can do biology

    Joy describes experiments where GPT-5 was integrated with Ginkgo’s robotic lab to design and run protein-related experiments. Early results were unexpectedly positive, with the experiments yielding measurable protein, which helped shift internal beliefs from uncertainty to confidence that AI can accelerate lab science.

  6. Yunyun Wang’s path: biodefense roots and a dual perspective on life sciences

    Yunyun shares how she started in biorisk mitigation and biodefense initiatives before moving into life sciences product work. This background shapes a ‘tackle both sides’ approach: unlock beneficial research while rigorously managing misuse risks.

  7. Scope of AI in the life sciences pipeline: from chemistry to drugs to regulation

    The discussion expands to potential applications across chemical/protein/enzyme design, pathway understanding, and drug discovery. They also highlight longer-term opportunities in accelerating clinical and regulatory stages, not just early discovery.

  8. Biorisk and safeguards: why intent is hard to infer from prompts

    They explain why biosecurity is uniquely challenging: benign and harmful workflows can look identical at early steps. As a result, OpenAI leans risk-averse in general access, using refusals/high-level responses and layered mitigations, while exploring more nuanced approaches.

  9. Differentiated access: enterprise controls to unlock advanced capabilities responsibly

    Joy and Yunyun argue that stronger capabilities require knowing who the user is and operating within controlled institutional environments. Verified researchers at regulated organizations enable higher-trust deployment because reagents, cell lines, and processes are tracked and audited.

  10. What models can do in labs today: from pipetting optimization to idea triage

    They describe current practical uses ranging from operational automation (spreadsheets, minimizing pipetting steps) to higher-level scientific assistance. Yunyun emphasizes models as ‘discriminators’ that stress-test hypotheses, narrow target lists, and prioritize feasible experiments.

  11. Building scientific infrastructure on Codex: agentic workflows and collaboration artifacts

    Joy envisions Codex as the backbone for computational scientific workflows: running code across machines, monitoring logs, and building bespoke analysis/visualization software. A notable shift is sharing interactive outputs (HTML/UIs) instead of raw data, changing collaboration patterns.

  12. Why compute matters for science: bigger models and test-time ‘thinking’

    Joy distinguishes two compute axes: scaling model size and scaling test-time compute for deeper reasoning. Test-time compute enables extended deliberation—potentially days—allowing models to tackle higher-difficulty scientific problems and discovery-style tasks.

  13. Near-term (6–12 months): drug repurposing, personalized medicine, and lab automation uplift

    They discuss realistic paths to impact: drug repurposing suggestions, scaling RNA-based personalized therapies, and smoothing bottlenecks in translating protocols to automated platforms. The emphasis is on augmenting scientists—improving analysis rigor and throughput—rather than replacing them.

  14. Scientific evals and adoption: proving value, embracing skepticism, and advice for learners

    They describe evaluation strategies using real experimental datasets, synthetic ‘messy data’ tests, and ultimately wet-lab validation. Adoption varies culturally; they advocate ‘show by doing’ through products and publications, and encourage students/researchers to learn via exploration, collaboration, and low-lift entry points.

  15. 10-year vision: autonomous robot labs, democratized expertise, and stronger biosecurity defenses

    Joy and Yunyun project a future of AI-connected autonomous labs that continuously design, run, and interpret experiments with humans providing high-level direction. They also emphasize societal benefits: democratizing expert knowledge, accelerating countermeasures, and improving environmental surveillance for emerging threats.
