a16z

Building an AI Physicist: ChatGPT Co-Creator’s Next Venture

Scaling laws took us from GPT-1 to GPT-5 Pro, but cracking physics will require a different approach. In this episode, a16z General Partner Anjney Midha talks with Liam Fedus, former VP of post-training research and co-creator of ChatGPT at OpenAI, and Ekin Dogus Cubuk, former head of materials science and chemistry research at Google DeepMind, about their new startup Periodic Labs and its plan to automate discovery in the hard sciences.

00:00 Introduction
02:17 The Role of LLMs in Physics and Chemistry Research
03:53 What is Periodic Labs?
05:25 The Importance of Experimentation
07:44 Challenges and Goals in Physics Research
14:45 Building the Team
17:29 Scaling Laws and Physical Verification
22:36 Focus on Superconductivity
25:33 Creating a Repeatable Process for ML Systems
26:08 Balancing Commercial Viability and Scientific Goals
27:39 Periodic's Mission and Industry Applications
28:49 Integrating Diverse Expertise in the Team
29:52 Teaching LLMs to Reason in Physics and Chemistry
31:29 The Importance of Collaboration and Learning
35:38 Deploying AI in Traditional Industries
41:03 Mid Training and Its Impact on Model Performance
45:21 Collaboration with Academia and Future Directions
49:49 What Makes a Great Researcher at Periodic?

Follow Liam on X: https://x.com/LiamFedus
Follow Dogus on X: https://x.com/ekindogus
Learn more about Periodic: https://periodic.com/

Guests: Liam Fedus, Ekin Dogus Cubuk
Host: Anjney Midha
Sep 29, 2025 · 51m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Periodic Labs aims to train AI scientists with experiment-in-loop learning

  1. Periodic Labs’ core thesis is that scientific discovery requires an iterative experiment–simulation loop, so real-world experiments should become the “reward function” for training agents, not just human preference or digital verifiers.
  2. They argue current frontier models struggle in physics and chemistry because the needed data is noisy, biased toward positive results, and often missing altogether—making lab-generated high-quality (including negative) data essential.
  3. The company’s first wedge is quantum-scale materials work (solid-state physics/chemistry), using automatable powder synthesis and characterization to run fast, verifiable iteration cycles.
  4. High-temperature superconductivity is positioned as a North Star because it’s both scientifically fundamental and technically tractable (a robust phase transition), while also forcing the team to build many reusable sub-capabilities (autonomous synthesis, characterization, simulation tool use).
  5. Commercialization is framed as “copilots” for advanced-industry R&D teams (semiconductors, space, defense, manufacturing), delivered via well-scoped deployments with clear evaluations and a land-and-expand approach.

IDEAS WORTH REMEMBERING

5 ideas

In physical sciences, “ground truth” is the experiment, not the model or simulator.

Periodic’s approach treats nature as the RL environment: simulations can guide search, but experimental measurements ultimately correct simulator deficiencies and provide hard-to-game reward signals.

Scaling laws can hold while still failing the task you care about.

They distinguish the scaling curve from its evaluation distribution: in-domain internet/test benchmarks may improve predictably, while out-of-domain physics tasks can have such a shallow improvement slope that progress would be impractically slow without targeted data and objectives.
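The slope point above can be made concrete with a toy power-law calculation. This is an illustrative sketch only, not from the episode: the loss model `L(C) = a * C**(-alpha)` and all coefficients are hypothetical, chosen just to show how a shallower exponent translates into impractically large compute requirements.

```python
# Toy power-law loss curves, L(C) = a * C**(-alpha), with different
# exponents standing in for in-domain vs. out-of-domain evaluations.
# A shallower slope (smaller alpha) means reaching the same loss target
# requires vastly more compute.

def compute_to_reach(target_loss, a, alpha):
    """Solve a * C**(-alpha) = target_loss for the compute C."""
    return (a / target_loss) ** (1.0 / alpha)

# Hypothetical coefficients, chosen only for illustration.
in_domain = compute_to_reach(0.5, a=10.0, alpha=0.20)    # steeper slope
out_domain = compute_to_reach(0.5, a=10.0, alpha=0.05)   # shallow slope

print(f"in-domain compute:     {in_domain:.3g}")
print(f"out-of-domain compute: {out_domain:.3g}")
print(f"ratio: {out_domain / in_domain:.3g}")
```

With these numbers the out-of-domain curve needs roughly 10^19 times the compute to hit the same loss, which is the sense in which a scaling law can "hold" while the task you care about remains out of reach.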

Science-ready models need iteration, not just static knowledge ingestion.

They argue discovery requires repeated cycles of hypothesize → simulate → test → update; without the ability to act and learn from outcomes (including failures), models can at best reproduce literature distributions rather than reduce epistemic uncertainty.
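The hypothesize → simulate → test → update cycle can be sketched as a minimal loop. Everything here is a hypothetical stand-in, not Periodic's actual system: a real pipeline would propose candidate materials, rank them with physics simulators, and score them with lab measurements; the point is only that the experiment, not the simulator, supplies the reward, and that failures are retained.

```python
import random

# Minimal sketch of the hypothesize -> simulate -> test -> update loop.
# All functions are illustrative stubs.

def hypothesize(history, n=8):
    """Propose candidate compositions (here: random scalars in [0, 1])."""
    return [random.uniform(0.0, 1.0) for _ in range(n)]

def simulate(candidate):
    """Cheap surrogate score used to prioritize which experiment to run."""
    return -(candidate - 0.6) ** 2           # the simulator's imperfect guess

def run_experiment(candidate):
    """Ground-truth measurement: noisy, and peaked somewhere the sim isn't."""
    return -(candidate - 0.7) ** 2 + random.gauss(0.0, 0.01)

history = []
for cycle in range(5):
    candidates = hypothesize(history)
    best = max(candidates, key=simulate)     # simulation guides the search
    reward = run_experiment(best)            # the experiment is the reward
    history.append((best, reward))           # update: keep negative results too

best_seen = max(history, key=lambda pair: pair[1])
print(f"best candidate so far: {best_seen[0]:.3f} (reward {best_seen[1]:.4f})")
```

The deliberate mismatch between `simulate` (peaked at 0.6) and `run_experiment` (peaked at 0.7) mirrors the claim that experimental measurements correct simulator deficiencies: only the measured rewards, accumulated in `history`, reveal where the simulator is wrong.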

The existing materials literature is an incomplete training set for discovery.

Reported properties can vary by orders of magnitude, negative results are rarely published, and context dependencies are high—so training purely on papers can yield weak predictors; a dedicated lab can systematically generate cleaner, more informative datasets.

Superconductivity is a strategic first domain because it is both inspiring and operationally practical.

As a phase transition, superconducting behavior is comparatively robust to hard-to-simulate microstructural details; it also provides a crisp success metric (raising Tc) and motivates building end-to-end autonomy across synthesis, characterization, and simulation.

WORDS WORTH SAVING

5 quotes

Ultimately, science is driven against experiment in the real world.

Liam Fedus

I'd say the objective is let's replace the reward function from math graders and code graders that we're using today.

Liam Fedus

LLMs can be very smart, but if they're not iterating on science, they won't discover science.

Ekin Dogus Cubuk

The RL environment, nature, like, is our RL environment in, in our setting.

Liam Fedus

If we could find a two hundred Kelvin superconductor, even before we make any product with it, that in itself says so much about the universe that we didn't know yet.

Ekin Dogus Cubuk

Experiment-in-the-loop RL and physically grounded rewards
Why scaling laws don’t automatically solve out-of-domain physics
Data gaps: noise, missing labels, and lack of negative results
Powder synthesis lab automation and high-throughput workflows
Superconductivity as a measurable North Star metric
Mid-training with proprietary simulation/experiment datasets
Cross-disciplinary team building (ML + experiments + simulation)
Deploying AI into slow-adopting, mission-critical industries
Academia partnerships: advisory board and grants
Tool-using agents: LLMs calling simulators and other neural nets

High-quality AI-generated summary created from the speaker-labeled transcript.
