a16z

Faster Science, Better Drugs

Can we make science as fast as software? In this episode, Erik Torenberg talks with Patrick Hsu (cofounder of Arc Institute) and a16z general partner Jorge Conde about Arc's "virtual cells" moonshot, which uses foundation models to simulate biology and guide experiments. They discuss why research is slow, what an AlphaFold-style moment for cell biology could look like, and how AI might improve drug discovery. The conversation also covers hype versus substance in AI for biology, clinical bottlenecks, capital intensity, and how breakthroughs like GLP-1s show the path from science to major business and health impact.

Timecodes:

00:00 Introduction to Accelerating Science
00:35 Welcome to the Podcast
00:45 The Moonshot: Virtual Cells and Human Biology
01:57 Challenges in Scientific Progress
02:58 Interdisciplinary Collaboration at Arc Institute
05:11 The Role of AI in Biology
10:18 Understanding Virtual Cells
22:13 Biotech and Pharma Industry Insights
27:39 Challenges in Clinical Trials
28:13 Capital Intensity and Technological Advancements
29:02 Improving Drug Development
30:12 Market Impact and Industry Trends
33:19 Future Technological Breakthroughs
38:15 AI in Drug Discovery
45:50 Investment Focus and Future Prospects
54:02 Closing Remarks and Upcoming Initiatives

Resources:

Find Patrick on X: https://x.com/pdhsu
Find Jorge on X: https://x.com/JorgeCondeBio

Stay Updated:

Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX?si=3E8B3qT9TyiwAHJ7JnaKbg
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Patrick Hsu (guest) · Jorge Conde (guest) · Erik Torenberg (host)
Sep 15, 2025 · 55m

CHAPTERS

  1. Moonshot: make science faster with virtual cells and biology foundation models

    Patrick Hsu lays out Arc Institute’s core ambition: use foundation models to build “virtual cells” that can simulate key aspects of human biology. The aim is to turn slow, physical wet-lab iteration into faster, massively parallel in‑silico experimentation that experimental biologists will actually trust and use.

  2. Why scientific progress is slow: incentives, training, and the limits of single-discipline teams

    The conversation unpacks the “multifactorial Gordian knot” behind slow science, emphasizing incentives and institutional structure. Patrick argues the system rewards individual achievement over collaboration and makes it hard for any one lab/company to be strong across many disciplines at once.

  3. Arc Institute as an organizational experiment: “collision frequency” and flagship programs

    Patrick explains why Arc is designed differently than a university: not just multiple departments, but tightly co-located teams aligned around large, shared projects. Arc’s structure is meant to increase cross-domain interaction and reduce incentive misalignment, accelerating ambitious work.

  4. Why AI moved faster than bio: evaluation is hard and experiments are the bottleneck

    Patrick contrasts language/image modeling with biology: we can quickly judge text and images, but we don’t “speak biology” natively. Biology models require lab-in-the-loop validation against ground truth, which slows iteration and makes ambiguous outputs harder to interpret.

  5. What a “virtual cell” means in practice: perturbation prediction over a cell-state manifold

    The episode defines virtual cells operationally: models that predict how cells change when perturbed (genes, drugs, environment). The practical target is to guide wet-lab decisions—what experiments to run—enabling in-silico target ID and eventually rational combination interventions.

  6. From AlphaFold to virtual cells: milestones, scaling laws, and the ‘GPT1→GPT4’ roadmap

    AlphaFold is used as the template: not perfect mechanistic simulation, but highly useful end-state prediction. Patrick situates virtual cells as early—"between GPT‑1 and GPT‑2"—and argues progress will require an integrated, full-stack approach: data, benchmarks, architectures, and scaling.

  7. What a ‘GPT‑3 moment’ in biology could look like: textbook-level evals and rediscoveries

    Rather than narrow ML metrics, Patrick argues for evaluations that resonate with biologists: can the model rediscover canonical mechanisms and landmark discoveries? Examples include predicting Yamanaka reprogramming factors and identifying differentiation drivers or drug mechanisms of action.

  8. Simulation vs understanding: incomplete measurements, RNA as a ‘mirror,’ and weather-model analogies

    The discussion addresses a key objection: biology data is incomplete and we may be missing core variables. Patrick argues models can still be useful without full mechanistic understanding—like weather prediction—while using scalable modalities (e.g., transcriptomics) as imperfect mirrors for harder-to-measure layers.

  9. Biotech business reality: why ‘AI drug discovery’ hasn’t translated cleanly to industry growth

    Patrick and Jorge explain why early AI-bio startups struggled selling “software to pharma” and why the industry still bottlenecks on making and testing drugs. With ~90% clinical trial failure rates, the field must improve both target selection and molecule correctness, while acknowledging clinical validation remains inherently slow.

  10. Clinical trials, capital intensity, and the path to better ROI: reduce cost, compress time, increase effect size

    Jorge frames what would “fix” biotech: lower capital intensity, speed early discovery, and increase effect sizes so outcomes are obvious sooner. He notes clinical timelines can’t always be compressed (survival endpoints, longevity), so improvements upstream must translate into clearer, faster clinical signals and better investment step-ups.

  11. Market impact and ambition: GLP‑1s as proof that big populations create massive value

    Patrick highlights GLP‑1s as a cultural and economic catalyst: enormous market cap creation compared to decades of biotech formation. The takeaway is that risk-avoidant strategies chasing small populations may underdeliver; better hit rates could justify pursuing large, high-impact indications.

  12. AI in drug discovery: hype vs heft, and why ‘design’ isn’t the only bottleneck

    Patrick distinguishes real traction from overhyped claims: protein-focused AI (binding/design) and pathology automation show heft, while toxicity prediction and vague ‘multimodal biology’ claims are often inflated. The core limitation is end-to-end drug development: designing, making, testing, and regulatory validation are all coupled and slow.

  13. Looking forward: discovery agents, new AI architectures, and Arc’s Virtual Cell Challenge

    Patrick responds to rapid-acceleration predictions (e.g., parallelized “discovery agents”) and notes the long-run possibility if models become reliably predictive. He also expects architectural innovation beyond transformers and highlights Arc’s concrete push to catalyze progress: an open benchmark competition with prizes to measure and drive perturbation-prediction capability.
