No Priors Ep. 130 | With OpenEvidence Founder Daniel Nadler

No Priors · Sep 5, 2025 · 44m

Sarah Guo (host), Elad Gil (host), Daniel Nadler (guest), Narrator

OpenEvidence’s role as high-stakes clinical decision support for physicians
Semantic search over biomedical literature and evidence routing vs. answer generation
Explosion and half-life of medical knowledge and implications for medical education
Physician vs. patient access, ambiguity in evidence, and patient handouts
Treating doctors as consumer users and bypassing traditional healthcare gatekeepers
AI as a distributed curbside consult and equity in under-resourced healthcare settings
Founder psychology, motivation, and recruiting highly driven knowledge workers

In this episode of No Priors, hosts Sarah Guo and Elad Gil talk with OpenEvidence founder Daniel Nadler about how his AI-powered platform became doctors’ operating system for clinical decisions.

AI-Powered OpenEvidence Becomes Doctors’ Operating System For Clinical Decisions

Daniel Nadler explains how OpenEvidence rapidly became the dominant clinical decision support tool for U.S. physicians by treating doctors as consumer internet users and focusing narrowly on high‑stakes medical decisions. The product semantically interprets complex patient scenarios and routes doctors to precise snippets in top-tier medical literature rather than providing opaque “answers.” Nadler discusses the explosion of biomedical knowledge, why medical education must invert toward continuous learning, and how AI can act as a “curbside consult” to extend specialist-level care into under-resourced areas. He also reflects on patient access to information, cultural determinants of health, and the psychology and motivation behind building impact-driven AI products for knowledge workers.

Key Takeaways

Narrowly target the highest-stakes, hardest problems where AI adds clear value.

OpenEvidence focuses on high-stakes clinical decision support—where a single wrong choice can seriously harm patients—rather than lower-stakes tasks like paperwork or scribing, making its value proposition obvious and adoption urgent.

Design AI tools as semantic routers to trusted sources, not opaque oracles.

By deeply understanding complex clinical queries and surfacing specific snippets from Phase III trials and guidelines—with citations as first-class citizens—OpenEvidence positions itself as a search engine that can be audited, not an answer engine demanding blind trust.

Treat expert knowledge workers as consumers with direct, bottom-up access.

Letting doctors simply download a free app and adopt it individually, instead of selling only through hospital administrators, broke a long-standing gatekeeper model and led to consumer-style viral growth among physicians.

Build for a world where domain knowledge doubles faster than humans can absorb it.

With top-tier medical literature doubling roughly every five years (or faster by some measures), traditional “front-loaded” medical school is obsolete; products must assume continuous education and help clinicians keep up without dedicating hours a day to reading.
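A five-year doubling time implies simple exponential growth, which makes the scale of the problem concrete. A back-of-the-envelope sketch (only the doubling time comes from the episode; the 20-year career span is a hypothetical):

```python
# If top-tier medical literature doubles every 5 years, the knowledge a
# physician graduated with becomes a shrinking fraction of the total.
DOUBLING_TIME_YEARS = 5  # figure cited in the episode

def growth_factor(years: float, doubling_time: float = DOUBLING_TIME_YEARS) -> float:
    """How many times larger the literature is after `years`."""
    return 2 ** (years / doubling_time)

# A physician 20 years out of medical school (hypothetical career span):
factor = growth_factor(20)
share_known = 1 / factor  # fraction of today's literature that existed at graduation
print(f"Literature is {factor:.0f}x larger; graduation-era knowledge is {share_known:.0%} of it")
# → Literature is 16x larger; graduation-era knowledge is 6% of it
```

Under these assumptions, most of what a mid-career physician needs to know was published after they finished their front-loaded training — the arithmetic behind the "inverted" education argument.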

Scope your users to match the epistemic risk: start with professionals, not laypeople.

Limiting OpenEvidence to physicians allows the system to surface ambiguous or conflicting evidence safely, relying on trained MDs to interpret nuance, while serving patients indirectly via doctor-generated handouts and explanations.

Use AI to simulate ‘curbside consults’ and distributed decision-making at scale.

By acting like a panel of expert colleagues available anywhere—including rural and under-resourced regions—AI tools can partially replicate interdisciplinary case reviews without the cost and scarcity of multiple in-person specialists.

When recruiting, optimize for internal propulsion, not just raw intelligence.

Nadler emphasizes that extraordinary output correlates only moderately with being “freakishly smart”; the real differentiator is deep, often unconscious motivation, allowing leaders to manage less and instead channel already-driven people toward hard problems.

Notable Quotes

In about 18 months, it's become the operating system for clinical knowledge in the United States.

Daniel Nadler

The golden age of biotechnology is the dark ages of physician burnout because it's just impossible to keep up with all the new drugs.

Daniel Nadler

They know that they're not gonna get an answer from OpenEvidence. They're going to get a routing to a source that answers the question.

Daniel Nadler

We did something that had never been done before ever, which is we treated [doctors] as consumers and as people that could go onto the app store and download a free app and start using it.

Daniel Nadler

There is only a moderate correlation between freakishly smart and output… you have to find people that have some propulsion system.

Daniel Nadler

Questions Answered in This Episode

How might OpenEvidence’s physician-only model evolve if regulators or patients demand more direct consumer access to AI-driven medical information?

What governance or validation frameworks should exist to ensure that AI-powered clinical decision support tools remain aligned with evolving medical standards and reduce, rather than entrench, medical errors?

Could the “treat expert workers as consumers” playbook extend successfully to other conservative professions like law, finance, or government, and what frictions would be different from medicine?

As medical education inverts toward lifelong learning, how should licensing, credentialing, and residency structures change to reflect continuous, AI-augmented knowledge acquisition?

To what extent can AI-driven tools realistically close equity gaps in rural and under-resourced healthcare settings without simultaneously addressing structural issues like specialist shortages and reimbursement models?

Transcript Preview

Sarah Guo

Daniel, thanks for doing this.

Daniel Nadler

Happy to be here.

Sarah Guo

So, uh, give us a sense of this incredibly viral sensation that has been OpenEvidence, uh, in terms of what type of, um, coverage it has of American doctors today.

Daniel Nadler

As much as we would like to think that it's going especially well for us, I would sort of say as a qualifying point that, um, in all of the sub-industries of AI, you're, you're seeing an acceleration and compression, right? So the, the adoption cycles, even outside of OpenEvidence, before we get to OpenEvidence, in other fields of knowledge where encoding and so on are hyper compressed, right? It used to take, you know, half a decade or a decade for something to become standard, and now it seems to happen in two years or, or a year. So the same thing's happened with OpenEvidence. In about 18 months, it's become the operating system for clinical knowledge in the United States. Uh, it is used something like 20 times more than the next most used platform of any kind in our specific segment, which is high stakes clinical decision support for doctors. So high stakes clinical decision support for doctors is a specific category of medicine. It's distinct from, say, paperwork, or it's distinct from scribing. Um, those things are, you know, part of the workflow of being a doctor, uh, but the stakes and the consequences, uh, are different. Um, if you get it wrong, you can go back and do it again. Uh, that's not the case with a patient. Uh, you have to get it right, and you have one shot to get it right. And so clinical decision making, uh, of which clinical decision support, uh, is in service of, is unquestionably the highest stakes area of medicine. We're probably the only company working at the tip of that s- spear. Most people have self-selected themselves out of the problem of high stakes clinical decision making, uh, certainly through an AI lens, um, because they view it as ambitious.

Elad Gil

And could you explain OpenEvidence? Because I think fundamentally, it's about taking information and then translating that into specific either recommendations or diagnoses for a patient. Can you tell us more about how that works?

Daniel Nadler

Yes. One way to sort of simplify it down is at its foundation, it's a search problem, but it's a very semantic search problem. Uh, so most search traditionally works with keywords, right? So like, you know, flights to Barcelona or hotels in Barcelona. Most of the, you know, most of the keywords there can be captured in, like, a couple of words, and certainly in a sentence. And that's sort of traditional Google search. Even if you were to think about clinical decision support as a search problem, simply describing your search query, if you want to think about it that way, usually takes many sentences. So an example I like to give is you have a 44-year-old female patient, she has moderate to severe psoriasis. That's the red stuff on your skin. Um, y- you know, you're a dermatologist. That's so far, so simple. You would just prescribe one of the many creams you see commercials for on television. Except, uh, she has, um, MS. Uh, so now it gets interesting because you want to treat her psoriasis, um, but you don't want to make the MS worse. And you are not a neurologist, you're a dermatologist, so neurology is not your specialty. Um, but you don't want to go refer her to a neurologist because you want to treat her psoriasis and, and if you just keep referring people in circles, medicine never happens. From the ether, you might have heard as a dermatologist that the new classes of psoriasis treatments, um, which are biologics, they're IL-17 inhibitors and IL-23 inhibitors, might have some interactivity, uh, with the neurological dimension of a patient's condition. That's about all you know. Um, you didn't learn this in medical school because IL-23s were FDA approved in 2019, right? Uh, so one of the great themes of OpenEvidence is that this sort of golden age of biotechnology is sort of the dark ages of physician burnout because it's just impossible to keep up with all the new drugs and all the new mechanisms of action and so on. 
So y- you know, it was approved in 2019, you might have graduated medical school in 2005, right? (laughs) So y- you didn't cover it in medical school and that's it. That's kind of, that's what you know. So your question then is, you know, for a 44-year-old female patient with moderate to severe psoriasis, is an IL-17 inhibitor or an IL-23 inhibitor more appropriate and more safely tolerated with respect to not aggravating the MS? Now that's, that's not an academic question. Um, that's a very consequential question. IL-17 inhibitors will actually make the MS worse. Uh, IL-23 inhibitors are safe and well-tolerated in case of MS. Th- that's an example of where medicine can go wrong because even five or 10 years ago, um, either you're referring that person to a neurologist, in which case you're just getting r- referrals in circles and medicine is not happening, or unfortunately, what would more likely happen is they would just 50/50, and that MS might be aggravated. And, y- you know, it's well-known and it's been often repeated that medical error is the third leading cause of death in the United States after heart disease and cancer. But even that kind of, that statistic kind of understates it because that's just looking at death, right? In the case of my, in my example, um, this patient is not gonna die as a result of taking an IL-17 inhibitor. She's going to have a, a relapse of MS. And so it's not just that medical error historically was a leading cause of death, it's that, uh, for as many people as died from medical error, probably a factor of 10 to 100 as many people had a, a comorbidity or condition that became aggravated and got worse and so on. So coming back to your question, that whole string is the search query. And so you can't just do search in a traditional way where you sort of say, you know, IL-17, 'cause that's not really what the question's about. Um, nor does the physician have the time to go read book chapters on this stuff.
What you need is a semantic understanding of the, of the query in the way that another human physician would semantically understand that query. And then it's actually quite-... deterministic and simple after that. Um, once you semantically understand the query, uh, you can from the world of published biomedical literature, you can find the exact snippets in a phase 3 RCT, Randomized Controlled Trial in The New England Journal of Medicine that tested each of these things and found that one aggravated MS and the other didn't, right? So once- once you have a semantic understanding of the query, uh, the rest is fairly deterministic and it's almost a search problem. Um, but all of the- all of the juice is in, you know, connecting the very complex semantic meaning of a medical scenario to the answer where the answer might be in a phase 3 RCT in The New England Journal of Medicine and a snippet in- in, not even in the- in the abstract, but in the methodology section or in the population section.
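The mechanism Nadler describes — understand the query semantically, then deterministically route to the best-matching snippets with their citations attached — can be sketched in miniature. Everything below is illustrative: the snippets and sources are stand-ins, not real trial text, and the bag-of-words cosine scoring is a crude proxy for the learned embeddings a real system would use over the full literature.

```python
import math
import re
from collections import Counter

# Toy corpus of (citation, snippet) pairs. Illustrative stand-ins only.
SNIPPETS = [
    ("NEJM Phase III RCT",
     "IL-23 inhibitors were safe and well tolerated in psoriasis patients with multiple sclerosis"),
    ("Neurology cohort study",
     "IL-17 inhibitors were associated with worsening of multiple sclerosis symptoms"),
    ("Dermatology guideline",
     "topical corticosteroids remain first-line therapy for mild psoriasis"),
]

def vectorize(text: str) -> Counter:
    """Bag-of-words vector; a production system would use a learned embedding."""
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def route(query: str, top_k: int = 2) -> list:
    """Return the best-matching snippets with citations attached -- a routing, not a generated answer."""
    q = vectorize(query)
    ranked = sorted(SNIPPETS, key=lambda s: cosine(q, vectorize(s[1])), reverse=True)
    return ranked[:top_k]

for citation, snippet in route("psoriasis patient with multiple sclerosis: IL-17 or IL-23 inhibitor?"):
    print(f"[{citation}] {snippet}")
```

The point of the design is auditability: the output is a ranked set of sources the physician can inspect, so trust rests on the cited literature rather than on the system's own generated claims.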
