
No Priors Ep. 25 | With Palantir's CTO Shyam Sankar

Can frontiers as high-stakes as next-generation, AI-enabled defense depend on something as mundane as data integration? Can large language models work in such mission-critical applications? In this episode of No Priors, hosts Sarah Guo and Elad Gil are joined by Shyam Sankar, Chief Technology Officer of Palantir Technologies and inventor of its famous Forward Deployed Engineering force. An early employee and longtime leader, Shyam explains the evolution of technology at Palantir, from ontology and data integration to process visualization and now AI. He describes how a company of Palantir's scale has adopted foundation models and shares customer stories. They discuss the case for open-source AI models fine-tuned on private, domain-specific data, and the challenges of anchoring AI models in reality.

00:00 - Palantir's CTO Discusses Company's Background
10:17 - Apollo and AIP
20:25 - Future of UI and Application Integration
28:29 - Investment in Co-Pilot Models and Education
31:22 - Exploring AI Implementation in Various Industries
38:19 - Operational and Analytical Workflows in Context

Sarah Guo (host) · Shyam Sankar (guest) · Elad Gil (host)
Jul 26, 2023 · 39m

AT A GLANCE

WHAT IT’S REALLY ABOUT

Palantir CTO on AIP, Ontologies, and AI’s Operational Future

  1. Palantir CTO Shyam Sankar traces his path from early employee to architect of the company’s forward‑deployed engineering model and core platforms Gotham, Foundry, and Apollo.
  2. He explains how Palantir’s new AIP platform brings large language models into secure, private environments and couples them with ontologies—structured representations of an enterprise—to build reliable, high‑impact AI copilots.
  3. Sankar argues that the real value in AI will accrue at the application and integration layer, not in the underlying models, and that 'chat' interfaces are too limiting compared to AI that directly manipulates application state.
  4. Concrete examples span defense, manufacturing, and healthcare, illustrating how AI can shift workflows from surfacing alerts to proposing executable actions while maintaining human oversight and trust.

IDEAS WORTH REMEMBERING

5 ideas

Treat AI as a stochastic system, not deterministic code.

Sankar emphasizes that LLMs are 'stochastic genies'—powerful but probabilistic—so engineers must invest in evals, telemetry, and health checks rather than relying on traditional, spec-driven unit tests.
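A minimal sketch of what "evals instead of unit tests" can look like: rather than asserting one exact output, sample the model repeatedly and score the fraction of outputs satisfying a property. The `flaky_model` stub and threshold below are invented for illustration, not anything from Palantir's stack.

```python
import random

def flaky_model(prompt: str) -> str:
    """Stand-in for a stochastic LLM call: same prompt, varying phrasing."""
    return random.choice(["Paris", "PARIS", "paris is the capital"])

def passes(output: str) -> bool:
    # Grade by a semantic property, not an exact string match.
    return "paris" in output.lower()

def eval_pass_rate(model, prompt: str, n: int = 100) -> float:
    """Score a probabilistic system by sampling many outputs."""
    return sum(passes(model(prompt)) for _ in range(n)) / n

rate = eval_pass_rate(flaky_model, "What is the capital of France?")
assert rate >= 0.95  # health check: ship only if the pass rate clears a bar
```

The same harness doubles as production telemetry if you log pass rates over time instead of asserting once.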

Ground LLMs in a rich ontology to make them useful and safe.

Palantir’s long-standing work on ontologies and digital twins gives LLMs compressed, semantically meaningful context about an enterprise, enabling more reliable, domain-specific tools and workflows without modifying the base models.
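One way to picture "ontology as compressed context": serialize a typed schema of enterprise objects and their links into prompt text, so the model reasons over real entities rather than free-form descriptions. The mini-ontology and function names here are hypothetical, not Palantir's actual API.

```python
# Hypothetical mini-ontology: object types, their properties, and typed links.
ONTOLOGY = {
    "Plant": {"properties": ["name", "capacity"], "links": {"produces": "Product"}},
    "Product": {"properties": ["sku", "demand"], "links": {}},
}

def ontology_context(object_type: str) -> str:
    """Render one object type as compact prompt context for an LLM."""
    spec = ONTOLOGY[object_type]
    links = ", ".join(f"{k} -> {v}" for k, v in spec["links"].items()) or "none"
    return f"{object_type}: properties={spec['properties']}, links={links}"

print(ontology_context("Plant"))
# e.g. "Plant: properties=['name', 'capacity'], links=produces -> Product"
```

Because the context is generated from the schema, it stays correct as the ontology evolves, with no change to the base model.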

Shift from chatbots to AI that directly manipulates application state.

Instead of returning text answers, Palantir aims for prompts or intents that produce JSON/DSL changes to underlying applications (e.g., resource allocations, scenarios, courses of action), making AI an invisible but powerful UI and integration layer.
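The "AI manipulates application state" idea can be sketched as a model emitting a structured change (JSON here) that the application validates and applies, instead of emitting chat text. The operation name and state shape are invented for illustration.

```python
import json

app_state = {"allocations": {"plant_a": 100, "plant_b": 50}}

def apply_intent(state: dict, intent_json: str) -> dict:
    """Apply a structured change proposed by a model. A real system would
    validate the change against the ontology before applying it."""
    change = json.loads(intent_json)
    if change["op"] != "set_allocation":
        raise ValueError(f"unknown op: {change['op']}")
    new_state = {**state, "allocations": {**state["allocations"]}}
    new_state["allocations"][change["target"]] = change["value"]
    return new_state

# The model's output is a machine-applicable intent, not prose:
proposed = '{"op": "set_allocation", "target": "plant_a", "value": 120}'
new_state = apply_intent(app_state, proposed)
```

Returning a new state (rather than mutating in place) makes the proposal reviewable and reversible before it takes effect.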

Focus AI on workflows where upside is high and errors are no-ops.

Early high‑value use cases are those where correct AI output creates large gains, but incorrect output can be safely ignored or reviewed—such as suggested courses of action, claims triage, or operational recommendations with human approval gates.
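The approval-gate pattern can be made concrete in a few lines: the model proposes, a human disposes, and a rejected suggestion is literally a no-op. This workflow shape is illustrative, not a specific Palantir feature.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    action: str
    confidence: float

def review(suggestion: Suggestion, approved: bool) -> Optional[str]:
    """Human gate: a wrong suggestion that gets rejected costs nothing."""
    if approved:
        return f"executed: {suggestion.action}"
    return None  # safely ignored — the error is a no-op

s = Suggestion(action="expedite order #42", confidence=0.7)
review(s, approved=False)  # no-op
review(s, approved=True)   # action proceeds
```

The asymmetry is the point: upside when the model is right, near-zero downside when it is wrong, which is what makes these workflows good early deployments.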

Use ensembles and multiple models to build trust and robustness.

Palantir and its customers often compare outputs from several 'mad genius' models, especially when those outputs are structured, which allows statistical reasoning, consensus-building, and safer deployment in high-consequence environments.
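Structured outputs are what make ensembling statistically tractable: you can vote field-by-field across models rather than eyeballing prose. A minimal consensus sketch, with invented field names and quorum threshold:

```python
from collections import Counter
from typing import Optional

def consensus(outputs: list[dict], field: str, quorum: float = 0.6) -> Optional[str]:
    """Majority vote over one structured field across several model outputs;
    return a value only if enough models agree."""
    votes = Counter(o[field] for o in outputs)
    value, count = votes.most_common(1)[0]
    return value if count / len(outputs) >= quorum else None

outputs = [{"risk": "high"}, {"risk": "high"}, {"risk": "low"}]
consensus(outputs, "risk")              # 2/3 agree -> "high"
consensus(outputs, "risk", quorum=0.8)  # no 80% consensus -> None
```

In a high-consequence setting, the `None` branch matters as much as the agreement branch: disagreement among the "mad genius" models is itself a signal to escalate to a human.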

WORDS WORTH SAVING

5 quotes

It’s a stochastic genie… it’s neither human thought nor traditional computer science.

Shyam Sankar

Prompts are for developers. Chat is a massively limiting interface.

Shyam Sankar

An LLM is not gonna know anything about orbital simulation or weaponeering… but with the right tool, it’s gonna do that quite excellently.

Shyam Sankar

We had accidentally spent the last 20 years really thinking hard about dynamic ontologies… and the LLMs were just waiting for something like ontology.

Shyam Sankar

Instead of surfacing alerts, we want to surface solutions.

Shyam Sankar

TOPICS

  - Shyam Sankar’s background, role evolution, and forward-deployed engineering at Palantir
  - Overview of Palantir platforms: Gotham, Foundry, Apollo, and AIP
  - Ontologies and digital twins as the foundation for enterprise AI
  - Designing AI copilots, tools, and evaluation stacks for stochastic LLMs
  - Agent architectures, state machines, and non-chat user interfaces
  - Applications in defense, manufacturing, and healthcare (clinical and operational)
  - Strategic positioning: commoditized models vs. differentiated application and integration layer

High-quality AI-generated summary created from a speaker-labeled transcript.
