
No Priors Ep. 57 | With LangChain CEO and Co-Founder Harrison Chase
Sarah Guo (host), Harrison Chase (guest), Elad Gil (host)
LangChain CEO on building reliable AI agents, memory, and UX
Harrison Chase, CEO and co-founder of LangChain, discusses how LangChain evolved from a side project into a core toolkit for building LLM-powered applications, alongside its evaluation platform LangSmith.
He explains how the framework balances stability with rapid ecosystem change, especially around abstractions, chaining runtimes, and emerging patterns like graphs and controlled state machines for agents.
The conversation covers key challenges to performant agents—planning, memory, UX, and evaluation—plus how RAG, long context windows, fine-tuning, and model switching are actually playing out in production.
Chase highlights under-explored areas such as rich long-term memory, continual learning via few-shot examples, and application-layer UX as the next major frontier for AI products.
Key Takeaways
Prioritize flexible runtimes and low-level abstractions to survive ecosystem churn.
LangChain keeps integrations and high-level patterns evolving while investing heavily in stable, low-level chaining protocols (LangChain Expression Language, LangGraph) so teams can customize and adapt as models and APIs change.
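The value of a low-level chaining protocol can be illustrated with a toy pipe-composition sketch. This is not LangChain Expression Language itself — the `Runnable` class and its methods here are hypothetical stand-ins for the composition idea, with a fake LLM in place of a real model call:

```python
# Toy illustration of a pipe-style chaining protocol, in the spirit of
# LangChain Expression Language. Names here are hypothetical, not the real API.

class Runnable:
    """A composable step: `a | b` builds a two-step chain."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: run self, then feed its output into `other`.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda topic: f"Explain {topic} in one sentence.")
fake_llm = Runnable(lambda text: f"LLM says: {text}")
parse = Runnable(lambda out: out.upper())

chain = prompt | fake_llm | parse
print(chain.invoke("RAG"))  # prints "LLM SAYS: EXPLAIN RAG IN ONE SENTENCE."
```

Because every step exposes the same small interface, any stage can be swapped out — the stability Chase describes — while the patterns built on top keep churning.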
Treat agents as controlled state machines, not unconstrained autonomous loops.
In practice, robust agents look less like endless Auto-GPT loops and more like structured graphs or state machines, encoding domain knowledge about how information flows, which drastically improves reliability.
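A minimal sketch of this pattern, assuming hypothetical node names and a toy routing signal (not a LangGraph API): the explicit edge table is where domain knowledge about information flow lives, and a hard step cap replaces the unconstrained loop.

```python
# Sketch of an agent as a controlled state machine rather than an open-ended
# loop. Node names and the routing rule are hypothetical.

def plan(state):
    state["steps"].append("plan")
    state["needs_tool"] = "?" in state["query"]  # toy routing signal
    return state

def call_tool(state):
    state["steps"].append("tool")
    state["needs_tool"] = False
    return state

def respond(state):
    state["steps"].append("respond")
    state["done"] = True
    return state

NODES = {"plan": plan, "tool": call_tool, "respond": respond}
# Explicit edges encode domain knowledge about how information should flow.
EDGES = {
    "plan": lambda s: "tool" if s["needs_tool"] else "respond",
    "tool": lambda s: "respond",
}

def run(query, max_steps=10):
    state = {"query": query, "steps": [], "done": False}
    node = "plan"
    for _ in range(max_steps):  # hard cap: no unbounded autonomy
        state = NODES[node](state)
        if state["done"]:
            break
        node = EDGES[node](state)
    return state

print(run("What is LangGraph?")["steps"])  # prints ['plan', 'tool', 'respond']
```

Questions route through the tool node; plain instructions go straight to the response, and every path terminates.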
Separate procedural memory from personalization and design them differently.
System-level know-how (how to use tools, how to plan) is best captured via few-shot examples or fine-tuning, while user-specific memory (preferences, history) benefits from explicit "remember/delete" calls or passive background insight extraction.
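The explicit-call style of user memory can be sketched as a small store with `remember`/`delete` operations whose contents get rendered into prompt context. Class and method names are hypothetical, not a LangChain API:

```python
# Sketch of explicit user-memory calls ("remember"/"delete"), as distinct from
# procedural memory baked into prompts. Names are hypothetical.

class UserMemory:
    def __init__(self):
        self._facts = {}

    def remember(self, key, value):
        self._facts[key] = value

    def delete(self, key):
        self._facts.pop(key, None)

    def as_context(self):
        """Render stored preferences as lines of prompt context."""
        return "\n".join(f"- {k}: {v}" for k, v in sorted(self._facts.items()))

mem = UserMemory()
mem.remember("tone", "concise")
mem.remember("language", "Python")
mem.delete("tone")
print(mem.as_context())  # only surviving facts reach the prompt
```

The passive alternative Chase mentions would replace the explicit calls with a background job that extracts insights from conversation logs; the read path (`as_context`) stays the same.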
Use observability and data flywheels to improve tool use and reasoning.
Monitoring real interactions, capturing good and bad examples, and feeding them back into few-shot prompts (and eventually models) is a pragmatic way to continually improve agent performance without heavy fine-tuning.
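The flywheel can be sketched in a few lines: log production interactions with a quality signal, then splice the positively rated ones back into the prompt as demonstrations. All names here are hypothetical; a real setup would source the logs and ratings from an observability tool such as LangSmith:

```python
# Sketch of a few-shot data flywheel: capture interactions, keep the ones users
# rated positively, and feed them back into the prompt as examples.

logged = []  # (question, answer, thumbs_up) tuples captured from production

def log_interaction(question, answer, thumbs_up):
    logged.append((question, answer, thumbs_up))

def build_few_shot_prompt(new_question, k=2):
    # Use the k most recent positively rated examples as demonstrations.
    good = [(q, a) for q, a, up in logged if up][-k:]
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in good)
    return f"{shots}\n\nQ: {new_question}\nA:"

log_interaction("What is RAG?", "Retrieval-augmented generation.", True)
log_interaction("Define agent.", "Wrong answer.", False)
prompt = build_few_shot_prompt("What is LCEL?")
print(prompt)  # good example included, bad one filtered out
```

Swapping the recency heuristic for similarity-based example selection is a natural next step, and the same curated set later becomes fine-tuning data if the application stabilizes.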
Long context windows complement but do not replace RAG and chaining.
Bigger context helps single-shot tasks and simple RAG over a few documents, but multi-document reasoning, iterative workflows, and environment interaction still require multi-step pipelines and thoughtful retrieval/indexing strategies.
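The retrieval step that long context does not eliminate can be sketched with a toy scorer: rank documents and pass only the best match downstream, rather than stuffing everything into the window. This is purely illustrative — real pipelines use embeddings and vector stores, and the corpus here is made up:

```python
# Toy retrieval step: score documents by keyword overlap and keep only the top
# match, instead of stuffing every document into the context window.

DOCS = {
    "memory": "Agent memory stores user preferences across sessions.",
    "graphs": "LangGraph models agents as graphs with cycles.",
    "eval": "LangSmith supports testing and evaluation of chains.",
}

def retrieve(query, k=1):
    q_words = set(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

print(retrieve("how do agents use graphs with cycles"))  # prints ['graphs']
```

In a multi-step pipeline this selection runs once per iteration, which is why indexing strategy still matters even as context windows grow.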
Reserve fine-tuning for very large-scale, stable applications; rely on prompts first.
Most teams Chase sees are experimenting with fine-tuning but stop short due to data, evaluation, and iteration costs; few-shot examples and better prompts offer faster iteration and often enough performance for most products.
UX and long-term memory are under-explored levers for breakthrough applications.
Chase believes the most exciting upcoming advances will come from application-layer UX that transparently surfaces agent behavior and from systems that learn continually from user interactions using techniques like few-shot optimization.
Notable Quotes
“What LangChain is has evolved over time as the entire landscape has evolved.”
— Harrison Chase
“When we see people building agents that work right now, it’s often breaking it down into a bunch of smaller components and imparting their domain knowledge about how information should flow.”
— Harrison Chase
“Memory in general is a field that’s just super, super nascent… I’m underwhelmed at the amount of really interesting stuff that’s going on there.”
— Harrison Chase
“The things that we see making it into production are something in the middle, where it’s this controlled state machine type thing.”
— Harrison Chase
“If I wasn’t doing LangChain, I’d probably start something at the application layer, and it would probably be something that really takes advantage of long-term memory.”
— Harrison Chase
Questions Answered in This Episode
How should a team decide when to move from simple chains to full agentic, graph-based workflows in their product?
What concrete UX patterns best help users understand and correct an AI agent’s intermediate steps without overwhelming them?
How can developers systematically design and evaluate different memory strategies (explicit vs. passive, procedural vs. personal) for their specific use cases?
At what scale and stability of requirements does fine-tuning begin to meaningfully outperform a carefully engineered prompt plus few-shot examples?
How might continual learning loops based on user feedback be implemented safely, without amplifying bias or locking in early mistakes?
Transcript Preview
Hi, listeners, and welcome to another episode of No Priors. Today, we're talking to Harrison Chase, the CEO and co-founder of LangChain, a popular open-source framework and developer toolkit that helps people build LLM applications. We're excited to talk to Harrison about the state of AI application development, the open-source ecosystem, and its open questions. Welcome, Harrison.
Thanks for having me. I'm excited to be here.
LangChain's a, a really unique story, and it started actually as a personal project for you. Can you talk a little bit about what, what LangChain is and what it was originally?
Yeah. Absolutely. So how I'd - how I would answer the question what LangChain is has kind of evolved over time, as has the entire landscape. LangChain, the open-source, uh, package, started, yeah, as a side project. Um, so, so my background's in ML and MLOps. I was at, I was at my previous company. I, I knew I was gonna leave. I didn't know what I was gonna do. This was in September, October of 2022. Um, and so I went to a bunch of hackathons, went to a bunch of meetups, chatted with folks that were playing around with LLMs, um, and saw some common abstractions, put it in a Python project as a just fun side project. Turned out to strike a chord, be fantastic timing, you know, ChatGPT came out, like, a month later. Um, and it's kind of evolved from there. So right now, LangChain, the company, um, there's really two main products that we have. One is the LangChain open-source packages. I'm happy to dive into that more. And then the other is LangSmith, a platform for, for testing, evaluation, monitoring, and, and all of those types of things. And so, you know, what LangChain is has evolved over time as, uh, the company's grown.
One thing that we talked about the last time we saw each other in person was just how quickly, like, the AI, um, ecosystem and research field is evolving and what it means to manage an open-source project through that. Can you talk a little bit about what you decide to keep stable and change when you both have, like, a big ecosystem of users now and, like, a very rapidly changing environment of applications and technology?
That's been a fun exercise. So, I mean, if we go back to the original version of LangChain, what it was when it came out was essentially three kind of, like, high-level implementations. Two were based on research papers, and then one was based on Nat Friedman's, like, NatBot type of agent web crawler thing. And so there was some high-level kind of, like, uh, abstractions, and then there was a few, like, integrations. So we had integrations with, I think, like, OpenAI, Cohere, and Hugging Face to start or something like that. And those two layers have kind of, like, remained. So we have, you know, 700 different integrations. We have a bunch of kind of, like, higher level chains and agents for, for doing particular things. I think the thing that we've put a lot of emphasis in, um, to your point around kind of, like, what's remained constant and what's remained, uh, and what's changed, is, like, a lower level kind of, like, abstraction and runtime for, for joining these things together. One of the things that we pretty quickly saw was that as people wanted to improve the performance, go from prototype to production, they wanted to customize a lot of these bits. And so we've invested a lot in, uh, a lower level kind of, like, chaining protocol, so LangChain expression language, and then in, in, uh, different protocol LangGraph, which is one- something we're really excited about, and that's more aimed at, uh, basically graphs that are not DAGs. So, you know, all these agents are basically running an LLM in a loop. You need cycles, um, and, and so LangGraph helps with that. And so I think what we've kind of seen is the underlying bits of, um... There's all these different integrations, and, like, you know, there's, there's LLMs, vector stores, and sometimes they change, right? When chat models came out, like, that was a, that's a very big change in the API interface, and so we had to add a new abstraction for that. 
But those have, especially over the past few months, remained relatively stable. Um, we've invested a lot in this underlying runtime, which emphasizes a few things, uh, streaming, structured outputs, and, and the importance of those has remained relatively stable. But then the way that you put things together and the kind of, like, patterns, um, for building things has definitely evolved over time, from, like, simple chains, to complex chains, to then these kind of, like, autonomous agents, to now something, um, maybe in the middle of, like, complex state machines or graphs or something. And so it's really that upper layer, which is, like, the common ways to put things together, that I think we've seen the most rapid kind of, like, churn.
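The non-DAG point Chase makes — "you need cycles" because agents run an LLM in a loop — can be sketched as a graph with a back-edge and a step cap. Node names and the acceptance rule are hypothetical, not a LangGraph API:

```python
# The cycle Chase describes, sketched as a graph with a back-edge, bounded by
# a step cap. A real check node would call an evaluator or the LLM itself.

def act(state):
    state["attempts"] += 1
    state["answer"] = f"draft {state['attempts']}"
    return state

def check(state):
    state["ok"] = state["attempts"] >= 3  # toy acceptance rule
    return state

NODES = {"act": act, "check": check}
EDGES = {"act": lambda s: "check",
         "check": lambda s: "end" if s["ok"] else "act"}  # back-edge: a cycle

def run(max_steps=20):
    state, node = {"attempts": 0, "ok": False}, "act"
    for _ in range(max_steps):
        state = NODES[node](state)
        node = EDGES[node](state)
        if node == "end":
            break
    return state

print(run())  # cycles act -> check -> act until the check passes
```

A plain DAG runtime cannot express the `check -> act` edge; that is the gap the LangGraph runtime is described as filling.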