No Priors

No Priors Ep. 57 | With LangChain CEO and Co-Founder Harrison Chase

Companies are employing AI agents and co-pilots to help their teams increase efficiency and accuracy, but developing properly trained apps can require a skill set many enterprise teams don't have. This week on No Priors, Sarah and Elad are joined by Harrison Chase, the CEO and co-founder of LangChain, an open-source framework and developer toolkit that helps developers build LLM applications. In this conversation they talk about the gaps in open-source app development, what it will take to keep up with private companies, the importance of writing prompts that stay compatible with many model APIs, and why memory is so underdeveloped in this space.

Show Notes:
0:00 Introduction to LangChain
1:45 Managing an open source environment
4:30 Developing useful AI agents
10:03 Sophistication and limitations of AI app development
14:17 Switching between model APIs
17:10 Context windows, fine tuning and functionality
21:37 Evolution of AI open source environment
23:53 The next big breakthroughs

Sarah Guo (host) · Harrison Chase (guest) · Elad Gil (host)
Mar 27, 2024 · 27m

At a glance

WHAT IT’S REALLY ABOUT

LangChain CEO on building reliable AI agents, memory, and UX

  1. Harrison Chase, CEO and co-founder of LangChain, discusses how LangChain evolved from a side project into a core toolkit for building LLM-powered applications, alongside its evaluation platform LangSmith.
  2. He explains how the framework balances stability with rapid ecosystem change, especially around abstractions, chaining runtimes, and emerging patterns like graphs and controlled state machines for agents.
  3. The conversation covers key challenges to performant agents—planning, memory, UX, and evaluation—plus how RAG, long context windows, fine-tuning, and model switching are actually playing out in production.
  4. Chase highlights under-explored areas such as rich long-term memory, continual learning via few-shot examples, and application-layer UX as the next major frontier for AI products.

IDEAS WORTH REMEMBERING

5 ideas

Prioritize flexible runtimes and low-level abstractions to survive ecosystem churn.

LangChain keeps integrations and high-level patterns evolving while investing heavily in stable, low-level chaining protocols (LangChain Expression Language, LangGraph) so teams can customize and adapt as models and APIs change.
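The low-level chaining idea can be sketched in a few lines. This is an illustration of the composition pattern only, not the real LangChain Expression Language API: small runnable units are piped together with the `|` operator, so individual stages (prompt, model, parser) can be swapped as the ecosystem changes. The stage functions here are toy stand-ins.

```python
# Minimal sketch of pipe-composable runnables, in the spirit of LCEL.
# Hypothetical mini-implementation; the real API lives in langchain_core.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Composing two runnables yields a new runnable that pipes
        # the output of the first into the second.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Three toy stages standing in for prompt template, model, and output parser.
prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
model = Runnable(lambda text: {"content": text.upper()})  # fake LLM call
parser = Runnable(lambda msg: msg["content"])

chain = prompt | model | parser
print(chain.invoke("bears"))  # TELL ME A JOKE ABOUT BEARS
```

Because each stage only agrees on an `invoke` contract, replacing the fake model with a real API call does not disturb the rest of the chain.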

Treat agents as controlled state machines, not unconstrained autonomous loops.

In practice, robust agents look less like endless Auto-GPT loops and more like structured graphs or state machines, encoding domain knowledge about how information flows, which drastically improves reliability.
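A minimal sketch of that structured-graph shape, under assumed toy nodes (a real graph, e.g. in LangGraph, would call an LLM at each node): each node does one step and names the next node, the transition table encodes how information should flow, and a step budget keeps the agent from looping forever.

```python
# Sketch of an agent as a controlled state machine rather than an
# open-ended autonomous loop. Node names and logic are hypothetical.

def plan(state):
    state["steps"] = ["search", "summarize"]
    return "act"

def act(state):
    step = state["steps"].pop(0)
    state.setdefault("results", []).append(f"did:{step}")
    return "act" if state["steps"] else "finish"

def finish(state):
    state["answer"] = ", ".join(state["results"])
    return None  # terminal node

NODES = {"plan": plan, "act": act, "finish": finish}

def run_agent(state, start="plan", max_steps=10):
    node = start
    # Bounding the number of transitions is part of the "controlled" design.
    for _ in range(max_steps):
        node = NODES[node](state)
        if node is None:
            return state
    raise RuntimeError("agent exceeded step budget")

state = run_agent({})
print(state["answer"])  # did:search, did:summarize
```

The reliability gain comes from the explicit transition table: the agent can only move along edges the developer has sanctioned.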

Separate procedural memory from personalization and design them differently.

System-level know‑how (how to use tools, how to plan) is best captured via few-shot examples or fine-tuning, while user-specific memory (preferences, history) benefits from explicit ‘remember/delete’ calls or passive background insight extraction.
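The split can be made concrete with a small sketch. All names here are hypothetical: procedural know-how is held as stable few-shot examples, while user-specific facts live in an explicit store with `remember`/`delete` operations, and the two are combined only at prompt-assembly time.

```python
# Sketch separating procedural memory (few-shot examples) from
# user-specific memory (explicit remember/delete store). Illustrative only.

PROCEDURAL_EXAMPLES = [  # teaches the "how"; changes rarely
    ("Book a flight", "call search_flights, then call book"),
]

class UserMemory:
    """Per-user facts with explicit write and delete operations."""

    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        self.facts[key] = value

    def delete(self, key):
        self.facts.pop(key, None)

    def render(self):
        return "; ".join(f"{k}={v}" for k, v in self.facts.items())

mem = UserMemory()
mem.remember("seat", "aisle")
mem.remember("airline", "ACME Air")
mem.delete("airline")  # user asked us to forget this

# Prompt assembly combines the two memory types without mixing their storage.
prompt = f"Examples: {PROCEDURAL_EXAMPLES}\nUser facts: {mem.render()}"
print(mem.render())  # seat=aisle
```

Keeping the stores separate means personalization can be edited or wiped per user without touching the system-level examples.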

Use observability and data flywheels to improve tool use and reasoning.

Monitoring real interactions, capturing good and bad examples, and feeding them back into few-shot prompts (and eventually models) is a pragmatic way to continually improve agent performance without heavy fine-tuning.
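The flywheel loop can be sketched as log, filter, and re-prompt. This is an illustration only; real systems would log to a tracing platform such as LangSmith, and the `record`/`build_few_shot_prompt` helpers are hypothetical.

```python
# Sketch of the observability flywheel: capture real interactions,
# keep the ones marked good, and fold them back in as few-shot examples.

logged = []

def record(question, answer, good):
    logged.append({"q": question, "a": answer, "good": good})

def build_few_shot_prompt(task, k=2):
    # The most recent interactions marked good become few-shot examples;
    # bad ones are retained in the log for analysis but never prompted.
    examples = [e for e in logged if e["good"]][-k:]
    shots = "\n".join(f"Q: {e['q']}\nA: {e['a']}" for e in examples)
    return f"{shots}\nQ: {task}\nA:"

record("2+2?", "4", good=True)
record("capital of France?", "Rome", good=False)  # excluded from prompts
record("3*3?", "9", good=True)

print(build_few_shot_prompt("5-1?"))
```

The same curated examples can later seed a fine-tuning set, so the cheap prompt-level loop and the heavier model-level loop share one data pipeline.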

Long context windows complement but do not replace RAG and chaining.

Bigger context helps single-shot tasks and simple RAG over a few documents, but multi-document reasoning, iterative workflows, and environment interaction still require multi-step pipelines and thoughtful retrieval/indexing strategies.
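The multi-step shape looks roughly like retrieve, then synthesize, even when the context window could technically hold more. A minimal sketch, with assumptions labeled: the corpus, the naive keyword-overlap scorer, and the concatenating "synthesis" step are all toy stand-ins (real pipelines use embedding retrieval and hand the context to an LLM).

```python
# Sketch of a minimal two-step retrieve-then-answer pipeline.
# Corpus, scoring, and synthesis are illustrative placeholders.

DOCS = {
    "doc1": "LangChain provides chaining and agent runtimes.",
    "doc2": "Bears hibernate in winter.",
    "doc3": "LangSmith is an evaluation and observability platform.",
}

def retrieve(query, k=2):
    # Naive keyword-overlap scoring; real systems use embeddings.
    words = set(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def answer(query):
    # Step 1: retrieve only the relevant slices of the corpus.
    hits = retrieve(query)
    # Step 2: synthesize over the retrieved context. A real pipeline
    # would pass this context to an LLM rather than concatenating.
    context = " ".join(DOCS[h] for h in hits)
    return f"[context: {context}] -> answer to: {query}"

print(answer("What is LangSmith evaluation?"))
```

Indexing and retrieval stay useful at any context size because they decide *which* tokens are worth spending the window on.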

WORDS WORTH SAVING

5 quotes

What LangChain is has evolved over time as the entire landscape has evolved.

Harrison Chase

When we see people building agents that work right now, it’s often breaking it down into a bunch of smaller components and imparting their domain knowledge about how information should flow.

Harrison Chase

Memory in general is a field that’s just super, super nascent… I’m underwhelmed at the amount of really interesting stuff that’s going on there.

Harrison Chase

The things that we see making it into production are something in the middle, where it’s this controlled state machine type thing.

Harrison Chase

If I wasn’t doing LangChain, I’d probably start something at the application layer, and it would probably be something that really takes advantage of long-term memory.

Harrison Chase

Evolution and architecture of LangChain and LangSmith
Designing and stabilizing abstractions in a rapidly changing LLM ecosystem
Current limitations and design patterns for AI agents (planning, reflection, state machines)
Concepts and implementations of memory: procedural vs. personalization
RAG, context windows, and how long context affects application design
Fine-tuning versus prompting and few-shot examples in real-world deployments
Future directions: continual learning, long-term memory, and application-layer UX

High quality AI-generated summary created from speaker-labeled transcript.
