At a glance
WHAT IT’S REALLY ABOUT
Decagon’s Transparent AI Agents Redefine Enterprise Customer Support at Scale
- Decagon CEO and co-founder Jesse Zhang discusses building enterprise-grade generative AI agents focused on customer support, already deployed at companies like BILT Rewards, Rippling, Notion, and Duolingo.
- He explains that Decagon’s edge lies less in owning core LLMs and more in orchestration, transparency, and software around the models—giving enterprises control, observability, and clear ROI.
- Zhang shares concrete impact metrics, such as BILT Rewards saving the equivalent of 65 support agents while improving customer experience and response speed across channels including emerging voice agents.
- He also outlines where AI agents will win first—use cases with quantifiable ROI and safe, incremental rollout—and where adoption will be slower due to risk, trust, and measurement challenges.
IDEAS WORTH REMEMBERING
5 ideas
Focus on use cases with clearly measurable ROI and incremental rollout.
Decagon chose customer support because you can quantify automation (deflection rates, headcount saved, CSAT/NPS) and safely start with a small traffic slice before scaling.
Transparency and control are critical for enterprise AI adoption.
Large customers demand visibility into what data the agent uses, how decisions are made, and the ability to inspect, audit, and adjust behavior rather than treat AI as a black box.
Most differentiation sits above the base models in orchestration and software.
Since everyone can access similar LLMs, value comes from how you orchestrate multiple models, encode business logic, evaluate performance, and build surrounding tooling and analytics.
Instruction following matters more than pure reasoning in many applied agents.
For customer support workflows, strict adherence to policies and SOPs is more impactful than improved quantitative reasoning, so advances in instruction-following will unlock more automation.
Voice agents are becoming viable but hinge on latency and UX design.
High-quality TTS/ASR and voice-to-voice models from vendors like OpenAI and ElevenLabs are enabling phone-based agents, but latency, streaming strategies, and conversational pacing remain key challenges.
WORDS WORTH SAVING
5 quotes
We kind of arrived at our current use case as maybe what we think is the golden use case for these AI agents, which is customer interactions, customer service.
— Jesse Zhang
Most applications nowadays are real software companies and AI models are kind of tools that everyone can use.
— Jesse Zhang
The thing that’s made us special so far is we have a huge sort of focus on transparency… it’s very important for them that the AI agent is not a black box.
— Jesse Zhang
So far, it’s around 65 agents of just headcount saved… the customer experience is also a lot snappier.
— Jesse Zhang, on BILT Rewards
For the vast majority of use cases right now, there’s not going to be real commercial adoption with the state of the current models.
— Jesse Zhang
High quality AI-generated summary created from speaker-labeled transcript.