CHAPTERS
Trigger.dev today: SDK + platform to run reliable AI agents in your product
Matt and Eric explain Trigger.dev’s current purpose: an SDK that lets product teams add AI agents and long-running workflows to their apps, while Trigger runs them reliably. The emphasis is on execution and reliability, so developers don’t have to manage infrastructure for these agent tasks.
v1 in YC: “Zapier for developers” and the early developer-design philosophy
They revisit the YC-era positioning: a developer-first automation tool (the “Zapier for devs” meme) that resonated on Hacker News. They credit strong design and “code-first” communication—showing code immediately—as a major driver of adoption and interest.
From back-office automations to embedding workflows into customer products
Early users tended to treat Trigger as an internal automation layer (sales/marketing/biz ops), similar to Zapier. The team learned the highest-value use cases came from customers embedding background workflows directly into their product’s user experience.
Serverless created a gap: long-running work became painful
They connect the product’s direction to broader infrastructure trends: serverless is great for short request/response but poor for long-running tasks. Trigger’s core promise became filling that gap with durable execution primitives developers could rely on.
The first major pivot: v2 adoption was “okay,” but not PMF
Version two improved the embedded use case, but they still didn’t feel strong product-market fit. They saw real demand but concluded the product didn’t match the problem well enough, in part because developers still had to write messy code and manage too much themselves.
v3 breakthrough: Trigger executes the code (and customers expected that anyway)
In summer 2024 they shipped v3, moving from “SDK only” to SDK + platform + infrastructure that actually runs the tasks. A customer poll showed many users already assumed Trigger executed the jobs—making the shift both logical and aligned with expectations.
Hitting product-market fit: rapid revenue growth and charging after beta
After v3, they saw immediate traction and sustained strong growth. They started with a free beta, then began charging a couple months later, describing consistent ~30%+ month-over-month revenue growth for an extended period.
Open source strategy: permissive Apache 2 plus a clear cloud value line
They describe how open source fits the business: most teams prefer the hosted cloud offering because it removes infrastructure burden. Almost all functionality is open, while the hardest-to-operate scaling layer (Kubernetes orchestration/management) is the main differentiator in the managed product.
Customer stories and what “agent workflows” look like in practice
They share concrete examples that illustrate why durable execution matters for AI. Use cases include generating video ads (Icon.com), education workflows for teachers/students (Magic School), and coding agents that pull repos, run tools, and commit changes (Scrapybara).
Why agents need “pause/resume”: checkpointing compute for human-in-the-loop workflows
A major theme is the primitive of pausing, waiting for feedback, then resuming with full state intact. They argue the future is programmatic snapshot/restore of CPU/memory/filesystem state, avoiding complex manual state rehydration and enabling human/agent-in-the-loop workflows.
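The pause/resume primitive described above can be sketched in plain TypeScript. This is an illustration of the pattern, not the Trigger.dev API: a workflow serializes its state to a durable store when it pauses for human feedback, then a later invocation rehydrates that state and continues. The `Checkpoint` shape, the in-memory `store`, and the function names are all hypothetical.

```typescript
// Illustrative sketch (not the Trigger.dev API): a workflow that pauses
// for human feedback by checkpointing its state, then resumes later with
// that state intact.

type Checkpoint = {
  step: number;
  draft: string;
};

// Simulated durable store. In a real system this would be a database, or
// (as discussed in the episode) a snapshot of CPU/memory/filesystem state.
const store = new Map<string, string>();

function save(id: string, cp: Checkpoint): void {
  store.set(id, JSON.stringify(cp));
}

function load(id: string): Checkpoint | undefined {
  const raw = store.get(id);
  return raw ? (JSON.parse(raw) as Checkpoint) : undefined;
}

// First invocation: do some work, then "pause" by persisting a checkpoint
// and returning control (e.g. while waiting on a human reviewer).
function runUntilPause(id: string): Checkpoint {
  const cp: Checkpoint = { step: 1, draft: "initial draft" };
  save(id, cp);
  return cp;
}

// Later invocation: rehydrate the checkpoint and continue with the
// human's feedback applied, without re-running earlier steps.
function resumeWithFeedback(id: string, feedback: string): Checkpoint {
  const cp = load(id);
  if (!cp) throw new Error(`no checkpoint for workflow ${id}`);
  const next: Checkpoint = {
    step: cp.step + 1,
    draft: `${cp.draft} + ${feedback}`,
  };
  save(id, next);
  return next;
}

runUntilPause("wf-1");
const resumed = resumeWithFeedback("wf-1", "make it shorter");
console.log(resumed.step, resumed.draft);
```

The appeal of programmatic snapshot/restore is that it removes the manual serialize/rehydrate code shown here entirely: the whole process state is captured, so the workflow simply continues from where it stopped.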
“Vibe coders” vs traditional devs: the gap narrows as models and docs improve
They observed a sharp split earlier—novices struggled and support signals were obvious. With better models (Opus 4.5) and Trigger becoming more LLM-friendly (better docs, MCP server, agent-oriented materials), usage patterns have converged and the distinction is less visible.
Open source as “agent marketing”: LLMs read repos, tests, and even propose fixes
They frame LLMs as a second user persona: not just humans, but agents that choose tools. Open source increases their “footprint” on the internet, letting models inspect real code/tests and enabling customers to arrive with precise bug reports—or even patches—generated by Claude.
Hiring and shipping with agents: fewer engineers, higher leverage, new evaluation methods
Post–Opus 4.5, they revised hiring expectations: productivity per engineer is far higher, so they’ll hire more selectively. They emphasize hiring for ability to leverage AI tools and use a paid “trial day” to observe real-world output rather than LeetCode-style interviewing.
Maintaining quality (avoiding “slop”): design systems, tests, and agent-assisted review
They address the critique that agents generate low-quality code by describing guardrails: strong reusable UI components, a design system, and more tests—especially backend tests that verify success criteria for agents. Code review becomes the bottleneck, with specialized review tools and founders/design leadership ensuring UX and maintainability stay high.
Founder advice: ship early, stay close to customers, and know when to persist vs pivot
They close with YC-aligned guidance: shipping quickly is the fastest way to learn what matters and whether the problem is real. Matt adds the harder meta-skill—deciding when to keep going without PMF—grounded in personal pain, persistent customer signals, and daily customer proximity.