CHAPTERS
Giga’s wedge: product-first enterprise support agents (and why DoorDash picked them)
Harj introduces Varun and Esha and asks how Giga stands out in a crowded AI support-agent market. The founders explain that their biggest differentiator is a productized approach that can go live fast, rather than a slow, forward-deployed/consulting model.
Why traditional customer support (and many AI rollouts) fail at scale
They unpack why customer support is deceptively hard at large enterprises: high volume plus a meaningful tail of complex, multi-party issues. Many vendors can handle simple flows but break down on the hardest cases, which are the ones that matter for trust and broad adoption.
“No custom work” as a company rule: building a generalizable core product
Esha explains Giga’s internal rule: nothing is built as one-off custom code for a single customer. Instead, every feature added for a major customer becomes a core primitive usable by all customers, forcing the platform to generalize.
AI forward-deployed engineer: turning business intent into executable logic
Giga reframes the forward-deployed engineer model: the real job is translating customer business logic into code and configuration. Their bet is that AI can do much of that translation inside the product, enabling non-engineers to implement sophisticated workflows.
Python as a first-class primitive inside the platform (and why that enables breadth)
They describe a key architectural choice: Python is embedded as a first-party capability with controlled primitives for where code can be injected. This makes the system inherently general and extensible across many enterprise workflows.
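To make the idea of controlled injection points concrete, here is a minimal, purely illustrative sketch (not Giga's actual API; names like `InjectionPoint` and `register_hook` are hypothetical) of how customer-authored Python logic might be registered against a fixed set of platform-defined primitives instead of being shipped as one-off custom code.

```python
from enum import Enum
from typing import Callable, Dict, List

# Hypothetical illustration: the platform exposes a fixed set of named
# injection points where customer-authored Python is allowed to run.
class InjectionPoint(Enum):
    AFTER_INTENT = "after_intent"       # e.g., route or enrich based on detected intent
    BEFORE_ACTION = "before_action"     # e.g., fraud or geofence validation
    BEFORE_RESPONSE = "before_response" # e.g., redact PII, apply policy checks

# Registry mapping each injection point to customer-supplied callables.
_hooks: Dict[InjectionPoint, List[Callable[[dict], dict]]] = {p: [] for p in InjectionPoint}

def register_hook(point: InjectionPoint, fn: Callable[[dict], dict]) -> None:
    """Register customer business logic at a platform-defined primitive."""
    _hooks[point].append(fn)

def run_hooks(point: InjectionPoint, context: dict) -> dict:
    """Run every hook for a point, threading the conversation context through."""
    for fn in _hooks[point]:
        context = fn(context)
    return context

# Example: a customer encodes an address-change policy without forking the core product.
def require_geofence_check(context: dict) -> dict:
    if context.get("intent") == "address_change":
        context["required_checks"] = context.get("required_checks", []) + ["geofence"]
    return context

register_hook(InjectionPoint.AFTER_INTENT, require_geofence_check)
```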
Deep LLM expertise as an unfair advantage: context length, cost, and reliability
Varun and Esha discuss their background in fine-tuning and model optimization, including context-length work on Llama. This informs what they choose to automate, how they manage token-intensive workflows, and how they optimize cost/performance for enterprise deployment.
DoorDash’s “address change” case study: multi-party parallel calling in production
They walk through a live DoorDash use case involving fraud checks and geofencing that requires coordinating with both Dasher and customer. Giga’s agent can run parallel calls, verify intent, and then take an operational action (marking delivery) in real time.
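As an illustration of what "parallel calling" could look like in code, here is a minimal sketch under assumed interfaces (`call_party` and `mark_delivered` are hypothetical placeholders, not Giga's or DoorDash's APIs): the agent verifies intent with both parties concurrently before taking the operational action.

```python
import asyncio

# Hypothetical placeholders standing in for real telephony / order-system calls.
async def call_party(role: str, question: str) -> bool:
    """Place an outbound call and return whether the party confirmed."""
    await asyncio.sleep(0.1)  # stands in for the live conversation
    return True

async def mark_delivered(order_id: str) -> None:
    """Record the operational outcome once both parties are verified."""
    print(f"order {order_id}: marked delivered")

async def resolve_address_change(order_id: str) -> None:
    # Talk to the Dasher and the customer at the same time instead of serially,
    # which is what removes hold time from the multi-party case.
    dasher_ok, customer_ok = await asyncio.gather(
        call_party("dasher", "Can you confirm where the order was dropped off?"),
        call_party("customer", "Did you request this address change?"),
    )
    if dasher_ok and customer_ok:
        await mark_delivered(order_id)

asyncio.run(resolve_address_change("order-123"))
```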
Why AI can outperform humans on support calls: speed, parallelism, language coverage
Beyond cost savings, they argue AI can create a better support experience than humans in certain scenarios. Eliminating hold times, shortening resolution times, and handling multiple languages and accents all improve CSAT, especially when parallel conversations are required.
Raising $61M: scaling to meet Fortune 500 demand and expand deployments
Harj shifts to fundraising: why raise now and what the capital is for. Varun explains they have heavy inbound demand and large pilots, so the round supports hiring and delivery capacity to maintain quality at scale.
How Giga built an unusually strong enterprise pipeline: C-level deals and referrals
They attribute pipeline strength to large contract sizes and selling to senior executives. Once results are clear, referrals propagate among C-level networks, and Giga becomes part of the company’s board-level “AI strategy” narrative.
Building an enterprise-grade team: high agency, raw IQ, and outsized impact
They discuss hiring and culture as they scale from ~20 people. The founders emphasize selectivity, mission alignment, and the appeal of shipping work that affects hundreds of millions (or billions) of end users.
Founder origin story and YC pivot journey: from edtech to fine-tuning to support
They recount meeting at IIT Kharagpur, applying to YC with an education idea, and pivoting under YC’s influence. Visa issues forced them to do YC remotely initially; later, they pivoted into fine-tuning and then found customer support through customer discovery.
Why fine-tuning-as-a-service didn’t work: competing against the model curve
Esha explains their earlier fine-tuning business had traction when frontier models were expensive, but the funnel broke as OpenAI/Anthropic released cheaper, better models. They concluded they were “betting against AI” and needed a strategy that benefits from model progress.
The future: context-rich ops automation platform beyond support (plus founder advice)
They outline a broader vision: customer support is the entry point to gather context, data, and operational footing inside enterprises. Over time, Giga aims to expand into other OpEx-heavy functions (like compliance) and build a platform that enables broader enterprise optimization.