How I AI: Successfully coding with AI in large enterprises: Centralized rules, workflows for tech debt, & more
At a glance
WHAT IT’S REALLY ABOUT
Enterprise AI coding: centralized rules, docs, tech-debt workflows, hiring rigor
- Claire Vo and Zach Davis argue that “vibe coding” doesn’t translate to enterprise-scale software, where reliability, maintainability, and team-wide consistency matter.
- LaunchDarkly improved AI-agent performance by making the repo easier to navigate for both humans and LLMs: moving scattered docs into the repository and centralizing guidance in a single rules source of truth that multiple tools can reference.
- They demonstrate practical workflows: using Devin (and its wiki/knowledge) to generate and refine documentation/rules, and using Cursor/Claude to plan and execute incremental tech-debt cleanup with checklists that both bots and humans can follow.
- They also show a hiring workflow where a custom GPT evaluates interview scorecard quality against a rubric and produces coachable feedback (including Slack-ready messages) to keep hiring standards consistent.
IDEAS WORTH REMEMBERING
5 ideas
Treat AI adoption as an engineering enablement program, not an individual hack.
Zach emphasizes assigning clear responsibility to someone close to the code who experiments, identifies failure modes, and ensures skeptics get an early “aha” instead of a first-run failure.
What’s good for humans is good for LLMs—fix your repo’s “usability.”
They moved key guides (style, accessibility, frontend organization) into a repo docs directory so both engineers and tools can reliably find and apply standards without hunting through Confluence/Docs.
Centralize rules once, then point every tool to the same source of truth.
Instead of maintaining separate Claude.md files, Cursor rules, and other tool-specific configs, they created a consolidated rules structure (e.g., an .agents rules directory) and pointed each tool at it to avoid drift and duplication.
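The consolidation described above can be sketched with a symlink and a small pointer file. This is a minimal illustration of the pattern, not LaunchDarkly's actual setup; the paths (.agents/rules.md, CLAUDE.md, .cursor/rules/) and the rule text are assumptions.

```shell
# Sketch of a "single source of truth" rules layout.
# All paths and file names here are illustrative assumptions.
mkdir -p .agents .cursor/rules

# 1. Keep every rule in one canonical file.
cat > .agents/rules.md <<'EOF'
# Engineering rules (canonical copy -- edit only this file)
- Follow the style guide in docs/style-guide.md.
- "Feature flag" means a flag defined in code unless the LaunchDarkly
  product UI is named explicitly.
EOF

# 2. Claude Code looks for CLAUDE.md; a symlink avoids a drifting copy.
ln -sf .agents/rules.md CLAUDE.md

# 3. Cursor reads files under .cursor/rules/; reference the canonical
#    file there rather than duplicating its contents.
printf 'All project rules live in @.agents/rules.md.\n' > .cursor/rules/shared.mdc
```

The design choice is that each tool still finds a file where it expects one, but there is only one file to review and update, so the per-tool copies cannot drift apart.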
Use agents to draft rules/docs, but review with a fine-tooth comb.
Zach seeds structure using Devin’s wiki and agent output, then manually verifies correctness and trims rules to be concise, linking to deeper docs when needed.
Write rules that prevent domain ambiguity and common tool mistakes.
LaunchDarkly hit cases where models confused feature flags in the LaunchDarkly product with feature flags in code, so they wrote explicit flag-handling guidance to improve correctness and automation success (e.g., returning links, using MCP flows).
WORDS WORTH SAVING
5 quotes
“Vibe coding is not an acceptable enterprise development strategy.”
— Claire Vo
“What’s good for humans is also good for LLMs.”
— Zach Davis
“Everyone was on their own journey… and that just doesn’t scale very well.”
— Zach Davis
“If it’s hard to get Devin up and running, it’s probably hard for your human developers to get up and running.”
— Zach Davis
“Technical debt is my favorite use case for AI to supercharge a medium-sized organization.”
— Zach Davis
High quality AI-generated summary created from speaker-labeled transcript.