How I AI

Successfully coding with AI in large enterprises: Centralized rules, workflows for tech debt, & more

Zach Davis is a product-minded engineering leader and builder at heart, with over 12 years of experience building high-performing teams and crafting developer tools at companies like Atlassian and LaunchDarkly. In this episode, he shares how he’s helping his 100-plus-person engineering team successfully adopt AI tools by creating centralized documentation, using agents to tackle technical debt, and improving hiring processes, all while maintaining high quality standards in a mature codebase.

*What you’ll learn:*

1. How to create a centralized rules system that works across multiple AI tools instead of duplicating documentation
2. A systematic approach to using AI agents like Devin and Cursor to analyze and reduce test noise in large codebases
3. How to leverage AI tools to document your codebase more effectively by extracting knowledge from existing sources
4. Why “what’s good for humans is also good for LLMs” should guide your documentation strategy
5. A custom GPT workflow for improving interview feedback quality and coaching interviewers
6. How to approach tech debt reduction with AI by creating prioritized task lists that both humans and AI agents can work from

*Brought to you by:*

• WorkOS: Make your app enterprise-ready today
• Lenny’s List on Maven: Hands-on AI education curated by Lenny and Claire

*Where to find Zach Davis:*

• LaunchDarkly: https://www.launchdarkly.com
• LinkedIn: https://www.linkedin.com/in/zach-davis-28207195/

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Zach Davis
(02:44) Overview of AI tools used at LaunchDarkly
(04:00) The importance of having someone responsible for driving AI adoption
(05:44) Why vibe coding isn’t acceptable for enterprise development
(06:42) Making engineers successful with AI on their first attempt
(07:55) Creating centralized documentation for both humans and AI agents
(10:19) Using feature flagging rules to improve AI outputs
(12:33) Advice for getting started with rules
(14:28) Demo: Setting up Devin’s environment in a large codebase
(24:33) Devin’s plan overview
(27:55) Demo: Creating a prioritized tech debt reduction plan
(36:40) Demo: Using AI to improve hiring processes and interview feedback
(40:34) Summary of key approaches for integrating AI into engineering workflows
(42:08) Lightning round and final thoughts

*Tools referenced:*

• Cursor: https://www.cursor.com/
• Devin: https://devin.ai/
• ChatGPT: https://chat.openai.com/
• Claude: https://claude.ai/
• Windsurf: https://windsurf.com/
• Lovable: https://lovable.dev/
• v0: https://v0.dev/
• ChatPRD: https://www.chatprd.ai/
• Figma: https://www.figma.com/
• GitHub Copilot: https://github.com/features/copilot

*Other references:*

• Jest: https://jestjs.io/
• Vitest: https://vitest.dev/
• MCP: https://www.anthropic.com/news/model-context-protocol
• Confluence: https://www.atlassian.com/software/confluence

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._

Claire Vo, host · Zach Davis, guest
Jul 20, 2025 · 44m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Enterprise AI coding: centralized rules, docs, tech-debt workflows, hiring rigor

  1. Claire Vo and Zach Davis argue that “vibe coding” doesn’t translate to enterprise-scale software, where reliability, maintainability, and team-wide consistency matter.
  2. LaunchDarkly improved AI-agent performance by making the repo easier for both humans and LLMs: moving scattered docs into the repo, and centralizing guidance in a single rules source of truth that multiple tools can reference.
  3. They demonstrate practical workflows: using Devin (and its wiki/knowledge) to generate and refine documentation/rules, and using Cursor/Claude to plan and execute incremental tech-debt cleanup with checklists that both bots and humans can follow.
  4. They also show a hiring workflow where a custom GPT evaluates interview scorecard quality against a rubric and produces coachable feedback (including Slack-ready messages) to keep hiring standards consistent.
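The tech-debt workflow in point 3 can be sketched as a small script. This is a hypothetical illustration, not LaunchDarkly’s actual tooling: given per-file counts of noisy test warnings (gathered elsewhere, e.g. from a Jest or Vitest run log), it emits the kind of prioritized checklist that either an engineer or an agent like Devin could burn down item by item.

```python
# Hypothetical sketch: turn per-file test-noise counts into a prioritized
# markdown task list that both humans and coding agents can work through.
# File paths and counts below are invented for illustration.
def noise_checklist(warning_counts: dict[str, int], top_n: int = 10) -> str:
    """Render the noisiest test files as a markdown checklist, worst first."""
    ranked = sorted(warning_counts.items(), key=lambda kv: kv[1], reverse=True)
    lines = ["## Test-noise burn-down (worst first)"]
    for path, count in ranked[:top_n]:
        lines.append(f"- [ ] {path} ({count} warnings)")
    return "\n".join(lines)

plan = noise_checklist({
    "src/flags/flag-table.test.tsx": 42,
    "src/auth/login.test.ts": 7,
    "src/flags/targeting.test.tsx": 19,
})
print(plan)
```

Because the output is plain markdown, the same checklist can be pasted into a PR description for humans or handed to an agent as a work queue, with each checked box marking verified progress.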

IDEAS WORTH REMEMBERING

5 ideas

Treat AI adoption as an engineering enablement program, not an individual hack.

Zach emphasizes assigning clear responsibility to someone close to the code who experiments, identifies failure modes, and ensures skeptics get an early “aha” instead of a first-run failure.

What’s good for humans is good for LLMs—fix your repo’s “usability.”

They moved key guides (style, accessibility, frontend organization) into a repo docs directory so both engineers and tools can reliably find and apply standards without hunting through Confluence/Docs.

Centralize rules once, then point every tool to the same source of truth.

Instead of maintaining separate Claude.md / Cursor rules / tool-specific configs, they created a consolidated rules structure (e.g., .agentsrules / .agents directory) and made each tool reference it to avoid drift and duplication.
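One minimal way to implement this single-source-of-truth idea is to keep the canonical rules in one file and make each tool’s entry point a symlink to it. The sketch below is an assumption-laden illustration, not the episode’s exact setup: the `.agents/rules.md` path, the rule text, and the `CLAUDE.md`/`AGENTS.md` entry-point names are all hypothetical, and each tool’s documentation defines the real path it reads.

```python
# Hypothetical layout: one canonical rules file, with per-tool entry
# points symlinked to it so guidance cannot drift between tools.
from pathlib import Path

canonical = Path(".agents") / "rules.md"
canonical.parent.mkdir(parents=True, exist_ok=True)
canonical.write_text(
    "# Engineering rules (single source of truth)\n"
    "- 'Feature flag' means a flag in code unless a product link says otherwise.\n"
    "- Run the unit tests before opening a PR.\n"
)

# Entry-point file names are illustrative; use whatever each tool looks for.
for entry in ["CLAUDE.md", "AGENTS.md"]:
    link = Path(entry)
    if link.exists() or link.is_symlink():
        link.unlink()
    link.symlink_to(canonical)  # links resolve relative to the repo root here
```

Editing `.agents/rules.md` then updates every tool at once; nothing needs to be copied, so the per-tool files can never disagree.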

Use agents to draft rules/docs, but review with a fine-tooth comb.

Zach seeds structure using Devin’s wiki and agent output, then manually verifies correctness and trims rules to be concise, linking to deeper docs when needed.

Write rules that prevent domain ambiguity and common tool mistakes.

LaunchDarkly had cases where models confused “feature flags in LaunchDarkly product” vs “feature flags in code,” so they wrote explicit flagging guidance to improve correctness and automation success (e.g., returning links, using MCP flows).

WORDS WORTH SAVING

5 quotes

“Vibe coding is not an acceptable enterprise development strategy.”

Claire Vo

“What’s good for humans is also good for LLMs.”

Zach Davis

“Everyone was on their own journey… and that just doesn’t scale very well.”

Zach Davis

“If it’s hard to get Devin up and running, it’s probably hard for your human developers to get up and running.”

Zach Davis

“Technical debt is my favorite use case for AI to supercharge a medium-sized organization.”

Zach Davis

Topics:

• Why “vibe coding” fails in enterprises
• AI enablement ownership and change management
• In-repo documentation as LLM-readable context
• Centralized, tool-agnostic agent rules (.agentsrules)
• Devin Wiki/knowledge vs. avoiding duplicated guidance
• Rule creation via agents + human review
• Tech-debt burn-down via prioritized checklists and agents
• AI-assisted interviewer coaching via scorecard grading

High-quality AI-generated summary created from a speaker-labeled transcript.
