How I AI

Gamma’s head of design on using AI to synthesize feedback and generate on-brand imagery | Zach Leach

Zach Leach, head of design at Gamma, reveals how his small team uses AI to analyze global feedback, create on-brand imagery, and maintain design quality while serving users in more than 60 countries.

*What you’ll learn:*

1. How Gamma analyzes feedback from their 60% international user base using ChatGPT’s deep research capabilities
2. How to transform hundreds of multilingual feedback items into actionable design insights
3. A simple workflow for creating on-brand imagery using Midjourney-style references
4. How to use AI to maintain brand consistency across a globally distributed product
5. The secret to removing image backgrounds instantly using Replicate
6. How to create consistent, high-quality job descriptions in minutes using AI templates

*Brought to you by:*

• WorkOS: Make your app enterprise-ready today: https://workos.com?utm_source=lennys_howiai&utm_medium=podcast&utm_campaign=q22025
• Retool: AI that’s designed for developers and built for the enterprise: https://retool.com/howiai

*Where to find Zach Leach:*

• LinkedIn: https://www.linkedin.com/in/zleach
• X: https://x.com/thisiszach

*Where to find Claire Vo:*

• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
• X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Intro
(02:42) Building the Gamma AI image editing feature
(05:25) Using ChatGPT’s deep research for feedback analysis
(09:10) How feedback was analyzed before AI tools
(10:10) Benefits of deep research vs. basic scripting
(12:40) Insights from ChatGPT’s deep research
(16:41) Demo of Midjourney workflow for creating on-brand art
(23:54) Using Replicate for background removal
(25:40) Style references (SREF) and brand consistency in Midjourney
(29:19) An AI workflow for creating consistent job descriptions
(32:27) Conclusion and final thoughts

*ChatGPT feedback prompt:*

“This is some feedback we’ve received about our AI image editing feature. I want you to analyze the feedback and find where we are doing poorly and where we are doing well. Break down for our product team what kinds of things we are doing well and why, and what kinds of things we are doing poorly and why. What do people love? What do people hate? Where can we improve?”

*Tools referenced:*

• Gamma: https://gamma.app/
• ChatGPT: https://chat.openai.com/
• Midjourney: https://www.midjourney.com/
• Midjourney Style Reference (SREF): https://docs.midjourney.com/hc/en-us/articles/32180011136653-Style-Reference
• Replicate: https://replicate.com/
• Figma: https://www.figma.com/
• Claude Projects: https://claude.ai/projects
• GPT-4o image model: https://openai.com/index/introducing-4o-image-generation/

*Other reference:*

• LaunchDarkly: https://launchdarkly.com/

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._
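In the episode the prompt above is run in the ChatGPT UI. As a rough illustration of the same idea scripted against the OpenAI API, here is a minimal sketch; the model name, CSV layout, and helper names are assumptions, not something from the episode:

```python
import csv

# The analysis prompt quoted in the show notes (abbreviated here).
ANALYSIS_PROMPT = (
    "This is some feedback we've received about our AI image editing feature. "
    "Analyze it and find where we are doing poorly and where we are doing well. "
    "What do people love? What do people hate? Where can we improve?"
)


def build_messages(feedback_rows):
    """Pack the analysis prompt plus raw (possibly multilingual) feedback
    into a chat payload. Translation is left to the model."""
    feedback_text = "\n".join(f"- {row}" for row in feedback_rows)
    return [
        {"role": "system", "content": "You are a product-feedback analyst."},
        {"role": "user", "content": f"{ANALYSIS_PROMPT}\n\nFeedback:\n{feedback_text}"},
    ]


if __name__ == "__main__":
    # Hypothetical CSV with one feedback item per row; requires
    # `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    with open("feedback.csv", newline="", encoding="utf-8") as f:
        rows = [r[0] for r in csv.reader(f) if r]
    client = OpenAI()
    resp = client.chat.completions.create(model="gpt-4o", messages=build_messages(rows))
    print(resp.choices[0].message.content)
```

Note this is a single-pass chat call; it approximates but does not replicate Deep Research, which runs a longer multi-step investigation.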

Claire Vo (host) · Zach Leach (guest)
Jun 9, 2025 · 36m · Watch on YouTube ↗

CHAPTERS

  1. Gamma’s global user base and the design challenge of multilingual feedback

    Claire and Zach set the context: Gamma serves a highly international audience, which means product feedback arrives in many languages and formats. They frame the core challenge for a small design team—staying close to customers while processing feedback at scale.

  2. Shipping AI image editing in Gamma—and instrumenting feedback loops

    Zach demos Gamma’s AI image editing feature and shows how users can request edits via chat (e.g., adding caramel drizzle). Crucially, Gamma captures user feedback on results to learn which prompts and edit types succeed or fail.

  3. The scale problem: 550 responses in a week across many languages

    Zach reveals the volume of incoming feedback—hundreds of responses in a single week—making manual review and translation unrealistic. The multilingual nature of the dataset is highlighted as a key barrier to traditional analysis.

  4. Using ChatGPT Deep Research to synthesize and translate feedback

    Zach walks through uploading the feedback file into ChatGPT Deep Research to produce a structured analysis of what’s working and what isn’t. He emphasizes the value of translations, trend extraction, and the ability to ask follow-up questions after the deep run completes.

  5. From analysis to stakeholder-ready output: generating a deck in Gamma

    After Deep Research produces findings, Zach copies the results into Gamma to quickly create a shareable presentation for product/engineering. He requests charts and data visualizations to make trends legible and discussion-ready.

  6. Before Deep Research: sampling bias and basic scripts vs. deeper insight

    Zach contrasts the new approach with the old: manually scanning a small English-only subset or writing simple keyword-based scripts. Deep Research offers richer categorization and understanding compared to brittle keyword matching or per-row single prompts.
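The “brittle keyword matching” being contrasted here might look like the following sketch (the categories and keywords are invented for illustration). Anything phrased differently from the keyword list, including all non-English feedback, falls through to “uncategorized,” which is exactly the gap Deep Research closes:

```python
# Naive keyword-based feedback tagger: the pre-Deep-Research baseline.
# An item is tagged with a category if its text contains any of that
# category's keywords; paraphrases, synonyms, and other languages are missed.
KEYWORDS = {
    "speed": ["slow", "lag", "loading"],
    "quality": ["blurry", "low quality", "artifacts"],
    "editing": ["edit", "change", "modify"],
}


def tag_feedback(items):
    tagged = {}
    for item in items:
        text = item.lower()
        cats = [
            cat
            for cat, words in KEYWORDS.items()
            if any(w in text for w in words)
        ]
        tagged[item] = cats or ["uncategorized"]
    return tagged
```

For example, `tag_feedback(["too slow", "La imagen no cambió"])` tags the first item as `speed` but dumps the Spanish complaint (which is about an edit failing) into `uncategorized`.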

  7. What the analysis surfaced: model tiers, multi-step edit failures, and UX ideas

    They discuss concrete insights uncovered by the synthesized feedback, including differences between paid and free model experiences and a major usability issue with multi-step edits. Zach describes potential UX/product responses, like prompting users to split complex requests.

  8. Scaling brand on a small team: defining Gamma’s new art direction

    Claire transitions to Gamma’s rebrand and the cost of maintaining high-quality brand visuals with limited headcount. Zach articulates the desired style—airy, surreal, vivid—and frames AI as a way to approximate an art department’s throughput.

  9. Midjourney workflow demo: iterating toward an on-brand empty-state illustration

    Zach shows how he iterated through many Midjourney generations to create an empty-state image for the image editor panel. He explains how exploration (apples → birds → split transformation concept) led to the final, on-brand visual that communicates “transformation.”

  10. From ‘agency revisions’ to creative rabbit holes: why rapid iteration matters

    Claire highlights how AI changes the dynamics of art direction by making iteration cheap and immediate, avoiding the slow back-and-forth typical with agencies. Zach describes the freedom to chase creative “rabbit holes” until something clicks.

  11. Productionizing the asset: Replicate background removal and Figma integration

    Zach explains how he removes the Midjourney background using a specific Replicate model, producing a clean transparent asset for Figma. This replaces older manual approaches like pen-tool masking and accelerates shipping polished UI visuals.
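Zach doesn’t name the exact model on air, so the model slug below is a placeholder; the call shape follows Replicate’s Python client (`replicate.run`), which takes a model reference and an input dict and returns the model’s output (typically a URL to the processed image):

```python
def build_removal_input(image_path):
    """Input payload for a background-removal model on Replicate.
    The "image" key is a common convention, but check the schema of
    whichever model you pick on replicate.com."""
    return {"image": open(image_path, "rb")}


if __name__ == "__main__":
    # Requires `pip install replicate` and REPLICATE_API_TOKEN in the env.
    # "owner/background-remover" is a hypothetical model slug, not the
    # specific model used in the episode.
    import replicate

    output = replicate.run(
        "owner/background-remover",
        input=build_removal_input("midjourney-export.png"),
    )
    print(output)  # location of the transparent PNG, ready for Figma
</n```

The transparent PNG drops straight into Figma, replacing the manual pen-tool masking Zach describes.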

  12. Maintaining brand consistency: Midjourney Style References (SREF) and personalization

    They break down how Gamma keeps outputs consistent by using Midjourney style references and personalization developed with their brand agency/creative director. Zach describes it as a loosely shared “kit” that helps the whole company generate aligned visuals.

  13. A reusable AI workflow for consistent job descriptions (Claude Projects)

    Zach shows a separate internal workflow: using a Claude Project seeded with example job posts and instructions to generate new role descriptions in Gamma’s voice. The aim is speed plus consistency so any hiring manager can draft high-quality postings.
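Claude Projects handles this in the UI, but the same pattern (example posts plus instructions seeded as a system prompt) can be sketched against the Anthropic API. The example posts, helper name, and model choice here are assumptions for illustration:

```python
# Hypothetical example posts; in practice these would be Gamma's real
# published job descriptions, pasted in full.
EXAMPLE_POSTS = [
    "Product Designer: example post written in Gamma's voice ...",
    "Engineering Manager: example post written in Gamma's voice ...",
]


def build_system_prompt(examples):
    """Seed the model with house-style examples, mirroring what a
    Claude Project's knowledge + instructions provide."""
    joined = "\n\n---\n\n".join(examples)
    return (
        "You write job descriptions in Gamma's voice. Match the tone, "
        "structure, and level of detail of these examples:\n\n" + joined
    )


if __name__ == "__main__":
    # Requires `pip install anthropic` and ANTHROPIC_API_KEY in the env.
    import anthropic

    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1500,
        system=build_system_prompt(EXAMPLE_POSTS),
        messages=[{"role": "user", "content": "Draft a posting for a Senior Brand Designer."}],
    )
    print(msg.content[0].text)
```

The payoff is the same as in the episode: any hiring manager supplies only the role, and the seeded examples keep the output consistent with past postings.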

  14. Human craft in an AI-assisted design world: making products ‘fun’ + closing reflections

    In the wrap-up, Zach argues the durable human contribution is taste and “finding the fun”—personality, engagement, and delight in product experiences. They end with light discussion of prompting temperament, a personal Deep Research anecdote, and final show outro.
