Aakash Gupta
This AI Expert's Method Will Change How You Do Customer Research
At a glance
WHAT IT’S REALLY ABOUT
A rigorous, multi-step AI workflow for trustworthy customer research analysis
- Good AI research mirrors rigorous human research by separating analysis, verification, and synthesis rather than jumping straight to themes.
- A Step 0 “context load” prompt onboards the model with business goals and product details to reduce wrong assumptions and instruction drop-off.
- Interview analysis is strengthened by per-participant extraction (e.g., value anchors and fragile points) followed by contradiction checks to prevent cherry-picking and hallucinations.
- Survey analysis should start with inductive coding before counting frequencies, then add calibrated emotional intensity ratings to prioritize what matters most.
- Agentic workflows in Claude Code can parallelize survey and interview analysis, output structured markdown deliverables, and cut analysis time dramatically—while still requiring audits and human judgment.
IDEAS WORTH REMEMBERING
5 ideas
Don’t start with synthesis; start with granular analysis.
The workflow forces the model to comb through each file/response first (like a human researcher would) before summarizing themes, which reduces missed nuance and overconfident generalizations.
Separate “context loading” from task prompts to prevent instruction loss.
A dedicated Step 0 prompt onboards the model on goals and product/tier details and ends with “do not run analysis yet,” improving focus and reducing incorrect product assumptions.
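A minimal sketch of what such a Step 0 prompt might look like, assembled programmatically. The field names (`business_goal`, `product_summary`, `tiers`) are illustrative assumptions, not Caitlin Sullivan's actual prompt; only the closing "do not run analysis yet" instruction comes from the summary itself.

```python
# Hypothetical builder for a Step 0 "context load" prompt.
# All field names are invented for illustration.

def build_context_prompt(business_goal: str, product_summary: str,
                         tiers: list[str]) -> str:
    """Assemble an onboarding prompt that ends with an explicit
    'do not analyze yet' instruction to block premature synthesis."""
    tier_lines = "\n".join(f"- {t}" for t in tiers)
    return (
        "You are assisting with customer research analysis.\n"
        f"Business goal: {business_goal}\n"
        f"Product: {product_summary}\n"
        f"Pricing tiers:\n{tier_lines}\n"
        "Internalize this only. Do not run analysis yet."
    )

prompt = build_context_prompt(
    "Reduce churn in the Pro tier",
    "A project-management SaaS for small teams",
    ["Free", "Pro", "Enterprise"],
)
print(prompt)
```

Keeping context loading in its own prompt, separate from any task instructions, is what prevents the model from blending onboarding details with analysis steps and dropping instructions later.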
Per-participant extraction creates traceability and better foundations.
Extracting value anchors, fragile points, quotes, and a churn/stability rating per participant replicates line-by-line human review and produces evidence you can later synthesize confidently.
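One way to picture the per-participant record is as a small structured schema. The field names below (`value_anchors`, `fragile_points`, `churn_risk`) mirror the concepts named in the summary, but the exact shape and the 1–5 rating scale are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Illustrative per-participant extraction record; not a published spec.

@dataclass
class ParticipantExtract:
    participant_id: str
    value_anchors: list[str] = field(default_factory=list)   # what keeps them
    fragile_points: list[str] = field(default_factory=list)  # what could push them away
    supporting_quotes: list[str] = field(default_factory=list)  # traceable evidence
    churn_risk: int = 1  # assumed scale: 1 (stable) to 5 (likely to churn)

p = ParticipantExtract(
    "P07",
    value_anchors=["saves hours each week on reporting"],
    fragile_points=["pricing jump at the next tier"],
    supporting_quotes=["I'd leave if the price went up again."],
    churn_risk=4,
)
```

Because every claim is tied to a participant ID and supporting quotes, later synthesis can cite its evidence instead of generalizing from memory.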
Add a verification pass specifically designed to catch contradictions.
Having the model re-scan for conflicting statements (and defining what counts as a contradiction) prevents cherry-picking one narrative when the participant’s account is inconsistent.
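The idea of "defining what counts as a contradiction" can be sketched mechanically: one simple rule is to flag any topic where a participant expressed both positive and negative sentiment. In the actual workflow the model re-scans the transcript itself; the pre-labeled `(topic, sentiment)` tuples here are an assumption to keep the example self-contained.

```python
from collections import defaultdict

# Rule-based sketch: a "contradiction" = same topic tagged with both
# positive and negative sentiment by the same participant.

def find_contradictions(statements: list[tuple[str, str]]) -> list[str]:
    """statements: (topic, sentiment) pairs, sentiment in {'pos', 'neg'}."""
    by_topic: dict[str, set[str]] = defaultdict(set)
    for topic, sentiment in statements:
        by_topic[topic].add(sentiment)
    return [t for t, s in by_topic.items() if {"pos", "neg"} <= s]

flags = find_contradictions([
    ("pricing", "pos"),   # "the price is fair"
    ("pricing", "neg"),   # "I'd churn over cost"
    ("support", "pos"),
])
# flags == ["pricing"]
```

Surfacing the conflict forces the synthesis step to report the inconsistency rather than silently picking whichever statement fits the narrative.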
For surveys, code first—then count.
Inductive open coding (with rules like mutually exclusive primary codes) produces a defensible codebook and prevents the model from miscategorizing or forcing responses into premature themes.
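Once responses carry mutually exclusive primary codes, counting becomes a trivial, defensible step. The codebook entries and responses below are invented examples; the point is the ordering: code each response first, tally frequencies only afterward.

```python
from collections import Counter

# Sketch of "code first, then count": each response has exactly one
# primary code (mutually exclusive), assigned during inductive coding.
coded_responses = [
    ("r1", "pricing_concern"),
    ("r2", "missing_feature"),
    ("r3", "pricing_concern"),
    ("r4", "onboarding_friction"),
    ("r5", "pricing_concern"),
]

counts = Counter(code for _, code in coded_responses)
for code, n in counts.most_common():
    print(f"{code}: {n}")
# pricing_concern: 3
# missing_feature: 1
# onboarding_friction: 1
```

Counting before coding inverts this and lets premature theme labels drive the categorization, which is exactly the failure mode the workflow avoids.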
WORDS WORTH SAVING
5 quotes
Good AI customer research and analysis actually looks like replicating the way that we do rigorous analysis as humans.
— Caitlin Sullivan
What most people do… is jumping straight ahead to synthesis, and that's exactly what we don't wanna do.
— Caitlin Sullivan
Internalize this only. Do not run analysis yet.
— Caitlin Sullivan
When we're working with survey responses or short customer feedback, we want to code first.
— Caitlin Sullivan
I’ll call this the CYA way to use AI. Cover your ass.
— Aakash Gupta
High quality AI-generated summary created from speaker-labeled transcript.