At a glance
WHAT IT’S REALLY ABOUT
A practical four-stage discovery system for AI-enabled product teams today
- The team uses a shared four-stage vocabulary—Wonder, Explore, Make, Impact—to align stakeholders on where an idea truly is and what kind of help or decisions are appropriate at each stage.
- In Wonder, they rapidly build empathy and clarity through ~10+ customer interviews, using short curated video clips (not polished prose) to transmit urgency and nuance across the team.
- In Explore, they validate solutions with low-fidelity prototypes (sometimes just a slide) and iterate with the same customers until users can explicitly explain how the concept solves the problem.
- In Make, they still “do discovery” by shipping progressively to small cohorts (10→100→1000) using a safety-funnel mindset to avoid bad early experiences that are hard to recover from.
- They operationalize continuous discovery via tooling and habits (weekly PM rotation for feedback triage, Dovetail/Loom/Gong/Pendo/community/Slack), and use AI to accelerate retrieval and synthesis without replacing direct customer learning.
IDEAS WORTH REMEMBERING
5 ideas
Create a shared “stage vocabulary” to prevent false certainty and misalignment.
Labeling work as Wonder/Explore/Make/Impact helps everyone understand how mature an idea is and what’s being asked (e.g., dependency help vs. funding vs. scaling plans), reducing confusion across large orgs.
In early discovery, raw customer footage beats polished narratives.
They aim for under ~10 minutes of clips capturing customer emotions and language; watching real users creates urgency and shared understanding more reliably than beautifully written docs.
Stop “performing interviews”; learn by removing leading questions and adding silence.
Crusson emphasizes training from professional researchers: don’t introduce the concept you’re testing (e.g., “feedback”), don’t give options, don’t interrupt, and let users take you where the truth is.
Explore with the cheapest prototype that can trigger real user reactions.
Their earliest JPD validation was literally a slide; later prototypes evolved in Figma and now can be made interactive quickly (e.g., with Lovable) to test comprehension and value before writing code.
Don’t scale exposure until you can protect users from a bad first experience.
Using a “safety funnel,” they onboard cohorts gradually (10→100→1000) to ensure CSAT and fit before broad release, because early negative experiences make it hard to win users back.
WORDS WORTH SAVING
5 quotes
Honestly, if there's one investment you could make that could change your life as a PM is find a real user researcher, someone who does that as a trade, and ask them for training.
— Tanguy Crusson
I used to think I was amazing at user interviews... and she said, "Yeah, you... I know you're very proud of those 50 user interviews you did. Dude, I can tell you, you didn't learn a thing."
— Tanguy Crusson
Otherwise, I can promise you that you are going to have many conversations, and it's gonna prove what you think is right.
— Tanguy Crusson
Embrace that messiness. It's fine... It's like if you take a bottle of water and you put sand in it and you shake everything... And eventually the sand will settle, and you're gonna see through.
— Tanguy Crusson
As a PM, AI or no AI, you're gonna beat every other competitor if you learn faster than them and if you know more about your customers.
— Tanguy Crusson