Aakash Gupta

AI for Product Managers: 10X Growth with Smart Experimentation

AI has completely transformed how we run experiments. What used to take weeks can now happen in minutes. Frederic De Todaro, CPO at Kameleoon with 12+ years helping thousands of teams, reveals exactly how AI is revolutionizing experimentation, from ideation to analysis.

⏰ Timestamps:
00:00 How AI Changed Experimentation Overview
01:54 The 4 Steps of Experimentation Framework
14:12 Ads
16:00 How AI Has Changed Experimentation
21:08 User Behaviour Models
26:56 Multi-Armed Bandit vs Contextual Bandit
30:05 Ads
31:55 AI Content Generation
35:13 How Vibe Coding Changes Experimentation
41:35 Live Demo: From Idea to Running Experiment in 2 Minutes
43:36 Two-Minute Build Achievement
51:49 How to Measure AI Features Properly
54:17 Measuring RAG Systems: 3 Key Metrics
01:07:18 Best Experimentation Company: Booking.com
01:10:10 Biggest PM Mistakes in Experimentation
01:13:52 Ending

Transcript: https://www.news.aakashg.com/p/frederic-de-todaro-podcast

--

🏆 Thanks to our sponsors:
1. Mobbin: Discover real-world design inspiration - https://mobbin.com/aakash
2. Jira Product Discovery: Build the right thing, reliably - https://www.atlassian.com/software/jira/product-discovery
3. Product Faculty: Get $550 off - https://maven.com/product-faculty/ai-product-strategy-certificate?promoCode=AAKASH550C1
4. Maven: Get $100 off my curation of their top courses - http://maven.com/x/aakash

--

Key Takeaways:
1. The Build Bottleneck Is Dead. Most product ideas never get tested because building takes weeks. AI just killed this constraint: you can now go from idea to live experiment in 2 minutes using plain-English prompts.
2. Prompt Your Way to Tests. Type "change sorting to price low to high" and AI builds the variation in 2 minutes. Still run it through design, engineering, and data reviews, but now you're reviewing the actual live variation, not specs.
3. Beyond Text: Draw Your Ideas. Upload mockups or sketch rough concepts. AI transforms drawings into live experiments you can actually review: newsletter popups, onboarding flows, layout changes. Share preview links with stakeholders before going live.
4. AI Reads User Intent. Like a digital sales rep, AI scores every visitor's conversion likelihood in real time. Show discounts only to users who need them to buy, not everyone who visits your site.
5. Failed Tests Become Wins. 80% of experiments fail overall, but AI automatically finds segments where they succeed. "Failed globally but increased mobile conversions 25%" is the kind of insight that would take hours to surface manually.
6. Speed vs. Accuracy Trade-offs. Multi-armed bandits optimize news headlines in hours, not weeks, which is perfect when time beats perfect measurement. Contextual bandits personalize every individual user's experience.
7. Humans Still Drive Strategy. PMs bring business context AI doesn't have: customer constraints, strategic priorities, success metrics. Data scientists validate statistical approaches. Designers review brand compliance. AI handles building variations fast.
8. Measure What Actually Matters. Track business metrics, not just usage: prompts needed per experiment, time from idea to live test, developer dependency rate. If you still need developers 80% of the time, AI isn't solving your bottleneck.
9. Discovery Meets Testing. User interviews reveal what people say they want. Experiments show what they actually do. Combine both for complete insight: validate problems through discovery, solutions through testing.
10. Experimentation Culture Wins. Harvard Business Review found a direct correlation between experiments run annually and revenue growth. More experiments = faster growth. AI finally makes this accessible to every team.

--

👨‍💻 Where to find Fred:
LinkedIn: https://www.linkedin.com/in/fdetodaro/
Kameleoon: https://kameleoon.com

👨‍💻 Where to find Aakash:
Twitter: https://www.twitter.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/

#ai #experimentation #abtesting #productmanagement

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with over 180K listeners. Hosted by Aakash Gupta, who spent 16 years in PM, rising to VP of Product, this 2x/week show covers product and growth topics in depth.

🔔 Subscribe and turn on notifications to master AI-powered experimentation!

Host: Aakash Gupta · Guest: Frederic De Todaro
Aug 28, 2025 · 1h 15m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

How AI removes experimentation bottlenecks and improves measurement for PMs

  1. AI’s biggest impact on experimentation is eliminating the build bottleneck by generating variations and experiment code from prompts, sketches, or mockups in minutes instead of sprints.
  2. Product managers and data scientists remain essential “humans in the loop,” with PMs providing business context and hypotheses while data scientists validate metrics, bias, and statistical rigor.
  3. The ML wave (circa 2016) improved targeting, traffic allocation, and analysis via techniques like intent scoring, multi-armed bandits, contextual bandits, and automated opportunity detection.
  4. The GenAI wave (since 2022) expanded experimentation capabilities through content generation, RAG-based assistants inside tools, and “vibe experimenting” (prompt-based experimentation) that democratizes testing.
  5. Measuring AI features requires going beyond usage to outcomes and experience, plus technical eval metrics (accuracy, relevance, context quality) and methods like LLM-as-judge for RAG evaluation.
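The RAG evaluation idea above (accuracy, relevance, context quality, scored by an LLM-as-judge) can be sketched as a small harness. This is a minimal illustration, not Kameleoon's tooling: the keyword-overlap judge is a toy stand-in for a real LLM judge, and the metric names and sample fields are assumptions.

```python
def keyword_judge(question, answer, context, reference):
    """Toy stand-in for an LLM judge: scores each dimension 0-1 by word overlap.
    In practice you would prompt a strong LLM to rate each dimension instead."""
    def overlap(a, b):
        aw, bw = set(a.lower().split()), set(b.lower().split())
        return len(aw & bw) / max(len(bw), 1)
    return {
        "answer_accuracy": overlap(answer, reference),   # matches the reference answer?
        "answer_relevance": overlap(answer, question),   # addresses the question asked?
        "context_quality": overlap(context, reference),  # did retrieval surface the facts?
    }

def evaluate_rag(samples, judge=keyword_judge):
    """Average each metric over an eval set of question/answer/context/reference dicts."""
    totals = {"answer_accuracy": 0.0, "answer_relevance": 0.0, "context_quality": 0.0}
    for sample in samples:
        scores = judge(**sample)
        for key in totals:
            totals[key] += scores[key]
    return {key: value / len(samples) for key, value in totals.items()}

samples = [{
    "question": "what is the capital of france",
    "answer": "paris is the capital of france",
    "context": "paris is the capital and largest city of france",
    "reference": "paris",
}]
scores = evaluate_rag(samples)
```

Swapping `keyword_judge` for a function that calls your LLM of choice turns this into the LLM-as-judge setup described in the episode; the harness shape stays the same.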

IDEAS WORTH REMEMBERING

5 ideas

The build step—not ideation—is why most teams under-experiment.

Teams often have plenty of test ideas, but development bandwidth turns experiments into a scarce resource, forcing prioritization meetings and multi-sprint waits; AI-generated variations aim to collapse that cycle time.

AI makes experimentation faster, but it doesn’t remove the need for accountability.

PMs must supply business constraints, customer context, and a clear hypothesis of success, while data scientists/analysts challenge AI outputs for plausibility, bias, and proper measurement design.

Treat AI as “UX memory” by connecting it to past experiments.

If an AI can retrieve prior test results across teams, it can warn you when an idea has already been tested, summarize what happened, and suggest whether it’s worth re-testing due to changed users or product context.
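The "UX memory" lookup can be sketched as a similarity search over a log of past experiments. A minimal dependency-free version using bag-of-words cosine similarity is shown below; the example log entries and the 0.3 threshold are made up for illustration, and a real tool would use embeddings over your experimentation platform's results database.

```python
from collections import Counter
from math import sqrt

# Hypothetical past-experiment log; in practice this comes from your
# experimentation platform's results store.
PAST_EXPERIMENTS = [
    {"summary": "change product sorting to price low to high", "result": "flat overall, +25% mobile"},
    {"summary": "add newsletter popup on exit intent", "result": "-3% conversions, rolled back"},
    {"summary": "shorten onboarding flow to two steps", "result": "+8% activation"},
]

def cosine(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_prior_tests(idea, log=PAST_EXPERIMENTS, threshold=0.3):
    """Return past experiments similar to a new idea, most similar first."""
    scored = [(cosine(idea, e["summary"]), e) for e in log]
    return [e for score, e in sorted(scored, key=lambda t: t[0], reverse=True)
            if score >= threshold]

hits = find_prior_tests("change sorting to price low to high")
```

Here `hits` surfaces the earlier sorting experiment and its result, which is exactly the warning-and-summary behavior described above.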

Use multi-armed bandits when speed matters more than statistical certainty.

Multi-armed bandits optimize performance by shifting traffic toward the current best variant early, trading some accuracy for faster gains—useful in high-velocity contexts like media headlines.
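That traffic-shifting behavior can be sketched with an epsilon-greedy bandit, one of the simplest multi-armed bandit strategies. This is a toy simulation, not any vendor's implementation; the two variant conversion rates are invented for the demo.

```python
import random

def epsilon_greedy(counts, rewards, epsilon=0.1):
    """Pick a variant index: explore at random with probability epsilon,
    otherwise exploit the variant with the best observed conversion rate."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(means)), key=means.__getitem__)

# Simulate two headline variants; variant 1 truly converts better (10% vs 2%).
true_rates = [0.02, 0.10]
counts = [0, 0]
rewards = [0.0, 0.0]
random.seed(42)
for _ in range(10_000):
    arm = epsilon_greedy(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1 if random.random() < true_rates[arm] else 0
```

After the loop, most traffic has flowed to the better variant long before a classical fixed-split A/B test would have declared significance, which is the speed-for-certainty trade described above.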

Use contextual bandits for personalization, but only when you have enough traffic.

Contextual bandits attempt to learn which variant works best per user context (hyper-personalization), which demands significant behavioral data and typically high traffic to learn reliably.
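The per-context learning can be sketched with a tabular contextual bandit over a discrete context such as device type. Again a toy simulation with invented conversion rates; real systems use richer contexts and model-based policies, and as the paragraph above notes, need substantial traffic per context to learn reliably.

```python
import random
from collections import defaultdict

class ContextualBandit:
    """Tabular contextual bandit: learns the best arm separately per discrete
    context (e.g. device type) with epsilon-greedy exploration."""

    def __init__(self, n_arms, epsilon=0.1):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.counts = defaultdict(lambda: [0] * n_arms)
        self.rewards = defaultdict(lambda: [0.0] * n_arms)

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.randrange(self.n_arms)
        c, r = self.counts[context], self.rewards[context]
        means = [ri / ci if ci else 0.0 for ri, ci in zip(r, c)]
        return max(range(self.n_arms), key=means.__getitem__)

    def update(self, context, arm, reward):
        self.counts[context][arm] += 1
        self.rewards[context][arm] += reward

# Simulate: mobile users prefer variant 0, desktop users prefer variant 1.
true_rates = {"mobile": [0.08, 0.02], "desktop": [0.02, 0.08]}
bandit = ContextualBandit(n_arms=2)
random.seed(0)
for _ in range(20_000):
    ctx = random.choice(["mobile", "desktop"])
    arm = bandit.choose(ctx)
    reward = 1 if random.random() < true_rates[ctx][arm] else 0
    bandit.update(ctx, arm, reward)
```

A plain multi-armed bandit would converge to one global winner; the contextual version learns a different winner per segment, which is the hyper-personalization the episode describes.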

WORDS WORTH SAVING

5 quotes

You have an idea, then you make an assumption: if you release that feature in production, it will increase that metric by X, because of these reasons, right? And then you build the experiment, the variations... And then you look at your results, right? So it's a simple loop, right?

Frederic De Todaro

Most tools still rely a lot on developers. They are already busy building the next features in your roadmap. And so as a result, most teams do not A/B test the majority of what they deliver to their users.

Frederic De Todaro

It means that you can turn any idea into a running experiment just by prompting an AI.

Frederic De Todaro

But the real question to me isn't can you build it? It's really should you build it, right?

Frederic De Todaro

Product discovery will tell you what users say they want. Experimentation tells you what they actually do.

Frederic De Todaro

  - 4-step experimentation loop (idea → build/configure → results)
  - Build phase as the primary bottleneck
  - Human-in-the-loop roles: PM vs data scientist vs AI
  - ML wave: AI targeting, bandits, opportunity detection
  - Multi-armed bandit vs contextual bandit tradeoffs
  - GenAI wave: content generation, RAG assistants, prompt-based experimentation
  - Measuring AI features and RAG systems (business + technical metrics)

High quality AI-generated summary created from speaker-labeled transcript.
