CHAPTERS
How the Consumer AI Top 100 is built (data sources, web vs. mobile, usage not revenue)
Justine and Olivia explain the purpose of the Consumer AI Top 100 and how it’s compiled to reflect what people actually use. They break down the methodology: ranking AI-native products worldwide using web visits and mobile monthly active users (MAUs), with an emphasis on total usage (including free users) rather than revenue.
Who’s new this cycle—and why the ecosystem is stabilizing
They discuss this cycle’s new entrants, noting that web data is the best lens for longitudinal shifts, while mobile is more volatile due to app-store policy changes. The shrinking number of newcomers suggests the consumer AI landscape is settling into more durable categories and leaders.
Companionship still dominates consumer AI usage
Companion/roleplay apps remain a major driver of consumer engagement, with multiple new entrants joining established leaders. The hosts highlight how large and persistent this category is relative to others in consumer AI.
Creative tools as a core pillar (and the enduring ‘killer apps’)
They frame creative tooling—image, video, audio, and related workflows—as one of the long-standing consumer AI pillars alongside general assistants and companionship. Even as new categories emerge, creative tools remain prominent due to clear user value and shareable outputs.
Big Tech’s presence: Google’s breakout across multiple properties
For the first time, Google appears meaningfully in the rankings due to changes enabling separate domain-level measurement. They detail how multiple Google properties—consumer and developer-facing—earned top placements, driven by strong distribution and product momentum.
Google Labs as a ‘consumer sandbox’ and Veo 3’s traffic spike
They discuss Google Labs as a multi-product experimental hub and speculate that Veo 3 largely drove recent traffic growth. The chapter highlights how breakout model launches can lift an umbrella destination that hosts multiple experiences.
Chinese AI companies: three patterns of global presence
Olivia outlines how China shows up on the list in distinct ways: domestic-only products, China-built products aimed at global users, and products that succeed both inside and outside China. Regulation and access constraints shape competition and adoption dynamics.
‘Vibecoding’ breaks into the Top 100—and shows unusually strong retention
They cover the rapid rise of vibecoding platforms and what the list reveals about their adoption. Beyond traffic, they discuss revenue retention data suggesting many users increase spend after initial adoption, indicating real utility and potential enterprise/prosumer pull.
Builders vs. what they build: the hosting-domain traffic puzzle
They explain that vibecoding platforms expose two measurable signals: traffic to the builder tools themselves and traffic to the apps those tools host, with builder traffic coming out larger. They propose two interpretations: serious projects migrate to custom domains, or many projects are personal or internal tools with low public traffic but high private value.
AI All-Stars: the products that never left the Top 100
Olivia introduces “AI All-Stars,” products that have appeared on every edition of the web list. The segment highlights which categories have durable consumer demand and what that says about defensibility beyond having a proprietary model.
Network effects and why UI/workflows matter as much as models
They argue that in consumer AI, product experience and community can be decisive because models are increasingly accessible via APIs or open source. They explore network effects beyond “more users → better model,” including libraries, marketplaces, and team-based lock-in.
Prosumer-to-enterprise migration: bottoms-up adoption and team expansion
They describe how several consumer AI tools are ‘graduating’ into enterprise settings via team plans and organizational sharing. Bottoms-up adoption—where individuals try a tool on their own and then bring it into work—creates an easier path to enterprise revenue than traditional top-down sales.
Biggest takeaways + what to watch next (Grok debut, verticalization, new categories)
In closing, they reflect on how the list evolved from early chaos to a more normalized leaderboard with recurring winners and fewer newcomers. They predict continued ‘verticalization’ across general assistants, highlight Grok’s strong debut, and point to likely future breakout categories driven by improved model reliability.
Closing: check the full list and share products that surprised you
They encourage listeners to explore the complete report and discuss their own favorite AI apps. The episode ends with an invitation to comment on omissions and to return for the next list in six months.