Lenny's Podcast

Why cultivating agency matters more than cultivating skills in the AI era | Max Schoening (Notion)

Lenny Rachitsky and Max Schoening on why agency outlasts skills as AI reshapes product building.

Guest: Max Schoening · Host: Lenny Rachitsky
May 2, 2026 · 1h 27m · Watch on YouTube ↗
Topics: Agency vs. skills in the AI era · Designers/PMs prototyping in code at Notion · Agent loops and “software factories” · Malleable software and user ownership · SaaS apocalypse and tool generalization · Token spend, ROI, and model commoditization · Taste, iteration, and the “tiny core” of great products
AI-generated summary based on the episode transcript.

In this episode of Lenny's Podcast, Max Schoening and Lenny Rachitsky explore why agency outlasts skills as AI reshapes product building: AI makes the “first 10%” of projects nearly free, shifting advantage to people who can rapidly prototype, iterate, and choose good directions early.

At a glance

WHAT IT’S REALLY ABOUT

Why agency outlasts skills as AI reshapes product building

  1. AI makes the “first 10%” of projects nearly free, shifting advantage to people who can rapidly prototype, iterate, and choose good directions early.
  2. Schoening contends that agency—believing the world is malleable and acting to change it—is now the key differentiator as skill barriers drop via AI tools.
  3. Notion’s practice of designers and PMs prototyping in code is less about shipping to production and more about designing in the true medium of AI/agent loops and interactive systems.
  4. As roles blur, teams risk losing specialists and craft; scaling reliable, high-quality software remains a distinct and under-discussed engineering discipline.
  5. The SaaS “apocalypse” is overstated: users still pay for maintenance and specialist focus, while AI pushes tools toward more general, customizable “90s-style” software experiences.

IDEAS WORTH REMEMBERING

5 ideas

Optimize for agency, not credentials or role purity.

When models put skills “at your fingertips,” the bottleneck becomes whether you take initiative, reshape your job, and push ideas into reality rather than hiding behind “skill issues.”

Have designers/PMs think in code to master the material.

Schoening cares less about non-engineers deploying production code and more about them prototyping in the medium that will ship—especially for AI products where interaction dynamics (agent loops) can’t be captured in static mockups.

Use a “safe playground” to make code one-shottable for newcomers.

Notion created an LLM-friendly prototyping codebase to remove fear of the terminal and lower activation energy; once people are “on the treadmill,” improving models let them gradually contribute closer to production.

Don’t confuse more output (features/tokens/LOC) with better outcomes.

Schoening warns against “vibe coding” as a quality trap and rejects token-spend bragging as analogous to boasting about lines of code; reliability and craft still require disciplined engineering and consolidation.

Expect ROI scrutiny to rise as AI usage matures.

Many companies are currently in an exploration phase with loose spend controls, but Schoening expects uncomfortable ROI conversations soon—plus increased interest in smaller/self-hosted models if they’re “good enough.”

WORDS WORTH SAVING

5 quotes

I think before it was very easy to always say, "Well, I will never be able to do this because insert skill issue." And I think we're realizing that even if you have the skills at your fingertips because now, I don't know, an AGI-adjacent model helps you, uh, the thing that matters is agency, and I don't think agency is very evenly distributed in the world.

Max Schoening

One day you wake up and you realize the world is made up by people no smarter than you.

Max Schoening

I think the first 10% of every project are now free.

Max Schoening

If I really look at the, the truly great products, they all have one tiny core that is so exceptionally good.

Max Schoening

We already have universal basic income. It's called knowledge work.

Max Schoening

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

In Notion’s prototyping “playground,” what specific conventions made it more LLM/agent-friendly than the main codebase (and what would you change in a legacy codebase to replicate that)?

You said understanding agent loops matters more than being able to tweak UI styles—what are 2–3 concrete “agent loop” patterns PMs/designers should learn to design?

What does “software quality” mean in an AI-coded world—reliability, performance, maintainability, security—and how do you operationalize it without slowing iteration?

If “the first 10% is free” but “the last 10% is 90%,” what practices best attack that last-mile work (testing, integration, permissions, UX polish, rollout)?

On the SaaS-apocalypse debate: which categories are most vulnerable to being replaced by internal AI-built tools, and which are most defensible because of maintenance/scale complexity?

Chapter Breakdown

Max Schoening’s “merged roles” perspective on the AI-era product team

Lenny introduces Max’s unusual background across PM, design leadership, engineering, and founding, teeing up why he has a credible view on roles collapsing in the AI era. Max frames the conversation around how teams build products when prototyping and coding get dramatically easier.

The origin of Notion’s playground: moving AI interface design from Figma to an LLM-friendly codebase

Max explains how Notion’s early AI work (especially chat interfaces) exposed the limits of static Figma mocks. Inspired by Bret Victor’s critique of static prototypes, they created a small “playground” codebase optimized for one-shot LLM edits to make prototyping feel like “chatting,” not “terminal fear.”

How much designers/PMs are shipping—and why production deploys aren’t the point

Lenny asks how much non-engineers ship today and where it’s heading. Max argues the value isn’t designers deploying code; it’s designing in the same medium that becomes the real product, while remaining wary of “vibe coding” that doesn’t improve reliability or quality.

The strategic balance: coding vs. higher-level product thinking (agent loops over CSS tweaks)

Max reframes the “should PMs/designers code?” debate: he cares less about shipping and more about understanding agentic systems. He’d prefer a PM/designer who can design “agent loops” over one who only tweaks UI via tools, because agent loops require working in the material—currently code.
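The “agent loop” pattern Schoening refers to can be sketched in a few lines: a model proposes actions, a host program executes them as tool calls, and results are fed back until the model decides it is done. The sketch below is purely illustrative — `fake_model` and `run_tool` are hypothetical stand-ins, not any real LLM API — but it shows why this interaction dynamic can’t be captured in a static mockup.

```python
def fake_model(messages):
    """Hypothetical stand-in for an LLM call: requests two tool
    calls, then finishes. A real loop would call a model API here."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if len(tool_results) < 2:
        return {"action": "search", "query": f"step {len(tool_results) + 1}"}
    return {"action": "finish", "answer": f"done after {len(tool_results)} tool calls"}

def run_tool(call):
    """Hypothetical tool executor for the model's requested action."""
    return f"results for {call['query']}"

def agent_loop(task, model=fake_model, max_steps=10):
    # The loop: model decides -> host executes -> result goes back
    # into the conversation -> model decides again.
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = model(messages)
        if decision["action"] == "finish":
            return decision["answer"]
        messages.append({"role": "tool", "content": run_tool(decision)})
    return "step budget exhausted"

print(agent_loop("research agent loops"))  # → done after 2 tool calls
```

Designing a product around this loop means deciding things like the step budget, what the user sees mid-loop, and when to hand control back — exactly the kind of behavior that only becomes visible when the prototype is interactive.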

Agency: the core trait for thriving when skills are “on demand” via AI

Max claims AI reduces “skill issue” excuses: even with skills at your fingertips, agency determines outcomes. People who see the world as malleable and who ignore rigid role definitions adapt faster than those clinging to what a PM/designer/engineer “is.”

High-agency examples at Notion: ‘drive it like it’s stolen’

Max shares concrete Notion examples where people reshape their job boundaries to create impact. He highlights recruiting, proactively filling organizational gaps, and leveling up from docs to prototypes as demonstrations of agency.

What we risk losing as roles merge: specialists, craft, and ‘factory-grade’ engineering

Lenny asks what’s lost when disciplines blend. Max warns that without care, teams may lose specialists and the rigor needed to scale reliably—likening today’s software discourse to celebrating prototypes while neglecting manufacturing-grade precision.

How to develop agency: making, tinkering, and the realization that ‘you can just do things’

Max’s advice is to start with making, not politicking inside org charts. Tinkering—whether software, furniture, or cooking—builds the muscle of changing reality, eventually creating the insight that the world is built by people no smarter than you.

Malleable software: owning your computing life (and why AI makes it newly accessible)

Max defines malleable software as tools serving users’ interests more than corporate constraints—like rearranging your living room rather than living in a fixed kitchen. AI helps people build personal tools, but Max argues the real win requires platforms designed for communal, adaptable software with modern collaboration and security.

Design philosophy via Dieter Rams: usefulness before beauty, and why malleability exposes utility

Lenny references a pinned Dieter Rams clip where Rams dismisses museum-like furniture as impractical. Max ties the humor and critique to a utilitarian design philosophy: great design is first useful, and the ability to tweak/adapt objects reveals whether they actually serve real life.

The “SaaSpocalypse” debate: SaaS won’t vanish, but tools will become more general and agent-friendly

Max argues the ‘apocalypse’ is overstated: people don’t want to maintain software stacks, and “as-a-service” primarily buys maintenance and specialists. AI may push a return to more general-purpose tools (’90s-style), while still leaving room for specialized products with deep domain rigor.

How product building changed: ‘the first 10% is free,’ demos over memos, and iteration as default

Max says AI makes early execution almost effortless: prototypes and “janky demos” replace heavy PRDs, enabling rapid parallel exploration. Teams can now send many agent-driven experiments and converge based on concrete artifacts rather than documents.

What’s next: modality, speed, and the ROI era (token spend, verifiability loops, and cheaper models)

Max explores near-future shifts: whether direct manipulation returns as inference becomes instant, and whether “good enough” intelligence will matter more than frontier smarts for most work. He anticipates rising scrutiny on ROI and a move toward smaller/cheaper models or in-house approaches if the frontier gap doesn’t widen.

Shipping fast without losing quality: ‘shots on goal,’ consolidation, and building ‘obviously good’ products

Max describes how mature companies become precious and risk-averse, so leaders must push for more attempts while staying focused on quality. He emphasizes iterative correctness, consolidating divergent “automation primitives,” and aiming for products that are unmistakably great rather than bloated with features.

Taste, product superpowers, and closing corners: jobs-to-be-done, UBI hot take, AGI plans, contrarian/failure lessons

Max defines taste as a predictive mental model shaped by repetition and feedback, then zooms out to what makes products win: a tiny “superpower core.” He also covers jobs-to-be-done as a user-honesty check, his provocative view on UBI, what he’d do with AGI, and lessons from failures and contrarian views on exclusivity.
