No Priors Ep. 68 | With Zapier Co-Founder and Head of AI Mike Knoop
Zapier’s Mike Knoop Challenges LLM Dominance With New AGI Prize
In this episode of No Priors, Elad Gil talks with Mike Knoop, Zapier co-founder and Head of AI, about why he believes progress toward AGI has stalled despite rapid advancements in large language models and economic applications.
At a glance
WHAT IT’S REALLY ABOUT
Zapier’s Mike Knoop Challenges LLM Dominance With New AGI Prize
- Mike Knoop, Zapier co-founder and Head of AI, discusses why he believes progress toward AGI has stalled despite rapid advancements in large language models and economic applications.
- He contrasts the prevailing “economically useful work” definition of AGI with François Chollet’s definition centered on efficiently acquiring new skills, arguing that current LLMs are powerful memorization systems but not generally intelligent.
- Knoop outlines the ARC Prize, a $1M+ nonprofit challenge to beat Chollet’s ARC-AGI benchmark with open-source solutions, intentionally designed to attract outsiders and new paradigms like program synthesis and neural architecture search.
- He also explains how Zapier is productizing AI through tools and agents, advocates for open-source and open research, and cautions against prescriptive AI/AGI regulation without empirical evidence of capabilities.
IDEAS WORTH REMEMBERING
Redefine AGI around skill acquisition, not just economic usefulness.
Knoop favors François Chollet’s definition of general intelligence as the efficient acquisition of new skills, arguing that the popular “can do most economically useful work” framing overestimates our proximity to true AGI.
LLMs are powerful memorizers, not yet true general reasoners.
He characterizes current language models as high-dimensional memorization systems that can recombine existing patterns but struggle with open-ended problems whose solution patterns don’t appear in training data.
Beating ARC-AGI likely requires new paradigms, not just scale.
State-of-the-art performance on the ARC-AGI benchmark has only moved from ~20% to ~34% in four years, and has resisted LLM and scale-based approaches, suggesting we need fundamentally different techniques.
Program synthesis and relaxed neural architecture search are promising paths.
Knoop highlights approaches that search over program or architecture space (rather than just gradient descent on fixed models) as promising ways to discover more general reasoning systems.
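The search-over-program-space idea can be made concrete with a toy sketch. The following is a minimal, hypothetical illustration of enumerative program synthesis (not the ARC-AGI task format or any system discussed in the episode): given a few input/output demonstration pairs, it searches compositions of DSL primitives until one explains all the examples, in contrast to fitting weights of a fixed model by gradient descent.

```python
# Toy enumerative program synthesis over a tiny hypothetical DSL.
# Each "program" is a sequence of primitive names applied left to right.
from itertools import product

# DSL: each primitive maps a list of ints to a list of ints.
PRIMITIVES = {
    "reverse": lambda xs: xs[::-1],
    "double": lambda xs: [x * 2 for x in xs],
    "increment": lambda xs: [x + 1 for x in xs],
    "sort": lambda xs: sorted(xs),
}

def run(program, xs):
    """Apply a sequence of primitive names to the input, left to right."""
    for name in program:
        xs = PRIMITIVES[name](xs)
    return xs

def synthesize(examples, max_depth=3):
    """Return the first program (shortest first) consistent with all pairs."""
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None

# Few demonstration pairs, analogous in spirit to ARC's few-shot tasks:
# the solver must infer the transformation, then generalize to new inputs.
examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]
prog = synthesize(examples)
```

The point of the sketch is the paradigm difference: the hypothesis space is discrete programs rather than continuous weights, so a solution found from two examples generalizes exactly to unseen inputs, which is the kind of efficient skill acquisition Chollet's definition emphasizes.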
Outsiders and small teams may drive key AGI breakthroughs.
The ARC competitions have attracted many one- and two-person teams outside major labs, and Knoop believes the winning approach may come from someone not entrenched in current LLM orthodoxy.
Agentic AI is already economically valuable but must be carefully constrained.
Zapier’s AI bots show that users will pay for agent-like workflows today; the challenge is giving customers tools to ‘clamp’ what agents can and cannot do so risk scales appropriately with the use case.
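One simple way to think about "clamping" an agent is an explicit allowlist enforced before any tool runs. The sketch below is a hypothetical illustration of that pattern, not Zapier's actual product or API: the customer scopes which tools the agent may invoke, so risk scales with the use case.

```python
# Minimal sketch of capability "clamping": the agent can only invoke
# tools on a customer-configured allowlist; everything else is refused
# before the tool ever executes. Tool names and shapes are hypothetical.
class ClampedAgent:
    def __init__(self, tools, allowed):
        self.tools = tools            # name -> callable
        self.allowed = set(allowed)   # customer-configured allowlist

    def call(self, name, *args):
        if name not in self.allowed:
            raise PermissionError(f"tool '{name}' is outside this agent's clamp")
        return self.tools[name](*args)

tools = {
    "read_sheet": lambda sheet: f"rows from {sheet}",
    "send_email": lambda to, body: f"sent to {to}",
}

# Low-risk configuration: a read-only clamp. The agent can fetch data,
# but any attempt to send email raises before side effects occur.
agent = ClampedAgent(tools, allowed=["read_sheet"])
```

The design choice worth noting is that the clamp is enforced at the dispatch layer, not by prompting: even a misbehaving model cannot reach a tool the customer did not grant.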
Open-source and open research are critical to unlocking new AI ideas.
Knoop argues that closed frontier labs and reduced publishing slow foundational progress, and that open protocols, code, and datasets dramatically increase the chance of genuine AGI breakthroughs.
WORDS WORTH SAVING
My belief is that AGI’s progress has really stalled out over the last four or five years.
— Mike Knoop
General intelligence is a system that can effectively, efficiently acquire new skill.
— Mike Knoop (describing François Chollet’s definition)
Effectively, what large language models do today is they are high-dimensional memorization systems.
— Mike Knoop
Language models do not work to beat ARC. And people have tried.
— Mike Knoop
If you care about actually discovering AGI in our lifetime, then I think it’s sort of incumbent to try and promote things that increase the likelihood that we’re generating new ideas.
— Mike Knoop
QUESTIONS ANSWERED IN THIS EPISODE
If ARC-AGI resists LLM-based approaches, what concrete characteristics might a successful, more general architecture need to have?
How could existing LLMs and program-synthesis-style systems be combined into hybrid models that perform better on ARC and similar tasks?
What practical milestones, short of beating ARC-AGI, would indicate we’re making real progress toward systems that can efficiently acquire new skills?
How should startups balance building on today’s economically powerful LLM stack with investing in riskier, more speculative AGI-oriented research?
What governance or safety mechanisms should be in place once we start seeing empirical signs of systems that genuinely exhibit general skill acquisition?