The Twenty Minute VC

Ethan Mollick: Why OpenAI Abandons Products, The Biggest Opportunities They Have Not Taken | E1184

Ethan Mollick is the Co-Director of the Generative AI Lab at Wharton, which builds prototypes and conducts research to discover how AI can help humans thrive while mitigating risks. Ethan is also an Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship, and examines the effects of artificial intelligence on work and education. His papers have been published in top journals, and his book on AI, Co-Intelligence, is a New York Times bestseller.

--------------------------------------------------------------------

Timestamps:

(00:00) Intro
(02:31) Thoughts on the New Llama 3.1 Model
(05:52) Four Potential Outcomes: A Framework for the Future
(08:24) Will AI Achieve Escape Velocity or Plateau Like the iPhone?
(09:56) Identifying the Core Bottleneck: Compute, Data, or Algorithms?
(13:53) Why Aren't AI Providers Offering User-Friendly Guides?
(15:28) Should Powerful AI Models Be Open Source or Closed?
(18:49) Will Regulations Limit AI Growth?
(22:10) What Are AI Labs Missing About Business Needs?
(26:00) How Can We Better Harness AI to Drive Productivity?
(28:22) Will AI Redistribute Talent or Eliminate Jobs?
(33:23) AI and Consumers: The Future Interface Experience
(36:09) AI Ambition in Startups: What's Holding Them Back?
(41:35) Founders' Diverging Views on AGI Timelines & Funding
(43:33) Will You Thrive or Get Steamrolled?
(49:49) The Future of Education with AI
(57:33) Energy Demands & Compute as Currency
(01:00:00) The Role of AI in Future Electoral Systems & Politics
(01:04:40) Quick-Fire Round

--------------------------------------------------------------------

In Today’s Episode with Ethan Mollick We Discuss:

1. Models: Is More Compute the Answer?
How has Ethan changed his mind on whether we have a lot of room to run in adding more compute to increase model performance? What will happen with models in the next 12 months that no one expects? Why will open models immediately be used by bad actors, and what should happen as a result? Data, algorithms, compute: which is the biggest bottleneck, and how will this change with time?

2. OpenAI: The Missed Opportunity, Product Roadmap and AGI
Why does Ethan believe that OpenAI is completely out of touch with creating products that consumers want to use? Which product did OpenAI shelve that will prove to be a massive mistake? How does Ethan analyse OpenAI’s pursuit of AGI? Why does Ethan think the heuristic from Brad, COO @ OpenAI, that “startups should be threatened if they are not excited by a 100x improvement in model” is total BS?

3. VCs, Startups and AI Labs: What the World Does Not Understand
What do big AI labs not understand about big companies? What are the biggest mistakes companies are making when implementing AI? Why are startups not being ambitious enough with AI today? What are the single biggest ways consumers can and should be using AI today?

--------------------------------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow Ethan Mollick on Twitter: https://twitter.com/emollick
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

--------------------------------------------------------------------

#20vc #harrystebbings #podcast #ethanmollick #wharton #professor #founder #venturecapital #openai #samaltman #bradlightcap #aitechnology

Guest: Ethan Mollick | Host: Harry Stebbings
Jul 30, 2024 · 1h 9m · Watch on YouTube

At a glance

WHAT IT’S REALLY ABOUT

Ethan Mollick: AI’s Machine-God Race, Real-World Gaps, And Risks

  1. Ethan Mollick argues that major AI labs like OpenAI are singularly focused on building AGI—“a machine god”—and therefore chronically underinvest in real products, documentation, and practical workflows that would help normal organizations use AI effectively today.
  2. He outlines four futures for AI (from stagnation to superintelligence) and stresses that the most neglected scenarios are the “boring middle” ones: steady linear or continued exponential improvement that deeply reshapes work, startups, education, and regulation without immediate sci‑fi outcomes.
  3. Mollick criticizes both AI labs and enterprises: labs for building strange, half-finished products and abandoning them, and companies for poor adoption, vague policies, and secretive use of AI by employees who aren’t rewarded—or are even punished—for automation.
  4. He sees huge upside in areas like education and entrepreneurship but warns about job displacement, spear-phishing and persuasion risks, regulatory over- and under-reaction, and a looming “meaning of work” crisis as knowledge workers realize AI can do much of what they do.

IDEAS WORTH REMEMBERING

5 ideas

AI labs are over-optimized for AGI research and under-optimized for usable products.

Mollick claims OpenAI and peers direct top talent and compute toward scaling and frontier models, leaving transformative products like Code Interpreter underdeveloped, with minimal documentation and little focus on real enterprise workflows.

Most practical value now comes from integrating AI into human and organizational systems, not chasing architectural tricks.

He emphasizes that the bottleneck for value is often people, processes, incentives, and policy—how AI fits into companies, classrooms, and institutions—rather than whether we use transformers, mixture-of-experts, or the newest open-weight model.

Open-source models will drive both entrepreneurship and real-world security risks.

Llama 3.1-level open models will democratize GPT‑4‑class capabilities and spark innovation globally, but they will also enable large-scale spear-phishing, catfishing, and guardrail removal—areas he says lack serious monitoring and fast-response governance.

Organizations need clear AI policies, incentives, and reward structures—or employees will hide their most productive uses.

Because staff fear being fired, devalued, or just given more work, many use AI secretly; Mollick argues companies must define acceptable use, explicitly reward automation and experimentation, and decide whether they’re using AI for margin-cutting or expansion.

Startups and VCs should hold a concrete view of AI’s trajectory and build for a jagged, fast-changing frontier.

He says current “lean” methods and thin wrappers around models are mostly incremental bets that implicitly assume AGI won’t arrive soon; instead, founders and investors must be opinionated about how good models will get, where gaps remain, and how adoption actually happens inside organizations.

WORDS WORTH SAVING

5 quotes

OpenAI abandons products like crazy. They wanna build a machine god.

Ethan Mollick

There isn’t really a product there right now. It’s a chatbot and the API.

Ethan Mollick

The real problem right now is every startup in the world is betting against AGI… If it is [coming soon], why are you funding these startup companies?

Ethan Mollick

You wanna be a skilled artisan right now. You wanna figure out how to take the back and forth power of an LLM and convert that into usable work inside your organization.

Ethan Mollick

When you realize as a middle manager that AI does your work and nobody cares… what does that mean for the nature of work?

Ethan Mollick

OpenAI, AGI focus, and the abandonment of promising products
Model progress, Llama 3.1, open vs closed source, and scaling limits
Regulation, energy, and systemic risks (security, persuasion, democracy)
Enterprise AI adoption, organizational policy, and hidden ‘cyborg’ workers
Startups, venture capital, and how to build in a radical tech regime
AI in education: tutoring, flipped classrooms, and cheating dynamics
Human factors: inequality, skills, interfaces, and the meaning of work

High quality AI-generated summary created from speaker-labeled transcript.
