Ethan Mollick: Why OpenAI Abandons Products, The Biggest Opportunities They Have Not Taken | E1184

The Twenty Minute VC · Jul 31, 2024 · 1h 9m

Ethan Mollick (guest), Harry Stebbings (host), Narrator

OpenAI, AGI focus, and the abandonment of promising products
Model progress, Llama 3.1, open vs closed source, and scaling limits
Regulation, energy, and systemic risks (security, persuasion, democracy)
Enterprise AI adoption, organizational policy, and hidden ‘cyborg’ workers
Startups, venture capital, and how to build in a radical tech regime
AI in education: tutoring, flipped classrooms, and cheating dynamics
Human factors: inequality, skills, interfaces, and the meaning of work

In this episode of The Twenty Minute VC, Harry Stebbings interviews Ethan Mollick about why major AI labs abandon promising products, the biggest opportunities they have not taken, and what the race toward AGI means for organizations using AI today.

Ethan Mollick: AI’s Machine-God Race, Real-World Gaps, And Risks

Ethan Mollick argues that major AI labs like OpenAI are singularly focused on building AGI—“a machine god”—and therefore chronically underinvest in real products, documentation, and practical workflows that would help normal organizations use AI effectively today.

He outlines four futures for AI (from stagnation to superintelligence) and stresses that the most neglected scenarios are the “boring middle” ones: steady linear or continued exponential improvement that deeply reshapes work, startups, education, and regulation without immediate sci‑fi outcomes.

Mollick criticizes both AI labs and enterprises: labs for building strange, half-finished products and abandoning them, and companies for poor adoption, vague policies, and secretive use of AI by employees who aren’t rewarded—or are even punished—for automation.

He sees huge upside in areas like education and entrepreneurship but warns about job displacement, spear-phishing and persuasion risks, regulatory over- and under-reaction, and a looming “meaning of work” crisis as knowledge workers realize AI can do much of what they do.

Key Takeaways

AI labs are over-optimized for AGI research and under-optimized for usable products.

Mollick claims OpenAI and peers direct top talent and compute toward scaling and frontier models, leaving transformative products like Code Interpreter underdeveloped, with minimal documentation and little focus on real enterprise workflows.

Most practical value now comes from integrating AI into human and organizational systems, not chasing architectural tricks.

He emphasizes that the bottleneck for value is often people, processes, incentives, and policy—how AI fits into companies, classrooms, and institutions—rather than whether we use transformers, mixture-of-experts, or the newest open-weight model.

Open-source models will drive both entrepreneurship and real-world security risks.

Llama 3. ...

Organizations need clear AI policies, incentives, and reward structures—or employees will hide their most productive uses.

Because staff fear being fired, devalued, or just given more work, many use AI secretly; Mollick argues companies must define acceptable use, explicitly reward automation and experimentation, and decide whether they’re using AI for margin-cutting or expansion.

Startups and VCs should hold a concrete view of AI’s trajectory and build for a jagged, fast-changing frontier.

He says current “lean” methods and thin wrappers around models are mostly incremental bets that implicitly assume AGI won’t arrive soon; instead, founders and investors must be opinionated about how good models will get, where gaps remain, and how adoption actually happens inside organizations.

AI tutoring could massively improve learning—but only if we design it around real pedagogy, not shortcuts.

Effective AI tutors must push students through “desirable difficulties,” probing what they don’t know and supporting active, flipped classrooms; naive implementations that just solve homework (as seen in early GPT‑4 trials) improve homework scores while harming real learning.

The deepest long-term risk may be loss of human agency and meaning in work, not just job counts.

Mollick worries that as middle managers and knowledge workers realize AI can do their core tasks and even answer their emails, they may feel their roles are hollow, creating a “meaning crisis” unless we intentionally design roles and systems that enhance, rather than erase, human significance.

Notable Quotes

OpenAI abandons products like crazy. They wanna build a machine god.

Ethan Mollick

There isn’t really a product there right now. It’s a chatbot and the API.

Ethan Mollick

The real problem right now is every startup in the world is betting against AGI… If it is [coming soon], why are you funding these startup companies?

Ethan Mollick

You wanna be a skilled artisan right now. You wanna figure out how to take the back and forth power of an LLM and convert that into usable work inside your organization.

Ethan Mollick

When you realize as a middle manager that AI does your work and nobody cares… what does that mean for the nature of work?

Ethan Mollick

Questions Answered in This Episode

If frontier labs stay obsessed with AGI, who will step in to systematically build the missing “how to actually use this at work” layer for mainstream organizations?

How should a startup founder today rigorously decide whether their idea still makes sense in a world where models might be 2–3× better within a year?

What concrete governance mechanisms or monitoring systems would a “fast-follow” regulatory approach to open-source AI actually require in practice?

In a company that wants to expand rather than cut headcount with AI, how should leadership redesign roles, incentives, and performance metrics to make that credible to employees?

What does a realistic, near-term AI-powered education system look like in day-to-day life for students and teachers, beyond abstract talk of ‘AI tutors’ and ‘flipped classrooms’?

Transcript Preview

Ethan Mollick

OpenAI abandons products like crazy. They wanna build a machine god. If you have any talented people, you're going to have them building the next technology for AGI, but if you have any compute, that's what you throw it at. I mean, they're incidentally making $3 billion run rate this year, I think, by like, just accident. There isn't really a product there right now. It's a chatbot and the API. But I think a lot of people in this space are just assuming scale solves issues. The real problem right now is every startup in the world is betting against AGI, which I find really funny because all the funders are like, "Yeah, AGI's coming in the next five years." If it is, why are you funding these startup companies? None of them will survive in an AGI world.

Harry Stebbings

Ready to go? Ethan, I am so excited for this, dude. I told you just now, I am like your biggest fan from afar. So first, thank you so much for joining me today.

Ethan Mollick

I'm thrilled. It's, uh, it's great, and I've, I've been an entrepreneurship professor for a very long time before anyone knew about my AI work, so it's always great to be connecting to the VC and entrepreneurship world.

Harry Stebbings

Now, for anyone that doesn't know your work, can you just give a 60-second intro on your work and how you've become much more well-known in the last few years?

Ethan Mollick

I'm a former entrepreneur myself, so the startup company I, I helped co-found, um, invented the paywall, so I still feel like I'm, I'm trying to make up for that, uh, in the late '90s. Um, so just p- trying to pay back after, after, after that. Um, but then I, I've been a professor of entrepreneurship. I got trained at MIT, and then I've been at Wharton ever since. You know, I do a lot of work on teaching and thinking about, you know, research on how entrepreneurs become successful, but I also have this side gig of thinking about AI and teaching for a long time. So, I worked at the Media Lab, uh, with a guy named Marvin Minsky, who was one of the founders of AI, and I was like, the non-technical person there who was like, trying to translate what the lab was doing for the world. And then I've been building tools for, how do we teach entrepreneurship at scale? 'Cause it turns out it really matters. Little bits of entrepreneur training make a huge difference in people's lives, and we've been playing with AI and other tools. So, when AI sort of came out, I was in the weird place of actually practically using these tools for a long time beforehand. It turns out everybody else who was taking this stuff seriously was computer scientists, so I sort of was there at the early days of like, oh, I know business stuff and entrepreneurship stuff and education, and these things are actually quite useful. And I already had a fairly large Twitter following, so I just sort of became the go-to person, and then there is this, there is a, um, Matthew effect of like, all the labs started talking to me and I get insider information on everything, and it becomes this sort of self-reinforcing prophecy, uh, in that space.
