Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43

Lex Fridman Podcast · Oct 3, 2019 · 1h 25m

Lex Fridman (host), Gary Marcus (guest)

Topics: Limits of deep learning and narrow AI · Common sense reasoning and knowledge representation · Hybrid AI: combining symbolic systems with deep learning · General intelligence vs. narrow task performance · Language understanding and narrative comprehension as benchmarks · Innate knowledge, evolution, and inspiration from biology · Trustworthy, value-aligned AI and societal oversight

Gary Marcus calls for hybrid AI: deep learning plus symbols

Gary Marcus argues that current deep learning systems are powerful but fundamentally limited, especially in common sense, abstraction, language understanding, and flexible reasoning. He contrasts narrow successes in games and perception with the broader, general intelligence humans display across diverse domains. Marcus advocates a hybrid approach that combines symbolic, rule-based reasoning and explicit knowledge with learning-based methods, inspired by human cognition and evolution. He also emphasizes the need for AI systems we can trust, capable of representing concepts like harm and ethics explicitly, and calls for more realistic public expectations and better benchmarks for true understanding.

Key Takeaways

Deep learning’s strengths are narrow and correlation-based.

It excels at pattern recognition tasks like image classification and certain games, but struggles with abstraction, variable-based reasoning, causal understanding, and flexible transfer to new situations.

Common sense is the core missing ingredient for current AI.

Machines lack the basic physical and psychological knowledge that underlies human common sense.

A hybrid of symbolic AI and learning is likely required.

Marcus argues neither expert systems (all symbols, no learning) nor pure deep learning (all learning, no explicit structure) are sufficient; future systems must integrate explicit rules, variables, and logic with powerful learning components.

General intelligence involves flexible transfer across domains.

Humans can apply knowledge from movies, life, or one game to novel variants and contexts; most current systems cannot even adapt a Go player to a slightly different board without full retraining.

True language understanding goes far beyond fluent text generation.

Models like GPT can produce grammatical output yet fail basic tests of narrative comprehension, character motivation, and consistency, revealing that surface fluency is not the same as deep understanding.

Innate structure and prior knowledge are crucial for learning.

Human infants likely start with built-in frameworks for space, time, causality, agents, and a kind of mental algebra; Marcus suggests AI should similarly bake in rich priors rather than treating everything as learned from scratch.

Trustworthy AI demands explicit concepts like ‘harm’ and values.

You cannot align systems with human ethics if they only manipulate correlations; we need representations that can encode rules like “first, do no harm” in a machine-interpretable way, developed with input from ethicists and society.

Notable Quotes

Just because you can build a better ladder doesn’t mean you can build a ladder to the moon.

Gary Marcus

We have to replace deep learning with deep understanding.

Gary Marcus

Right now we don’t have a way to translate ‘harm’ into something we can execute in Python or TensorFlow.

Gary Marcus

Intelligence is a multi-dimensional variable… machines are superhuman in some facets and far behind my five-year-old in others.

Gary Marcus

People have this fantasy that you can machine learn anything. There are some things you would never want to machine learn.

Gary Marcus

Questions Answered in This Episode

What concrete architectures or research programs best embody the hybrid symbolic–deep learning vision Marcus advocates, and how should they be evaluated?

How can we systematically represent and acquire common sense knowledge at scale without relying on brittle, fully hand-coded ontologies?

What kinds of benchmarks or ‘Turing Olympics’ tasks would most reliably distinguish shallow pattern-matching from genuine understanding?

To what extent should AI designers mimic specific biological mechanisms versus only high-level cognitive principles observed in humans and animals?

How can ethicists, policymakers, and AI researchers practically collaborate to formalize concepts like “harm” and “fairness” into machine-interpretable rules?

Transcript Preview

Lex Fridman

The following is a conversation with Gary Marcus. He's a professor emeritus at NYU, founder of Robust.AI and Geometric Intelligence. The latter is a machine learning company that was acquired by Uber in 2016. He's the author of several books on natural and artificial intelligence, including his new book, Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and AI in general, and discussing the challenges before our AI community that must be solved in order to achieve artificial general intelligence. As I'm having these conversations, I try to find paths toward insight, towards new ideas. I try to have no ego in the process, it gets in the way. I'll often continuously try on several hats, several roles. One, for example is the role of a three-year-old who understands very little about anything and asks big what and why questions. The other might be a role of a devil's advocate who presents counterideas with a goal of arriving at greater understanding through debate. Hopefully, both are useful, interesting, and even entertaining at times. I ask for your patience as I learn to have better conversations. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. And now here's my conversation with Gary Marcus. Do you think human civilization will one day have to face an AI-driven technological singularity that will, uh, in a societal way modify our place in the food chain of intelligent living beings on this planet?

Gary Marcus

I think our place in the food chain has already changed. So there are lots of things people used to do by hand that they do with machine. If you think of a singularity as like one single moment, which is I guess what it suggests, I don't know if it'll be like that. But I think that there's a lot of gradual change and AI is getting better and better. I mean, I'm here to tell you why I think it's not nearly as good as people think, but, you know, the overall trend is clear. Maybe, you know, maybe Ray Kurzweil thinks it's an exponential and I think it's linear. In some cases, it's close to zero right now, but it's all gonna happen. I mean, we are gonna get to human-level intelligence or whatever you want, w- what you will, um, artificial general intelligence at some point. And that's certainly gonna change our place in the food chain, 'cause a lot of the tedious things that we do now, we're gonna have machines do. And a lot of the dangerous things that we do now, we're gonna have machines do. And I think our whole lives are gonna change from, uh, people finding their meaning through their work through people finding their meaning through creative expression.

Lex Fridman

So the- the singularity will be a very gradual, in fact removing the meaning of the word singularity, it'll be a very gradual transformation in your view?
