Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence | Lex Fridman Podcast #71

Lex Fridman Podcast · Feb 14, 2020 · 1h 44m

Lex Fridman (host), Vladimir Vapnik (guest)

Engineering intelligence vs. scientific understanding of intelligence
Predicates, invariants, and Plato’s world of ideas
Weak vs. strong convergence and admissible sets of functions
Handwritten digit recognition as a minimal test of intelligence
Critique and reinterpretation of deep learning and convolutional networks
Discovery of good predicates via contradictions and invariant violations
Analogies from literature and music criticism for understanding predicates

Vapnik Explores Predicates, Invariants, and Plato’s Blueprint for Intelligence

Lex Fridman and Vladimir Vapnik explore the distinction between engineering intelligence (building useful systems) and the science of intelligence (understanding its underlying principles).

Vapnik argues that intelligence arises from a small set of abstract predicates—functions capturing integral properties of data—that generate invariants, linking Plato’s world of ideas to real-world observations.

Using handwritten digit recognition as a canonical testbed, he proposes that discovering a few powerful, human-meaningful predicates (like symmetry and structure) should drastically reduce the data needed for high performance.

They connect these ideas to weak vs. strong convergence in statistical learning, critique current deep learning practices as predicate-poor engineering, and speculate that insights from literature and music criticism may help reveal universal predicates.

Key Takeaways

Differentiate building intelligent systems from understanding intelligence.

Vapnik insists that engineering systems that imitate human activity is a fundamentally different problem from the scientific goal of understanding intelligence.

Intelligence is about discovering a small set of powerful predicates.

He posits, in a Platonic spirit, that there exists a relatively small set of abstract predicates (functions over data) that, when combined with reality, generate invariants and make learning data-efficient and interpretable.

Use weak convergence to restrict admissible functions and avoid overfitting.

Beyond strong (pointwise) convergence, weak convergence focuses on integral properties (inner products with predicates); matching these properties restricts the admissible set of functions and helps avoid overfitting.
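The integral-property idea can be sketched in code: instead of requiring a candidate function to match labels pointwise, keep only candidates whose inner products with a few fixed predicates match those of the target. Everything below (the predicates, the candidate family, the tolerance) is an illustrative assumption, not a construction from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)            # stand-in "truth" on the sample points

# Predicates: fixed functions whose inner products with a candidate f
# must agree with the target's. Chosen arbitrarily for illustration.
predicates = [np.ones_like(x), x, np.cos(2 * np.pi * x)]

def inner(f, g):
    return float(np.mean(f * g))     # discrete stand-in for an integral

targets = [inner(y, psi) for psi in predicates]

def random_candidate():
    a, b, c = rng.normal(size=3)
    return a * np.sin(2 * np.pi * x) + b * np.cos(2 * np.pi * x) + c

# Weak-convergence-style filter: keep candidates whose integral
# properties match the targets, without ever comparing pointwise.
tol = 0.1
admissible = [f for f in (random_candidate() for _ in range(2000))
              if all(abs(inner(f, psi) - t) < tol
                     for psi, t in zip(predicates, targets))]

print(f"{len(admissible)} of 2000 candidates survive the predicate constraints")
```

The surviving candidates form the (much smaller) admissible set; the more informative the predicates, the fewer candidates survive.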

Handwritten digit recognition is a clean proving ground for “intelligent” learning.

Vapnik challenges the community to match state-of-the-art MNIST performance using orders of magnitude fewer samples by leveraging good predicates (like degree and types of symmetry), arguing this would demonstrate genuine progress in understanding visual intelligence.
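As a toy illustration of the kind of "degree of symmetry" predicate mentioned above, one could score how closely a digit image matches its left-right mirror. The scoring function, the 5×5 images, and the exact formula are hypothetical, not from the episode.

```python
import numpy as np

def vertical_symmetry(img: np.ndarray) -> float:
    """Score in [0, 1]: how closely the image matches its left-right mirror."""
    mirrored = img[:, ::-1]
    diff = np.abs(img - mirrored).mean()
    return 1.0 - diff / (img.max() - img.min() + 1e-9)

# Toy 5x5 "images": an '8'-like symmetric blob vs. a slanted stroke.
eight = np.array([
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
], dtype=float)

slash = np.eye(5)  # a diagonal stroke, not left-right symmetric

print(vertical_symmetry(eight))  # 1.0: perfectly mirror-symmetric
print(vertical_symmetry(slash))  # lower score: the mirror disagrees
```

A handful of such scores (different axes, rotational symmetry, stroke structure) would be the "few powerful predicates" in Vapnik's challenge, replacing raw pixel statistics learned from thousands of examples.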

Deep learning currently uses few, relatively crude predicates.

He characterizes convolution as essentially a single, human-designed predicate enforcing translation invariance, and criticizes the field for exploring huge classes of piecewise-linear functions without a clear, idea-level account of the invariants they respect.
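The translation-invariance point can be shown with a minimal 1-D convolution: the same filter response appears wherever the input pattern appears, just shifted along with it. This is an illustrative sketch, not Vapnik's formulation.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: one filter slid over the signal."""
    n, k = len(signal), len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel) for i in range(n - k + 1)])

kernel = np.array([1.0, -1.0])       # a simple edge detector
pulse = np.zeros(10); pulse[3] = 1.0
shifted = np.roll(pulse, 2)          # the same pulse, translated by 2

r1 = conv1d(pulse, kernel)
r2 = conv1d(shifted, kernel)

# Translation equivariance: shifting the input shifts the response identically.
print(np.allclose(np.roll(r1, 2), r2))  # True
```

In Vapnik's reading, this one built-in symmetry is essentially the only idea-level predicate a convolutional network commits to; everything else is fit from data.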

Good predicates are found by seeking contradictions and broken invariants.

Analogous to physics, he suggests identifying situations where existing predicates fail (where invariants don’t hold), then adding new predicates that restore invariance, iteratively refining the admissible function set.
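The refinement loop described above might be sketched as follows: find a case where the current invariants break, add a predicate that handles it, repeat until no contradictions remain. Every name and the toy demo are hypothetical stand-ins.

```python
def refine_predicates(predicates, find_contradiction, propose_predicate,
                      max_rounds=10):
    """Iteratively grow the predicate set until invariants hold everywhere."""
    for _ in range(max_rounds):
        case = find_contradiction(predicates)    # a case where invariants fail
        if case is None:
            return predicates                    # no contradictions remain
        predicates.append(propose_predicate(case))
    return predicates

# Toy demo: three pending "contradictions", each resolved by one new predicate.
pending = [10, 20, 30]
find = lambda preds: pending.pop(0) if pending else None
propose = lambda case: (lambda x, c=case: x % c == 0)   # stand-in predicate

result = refine_predicates([], find, propose)
print(len(result))  # 3
```

Each added predicate shrinks the admissible function set, mirroring the physics analogy of patching a theory wherever its invariants are observed to fail.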

Artistic criticism may reveal candidate predicates for perception tasks.

Vapnik notes that music and literature critics routinely use a compact vocabulary of abstract descriptors; mining such language could expose high-level predicates that transfer to domains like image understanding and digit recognition.

Notable Quotes

Engineering is imitation of human activity. Understanding intelligence is a completely different problem.

Vladimir Vapnik

I believe in a scheme which starts from Plato: there exists a world of ideas. Intelligence is this world of ideas combined with reality, creating invariants.

Vladimir Vapnik

Good predicates are those that make the admissible set of functions very small.

Vladimir Vapnik

Deep networks are just piecewise linear functions. What matters is not the network, but the invariants it keeps.

Vladimir Vapnik

When solving a problem of interest, do not solve a more general problem as an intermediate step.

Vladimir Vapnik

Questions Answered in This Episode

How might we systematically discover high-quality predicates for vision beyond symmetry and basic structure?

Lex Fridman and Vladimir Vapnik explore the distinction between engineering intelligence (building useful systems) and the science of intelligence (understanding its underlying principles).

Can neural networks be redesigned explicitly around predicate discovery and weak convergence rather than large, undifferentiated function classes?

Vapnik argues that intelligence arises from a small set of abstract predicates—functions capturing integral properties of data—that generate invariants, linking Plato’s world of ideas to real-world observations.

What concrete methods could we use to mine literature and music criticism for transferable predicates relevant to perception?

Using handwritten digit recognition as a canonical testbed, he proposes that discovering a few powerful, human-meaningful predicates (like symmetry and structure) should drastically reduce the data needed for high performance.

Is there empirical evidence that solving Vapnik’s MNIST challenge with very few examples leads to predicates that generalize to more complex, real-world images?

They connect these ideas to weak vs. strong convergence in statistical learning, critique current deep learning practices as predicate-poor engineering, and speculate that insights from literature and music criticism may help reveal universal predicates.

To what extent can the Platonic notion of a small world of ideas be reconciled with the apparent complexity and diversity of real-world tasks and data?

Transcript Preview

Lex Fridman

The following is a conversation with Vladimir Vapnik, part two, the second time we spoke on the podcast. He's the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. He was born in the Soviet Union, worked at the Institute of Control Sciences in Moscow, then in the US, worked at AT&T, NEC labs, Facebook AI Research, and now is a professor at Columbia University. His work has been cited over 200,000 times. The first time we spoke on the podcast was just over a year ago on one of the early episodes. This time, we spoke after a lecture he gave titled Complete Statistical Theory of Learning as part of the MIT series of lectures on Deep Learning and AI that I organized. I'll release the video of the lecture in the next few days. This podcast and lecture are independent from each other, so you don't need one to understand the other. The lecture is quite technical and math heavy. So if you do watch both, I recommend listening to this podcast first since the podcast is probably a bit more accessible. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. As usual, I'll do one or two minutes of ads now, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Broker services are provided by Cash App Investing, a subsidiary of Square and member SIPC. 
Since Cash App allows you to send and receive money digitally peer-to-peer, and security in all digital transactions is very important, let me mention the PCI data security standard, PCI DSS level 1 that Cash App is compliant with. I'm a big fan of standards for safety and security, and PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now, we just need to do the same for autonomous vehicles and AI systems in general. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10 and Cash App will also donate $10 to FIRST, one of my favorite organizations that's helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Vladimir Vapnik. You and I talked about Alan Turing yesterday a little bit.

Vladimir Vapnik

Yes.

Lex Fridman

And that he, as the father of artificial intelligence, may have instilled in our field an ethic of engineering and not science, seeking more to build intelligence rather than to understand it. What do you think is the difference between these two paths of engineering intelligence and the science of intelligence?
