No Priors Ep. 68 | With Zapier Co-Founder and Head of AI Mike Knoop
EVERY SPOKEN WORD
- 0:00 – 1:10
Introduction
- Elad Gil
(music plays) Hi, listeners, and welcome to No Priors. Today, we're talking with Mike Knoop, the co-founder and head of AI at Zapier. Mike co-founded the company in 2011 and was an early adopter of the power of AI in the enterprise. Recently, he's joined forces with François Chollet to launch a competition to accelerate progress towards AGI called ARC Prize. Mike, welcome to No Priors, and, um, maybe you can start off just by telling us a little bit more about what you're up to on the prize side. That sounds really exciting.
- Mike Knoop
Yeah, thanks for, uh, having me. I'm super excited. Uh, I've been a No Priors listener since, uh, literally episode one. So, uh, finally, excited to- to get on and, uh, introduce myself. So I'm Mike. I'm one of the co-founders of Zapier. Um, I've run and advised all of our AI projects over the last, uh, two years or so. And my day job has been, um, you know, building AI at the application layer for Zapier. But my, kind of, nights and weekends, um, have been more interested in this, like, AGI research and progress. In- in fact, this kind of curiosity goes all the way back to kind of my college days, pre-Zapier. Um, you know, I think actually this is one of the reasons why Zapier was so early into some of the AI stuff, was kind of this curiosity (laughs) in, like, AGI. Uh, the- the chain of thought paper that came out
- 1:10 – 2:16
Redefining AGI
- Mike Knoop
in Jan 2022 was what kind of, like, shook me loose. I- I was, uh, running half the company actually at that point, and, um, I gave up my exec team role to go, like, kind of back to being an IC, and answer for myself, like, "How close are we to AGI?" And, um, as it turns out, uh, we are not that close. Um, you know, I- my belief is that AGI's progress has really stalled out over the last four or five years. Um, and I think there's a- kind of a handful of reasons for that. I think the- the biggest one is that the kind of consensus definition of what AGI is, uh, the definition of it is wrong. I think we're measuring the wrong things, and this leads people to think that we're closer to AGI than we actually are. Uh, this causes, like, AI researchers and kind of generally the world to be overinvested in exploiting this large language model, like, paradigm and regime, ex- as opposed to exploring, like, new ideas, which are desperately needed. Um, and, like, frontier AI researchers also basically, like, completely stopped publishing. You know, the GPT-4 paper had zero technical details. Uh, the Gemini paper had zero technical details on a lot of the context stuff. Um, and I- I just wanted to help fix this, so I- I wanted to see if there was something I could do to help accelerate. And so yeah, I'm excited to share, uh, we
- 2:16 – 3:08
Introducing ARC Prize
- Mike Knoop
just launched ARC Prize. It's a million-dollar-plus non-profit, uh, public challenge to beat François Chollet's, um, ARC-AGI eval, and open source the solution to it, and open source the progress towards it. Um, ARC-AGI, to the best of my knowledge, is the only true AGI eval that actually exists in the world, and measures an actually good definition, a correct definition, of what AGI is, which we can talk about. Um, this- there's an AI lab called Lab42 out of Switzerland that's been running a small annual contest over the last four years, uh, to try and beat this eval, and state of the art today is 34%. Um, state of the art four years ago when it was first introduced was 20%, so we've made very, very little marginal progress towards it. And, uh, this was pre-LLM and pre-scale, right? So it's like it has successfully resisted the advent of scale and LLMs. Um, ARC-AGI actually looks like an IQ test if you go look at some of the puzzles. Maybe we can, like, uh, overlay some of the puzzles, uh, and- and show some stuff.
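For readers curious what these puzzles look like as data: a minimal sketch below, assuming the public ARC-AGI task format from François Chollet's repository, where each task is a small JSON file with a few demonstration pairs and one or more test pairs, and each grid is a 2D list of integers standing for colors. The file name and the identity "solver" are placeholders for illustration.

```python
import json

# Minimal sketch of loading one ARC-AGI task, assuming the public JSON format:
#   {"train": [{"input": grid, "output": grid}, ...],
#    "test":  [{"input": grid, "output": grid}, ...]}
# where each grid is a small 2D list of integers 0-9 (colors).
# "example_task.json" is a hypothetical file name.

with open("example_task.json") as f:
    task = json.load(f)

for i, pair in enumerate(task["train"]):
    rows, cols = len(pair["input"]), len(pair["input"][0])
    print(f"demonstration {i}: input grid is {rows}x{cols}")

# A solver only sees the demonstration pairs and must produce the output grid
# for each test input -- i.e., it has to acquire the task's rule on the fly.
for pair in task["test"]:
    predicted = pair["input"]  # placeholder: an identity "solver"
    print("predicted grid:", predicted)
```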
- 3:08 – 5:14
Definition of AGI
- Elad Gil
Yeah, could we actually get into that? I'd love to hear sort of what you view as the consensus definition of AGI today, what's wrong about it, and then what do you think is the right way to measure or calibrate against that?
- Mike Knoop
Yeah. Uh, the sort of consensus definition that I think is most popular in sort of the AI industry right now is that AGI is a system that can do, like, the majority of economically useful work that humans can do. I- I think Vinod, uh, gets credit for coining this one, and, um, I- I- you know, I think it's a useful definition, actually. (laughs) Uh, you know, look, I spent my day job building application AI. There is legitimate economic value that is sort of unlocked by the current regime with language models. Um, however, I don't think it's a good AGI definition, though. Um, you know, I think it's a good definition of systems that are useful and economically useful, but, you know, I kind of joke that, like, I think it says more about what many humans do for work than it does about actual general intelligence. And, uh, François' definition, which is the one that I think is the right one, is, uh, this definition that general intelligence is a system that can effectively, efficiently acquire new skill. That's- that's it. Efficiently acquire new skill, being able to solve these open-ended problems with that ability. And h- here's sort of the simple, like, maybe, um, argument in this line of thinking: you know, we've had AI systems over the last 10, 15 years that can now, uh, you know, win at poker, uh, fold proteins, drive cars, win at chess. And yet I can't take any system that was, like, trained to beat, you know, poker, and go teach it to drive a car. And yet, you know, that's something, like, incredibly, uh, easy for you to do, right? I could take you out into the parking lot and probably teach you to drive a different car, and show you a variant of poker and teach it to you. Your- your ability to, like, you know, very efficiently, sample-efficiently and energy-efficiently, acquire that new skill and learn it is really what, um, makes you human and, like, shows the general intelligence ability that you have. And that's what's missing from pretty much every AI eval that exists today. Um, and this ARC-AGI eval that François built back in 2019, um, is- is an actual measure of it and formalizes the definition and the measure of it that we can actually test against and see progress towards.
- Elad Gil
Yeah. You mentioned that
- 5:14 – 8:20
LLMs and AGI
- Elad Gil
you feel like LLMs aren't good progress in this direction, but I think one of the arguments, um, for LLMs as something that's unlocking so much economic and other value is the fact that it is generalizable in different ways that didn't exist before, and it does open up the aperture in terms of one system that's kind of trained broadly but then can do a lot of very specific subtasks. So, could you explain more about why you don't feel that just scalability of LLMs sort of leads in this direction eventually, or scalability of some multimodal modeling?
- Mike Knoop
You know, the- the- the sort of claim goes like this. Uh, effectively, what large language models do today is they are high-dimensional memorization systems, right? They are trained on lots of training data. They're able to find and generalize patterns off of the training data that they're trained on and then apply those in- in new contexts. And... uh, memorization is a form of intelligence, I would claim. Um, but it's not a form of general intelligence, right? We need something... There's something more that we need in order to be able to go discover and invent alongside us. You know, these are the things that I care about, like, with AGI. This is why I wanna build AGI. I think, like, if we wanna pull forward the future and actually have AI systems that are able to, you know, discover new branches of physics or pull forward our understanding of the universe, um, pull forward, like, new therapeutics, the answers to those don't show up in high-dimensional patterns from our existing training data. Because, like, the answer is, is literally unknown, right? The pattern is unknown, in fact. You might be able to find some sub-patterns that can apply in, like, similar reasoning chains, and, and that's actually how current sort of AI agent systems work, right? If the reasoning chain that you need an agent to follow is simple enough, such that the reasoning chain shows up in an abstract way in the training data, um, it can oftentimes pluck that and apply it. And it works. Like, this is how Zapier's AI bots actually work, is they're able to, like, you know, see enough sort of small-chained reasoning examples and apply that in your context. Um, but for AGI systems that are gonna go do, like, completely new things for us and solve open-ended problems where the sort of reasoning chain doesn't exist in the training data anywhere, that's where LLMs are just gonna fall flat and be inefficient. And, you know, at the end of the day, I'm an empiricist, I think. I think that's the only thing that really works in AI, is you have to just look at what works and what doesn't, and, um, just sort of objectively, language models do not work to beat ARC. Uh, and people have tried.
- Elad Gil
But, I mean, I guess the counterargument to that is, well, we just need more scale, and then we need to focus on certain types of reasoning modules or other things, and some notion of memory. Like, there are basic components that just feel like they're still missing, and maybe that's your point, you know, to some extent.
- Mike Knoop
Scaling language models purely will not get there. I think there is a... Like, transformers, maybe, right? I think transformer might be a comp- potential component of it. Like, I think the, the, the, the biggest thing that we get from, uh... Maybe the biggest thing I think transformer has shown is, like, we now know how to build a really effective robust perception stack, right?
- Elad Gil
Mm-hmm.
- Mike Knoop
Where we can take a ne- a deep learning network, show it multimodal data, and come up with, like, numerical representations of that data and do, like, operations over it, right?
- Elad Gil
Mm-hmm. Mm-hmm.
- Mike Knoop
Um, and I think that's, that likely is a, probably a solution path towards true AGI. But the language model version of it, where we're just sort of doing next token prediction-
- Elad Gil
Mm-hmm.
- Mike Knoop
... um, and training on data, like, that system alone is the one I would claim that, like, no amount of scale
- 8:20 – 13:51
Promising techniques to developing AGI
- Mike Knoop
will n- Like, that system, if you just put, you know, double the number of parameters, 10x the number of parameters into it, 10x-
- Elad Gil
Yeah.
- Mike Knoop
... the amount of data into it, um, you're, you're never gonna get to AGI. Like, we, we do need, we need something more. There's something additional, in addition to that, that we need.
- Elad Gil
Mm-hmm. Okay. And then what ideas do you think are missing, or what areas do you think people should be exploring further?
- Mike Knoop
I, I have two thoughts here. Um, one is working (laughs). So, uh, one of the, uh, techniques that has showed some promise, um, on the ARC challenge in past years has been this technique of program synthesis. In fact, it's actually been around even longer than sort of code gen models have been. So it's the idea of, like, having a computer program that, like, searches through the program space of possible programs and assembles them together in order to do something. You typically have, like, you know, um, an input and output, and you're trying to discover a program that can, like, map your input to your output. And so it's a very relaxed, universal search space, right? Um, you're not sort of following a backpropagation gradient of, like, a signal in order to figure out what the program is. You're actually, like, looping through all possible programs. And because you're sampling from, like, the full sort of search space there, uh, it increases the likelihood that you'll actually discover, like, a general-form solution to it. And so that was what got some of the, like, mid-20% range progress towards ARC, was, was in that direction. Um, and it, it's just like, it's very, very orthogonal to sort of the language model, transformer, like, chain-of-thought (laughs) stuff. Um, but, but that's, that's, I think, one very promising technique. And then I think the other one is figuring out ways that you can have computers do the architecture discovery itself. Um, this is a, uh, not a new field or new idea. It's called neural architecture search. It's been around for a long time, like, I think maybe even 10 years now. Um, it's never really amounted to much, uh, i- interestingly. You know, I think a lot of, um... It's mostly come from the academic side of things, and in neural architecture search you're effectively using a computer program to search through possible AI architectures. And because academic researchers often don't have access to a lot of compute, they take shortcuts in order to find results that they can publish. And I suspect now, over the last four years, we have... We might now have enough compute that's come online at a cheap enough, like, kind of cost per flop, that some of those old neural architecture search methods we should revisit and relax the search. Basically, try to, try to take the learning from the bitter lesson of, like, you know, not biasing these searches with human priors and human bias, and try to relax the search and, and leverage a lot of the cheap compute that's come online towards that.
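To make the program-synthesis idea concrete, here is a minimal sketch, not any competitor's actual solution: enumerate compositions drawn from a tiny, hand-picked set of grid operations and keep the first program that maps every demonstration input to its output. The operations and the example task are made up for illustration; real entries use much richer DSLs and smarter search.

```python
from itertools import product

# Tiny illustrative DSL of grid operations (hypothetical, not a real ARC solver's DSL).
def identity(g):  return g
def flip_h(g):    return [row[::-1] for row in g]
def flip_v(g):    return g[::-1]
def transpose(g): return [list(r) for r in zip(*g)]

OPS = {"identity": identity, "flip_h": flip_h, "flip_v": flip_v, "transpose": transpose}

def search(train_pairs, max_depth=3):
    """Brute-force search over compositions of OPS (the program space),
    returning the first program consistent with all demonstration pairs."""
    for depth in range(1, max_depth + 1):
        for names in product(OPS, repeat=depth):
            def run(grid, names=names):
                for n in names:
                    grid = OPS[n](grid)
                return grid
            if all(run(inp) == out for inp, out in train_pairs):
                return names, run
    return None, None

# Made-up demonstration pair: the hidden rule is "rotate 90 degrees clockwise".
train = [([[1, 2], [3, 4]], [[3, 1], [4, 2]])]
program, run = search(train)
print("found program:", program)   # a composition of OPS, e.g. ('flip_v', 'transpose')
print(run([[5, 6], [7, 8]]))       # apply the discovered program to a new input
```

Because the search enumerates program space rather than following a gradient, it can land on a general-form rule from only one or two examples, which is the property being pointed at here.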
- Elad Gil
Mm-hmm. When you talk about AGI, um, you know, I think there's some books like Blindsight which tries to differentiate between intelligence and sentience, right? Self-awareness versus actually being able to intelligently do things. When you talk about AGI, is there an embedded concept of sentience, or is it purely intelligence?
- Mike Knoop
I'm not a philosopher, so, uh, like, I'm probably the worst person to ask about this question. Look, I wanna live in the future. That's, like, kind of one of the things I've always been really excited about. Like, you know, if I can help pull forward the future, I w- I want to. And I, I think one of the, the best ways we could pull forward the future is to invent systems that can invent and discover alongside us. And I think in order to do that, we need this general form of intelligence, or a system that can represent this, demonstrate this general form of intelligence about being able to efficiently acquire those new skills and help us solve these open-ended problems. So I, I don't... Um, like, I haven't thought deeply about, like, "Okay, well, is that system sentient? Conscious?" (laughs)
- Elad Gil
Yeah. The main reason I ask is more, um, depending on your viewpoint, that increases or decreases the relative risk of AI as a threat to humanity. And so there's sort of the doomer argument of...... sentience is, kind of, more of an issue than maybe just intelligence. Intelligence, to your point, is, hey, you're harnessing this machine tool to be more efficient in different ways or help you in different ways, which is kind of what, your view on the current state. And you said, "Well, let's focus on AGI as something that we wanna pull forward, because the current approach is just gonna do a bunch of economic value, but it's not gonna create these intelligent things," right? Or truly intelligent or generally intelligent things. So that- that's kind of the basis for the question, is the degree to which you view there being increased risks of pulling this technology forward versus not, and, you know, how you think about that more generally.
- Mike Knoop
Yeah, um, that's a good question. You know, I think the ... (laughs) You sort of get close to, um, you know, the ultimate alignment problem, which is, which is a phil- philosophical question, probably more than an engineering question today. I- I think the only way that you really can approach this stuff is through an empirical lens. I think you just have to look at what systems can do and make decisions based on that lens. I think it's incredibly dangerous to try and make predictions about future capabilities, about where the technology will go, and make rules, legislation, laws, like, prohibiting or enforcing or requiring certain research directions, um, through a theoretical lens. I- I... It just, like, hasn't empirically worked. I don't think anyone could sit here today and say they would've predicted this is where AI would even be five years ago. So it feels just, like, incredibly shortsighted to say, "Well, okay, we're gonna, like, enforce that the- the, like, sort of language model regime is gonna be the only one that we're gonna allow to happen for- for the... forever." (laughs) Um, so I think that's- that's where I'm- that's kind of where I end up starting, is, like, you gotta be empirical about this stuff. And until I think we have some empirical evidence of what the systems can do, I think it is sort of dangerous to... or at least harmful to progress, to try and sort of, uh, limit- limit- limit the research direction or add a lot of overhead in sort of exploring new ideas on that front.
- Elad Gil
Makes sense. And then, um, you know, you've- you've
- 13:51 – 16:28
Prize model vs investing
- Elad Gil
now established this ARC Prize, which I think is super exciting. It's a million-dollar prize towards, um, you know, an open-source model that, you know, meets certain criteria against your metrics of a- artificial general intelligence. Why do it as a prize versus investing in companies or, you know, taking, uh, funding, uh, more traditional funding of startups or efforts model versus a prize model?
- Mike Knoop
I think outsiders are needed. Um, you know, there- there were 300 teams that actually competed in the, like, small version of the ARC contest last year in 2023. And if you go look at all the teams that competed, you know, these are like one- or two-person teams, they are outsiders to the industry. They're not working at AI startups. Many of them don't even live in, like, the Bay Area or Silicon Valley or California. It's a very globally distributed set of people with new ideas that are working on this stuff. I am more confident, actually, that, uh... or I guess I would bet that the solution to ARC probably comes from an outsider. Um, I think it's probably gonna come from somebody who's sort of not indoctrinated in the current way of thinking about language models and scale. Or arguably, like, the solution to ARC doesn't even require that much scale. Um, you know, the- the cool thing about the puzzle, the ARC-AGI eval, is, it- it's like kind of a minimal reproduction of general intelligence. Uh, it fits into a 2D game board that's, like, at max, like, 15 by 15 squares big. Like, it's- it's so small and reproducible. The data fits into such a small, uh, small set that, um, it's quite likely, actually, that the solution, um, it- it can be, like, written in, like, 10,000 lines of code or less. Uh, and it's not gonna require these, like, you know, gigantic, you know, 200-billion-parameter models in order to solve it. And so it's within, I think it's within reach of outsiders. I think it's within reach of people that, like, wanna tinker on sort of the nights and weekends. And really the goal of the prize is to... Uh, like, my hope is that I sort of can encourage, like, the- the would-be AI researcher, you know, who has, like, choice of what they work on on their nights and weekends, to instead of saying, like, "Well, maybe I could go build, like, another LLM startup and maybe sell it," to instead say, "Ooh, maybe I could go try to beat this ARC-AGI eval." And if I do it, now not only is there status attached to it, but there's mon- there's money attached to it, right? There's like... I get- I get upside. (laughs) There's like an- there's an economic incentive to, like, try and win. Um, and I'm trying to... Like, use- use the prize as kind of a- a way to counterbalance some of the, like, economic, you know, unlock that language models have on startups and things.
- Elad Gil
You mentioned that ARC in part was inspired by your engagement with AI as part of Zapier and, and that strategy
- 16:28 – 19:08
Zapier AI innovations
- Elad Gil
there. Um, can you tell me a little bit more about what Zapier has built on the AI side and how you all both got to it early and then how you ended up approaching what to actually focus on? Because I feel like as people adopt this technology, there's almost like a multi-month phase of just figuring out what it can even do. So could you tell me about that journey, and yeah, how that all worked out?
- Mike Knoop
The summer of 2022, uh, both Brian and I actually, my co-founder Brian, um, CTO, uh, gave up our exec team roles. We went all in, back to sort of being ICs with no direct reports. And for about six months, all we did was, like, build, like, try to figure out what was possible. We... So we built a version of, you know, chain of thought, tree of thought. We built a version of ChatGPT actually internally at Zapier before it got... before it came out. (laughs) Um, and I... it felt... I- I think it gave us some confidence that we had, like, st- to the best of our abilities, fully explored the search space of what, uh, call it GPT-3 at that point, um, you know, intelligence-style model could do. And what it led us to see was... probably the- the big gap was that sort of the models are frozen in time, right? This was kind of pre-tool use. And, um, the- the most obvious thing to do was like, well, Zapier has a lot of tools, right? We have 6,000 integrations. Could we hook these language models up to use those tools? And that's ultimately what led to Zapier being a launch partner for the ChatGPT plugin, which I think is one of the first moments that Zapier, like, kind of became known more popularly in association with AI stuff.
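As a rough illustration of what "hooking a language model up to tools" means in practice: a hedged sketch below, in plain Python and not Zapier's actual implementation, where a (stubbed) model call chooses a tool and its arguments and a dispatcher runs it. The tool names, the stubbed model, and the example request are all hypothetical.

```python
# Hypothetical tool-use loop: the model (stubbed here) emits a tool name plus
# arguments, and a dispatcher executes the matching integration.

TOOLS = {
    "create_calendar_event": lambda args: f"created event '{args['title']}'",
    "send_email": lambda args: f"emailed {args['to']}",
}

def model_decide(user_request: str) -> dict:
    """Stand-in for a language model call that picks a tool and arguments."""
    return {"tool": "send_email",
            "args": {"to": "team@example.com", "body": user_request}}

def run_agent(user_request: str) -> str:
    decision = model_decide(user_request)
    tool = TOOLS[decision["tool"]]        # dispatch to the chosen integration
    return tool(decision["args"])

print(run_agent("Let the team know the launch moved to Friday."))
```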
- Elad Gil
Do... Is there anything you can share in terms of, um, adoption or metrics or usage by Zapier users or customers of, of your AI products?
- Mike Knoop
Yeah, we've got, um... At this point, over 50 million AI tasks have run on the platform to date over the last year and a half or so since we started tracking. So this is like... You know, think of a Zap, right? Where it's like you got a trigger and a set of actions, where one of those actions is an AI step. Uh, dominantly, this is an OpenAI or a ChatGPT step where, you know, a user is doing content generation or feature extraction or summarization. Um, using AI in the middle of a workflow, uh, is, is kind of the dominant way people are adopting AI today. Um, over the last couple of months, we've introduced, uh, uh, other products in our AI space, so w- we're using AI basically across the entire product. We've, we launched a new product called Zapier, um, Central, which is effectively these AI bots that, um, you don't have to build, practically. Uh, you know, the classic way I think most people experience Zapier is you have to build in the editor, right? You go have to, you know (laughs), do lots of configuration and click, click, click in order to get your Zaps set up and just tuned to the way you want. And one of the cool things with these new AI bots is you program them with natural language. And we're not actually even doing natural language to structure mapping. It is a pure inference-based engine interpreting the user's instructions of what they want the bot to do and getting access to the, all the integrations and authentications that they, uh, equip it with. And so we're seeing some just, like, (techno music plays) ... order-of-magnitude easier-to-use products because of that.
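To make the "AI step in the middle of a workflow" shape concrete, here is a minimal, hypothetical sketch in plain Python, not Zapier's actual API: a trigger payload gets enriched by an AI step (stubbed here) and the result is forwarded to a downstream action. All helper names are stand-ins.

```python
# Hypothetical trigger -> AI step -> action pipeline; none of these helpers
# are Zapier's real API, they only illustrate the shape of such a workflow.

def on_new_email(payload: dict) -> dict:
    """Trigger: normalize the incoming event."""
    return {"subject": payload["subject"], "body": payload["body"]}

def ai_step(event: dict) -> dict:
    """AI step: e.g. summarization or feature extraction. In practice this
    would call a hosted model; here it's a simple stub."""
    summary = event["body"][:140]  # placeholder for a model-generated summary
    return {**event, "summary": summary}

def post_to_slack(event: dict, channel: str = "#payments") -> None:
    """Action: forward the enriched event downstream (stubbed as a print)."""
    print(f"[{channel}] {event['subject']}: {event['summary']}")

# Running one event through the pipeline:
incoming = {"subject": "Refund issued", "body": "Customer 482 was refunded $40..."}
post_to_slack(ai_step(on_new_email(incoming)))
```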
- 19:08 – 21:48
Economic value of agents
- Elad Gil
Yeah, that's really cool. I guess, um, one potential future direction that really fits well with what Zapier has provided in the past, uh, is sort of the agentic world or r- really having some of these tasks turn more and more into agents, right? You can imagine that you're setting up some workflow automation or something else, and eventually it does things a bit more on its own, or you can be a bit more directive, and it just goes and does it for you. How far away from that world do you think we are?
- Mike Knoop
It's happening today. I mean, we have people literally paying for Zapier's AI bots. (laughs) Uh, there's enough value that it's unlocked where people are willing to pay for it, right? I think that's been shown. The way that I think about this is, like, concentric rings of use cases that get unlocked as the consistency and reliability of the, of the technology matures. So today, the sort of consistency and reliability thresholds that we're able to meet, that users are able to sort of get to, kind of require, first, adoption in, like, personal use cases or team-based workflow use cases where the risk is relatively low if something goes wrong. One interesting thing is, like, there's actual use cases, like templates of bots that we've built and given to different users, where one user takes the exact same template, say one of these AI bots that can watch for a certain email landing in your inbox and send a message to your team in Slack if it qualifies. Let's say, hey, you're looking out for a certain, you know, payment notification email or a refund notification email, and you want those, like, you know, routed to a certain channel in Slack. That, that exact use case might be completely acceptable for, like, a startup, right, um, that maybe has three Slack channels (laughs) and it's, like, just the founding team. And you take that exact same bot, same template, same, exact same thing and go give it to, you know, a mid-market company that's got thousands of Slack channels, partner channels, lots of production things happening, and they might not be comfortable with that risk, right? They might wanna clamp down the possibility space of what the bot can do, um, in a tighter way, whereas, you know, the first one would say, "Hey, like, sure, have the bot just choose which Slack channel, write the message however it wants." You know, "I, I kind of want it to just figure it all out." And as you kind of move up an- up the risk chain, you kind of want to install more and more clamps. So that's been a big part of our product build thesis for AI bots: like, how do we allow end users to provide clamping behavior on what the bot can and can't do in order to increase the, like, size of the circles of use cases that sort of get unlocked? So I think that's probably the march of technology we're gonna s- I, I would expect to see, is, like, um, there's things that we can do, still do, that we haven't done yet in terms of making the product and, and bots more reliable and consistent that we're working on right now, and I think there's things that the underlying sort of technology and models are gonna improve at as well that'll increase the reliability and consistency. And as that goes forward, I think you'll just see more and more, like, the risk level of use cases will go up.
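One way to picture the "clamping" idea in code: a purely illustrative sketch, not Zapier's implementation, in which the bot proposes an action and a user-configured policy restricts which actions and channels it may actually use. All names here are hypothetical.

```python
# Illustrative sketch of clamping an AI bot's action space (hypothetical,
# not Zapier's implementation): the bot proposes, a user-set policy disposes.

ALLOWED_ACTIONS = {"slack.send_message"}   # what the bot is allowed to do
ALLOWED_CHANNELS = {"#billing-alerts"}     # where it is allowed to do it

def clamp(proposal: dict) -> dict:
    """Reject or rewrite proposals that fall outside the configured bounds."""
    if proposal["action"] not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {proposal['action']} not allowed")
    if proposal.get("channel") not in ALLOWED_CHANNELS:
        # Force the message into an approved channel instead of failing outright.
        proposal = {**proposal, "channel": "#billing-alerts"}
    return proposal

# A three-person startup might leave these sets wide open; a mid-market team
# with thousands of channels would tighten them, as described above.
proposal = {"action": "slack.send_message", "channel": "#general", "text": "Refund received"}
print(clamp(proposal))
```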
- Elad Gil
So, um, open source
- 21:48 – 24:20
Open source to achieve AGI
- Elad Gil
software as well as just open source ideas, papers, data sets, et cetera, have really helped drive multiple areas of science and technology forward. How do you think about open source software in the context of AI, in particular given some of the regulatory and other movements that have been happening at both the California level, the national level, et cetera?
- Mike Knoop
My, my beliefs here are formed through how much we've stalled out, I think, on AGI progress. We still need fundamental research breakthroughs. We still need f- fundamental new ideas, and I think the internet and open source has been one of the world's best inventions, uh, in order to generate new ideas. Um, and so I think if you care about actually discovering AGI in our lifetime, then I think it's sort of incumbent to try and promote things that increase the likelihood that we're generating new ideas, um, and having lots of AI researcher brains (laughs), or would-be AI researcher brains, sort of encountering this stuff, and it's not locked and closed behind, you know, a hiring process at a big lab. And so, you know, I'm, I'm very much in favor of supporting open progress, open research sharing, especially at the, like, foundational scientific level, because we t- we just need new ideas, um, and I- I think the best way to generate those ideas is through open source and open sharing at this point. I mean, the proof here is, like, literally OpenAI, right? Like, the sort of genesis of the company came out of a published research result from, from Google. Um, and sadly, I don't think that's likely to happen now as a result of kind of a lot of the commercialization and market incentives causing a lot of frontier publishing getting, getting closed up, because now these, you know, companies sort of have... they, they know the economic value of the research, so they're kind of playing it more tight to the chest. And that's just, like, kind of worrying or upsetting. It's, it's certainly at least stalling progress, and, um, I'm hoping to play a small part in trying to counterbalance that a bit.
- Elad Gil
You, you raise an interesting point, which is the internet was basically driven by open protocols, uh, because there were a lot of closed proprietary protocols in terms of both how networks function and how machines talk to each other, and then open source, right, in terms of Linux-based servers and other things that were really the workhorses of the early internet. And relatedly, um, there were a lot of attempts to regulate cryptography in the '90s, uh, for adjacent but overlapping reasons in terms of why people are now trying to regulate AI, where they say it's a threat or they're, you know, uh, malicious actors could do malicious things, and everything's been fine with cryptography (laughs) and it's been net positive for the world to have it in place. So, uh, it's kind of interesting to see some of those analogs or parallels.
- Mike Knoop
Yeah, I mean, I think my sort of underlying beliefs on AI
- 24:20 – 26:00
Regulating AI and AGI
- Mike Knoop
are that AI should likely get regulated through the existing regulatory frameworks that exist. I don't see a lot of new harm or use cases or damage caused by just the narrow form of AI systems that we have today that existing sort of regulatory frameworks or agencies don't have power already to sort of regulate and make decisions over. That, that feels, like, smart and the right way to sort of think about that stuff. Now, on the AGI front, I think it's just really, really dangerous to put in prescriptive legislation ahead of seeing any empirical evidence of what those systems can or cannot do yet. Uh, I would not trade personal, independent freedom for the sort of, what it would take in order to, like, prevent AGI from ever getting developed, uh, just personally. I, like, that's kind of my (laughs) philosophical framework on that. Um, you know, I'm open to us actually discovering, okay, here's what the f- forms of AGI are gonna look like, what they can and can't do, and then making decisions about, okay, how do we wanna release that, what is it... h- you know, how, how are we gonna control that, um, making decisions at that point based on what we're seeing. Um, but, but I, I would be very, very strongly against trying to, like, predict what those things are in a theoretical sense. I think that just... hasn't worked historically.
- Elad Gil
Mm-hmm. Great. Well, thank you so much for covering this, uh, wide diversity of topics, telling us more about ARC. It sounds like a very exciting initiative, and so I'm sure there's more to come there. And, uh, thank you so much for joining us today on No Priors.
- Mike Knoop
Thanks for having me.
- Narrator
(instrumental music plays) Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
Episode duration: 26:00