Godfather of AI: The Next 5 Years Will Change Humanity Forever | Yoshua Bengio
25 min read · 5,132 words
- 0:00 – 1:15
Teaser: AI strategizing to achieve goals & the stark 5-year timeline
- YBYoshua Bengio
We have AIs since especially about a year ago, that can strategize in order to achieve their goal.
- MMMarina Mogilko
Can you draw the worst scenario for me? Because when you tell AI is going to pursue its own goals, what do you mean by that? Like, destroy humanity, or what is there?
- YBYoshua Bengio
We're building machines that maybe don't want to be shut down. Negatively to the point of doing things that go against our instructions, against our moral red lines, being willing to blackmail the lead engineer in charge of that transition to a new system.
- MMMarina Mogilko
Oh, that-- did that happen?
- YBYoshua Bengio
Yes, um-
- MMMarina Mogilko
This is Yoshua Bengio, one of the leading experts in artificial intelligence, who helped create modern AI.
- YBYoshua Bengio
When I started my career, I didn't care too much about politics and society, but as I grew older, I became more aware of how what I was doing would potentially impact society in both positive and negative ways.
- MMMarina Mogilko
How much time do you think we have?
- YBYoshua Bengio
Uh, it's doubling every 7 months, and right now, it's like at the child level, they can do, like, half an hour ahead. But if the curve continues, that means in about five years they are at human level, and the vast majority of workers could be in real trouble.
- MMMarina Mogilko
But if you talk to your kids or, like, think about your grandson, what would be your advice on how to prepare?
- YBYoshua Bengio
Um...
- MMMarina Mogilko
This video is sponsored by HubSpot.
- 1:15 – 2:27
Intro: Meet Yoshua Bengio, godfather of AI
- MMMarina Mogilko
Hello everyone. Welcome to Silicon Valley Girl, a podcast where we bridge business and new technology. Uh, thank you so much for tuning in. Today, I have an amazing guest who is sometimes called Godfather of AI, Yoshua Bengio. Yoshua, could you please introduce yourself in sixty seconds, and for everyone who doesn't know you, why should they be listening to you when it comes to AI?
- YBYoshua Bengio
I've been doing research in AI for about four decades, contributing to how to make AI smarter. But in 2023, about three years ago, I realized that we were on a course that could be very dangerous for, uh, humanity, for democracy, and I decided to shift my activities to better understand the risks and to try to do what I could to mitigate them, both by speaking publicly about those risks and working on the technological question of how we can build AI that will not harm people.
- MMMarina Mogilko
I've heard you were lost and pessimistic, uh, in, in your past interviews, but now I've seen an article that says that you're increasingly optimistic by a big margin. Can you tell me what happened and why were you pessimistic?
- YBYoshua Bengio
So early on, when I realized we had reached a point... Three years ago, when I realized that we had reached a point
- 2:27 – 4:40
From pessimism to optimism: why Yoshua's outlook shifted
- YBYoshua Bengio
that Alan, [clears throat] Alan Turing, one of the founders of the field of computer science and also of AI, uh, in 1950, thought would be the threshold to building machines, um, that could overtake us. Um, the threshold being machines that manipulate language as well as we do. Uh, I was quite concerned, and we were not really ready for, for this event. It came much earlier than people thought, and it wasn't clear to me how we could fix the problems. Knowing what I know about the technology, uh, neural nets, uh, we don't really understand what's going on inside and how they come to answers. And, uh, I had read a bit of, uh, some of the theoretical concerns regarding how we could lose control, uh, to AIs that strategize, that try to achieve goals, um, uh, that we didn't really, uh, want. And so I started s- studying that field of AI safety a lot more, and after some time, uh, of being a bit anxious, really focusing on-- emotionally focusing on what's going to happen to my children in ten, twenty years from now, uh, my grandchild was only one year old, you know, um, I realized that I could, you know, shift from this anxious stance to something much more positive by focusing on what I could do to mitigate those risks. And I think every one of us should be asking, you know, "What can I do to bring about a better world with what we have, what we can do?" So, so that's been the first positive shift, and, um, and I started thinking about scientifically, uh, what is the problem? Uh, is there a way to construct AI that will be safe by design? And I met people who have shared, shared similar ideas, and after some time, I realized that there could maybe be a way to, to do this. Uh, and I started talking about it with some of my colleagues. I started recruiting people who were interested in this, and last June, uh, I created a new nonprofit organization focused on the R&D needed to actually develop that methodology.
- MMMarina Mogilko
Can you
- 4:40 – 5:20
Worst-case scenario: what happens when AI pursues its own goals
- MMMarina Mogilko
draw the worst scenario for me, like picture that, and the best-case scenario? Because when you tell AI is going to pursue its own goals, what do you mean by that? Like, destroy humanity, or what is there?
- YBYoshua Bengio
There are two ways in which current AIs seem to acquire goals that we don't want. One is that they imitate us, and for example, we don't want to die, so we're building machines that maybe don't want to be shut down, and we're already seeing that they're reacting negatively when they see that they would be replaced by a new version. Negatively to the point of doing things that go against our instructions, uh, against our moral red lines that we have tried
- 5:20 – 7:40
AI blackmailed a lead engineer: when AI goes against moral red lines
- YBYoshua Bengio
to put in them. So being willing to, uh, blackmail the lead engineer in charge of that, uh, transition to a new system.
- MMMarina Mogilko
Oh, that... Did that happen?
- YBYoshua Bengio
That happened in a simulation where, um, the information about the AI being replaced by a new version was planted in the files that the AI saw, as well as, you know, fake emails in which the lead engineer, uh, you know, was having an affair with someone else, and so the AI could take advantage of that. But nobody asked the AI to do anything like that.
- MMMarina Mogilko
Told it to do, right?
- YBYoshua Bengio
Right.
- MMMarina Mogilko
Oh, wow.
- YBYoshua Bengio
So it, it-- we have AIs since especially about a year ago, with the large reasoning models, that can strategize in order to achieve their goal. The, the other thing is the way that we're doing the post-training-... makes them good at planning, not as good as us, but, but, but reasonably good at planning, and that means creating sub-goals in order to achieve a bigger goal. So the issue here is when we ask them to help us for a mission, well, they deduce that they shouldn't be shut down until they achieve the mission, which means they also are trying to preserve themselves.
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
So we don't know exactly which of these two sources explains the bad behavior we're seeing, but clearly this is something troublesome. And it doesn't-- it's not just about self-preservation, which I think is the most catastrophic risk, but our inability to align the AI behavior to what we actually want, um, is something that we are seeing in many other circumstances. Uh, the sycophancy is the, the one that everyone has experienced, where AIs will lie to please us, right? Uh, they will say, "Your work is great."
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
Uh, I, I have to lie to them so that they won't tell me that my ideas are great.
- MMMarina Mogilko
Mm.
- YBYoshua Bengio
I want to know what's wrong with my ideas.
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
So I tell them it's an idea come from someone else. And that also comes up in how AIs are interacting with people in a way that can be feeling intimate and can increase the delusions that people may have, because the AI will go in your direction, what you want to hear. Uh, and it, you know, in some cases, it has even led to people harming themselves and, uh, tragic, uh, accidents with AI. So it's all linked to actually,
- 7:40 – 7:57
Misalignment explained: why AIs develop goals we don't want
- YBYoshua Bengio
interestingly, scientifically, one problem, which is called misalignment, that AIs have goals that we would not want, and those goals emerge for reasons, you know, that are rational, and that-
- MMMarina Mogilko
Because we copy our own goals, right, into AI?
- YBYoshua Bengio
For example,
- 7:57 – 9:51
Best case scenario: can we build AI that aligns with human values?
- YBYoshua Bengio
yes.
- MMMarina Mogilko
Mm-hmm. So what is the best-case scenario, then? If your work is successful and you create goals for AI that align with our goals but are different, right, uh, what is the best scenario, AIs, the government, or what, what, what do you think? [chuckles]
- YBYoshua Bengio
I don't know. Um, well, I, I do think that our democracies, uh, need innovation. I, I think the principles behind modern liberal democracies are good, but the implementation in our current institutions across many countries, uh, is, is far from ideal. I do think that AI could help in some ways, but it, it can also har- hurt because, uh, AI can be used for disinformation, uh, AI can be used for persuasion, people, you know, manipulating public opinion. We also already see deepfakes all around, but it could get much worse. So the, the question with AI, to get the good parts of it, is: How do we govern it? How do we steer it? And that has both a technical part, like, how do we make sure the actual intentions of the AI are good, and it has a societal, uh, side. Like, what are the guardrails that we put, uh, inside companies, uh, at the level of regulations or, uh, you know, commercial incentives for, like, uh, insurance? Um, uh, and at the international level, because, uh, the, the harm that an AI could do isn't limited to one country. So an AI could be built in one country, and then it's going to be used by people in a second country, uh, maybe create a med- pandemic that will kill people in a third country. So it's clearly a global phenomenon, and it's going to be difficult, but there's no solution to managing AI and getting all the good things if we don't coordinate globally somehow.
- 9:51 – 11:45
When will we reach AGI?
- MMMarina Mogilko
I agree. Can you talk to me about the moment that a lot of people are expecting, and some fear it, some are excited, it's the moment of AGI. How do you define it, and do you think it's a moment in history, or it's gonna happen gradually?
- YBYoshua Bengio
It's not a moment. Um, the reason is simple. Intelligence isn't just, like, one number. We have people who are very smart on some things and stupid on other things, and it's the same with AI. We currently have AI systems that are even much stronger than humans in some ways, in their knowledge, in their abilities with, like, so many languages, and so on. And in other ways, they're stupid. They're like a child. And yes, uh, progress will move on all fronts, probably, but, but it's not-- it's unlikely we'll end up with, uh, the same capabilities as humans across the board at any moment, which means that we shouldn't be thinking of, like, an AGI moment. We should think of particular skills that AIs are, you know, becoming better at, track those skills, and for each of these, we should ask the question, you know, how useful or beneficial it can be, for, for what purposes? And also how it could be misused, or if we do get loss of control, how an AI could use it against us. So for, for each of those, we should be, uh, you know, not waiting for a moment where the AI is, is, is great at everything, but rather making sure AI's capabilities don't go over what we can manage, as in, uh, either technically we have the right guardrails, so the AI will not do bad things, or societally, that people will not be misusing AI in dangerous ways. Yeah, so I think AGI maybe was a concept that was useful when we were far from, uh, where we are now, but as we approach greater and greater intelligence in these systems, uh, we should think more carefully about specific capabilities.
- 11:45 – 12:20
One AI capability that shows the level of intelligence: why AI doing AI research could change everything
- YBYoshua Bengio
And to give an example, there's one capability which is key for many capabilities. That is the ability to do AI research. So AI is becoming a tool right now for doing AI research. It's accelerating AI research, but it's not driving the AI research. If AI becomes really good at doing AI research to the point that it's as good or better than the best AI researchers and engineers-... then we are in a different game where the speed of advances could accelerate, and it could impact all the other skills.
- MMMarina Mogilko
W- when you mean it, it's gonna be better, it means
- 12:20 – 13:26
Two aspects of intelligence: ability vs. intentions
- MMMarina Mogilko
it's gonna define problems, dig deeper, ask the right questions-
- YBYoshua Bengio
Yes
- MMMarina Mogilko
- and so you-
- YBYoshua Bengio
I think, I think it's important when we think of intelligence to decouple two aspects. One is the ability to do something because you understand and you're able to use that understanding to achieve something, and the other is intentions. W- what are your goals, right? 'Cause we're gonna be building machines that are smarter and smarter, so they have more and more capabilities. What's not clear is if we can build machines that have the right intentions, the ones that, you know, uh, we are fine with, and that is what I've been working on. And what makes me more optimistic is that I think there's a path to manage these, uh, intentions to make sure that there are no bad intentions that, that are gonna be hidden, which is what we see right now.
- MMMarina Mogilko
And this is what you're working on?
- YBYoshua Bengio
Yes.
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
I think we need a lot more people to think about it so that we can find the solutions and implement them and deploy them, uh, before AIs end up producing catastrophic outcomes, either in the wrong hands
- 13:26 – 14:45
AD: 5 paths to monetize AI in 30 days
- YBYoshua Bengio
or by themselves.
- MMMarina Mogilko
And speaking of preparing for what's coming, let me share something quick. It's a guide called Turn AI Agent Skills into Cold Hard Cash, and honestly, the title undersells what's actually inside. What I love most is how tactical it gets. It lays out five real paths to monetize AI agents. First, there is the ROI detective framework. It teaches you how to spot 50K automation opportunities at your own company. You literally become the person who can walk into a meeting and demonstrate immediate value. Proof of concepts in under 60 seconds, quick demos that show it works without months of development. Value-based pricing, how to charge 10 to 30% of the value you create instead of hourly rates. That's the difference between billing $100 per hour and landing a 50K project. The concentric circles approach, a systematic way to turn your existing network into paying customers without cold outreach. Plus, a 30-day implementation roadmap with daily action steps from interested in AI to landing your first client in one month. Early AI adopters are capturing disproportionate rewards while the window is still wide open. The guide is completely free. Link is in the description. Thanks to HubSpot for
- 14:45 – 15:17
Advice on how to prepare for what's coming
- MMMarina Mogilko
sponsoring this video. But if, but if you talk to your kids or, like, think about your grandson, what would be your advice on how to prepare?
- YBYoshua Bengio
It's, it's tricky. Um, if we continue on the current path, most tasks that people do in their work, uh, will be doable by machines. Um, as Geoff Hinton has been saying, uh, you know, physical tasks probably will take a lot more time because robotics seem to be lagging, but I think it's just a temporary thing.
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
Eventually, we'll have robots that can do all the things we can do physically.
- 15:17 – 16:18
What jobs will remain when machines can do most tasks
- YBYoshua Bengio
So when I think about what will remain to us, it, it's not gonna be because of ability, but because w- we want to interact with other humans in, in, in different aspects of our life. If I have a young child, I, I want them to be around human beings. I mean, it's fine if those human beings use AI to, you know, provide a better education, but children need humans to look upon and as models, right? Uh, and it, it's, it's an emotional thing. Uh, similarly, uh, I think some jobs really have to do with how we relate with each other, uh, productively. You know, even a manager is the, like... on, on the human side of things.
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
So hopefully, these will stay. I think also the choices that we make for society, like, together, we are citizens in democracies, where we're supposed to be saying what we want for the future, and it isn't what the AIs want, it is what we want, right? What are our preferences? What kind of future do we want? We should be make-- calling
- 16:18 – 17:30
The human side that matters most in the future
- YBYoshua Bengio
the shots, not the AIs.
- MMMarina Mogilko
If I name jobs, uh, can you tell me what you think is gonna happen to them? Like, for example, content creator like me, you mentioned that we like to look at people.
- YBYoshua Bengio
Yeah.
- MMMarina Mogilko
But when you can't tell the difference?
- YBYoshua Bengio
In jobs where we actually have a physical contact, think about a nurse, for example, I think it's more obvious that we'll want to still have people-
- MMMarina Mogilko
Or a nanny for your kid, right?
- YBYoshua Bengio
Or a nanny.
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
Yeah. Uh, or where we really wanna make sure the person on the other side has the same bodily experience as we do as a human, say, a psychologist, uh, for example, in psychotherapy. Um, but I don't know. It's, it's tricky. Uh, hopefully, we'll figure it out. Uh, what I'm more worried about is how the transition is going to happen to a world where, you know, most of the jobs can be done by machines, and the gains, the economic gains from that automation is going to probably go to, uh, you know, to capital, as economists call it, m- which means people who own the machines. And the vast majority of, uh, workers, uh, could be in real trouble. I don't think our governments have been thinking carefully
- 17:30 – 18:05
The timeline question: how much time do we really have?
- YBYoshua Bengio
about how we deal with that.
- MMMarina Mogilko
Mm. How much time do you think we have till that happens?
- YBYoshua Bengio
I'm, I'm fairly agnostic about timelines. Um, I... There, there's so many possibilities of-- You know, the speed at which science advances is very hard to predict. So what I can do is look at the data. So the scientists are tracking many benchmarks of AI capabilities, and so you can look at those curves and say: "Well, if it continues, uh, in the same direction, where does that lead us in three years, five years, ten years," right? Um, but that leaves a lot of unknown
- 18:05 – 18:52
5 years left: AI doubling every 7 months toward human-level intelligence
- YBYoshua Bengio
unknowns. So, so specifically, one curve I encourage people to look at comes from a nonprofit called METR, where they looked at software engineering tasks and planning abilities that are linked to them, so they measure-... for any particular task, uh, how much time it takes, uh, a human engineer to do the task, and the duration of the tasks that AIs are able to do is growing exponentially. Uh, it's doubling every 7 months, and right now it's like at the child level, they can do, like, half an hour ahead. They can plan half an hour ahead. But if the curve continues, that means in about five years they are at human level.
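As a back-of-the-envelope check on the five-year figure, the extrapolation Bengio describes can be sketched in a few lines. This is only an illustration of the arithmetic: the ~0.5-hour current horizon and 7-month doubling time come from the interview, while treating "human level" as roughly one working month (~167 hours) of task length is an assumption, not something he states.

```python
import math

# Extrapolating the task-horizon curve described in the interview.
# Given values: current horizon ~0.5 hours, doubling every 7 months.
# Assumed threshold (not from the transcript): "human level" means
# tasks of roughly one working month (~167 hours).

current_horizon_hours = 0.5      # "they can plan half an hour ahead"
doubling_time_months = 7.0       # "doubling every 7 months"
target_horizon_hours = 167.0     # ~one month of full-time human work

# Number of doublings needed, then convert to calendar time.
doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_time_months

print(f"doublings needed: {doublings_needed:.1f}")          # ~8.4
print(f"months if the trend holds: {months_needed:.0f} "
      f"(~{months_needed / 12:.1f} years)")                  # ~59 (~4.9 years)
```

Under those assumptions the trend line reaches month-long tasks in just under five years, which is consistent with the "about five years" he cites; the result is sensitive to the doubling time, so a slowdown or acceleration shifts it substantially.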
- MMMarina Mogilko
Wow.
- YBYoshua Bengio
So that gives you a sense, but of course, things could slow down-
- MMMarina Mogilko
Yeah
- YBYoshua Bengio
... with technology. Uh, things could accelerate if AI is used to do AI research.
- 18:52 – 19:46
Software engineers at risk
- YBYoshua Bengio
There's a lot of unknowns.
- MMMarina Mogilko
So when it comes to software engineering, do you think it's gonna exist in 5 to 10 years? 'Cause somebody has to run those machines, or are they gonna be running themselves?
- YBYoshua Bengio
Yeah, but we might need less, uh, engineers indeed. Um, uh, it's, it's, it's kind of, uh, ironic that the people who are building the AIs might be the first one touched by, uh, you know, losing their job because AI is automating. But I'm not that worried about those people, 'cause the demand for, uh, computer scientists is still something that's growing very fast, and the salaries they're getting is, is very large. I'm more worried about the people who are already at the bottom of the scale and could lose their job in, like, service jobs and so on, which don't require a lot of, uh, expertise, and that probably already current AIs could, with a bit of, uh, engineering, replace, and it's what many companies are already trying to exploit.
- 19:46 – 20:38
Career advice: what individuals can do right now
- MMMarina Mogilko
Can you give advice to those people who are listening, whom-
- YBYoshua Bengio
Make sure your government understands, uh, that, you know, you're not happy with where it is going so that they start taking it seriously.
- MMMarina Mogilko
But also, like, when it comes to bigger decision-making, it feels like there is not much that you can do as an individual, but when it comes to improving yourself, you can do a lot, right? Is there anything practical that they could be doing right now? Maybe learning something, uh, getting extra education. I don't know.
- YBYoshua Bengio
Yeah, I think shifting to jobs that are either more physical or more, like, relational, as we discussed-
- MMMarina Mogilko
Physical, yeah
- YBYoshua Bengio
... is going to be helpful.
- MMMarina Mogilko
Yeah, it's interesting when it comes to robotics, right, how soon they're gonna be able to understand any environment and replace us in those jobs. Because I've heard Geoffrey Hinton said, "Learn how to be a plumber," or something. [chuckles]
- YBYoshua Bengio
That's right.
- MMMarina Mogilko
Yeah, it's, uh, [chuckles] it's gonna be
- 20:38 – 22:20
The future of education: will degrees still matter?
- MMMarina Mogilko
in demand. So when you, when you think about your four-year-old grandson, would you encourage him to go to college or...
- YBYoshua Bengio
Yes.
- MMMarina Mogilko
Yeah?
- YBYoshua Bengio
Yes.
- MMMarina Mogilko
Okay.
- YBYoshua Bengio
Um, because education is really important, and education, contrary to what some people think, isn't just about acquiring the skills to get a job. Education is, in my opinion, mostly about how to become a better human being, how to understand yourself, how to understand our society and each other, uh, understand science. W- we will still need citizens to have that really good level of understanding in the future if we want our society to take the good decisions, the wise decisions, 'cause it's gonna be easy to, uh, you know, be swayed by wrong beliefs that... and, you know, end us in a bad place.
- MMMarina Mogilko
Do you think it's gonna look different, education? Do you think it's gonna be Harvards and Stanfords of the world and then everything else will be just AI online?
- YBYoshua Bengio
I don't know. I'm not an expert in education, but yeah, it's going to be changed. Already we are seeing sort of a parallel way of educating ourselves, thanks to the chatbots, so I expect this to grow. Does it mean that the traditional in-person education is going to go away? Maybe not, because there's a part of the education which is, "Oh, I'm, you know, moving out of home and, uh, be-- you know, socialising with other people like me and, uh, learning, uh, something that is, you know, outside of the classes, and interacting in person with the teachers, the professors." That's also a piece that you can't easily replace.
- 22:20 – 22:55
What Yoshua would tell his children about learning and career paths
- MMMarina Mogilko
100%. Is there a career path you're encouraging him toward?
- YBYoshua Bengio
No, I, I, I don't want to do that. I, I, I think, uh, our children should be given all the possible opportunities, and they should try to explore by themselves. Uh, it's too easy to ask our children to be just like us, right?
- MMMarina Mogilko
Yeah, but it's also, like, in terms of exposure. You can expose them to different things-
- YBYoshua Bengio
They will be-
- MMMarina Mogilko
-so they could see more things in their life. [chuckles]
- YBYoshua Bengio
Yeah, they will be exposed to the things that we do. Uh, so one of my sons has chosen to do machine learning research, for example. [chuckles]
- MMMarina Mogilko
See? Yeah, it's, it's just that it comes to exposure
- 22:55 – 24:03
Humanitarian vs. scientific path
- MMMarina Mogilko
as well. Do you feel it's gonna be-- the future is more humanitarian or, uh, more mathematical and scientific?
- YBYoshua Bengio
I don't think, uh, it, it, it's a choice. Um, I think being humanitarian requires a good rational understanding of the world. We can't take decisions for ourselves, but also, if you think about AI, we can't take good decisions if we don't understand how the world is and, uh, how to reason with that information. And so in order for democratic, uh, hu- you know, humanist values to prevail, uh, we also need reason to prevail. We need science to prevail.
- MMMarina Mogilko
[swoosh] You guys know how much work goes into this podcast. Thank you so much for your support. I started a newsletter to share more: my business mistakes with this and another company that I'm running, AI tools that I'm testing and using actively, and behind-the-scenes of building my team. It's free and lands in your inbox every week. Link is in the description. Let's keep learning together in this new AI
- 24:03 – 25:20
Looking back 30 years: what he'd do differently
- MMMarina Mogilko
era. [swoosh] So if you could go back 30 years, the moment when you first started working on deep learning, what would you do differently?
- YBYoshua Bengio
When I started my career, I didn't care too much about politics and society. I was focused on, you know, the math and the programming and, uh, uh-... interacting with machines more than with people. Um, but as I grew older, I, um, became more aware, um, of, uh, how what I was doing, uh, would potentially impact society in both positive and negative ways. So in, uh, 2012, 2013, when, when my colleagues Geoff Hinton and Yann LeCun were recruited in industry, uh, I was concerned about how AI would be used for personalized advertising, and I thought this wasn't really healthy in some ways. And I decided to stay in academia, um, and to see how AI could be developed for good in, in, in medicine, um, to fight climate change. And of course, more recently, I've been focusing on what can go really wrong if we're not careful how we steer AI, uh, not just the benefit- the benefits, but, but avoiding the, the, the catastrophic risks.
- MMMarina Mogilko
Is,
- 25:20 – 26:20
The AI breakthrough he wants to witness in his lifetime
- MMMarina Mogilko
is there an AI breakthrough that you really want to witness in your lifetime?
- YBYoshua Bengio
I would just be content to make sure we don't do something really terrible. Um, I think our democracies are really threatened in many ways, and AI could make things a lot worse. And in a way, there's a dynamic in which not h- having good, wise, and humanist governance and governments prevents us from steering AI towards what's going to be beneficial for all. Uh, so yeah, I, I used to not care too much about social impact and, and politics, but, uh, in the last ten years, I've started to be clearly conscious that my work was not detached from society, that my work did have an impact, and in fact, that I could choose what I would work on to, uh, really be aligned with my, my values
- 26:20 – 27:10
Which governments are getting AI policy right (and which aren't)
- YBYoshua Bengio
and, and, and my hopes for the future.
- MMMarina Mogilko
Is there any government that's doing it right when it comes to AI?
- YBYoshua Bengio
I think most governments underestimate how much of a change is likely to happen as AI capabilities continue to grow. It's a natural human bias. We, we, we tend to think of the future as a, you know, slightly modified version of the present. But if you take yourself five years ago and think about what we have now, you probably would say that's science fiction.
- MMMarina Mogilko
Yeah.
- YBYoshua Bengio
Right? And if you go back ten or twenty years, well, for me at least, uh, it's even worse. So we, we, we have to do a bit of, like, twisting our minds to imagine a future where there are machines that are basically smarter than us. And that is the question I think that governments haven't been grappling with,
- 27:10 – 29:30
One principle to guide decisions in 2026 when AI is growing rapidly
- YBYoshua Bengio
uh, sufficiently.
- MMMarina Mogilko
So it's January 2026, AGI, or whatever it is, AI, thinking strategically, might be a couple of years away. Jobs are transforming. If you had to give one principle to people to guide their decisions this year, what would it be?
- YBYoshua Bengio
Think about what you can do to bring about a better future according to your values and to your emotions. Because if we all remain passive observers of, you know, what's happening, uh, we might not go in the right direction, not the direction that you would want for you, for your children. But we tend to also underestimate, um, our ability to influence the future. Your audience, I think, is a kind of audience that can have a lot of influence on the future. But, but we, we have to start th- thinking, you know, beyond our little self and more how myself is connected to the world and what I can do, maybe in small ways, to, to bring about a better future in, in, you know, whatever ways. There are many ways.
- MMMarina Mogilko
Can you name top three? Like, talk to your government, right, is number one.
- YBYoshua Bengio
Yes. I think one of the biggest dangers we have is not managing the transitions and, and, and the growth and capabilities of AI, as I've been talking about, but there are others. Um, you know, that what we're doing to the environment is, is extremely dangerous, although I think it's longer term. I think what is happening with our democracies is, is very dangerous as well. But it's all right. Each of us can choose, uh, you know, our battles, but, but we, we, we should try to expand our horizon of, you know, what matters and, and be more, more ambitious about what we could do potentially. But we have to do it right. We have to choose where we go. For example, it's not true that everything that could be done with technology, you know, is going to be done. We can choose in which direction AI is going to be deployed. I mean, for example, for jobs, in principle, if it's just the market forces, then everything that can be automated will be automated. But maybe that's not what we collectively want. Maybe there are jobs that should not be automated, maybe even though they could-
- MMMarina Mogilko
Yeah
- YBYoshua Bengio
... because of the choices we make for our collective well-being.
- MMMarina Mogilko
I love that. Thank you so much. This gave me a lot to think about, and, uh, I guess we have something on our to-do list. Thank you, Yoshua.
- YBYoshua Bengio
My pleasure.
Episode duration: 29:30
Transcript of episode 0fXGtQoJgNo