Can we build AI without losing control over it? | Sam Harris

[Note: These videos are not mine and are shared for educational purposes only.

Also note: if you need to find something in the video that you missed or want to revisit, you can search the transcript below for key terms with Ctrl+F (Windows) or Command+F (Mac).

When commenting: collaboration is amazing and critical feedback is welcome, but I’m keeping it good vibes only; anything else will be deleted. Share the love. No spam or personal branding, please. Thank you. Let’s make some new ideas and have fun. :)]
00:13
I’m going to talk about a failure of intuition
00:15
that many of us suffer from.
00:17
It’s really a failure to detect a certain kind of danger.
00:21
I’m going to describe a scenario
00:23
that I think is both terrifying
00:26
and likely to occur,
00:28
and that’s not a good combination,
00:30
as it turns out.
00:32
And yet rather than be scared, most of you will feel
00:34
that what I’m talking about is kind of cool.
00:37
I’m going to describe how the gains we make
00:40
in artificial intelligence
00:42
could ultimately destroy us.
00:43
And in fact, I think it’s very difficult to see how they won’t destroy us
00:47
or inspire us to destroy ourselves.
00:49
And yet if you’re anything like me,
00:51
you’ll find that it’s fun to think about these things.
00:53
And that response is part of the problem.
00:57
OK? That response should worry you.
00:59
And if I were to convince you in this talk
01:02
that we were likely to suffer a global famine,
01:06
either because of climate change or some other catastrophe,
01:09
and that your grandchildren, or their grandchildren,
01:12
are very likely to live like this,
01:15
you wouldn’t think,
01:17
“Interesting.
01:18
I like this TED Talk.”
01:21
Famine isn’t fun.
01:23
Death by science fiction, on the other hand, is fun,
01:27
and one of the things that worries me most about the development of AI at this point
01:31
is that we seem unable to marshal an appropriate emotional response
01:35
to the dangers that lie ahead.
01:37
I am unable to marshal this response, and I’m giving this talk.
01:42
It’s as though we stand before two doors.
01:44
Behind door number one,
01:46
we stop making progress in building intelligent machines.
01:49
Our computer hardware and software just stops getting better for some reason.
01:53
Now take a moment to consider why this might happen.
01:57
I mean, given how valuable intelligence and automation are,
02:00
we will continue to improve our technology if we are at all able to.
02:05
What could stop us from doing this?
02:07
A full-scale nuclear war?
02:11
A global pandemic?
02:14
An asteroid impact?
02:17
Justin Bieber becoming president of the United States?
02:20
(Laughter)
02:24
The point is, something would have to destroy civilization as we know it.
02:29
You have to imagine how bad it would have to be
02:33
to prevent us from making improvements in our technology
02:37
permanently,
02:38
generation after generation.
02:40
Almost by definition, this is the worst thing
02:42
that’s ever happened in human history.
02:44
So the only alternative,
02:45
and this is what lies behind door number two,
02:48
is that we continue to improve our intelligent machines
02:51
year after year after year.
02:53
At a certain point, we will build machines that are smarter than we are,
02:58
and once we have machines that are smarter than we are,
03:00
they will begin to improve themselves.
03:02
And then we risk what the mathematician I. J. Good called
03:05
an “intelligence explosion,”
03:07
that the process could get away from us.
03:10
Now, this is often caricatured, as I have here,
03:12
as a fear that armies of malicious robots
03:16
will attack us.
03:17
But that isn’t the most likely scenario.
03:20
It’s not that our machines will become spontaneously malevolent.
03:25
The concern is really that we will build machines
03:27
that are so much more competent than we are
03:29
that the slightest divergence between their goals and our own
03:33
could destroy us.
03:35
Just think about how we relate to ants.
03:38
We don’t hate them.
03:40
We don’t go out of our way to harm them.
03:42
In fact, sometimes we take pains not to harm them.
03:44
We step over them on the sidewalk.
03:46
But whenever their presence
03:48
seriously conflicts with one of our goals,
03:51
let’s say when constructing a building like this one,
03:53
we annihilate them without a qualm.
03:56
The concern is that we will one day build machines
03:59
that, whether they’re conscious or not,
04:02
could treat us with similar disregard.
04:05
Now, I suspect this seems far-fetched to many of you.
04:09
I bet there are those of you who doubt that superintelligent AI is possible,
04:15
much less inevitable.
04:17
But then you must find something wrong with one of the following assumptions.
04:21
And there are only three of them.
04:23
Intelligence is a matter of information processing in physical systems.
04:29
Actually, this is a little bit more than an assumption.
04:31
We have already built narrow intelligence into our machines,
04:35
and many of these machines perform
04:37
at a level of superhuman intelligence already.
04:40
And we know that mere matter
04:43
can give rise to what is called “general intelligence,”
04:46
an ability to think flexibly across multiple domains,
04:49
because our brains have managed it. Right?
04:52
I mean, there’s just atoms in here,
04:56
and as long as we continue to build systems of atoms
05:01
that display more and more intelligent behavior,
05:04
we will eventually, unless we are interrupted,
05:06
build general intelligence
05:10
into our machines.
05:11
It’s crucial to realize that the rate of progress doesn’t matter,
05:15
because any progress is enough to get us into the end zone.
05:18
We don’t need Moore’s law to continue. We don’t need exponential progress.
05:22
We just need to keep going.
05:25
The second assumption is that we will keep going.
05:29
We will continue to improve our intelligent machines.
05:33
And given the value of intelligence —
05:37
I mean, intelligence is either the source of everything we value
05:40
or we need it to safeguard everything we value.
05:43
It is our most valuable resource.
05:46
So we want to do this.
05:47
We have problems that we desperately need to solve.
05:50
We want to cure diseases like Alzheimer’s and cancer.
05:54
We want to understand economic systems. We want to improve our climate science.
05:58
So we will do this, if we can.
06:01
The train is already out of the station, and there’s no brake to pull.
06:05
Finally, we don’t stand on a peak of intelligence,
06:11
or anywhere near it, likely.
06:13
And this really is the crucial insight.
06:15
This is what makes our situation so precarious,
06:18
and this is what makes our intuitions about risk so unreliable.
06:23
Now, just consider the smartest person who has ever lived.
06:26
On almost everyone’s shortlist here is John von Neumann.
06:30
I mean, the impression that von Neumann made on the people around him,
06:33
and this included the greatest mathematicians and physicists of his time,
06:37
is fairly well-documented.
06:39
If only half the stories about him are half true,
06:43
there’s no question
06:44
he’s one of the smartest people who has ever lived.
06:47
So consider the spectrum of intelligence.
06:50
Here we have John von Neumann.
06:53
And then we have you and me.
06:56
And then we have a chicken.
06:57
(Laughter)
06:59
Sorry, a chicken.
07:00
(Laughter)
07:01
There’s no reason for me to make this talk more depressing than it needs to be.
07:05
(Laughter)
07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence
07:11
extends much further than we currently conceive,
07:15
and if we build machines that are more intelligent than we are,
07:19
they will very likely explore this spectrum
07:21
in ways that we can’t imagine,
07:23
and exceed us in ways that we can’t imagine.
07:27
And it’s important to recognize that this is true by virtue of speed alone.
07:31
Right? So imagine if we just built a superintelligent AI
07:36
that was no smarter than your average team of researchers
07:39
at Stanford or MIT.
07:42
Well, electronic circuits function about a million times faster
07:45
than biochemical ones,
07:46
so this machine should think about a million times faster
07:49
than the minds that built it.
07:51
So you set it running for a week,
07:53
and it will perform 20,000 years of human-level intellectual work,
07:58
week after week after week.
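
For anyone who wants to check that figure, here is a minimal sketch of the arithmetic, assuming only the roughly million-fold speed ratio cited above (the exact ratio is the talk’s assumption, not an established number):

```python
# Rough check of the "20,000 years per week" claim, taking the talk's
# assumed ~1,000,000x speed advantage of electronic circuits at face value.
speedup = 1_000_000            # assumed machine-to-human "thinking speed" ratio
weeks_per_year = 52            # weeks in a year (rounded)

machine_runtime_weeks = 1      # one week of wall-clock time
human_equivalent_years = machine_runtime_weeks * speedup / weeks_per_year
print(f"{human_equivalent_years:,.0f} years")  # ~19,231, i.e. roughly 20,000 years
```

So the round number in the talk is in the right ballpark, given that assumed speed ratio.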
08:01
How could we even understand, much less constrain,
08:04
a mind making this sort of progress?
08:08
The other thing that’s worrying, frankly,
08:11
is this: imagine the best-case scenario.
08:16
So imagine we hit upon a design of superintelligent AI
08:20
that has no safety concerns.
08:21
We have the perfect design the first time around.
08:24
It’s as though we’ve been handed an oracle
08:27
that behaves exactly as intended.
08:29
Well, this machine would be the perfect labor-saving device.
08:33
It can design the machine that can build the machine
08:36
that can do any physical work,
08:37
powered by sunlight,
08:39
more or less for the cost of raw materials.
08:42
So we’re talking about the end of human drudgery.
08:45
We’re also talking about the end of most intellectual work.
08:49
So what would apes like ourselves do in this circumstance?
08:52
Well, we’d be free to play Frisbee and give each other massages.
08:57
Add some LSD and some questionable wardrobe choices,
09:00
and the whole world could be like Burning Man.
09:02
(Laughter)
09:06
Now, that might sound pretty good,
09:09
but ask yourself: what would happen
09:11
under our current economic and political order?
09:14
It seems likely that we would witness
09:16
a level of wealth inequality and unemployment
09:21
that we have never seen before.
09:22
Absent a willingness to immediately put this new wealth
09:25
to the service of all humanity,
09:27
a few trillionaires could grace the covers of our business magazines
09:31
while the rest of the world would be free to starve.
09:34
And what would the Russians or the Chinese do
09:36
if they heard that some company in Silicon Valley
09:39
was about to deploy a superintelligent AI?
09:42
This machine would be capable of waging war,
09:44
whether terrestrial or cyber,
09:47
with unprecedented power.
09:50
This is a winner-take-all scenario.
09:52
To be six months ahead of the competition here
09:55
is to be 500,000 years ahead,
09:57
at a minimum.
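
The “500,000 years” figure follows from the same assumed million-fold speedup; a minimal sketch:

```python
# Same assumed ~1,000,000x speedup, applied to a six-month head start.
speedup = 1_000_000
lead_time_years = 0.5                        # six months of lead time
equivalent_years = lead_time_years * speedup
print(f"{equivalent_years:,.0f} years")      # 500,000 years of equivalent work
```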
09:59
So it seems that even mere rumors of this kind of breakthrough
10:04
could cause our species to go berserk.
10:06
Now, one of the most frightening things,
10:09
in my view, at this moment,
10:12
is the kind of thing that AI researchers say
10:16
when they want to be reassuring.
10:19
And the most common reason we’re told not to worry is time.
10:22
This is all a long way off, don’t you know.
10:24
This is probably 50 or 100 years away.
10:27
One researcher has said,
10:29
“Worrying about AI safety
10:30
is like worrying about overpopulation on Mars.”
10:34
This is the Silicon Valley version
10:35
of “don’t worry your pretty little head about it.”
10:38
(Laughter)
10:39
No one seems to notice
10:41
that referencing the time horizon
10:44
is a total non sequitur.
10:46
If intelligence is just a matter of information processing,
10:49
and we continue to improve our machines,
10:52
we will produce some form of superintelligence.
10:56
And we have no idea how long it will take us
11:00
to create the conditions to do that safely.
11:04
Let me say that again.
11:05
We have no idea how long it will take us
11:09
to create the conditions to do that safely.
11:12
And if you haven’t noticed, 50 years is not what it used to be.
11:16
This is 50 years in months.
11:18
This is how long we’ve had the iPhone.
11:21
This is how long “The Simpsons” has been on television.
11:24
Fifty years is not that much time
11:27
to meet one of the greatest challenges our species will ever face.
11:31
Once again, we seem to be failing to have an appropriate emotional response
11:35
to what we have every reason to believe is coming.
11:38
The computer scientist Stuart Russell has a nice analogy here.
11:42
He said, imagine that we received a message from an alien civilization,
11:47
which read:
11:49
“People of Earth,
11:50
we will arrive on your planet in 50 years.
11:53
Get ready.”
11:55
Would we just be counting down the months until the mothership lands?
11:59
We would feel a little more urgency than we do.
12:04
Another reason we’re told not to worry
12:06
is that these machines can’t help but share our values
12:09
because they will be literally extensions of ourselves.
12:12
They’ll be grafted onto our brains,
12:14
and we’ll essentially become their limbic systems.
12:17
Now take a moment to consider
12:18
that the safest and only prudent path forward,
12:21
we are told,
12:23
is to implant this technology directly into our brains.
12:26
Now, this may in fact be the safest and only prudent path forward,
12:30
but usually one’s safety concerns about a technology
12:33
have to be pretty much worked out before you stick it inside your head.
12:36
(Laughter)
12:38
The deeper problem is that building superintelligent AI on its own
12:44
seems likely to be easier
12:45
than building superintelligent AI
12:47
and having the completed neuroscience
12:49
that allows us to seamlessly integrate our minds with it.
12:52
And given that the companies and governments doing this work
12:56
are likely to perceive themselves as being in a race against all others,
12:59
given that to win this race is to win the world,
13:02
provided you don’t destroy it in the next moment,
13:05
then it seems likely that whatever is easier to do
13:08
will get done first.
13:10
Now, unfortunately, I don’t have a solution to this problem,
13:13
apart from recommending that more of us think about it.
13:16
I think we need something like a Manhattan Project
13:18
on the topic of artificial intelligence.
13:20
Not to build it, because I think we’ll inevitably do that,
13:23
but to understand how to avoid an arms race
13:26
and to build it in a way that is aligned with our interests.
13:30
When you’re talking about superintelligent AI
13:32
that can make changes to itself,
13:34
it seems that we only have one chance to get the initial conditions right,
13:39
and even then we will need to absorb
13:41
the economic and political consequences of getting them right.
13:45
But the moment we admit
13:47
that information processing is the source of intelligence,
13:52
that some appropriate computational system is the basis of intelligence,
13:58
and we admit that we will improve these systems continuously,
14:03
and we admit that the horizon of cognition very likely far exceeds
14:07
what we currently know,
14:10
then we have to admit
14:11
that we are in the process of building some sort of god.
14:15
Now would be a good time
14:17
to make sure it’s a god we can live with.
14:20
Thank you very much.
14:21
(Applause)
