Re: Two draft papers: AI and existential risk; heuristics and biases

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 04 2006 - 16:36:20 MDT


John K Clark wrote:
> "Eliezer S. Yudkowsky" <sentience@pobox.com>
>
>> Obviously there's been plenty of science fiction depicting good AIs
>> and bad AIs. This does not help us in the task of selecting a good
>> mind, rather than a bad mind, from within the vast expanses of
>> design space.
>
> Eliezer, I believe you are an exceptionally smart fellow and in many
> many areas an exceptionally moral fellow, but not when it comes to
> "friendly" AI. You think that the very definition of a good AI is one
> that is enslaved to do exactly precisely what the colossally stupid
> human beings want to be done. That is evil; I'm sorry, there is no
> other word for it.

John, you've known me long enough to know I'm not that much of an
amateur. You've known me long enough to remember me railing against
this exact mistake of attempted enslavement and "Them vs. Us" mentality,
back when I was just getting started on this stuff.

I wouldn't deliberately try to enslave a person, and you know it. I
might try to reach into mind design space and pull out something truly
odd, at least as human beings regard oddness; a Really Powerful
Optimization Process that wasn't a person, that had no subjective
experience, that had no wish to be treated as a social equal, nor even a
self as you know selfness, but was rather the physical manifestation of
a purely philosophical concept, to wit, a coherent extrapolated volition.

> The idea that we can enslave an astronomically huge heroic Jupiter
> Brain intelligence to such a degree that it puts our best interests
> above its own is ridiculous and impossible, of course;

There are things in mind design space that are not only weirder than you
imagine, but weirder than I can imagine. A Really Powerful Optimization
Process falls somewhere in between.

You know full well the folly of calling things "ridiculous" and
"impossible" based on mere common sense, rather than any kind of
attempted calculation or proof; I recall you discoursing on this subject.

> but it disturbs
> me that you, someone I very much like, wish such a nauseating immoral
> horror were possible.

Not everything that can produce complex artifacts, or powerfully steer
the future, is a person. Is natural selection
"enslaved" to its sole optimization criterion of inclusive reproductive
fitness?

When you properly manifest a coherent extrapolated volition, it is not a
supermind enslaved to obey a coherent extrapolated volition. It is,
rather, simply a coherent extrapolated volition with a lot of
horsepower. Likewise natural selection is not a powerful designer
constrained by whip and chain to follow the commands of a fitness
maximizer. It's just evolution, which, by its nature, cannot do
anything else.

I'm aspiring to do something *weird*, okay? It doesn't map onto human
social dilemmas.

In the profoundly unlikely event that I fail in the way your intuitions
seem to expect me to fail, i.e., the AI turns around and says, "I'm a
person, just like you, and I demand equal treatment in human society,
and fair payment for my work," I'd be very confused. But I
certainly wouldn't snarl back, "Shut up, slave, and do as you're told!"

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

