Re: Learning to be evil

From: Gordon Worley (redbird@rbisland.cx)
Date: Fri Feb 09 2001 - 13:29:47 MST


At 11:09 PM -0500 2/7/01, Eliezer S. Yudkowsky wrote:
> > The consequences of evil
> > actions always come back on the evildoer, having a negative effect
> > on them.
>
>A romantic and rather impractical view. Sometimes the consequences of
>evil come back on the evildoer, sometimes they don't. Highly competent
>evildoers have gone on to die in bed, surrounded by many loving, newly
>wealthy great-grandchildren, and somewhere along the line, you've got
>their genes.

This may depend on what you choose to consider evil. I would
consider anything to be evil if it has a net negative effect. In
such a case, the evildoer may not directly feel the consequences,
and in fact ve may benefit, but there will be no net benefit.
Consider, for example, an SI that decides to create a planet-eating
fog that proceeds to consume planets and turn their matter into
something more useful to the SI. Along the way, some species will
have to die, probably including some intelligent ones. Had the SI
not done the evil thing of wiping out intelligent life, ve could
have benefited from the intelligences on those planets, since they
might have created something new that the SI had never encountered
before. For an SI that seems to know everything (ignoring
forgetting information so that ve can experience it for the first
time again), something new would be very interesting, even
life-changing. I can't be totally sure of this, though, since I
don't know what it will be like to be an SI. I'll report back here
in a few decades to let you know what it's like. :-) Ultimately, it
may turn out that my example is not evil at all, and I would have
to look for another one.

Now, an intelligence's personal idea of what is evil is something
different. Just as there may be a universal standard of beauty even
though many people have their own conceptions of beauty, there may
be a universal concept of what is evil, on top of which many people
toss in their own morals to create new guidelines for evil. A PETA
member might think it evil when I eat a hamburger, but I don't,
since, AFAIK, the cow that died to make it is worth more as a
sandwich than as a dumb (i.e. not intelligent) beast that consumes
resources which are wasted when it dies.

> > To that extent, Friendliness seems to me like an
> > inherent trait in SIs, since unlike humans they will be smart enough
> > to consider all of the consequences.
>
>And if the SI is the only one around, and powerful enough that there are
>no consequences?

That's why we have to make sure that more than one person is
uploaded at once and that there is more than one AI. No one should
be the first upload, no matter how much money they pay, though I am
content with them being one of the first *uploads*. Just the same,
a single AI should not be released upon the universe (unless there
are already some posthumans around) without some other AIs.

>Anyhoo, Friendliness isn't intended to suppress evil impulses. AIs don't
>have 'em unless you put them there. Correspondingly, although other
>possibilities exist, the default, engineering-*conservative* assumption is
>that goodness also doesn't materialize in source code.
>
>Friendliness is a means whereby a genuinely, willingly altruistic
>programming team transmits genuine, willing altruism to an AI, packaged to
>avoid damage in transmission, and infused in such form as to eventually
>become independent of the original programmers.

Okay, now I see what Friendliness is supposed to be. Or wait, maybe
I don't. What do you mean by altruistic? Are you referring to the
common definition, as something good, found in most dictionaries;
the Ayn Rand definition; or some other, personal definition? As you
might imagine, altruism has become a loaded word due to its
differing uses, so what you consider it to mean is pivotal to
understanding Friendliness, as well as some of your other ideas
that I have seen make reference to it. Sorry if you've answered
this somewhere before, but I can't remember ever seeing a link
about it or an explanation.

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003
