Re: Threats to the Singularity.

From: Gordon Worley (redbird@rbisland.cx)
Date: Mon Jun 17 2002 - 19:19:02 MDT


On Monday, June 17, 2002, at 06:51 PM, Samantha Atkins wrote:

> There must be some core, some set of fundamental values, that is
> unassailable (at least at a point in time) for an ethical system to be
> built. It is only in the context of such that the question of "some
> reason" can even be addressed meaningfully. The life and well-being of
> sentients *is* part of my core. It is not itself subject to further
> breakdown to reasons why this is a core. To further break it down
> would require another core reason that this one could be examined in
> terms of. A large part of my questions here are an attempt to
> determine what that core is for various parties.

My core looks something like this: I want to make the universe a better
place. A better place to live. A place where new, interesting problems
get solved. A place I'd like to stay in, not just visit.

I'd like for this to include me in it, but if it turns out that the
universe can't be better so long as I'm still in it, then I'll get out.
I think the "yuck factor" in this is that I think the same way about
everything. If you're making the universe a worse place, I don't really
want you in it. I hope that getting "you" out of it only involves
convincing you not to do whatever it is that is making the universe
worse.

I think this seems yucky because this sounds just like the kind of thing
Hitler would say. The difference is that I have compassion for all
life. I want to see the universe better and would like for that to
include everyone and everything in it. However, if all attempts at this
prove impossible, I'm not going to say "well, okay, I guess the
universe is just going to suck", but "okay, let's see what the limiting
factors are and what we have to do to get around them".

>>> Whether we transform or simply cease to exist seems to me to be a
>>> perfectly rational thing to be a bit concerned about. Do you see it
>>> otherwise?
>> Sure, you should be concerned. I think that the vast majority of
>> humans, uploaded or not, have something positive to contribute,
>> however small. It'd be great to see life get even better post
>> Singularity, with everyone doing new and interesting good things.
>
> Then we shouldn't shoot for any less, right?

Right!

> On what basis will you judge what is rational? In terms of what
> supergoals, if you will?

I think that I answered this above: making the universe better.

>>>> Some of us, myself included, see the creation of SI as important
>>>> enough to be more important than humanity's continuation. Human
>>>> beings, being
>>>
>>>
>>> How do you come to this conclusion? What makes the SI worth more
>>> than all of humanity? That it can outperform them on some types of
>>> computation? Is computational complexity and speed the sole measure
>>> of whether sentient beings have the right to continued existence?
>>> Can you really give a moral justification or a rational one for this?
>> In many ways, humans are just over the threshold of intelligence.
>
> Whose threshold? By what standards? Established and verified as the
> standards of value how?

You're asking for a definition of intelligence. Tough question!

One way of looking at intelligence is the ability to solve interesting
problems. Ants solve some mildly interesting problems, apes and
dolphins solve slightly more interesting problems, humans solve yet
more interesting problems. At any level of intelligence, though, all
the problems at the limits of solvability look interesting. As great as
we think we are, we can already see that there are some interesting
problems out there that we can't find solutions to (like the halting
problem). And, unless it turns out that intelligence doesn't scale very
well, the trend tells us that even more interesting questions are out
there for more intelligent minds to solve. I doubt that anything will
ever be so intelligent that it will be able to solve every problem.
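
As an aside, the halting problem makes a nice example here because the
impossibility can actually be spelled out. Here's a minimal Python
sketch of the standard diagonalization argument; the `halts` oracle is
hypothetical, which is the whole point:

def halts(program, argument):
    # Hypothetical oracle: would return True iff program(argument)
    # eventually halts. No such oracle can actually be written.
    raise NotImplementedError("no general halting oracle exists")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:      # oracle says "halts", so loop forever
            pass
    else:
        return           # oracle says "loops forever", so halt

# Asking about contrary(contrary) forces a contradiction: if halts says
# True, then contrary(contrary) loops forever; if it says False, then it
# halts. Either way the oracle is wrong, so no correct halts() can exist.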

With an infinite supply of problems, every intelligence is "just over
the threshold", but it's clear that ants are closer to the threshold
than humans and humans are closer than SIs.

This is just one way of thinking about it, though. Ask anyone and they
could probably give you a different way of thinking about the same thing.

>> Compared to past humans we are pretty smart, but compared to the
>> estimated potentials for intelligence we are intellectual ants.
>> Despite
>
> So we are to think less of ourselves because of estimated potentials?
> Do we consider ourselves expendable because an SI comes into existence
> that is a million times faster and more capable in the scope of its
> creations, decision making and understanding? This does not follow.

One should be humble, but not negative. Being negative is just as
irrational as flattery.

Much as humans get to clear away ants if they're keeping the universe
from getting better, an SI could clear away some humans if they got in
the way. If the SI is compassionate, ve will see that the humans are
doing some good and, being self-aware, are able to change themselves to
do more good. Unlike humans, who are unable to solve the ant problem by
any means other than getting the ants out of the way (be that killing
them or displacing them), an SI can solve the human problem by helping
the humans.

If some humans prove to be beyond help, though, I don't think it's
totally wrong to clear them out in some way. Maybe that just means
letting them live in a simulation where they can kill their virtual
selves. I'll leave the solution up to a much more intelligent SI.

>> But, it's not nearly so simple. All of us would probably agree that
>> given the choice between saving one of two lives, we would choose to
>> save the person who is most important to the completion of our goals,
>> be that reproduction, having fun, or creating the Singularity. In the
>> same light, if a mob is about to come in to destroy the SI just before
>> it takes off and there is no way to stop them other than killing them,
>> you have on one hand the life of the SI that is already more
>> intelligent than the members of the mob and will continue to get more
>> intelligent, and on the other the life of 100 or so humans. Given
>> such a choice, I pick the SI.
>
> But that is not the context of the question. The context is whether
> the increased well-being and possibilities of existing sentients,
> regardless of their relative current intelligence, is a high and
> central value. If it is not then I hardly see how such an SI can be
> described as "Friendly".

To a Friendly intelligence, this is important.

>>>> self aware, do present more of an ethical dilemma than cows if it
>>>> turns out that you might be forced to sacrifice some of them. I
>>>> would like to see all of humanity make it into a post Singularity
>>>> existence and I am willing to help make this a reality.
>>>
>>>
>>> How kind of you. However, from the above it seems you see them as an
>>> ethical dilemma greater than that of cows but if your SI, whatever it
>>> turns out really to be, seems to require or decides the death of one
>>> or all of them, then you would have to side with the SI.
>>>
>>> Do I read you correctly? If I do, then why do you hold this
>>> position? If I read you correctly then how can you expect the
>>> majority of human beings, if they really understood you, to consider
>>> you as other than a monster?
>> If an SI said it needed to kill a bunch of humans, I would seriously
>> start questioning its motives. Killing intelligent life is not
>> something to be taken lightly and done on a whim. However, if we had
>> a FAI that was really Friendly and it said "Gordon, believe me, the
>> only way is to kill this person", I would trust in the much wiser SI.
>
> OK, that seems better. But how would you evaluate how Friendly this
> superintelligence really was?

With my Friendly-O-Meter, of course. ;-)

That's a rather complex question that I don't really have the answer to
(and if I ever came up with some kind of partial answer, I've forgotten
what it was). Maybe someone else knows?

--
Gordon Worley                     `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/            said, `it means just what I choose
redbird@rbisland.cx                it to mean--neither more nor less.'
PGP:  0xBBD3B003                                  --Lewis Carroll

