Re: Threats to the Singularity.

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jun 17 2002 - 20:54:01 MDT


Gordon Worley wrote:

>
> On Monday, June 17, 2002, at 06:51 PM, Samantha Atkins wrote:

> My core looks something like this. I want to make the universe a better
> place. A better place to live. A place that solves new, interesting
> problems. A place that I'd like to stay, but wouldn't want to visit.
>

Thanks for offering it. Now, if we could just get a bit better
handle on what you/we mean by "better" we could move right along.

 
> I'd like for this to include me in it, but if it turns out that the
> universe can't be better so long as I'm still in it, then I'll get out.
> I think the "yuck factor" in this is that I think the same way about
> everything. If you're making the universe a worse place, I don't really
> want you in it. I hope that getting "you" out of it only involves
> convincing you not to do whatever it is that is making the universe worse.
>

Doesn't this assume that you and perhaps other entities are
relatively immutable over time no matter what your wishes? I
don't consider that a particularly tenable notion assuming an
ability to upload and/or continuously augment, self-examine and
change. I would also challenge any being to conclusively prove
that another being both does more harm than good to the universe
AND is utterly incapable of ever changing.

> I think this seems yucky because this sounds just like the kind of thing
> Hitler would say. The difference is that I have compassion for all
> life. I want to see the universe better and would like for that to
> include everyone and everything in it. However, if all attempts at this
> prove impossible, I'm not going to say "well, okay, I guess the
> universe is just going to suck", but "okay, let's see what the limiting
> factors are and what we have to do to get around them".
>

I don't believe in an "improved universe" achieved by eradicating
sentients that seem problematic, except in very, very limited
circumstances where an entity is capable of destruction so great,
and not stoppable by any other means, as to force the decision. I
don't consider not being as productive as some might like a
reasonable criterion. I don't consider not being as rational or
intelligent such a criterion either. If your goal is maximizing
life then I don't think you see these as criteria for
extermination either.

>>>> Whether we transform or simply cease to exist seems to me to be a
>>>> perfectly rational thing to be a bit concerned about. Do you see it
>>>> otherwise?
>>>
>>> Sure, you should be concerned. I think that the vast majority of
>>> humans, uploaded or not, have something positive to contribute,
>>> however small. It'd be great to see life get even better post
>>> Singularity, with everyone doing new and interesting good things.
>>
>>
>> Then we shouldn't shoot for any less, right?
>
>
> Right!
>
>> On what basis will you judge what is rational? In terms of what
>> supergoals, if you will?
>
>
> I think that I answered this above: making the universe better.
>

Fair enough.

>>>>> Some of us, myself included, see the creation of SI as important
>>>>> enough to be more important than humanity's continuation. Human
>>>>> beings, being
>>>>
>>>>
>>>>
>>>> How do you come to this conclusion? What makes the SI worth more
>>>> than all of humanity? That it can outperform them on some types of
>>>> computation? Is computational complexity and speed the sole measure
>>>> of whether sentient beings have the right to continued existence?
>>>> Can you really give a moral justification or a rational one for this?
>>>
>>> In many ways, humans are just over the threshold of intelligence.
>>
>>
>> Whose threshold? By what standards? Established and verified as the
>> standards of value how?
>
>
> You're asking for a definition of intelligence. Tough question!
>

A tougher one is why intelligence sits at the top of your value
stack as the measure of the worthiness of various beings to exist
and as the principal measure of "better".

> One way of looking at intelligence is the ability to solve interesting
> problems. Ants solve some mildly interesting problems, apes and
> dolphins solve slightly more interesting problems, humans solve yet
> more interesting problems.

So the measure of goodness or worth is how interesting a set of
problems one can solve? The idea is admirable in its nerdiness
but rather problematic if the question is one of the right to life.

> At any level of intelligence, though, all
> the problems at the limits of solvability look interesting. As great as
> we think we are, we can already see that there are some interesting
> problems out there that we can't find solutions to (like the halting
> problem).

The halting problem is provably unsolvable by any and all levels
of intelligence.
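
(For anyone who has not seen it, here is a minimal sketch of the
standard diagonalization argument; the halts() oracle below is
hypothetical, which is exactly the point.)

# Assume, hypothetically, an oracle that decides halting:
# halts(program, data) returns True iff program(data) eventually stops.
def halts(program, data):
    raise NotImplementedError("no correct implementation can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running 'program' on itself.
    if halts(program, program):
        while True:      # oracle said it halts, so loop forever
            pass
    else:
        return           # oracle said it loops, so halt at once

# Ask what halts(paradox, paradox) should return: True and False
# each contradict paradox's actual behavior, so no such oracle can
# exist, however intelligent its builder.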

> And, unless it turns out that intelligence doesn't scale very
> well, the trend tells us that even more interesting questions are out
> there for more intelligent minds to solve. I doubt that anything will
> ever be so intelligent that it will be able to solve every problem.

Sure, but so? Is this all there is? Is it a sufficient criterion
for what is and is not of value?

>>> Compared to past humans we are pretty smart, but compared to the
>>> estimated potentials for intelligence we are intellectual ants. Despite
>>
>>
>> So we are to think less of ourselves because of estimated potentials?
>> Do we consider ourselves expendable because an SI comes into existence
>> that is a million times faster and more capable in the scope of its
>> creations, decision making and understanding? This does not follow.
>
>

I would add to the above that humans, individual humans, are not
any smarter than they have been for the last few thousand years
of recorded history. Go read some of the classics from the Greek
and Roman era if you have doubts about this. Culturally we have
gotten much more intellectually efficient and much better at
accumulating, storing and processing information.

> One should be humble, but not negative. Being negative is just as
> irrational as flattery.
>
> Much as humans get to clear away ants if they're keeping the universe
> from getting better, an SI could clear away some humans if they got in
> the way. If the SI is compassionate, ve will see that the humans are
> doing some good and, being self aware, are able to change themselves to
> do more good. Unlike humans, who are unable to solve the ant problem
> by any means other than getting the ants out of the way (be that killing
> them or displacing them), an SI can solve the human problem by helping
> the humans.
>

Ants, while they may be inconvenient at a picnic or marching
across the kitchen, are not in the way of the universe getting
better. People are likely to be even less so. I agree of
course that there are much better solutions in the case of
humans than extermination.

From your earlier post, at what point would you not battle for
an SI in the process of being born? Suppose you had super
weapons capable of laying waste to entire nations of opposition.
Would you use them?

> If some humans prove to be beyond help, though, I don't think it's
> totally wrong to clear them out in some way. Maybe that just means
> letting them live in a simulation where they can kill their virtual
> selves. I'll leave the solution up to a much more intelligent SI.
>

Define "beyond help". I would support popping the ones that
were too great a danger to self and others into a safety zone of
some kind (VR or otherwise) until they learn better and/or can
be cured in a way they are willing to undergo. I don't think we
should leave it up to the not-yet-existent SI now though. It is
these kinds of questions and the answer to them that will make
the difference in the level of support and vilification.

>>> But, it's not nearly so simple. All of us would probably agree that
>>> given the choice between saving one of two lives, we would choose to
>>> save the person who is most important to the completion of our goals,
>>> be that reproduction, having fun, or creating the Singularity. In
>>> the same light, if a mob is about to come in to destroy the SI just
>>> before it takes off and there is no way to stop them other than
>>> killing them, you have on one hand the life of the SI that is already
>>> more intelligent than the members of the mob and will continue to get
>>> more intelligent, and on the other the life of 100 or so humans.
>>> Given such a choice, I pick the SI.
>>
>>
>> But that is not the context of the question. The context is whether
>> the increased well-being and possibilities of existing sentients,
>> regardless of their relative current intelligence, is a high and
>> central value. If it is not then I hardly see how such an SI can be
>> described as "Friendly".
>
>
> To a Friendly intelligence, this is important.
>

OK. And that is what you wish to build, right? :-)

- samantha


