Re: Threats to the Singularity.

From: Samantha Atkins (samantha@objectent.com)
Date: Sat Jun 22 2002 - 22:10:27 MDT


Hi Ben,

>
> Let me be clear on one thing: I was not *advocating* indifference toward
> humanity in my post!
>
> I was merely pointing out that indifference to humanity is one possible
> motive behind caring more about future superintelligent beings -- contempt
> (which you mentioned in the post to which I was replying) being a
> *different* motive.
>
> As you know, my best guess is that superhuman AI's will rapidly become
> relatively indifferent to humans -- not competing with us for resources
> significantly, nor trying to harm us, but mostly being bored with us and
> probably helping us out in offhanded ways.
>

That is a relief! :-) Indifference of this kind from an SI is
less worrisome, as long as the SIs don't decide we are expendable
whenever one of their goals seems aided by our demise. However, if
the SIs are to be of any help to our surviving the Singularity,
a bit more than indifference seems to be required.

I was originally more concerned with human indifference to the
fate of human beings, especially when the indifferent humans are
the very ones working on creating such SIs.

>
>>If one is for increasing intelligence (how one defines that and
>>why it is the only or most primary value are good questions) and
>>the increase of sentience, I fail to see how one can be cavalier
>>about the destruction of all currently known sentients. How can
>>one stand for intelligence and yet not care about billions of
>>intelligent beings that already exist?
>
>
> How can one care about life and yet accept the immense murder of ants that
> comes along with, say, digging the foundation for a new house?
>

We are not ants, and I am not talking about ants or arbitrary
living things. I am talking about human beings - sentient
beings, and beings of the very kind we ourselves are.

> An advanced superhuman AI may become aware of 1000's of other types of
> life-forms or mind-forms that we cannot conceive of now. From its point of
> view, then, how critical will we be? From your point of view, as an upload
> with 1000x human intelligence and direct contact with these 1000's other
> life forms as well, how important will humanity be to "YOU"? Do you pretend
> to know the answers to these questions?
>

I hope that all sentients will be critical to an SI. To myself
as an upload, un-uploaded humans would be of immense value: their
well-being and increased quality and quantity of life are, after
all, among the strongest motives for seeking increased abilities
and powers in the first place, or for participating in the
creation of beings that have such great capabilities. I am not
pretending when I state an answer that is at the heart of my own
values and goals.

Also, none of us can answer but from where we are right now. If
the answer is that human beings are seen as expendable, even
while we are ourselves fully human, then I think that needs to
be examined and questioned carefully.

> which are very different attitudes. I do not personally hold either
> attitude, but I can sympathize more with the "indifference" attitude --
> because, from the grand perspective, one relatively primitive intelligent
> species may not be all that important.
>

I don't see anything at all "grand" about such a perspective.

> What attitude do I take?
>
> Personally I try (and occasionally succeed ;) to practice the two Buddhist
> virtues of compassion and nonattachment. The combination of these is
> tricky to master, as in a shallow sense they may seem to contradict each
> other.
>
> In the context of the present discussion, being compassionate toward humans
> means that one doesn't want them to suffer, and that one has respect for
> humans' right to continue even as more advanced beings come along. And
> nonattachment means *simultaneously* with compassion, also understanding
> that the human race does not have some kind of intrinsic special value as
> compared to other forms of existence, intelligence and life -- it means
> moving beyond one's biologically-based attachment to one's own species.
>

I don't think my attitude is limited to human sentients. It is
not biologically based, although everyone seems willing to
continually assert that it is. I would also point out that the
current form of a particular sentient, and its current level of
intelligence, is not cast in stone and immutable for all time.
Considering this, it is even more difficult for me to dismiss any
sentient as being of no real value, no matter how much more
advanced I or other sentients may be or become.

>
>>>>How about we just grow a lot more sane human beings instead of
>>>>digging continuously for technological fixes that really aren't
>>>>fixes to the too often cussedness of local sentients?
>>>
>
> I think the right thing is for the human-race technological vanguard to
> simultaneously work on building artificial superintelligence, AND on
> creating better human beings (genetic engineering, brain augmentation,
> etc.).
>

Ethics, morality, better social and economic systems...

> And in fact, this is what is happening.
>

It is not at all clear to me that the non-hardware parts of the
problem are being addressed much.

>
>>>>Replacing
>>>>them with something that is faster and arguably smarter but may
>>>>or may not be any more wise is not an answer. Scrapping
>>>>sentients is to be frowned upon even if you think you can and
>>>>even do create sentients that are arguably better along some
>>>>parameters.
>>>
>
> Yes, our value systems agree on this point.
>
> Like nearly all others on this list, I would like to see a future in which
> enhanced humans and superintelligent AI's coexist in harmony and with
> mutually productive interactions.
>
> I do reject the notion that preserving the human race is of *absolutely
> primary* importance, but according to my own ethics and aesthetics, it is
> certainly *highly* important.
>

Can you say under what conditions you would be willing to scrap
the human race, or see it scrapped - not even transformed, but ended?

>
>>>"Wisdom" is a nebulous human concept that means different things to
>>>different people, and in different cultures.
>>>
>>
>>
>>I can tell you what it isn't. It isn't about writing off all
>>existing sentients in favor of something you think can be but
>>you have no idea what it will become.
>
>
> I feel like you're attacking a straw man here, because no one on this list
> suggested "writing off all existent sentients", did they? I certainly
> didn't, far from it.
>

It has been suggested that humans don't particularly matter if
we can build much more intelligent beings. It has been
suggested that concern for human beings is simply squeamishness
due to "biological programming". I find such views short-sighted,
to say the least, and extremely dangerous. I am very glad to
hear that you do not hold them.

>
>>Human is the only
>>sentient basis we have to reason from and in any event is what
>>we ourselves are and we have no choice but to reason from that
>>basis.
>
>
> I think it is possible to achieve some degree of nonattachment from one
> species, in terms of one's reasoning and one's value system. Of course, one
> can never completely remove inferential and emotional bias from oneself --
> it's not even clear what this would mean!
>

I hardly see how to build a value system on the well-being of
that which does not yet exist.

- samantha


