Re: An essay I just wrote on the Singularity.

From: Perry E. Metzger (perry@piermont.com)
Date: Fri Jan 02 2004 - 13:43:18 MST


Tommy McCabe <rocketjet314@yahoo.com> writes:
>> So one can have AI. I don't dispute that. What I'm talking about
>> is "Friendly" AI.
>
> Humans can, obviously not must, but can be altruistic,
> the human equivalent of Friendliness. And if one can
> do it in DNA, one can do it in code.

Can they? I am not sure that they can, in the sense of leaving behind
large numbers of copies of their genes if they behave in a
consistently altruistic manner.

BTW, I am not referring to aiding siblings or such -- Dawkins has
excellent arguments for why that isn't really altruism -- and I'm not
talking about casual giving to charity or such activities.

What I'm talking about is consistently aiding strangers in preference
to your own children, or other such activities. I'm far from sure that
such behavior isn't powerfully selected against, and one sees very
little of it in our society, so I'm not sure that it hasn't in fact
been evolved out.

>> > And if you have Friendly human-equivalent AI,
>>
>> You've taken a leap. Step back. Just because we know we can build
>> AI doesn't mean we know we can build "Friendly" AI.
>
> Again, there are humans who are very friendly, and
> humans weren't built with friendliness in mind at all.

I don't think there are many humans who are friendly the way "Friendly
AI" has to be. A "Friendly AI" has to favor the preservation of other
creatures who are not members of its line (humans and their
descendants) over its own.

>> >> There are several problems here, including the fact that there
>> >> is no absolute morality (and thus no way to universally
>> >> determine "the good"),
>> >
>> > This is the position of subjective morality, which is far from
>> > proven. It's not a 'fact', it is a possibility.
>>
>> It is unproven, for the exact same reason that the non-existence of
>> God is unproven and indeed unprovable -- I can come up with all
>> sorts of non-falsifiable scenarios in which a God could
>> exist. However, an absolute morality requires a bunch of
>> assumptions -- again, non-falsifiable assumptions. As a good
>> Popperian, depending on things that are non-falsifiable rings alarm
>> bells in my head.
>
> Perhaps I should clarify that: subjective morality is
> not only unproven, it is nowhere near certain. Neither
> is objective morality. The matter is still up for
> debate. A good Friendliness design should be
> compatible with either.

I argue quite strongly that there is no objective morality. You
cannot find answers to all "moral questions" by querying some sort of
moral oracle algorithm. Indeed, the very notion of
"morality" is disputed -- you will find plenty of people who don't
think they have any moral obligations at all!
   
Taking a step past that, though, it is trivially seen that the bulk of
the population does not share a common moral code, and that people do
not generally follow even those portions they claim to hold to. Even
the people here on this mailing list will not substantially agree on
major points of "morality".

There is, on top of that, no known way to establish what is 'morally
correct'. You and I can easily ascertain the "correct" speed of light
in a vacuum (to within a small error bound) with straightforward
scientific tools. There is, however, no experiment we can conduct to
determine the "correct" behavior in the face of a moral dilemma.

By the way, one shudders at what would happen if one could actually
build superhuman entities to try to *enforce* morality. See Greg
Bear's "Strength of Stones" for one scenario about where that
foolishness might lead.

>> >> that it is not clear that a construct like this would be able to
>> >> battle it out effectively against other constructs from
>> >> societies that do not construct Friendly AIs (or indeed that the
>> >> winner in the universe won't be the societies that produce the
>> >> meanest, baddest-assed intelligences rather than the friendliest
>> >> -- see evolution on earth), etc.
>> >
>> > Battle it out? The 'winner'? The 'winner' in this case is the AI
>> > who makes it to superintelligence first.
>>
>> How do we know that this hasn't already happened elsewhere in the
>> universe? We don't. We just assume (probably correctly) that it
>> hasn't happened on our planet -- but there are all sorts of other
>> planets out there. The Universe is Big. You don't want to build
>> something that will have trouble with Outside Context Problems (as
>> Iain M. Banks dubbed them).
>
> Another rephrasing: the first superintelligence that
> knows about us.

It doesn't matter who knows about us first. What matters is what
happens when the other hyperintelligence from the other side of the
galaxy sends over its probes and it turns out that it isn't nearly as
"friendly" and has far more resources. Such an intelligence may be the
*last* thing we encounter in our history.

>> >> Anyway, I find it interesting to speculate on possible constructs
>> >> like The Friendly AI, but not safe to assume that they're going to
>> >> be in one's future.
>> >
>> > Of course you can't assume that there will be a
>> > Singularity caused by a Friendly AI, but I'm pretty
>> > darn sure I want it to happen!
>>
>> I want roses to grow unbidden from the wood of my
>> writing desk.
>>
>> Don't speak of desire. Speak of realistic
>> possibility.
>
> I consider that a realistic possibility. And the
> probability of that happening can be influenced by us.

I'm well aware that you consider the possibility realistic. I
don't. To each his own. However, I'm happy to continue explaining
why I think it would be difficult to guarantee that an AI would be
"Friendly".

>> >> The prudent transhumanist considers survival in wide variety of
>> >> scenarios.
>> >
>> > Survival? If the first transhuman is Friendly,
>> > survival is a given,
>>
>> No, it is not, because it isn't even clear that there will be any
>> way to define "Friendly" well enough. See "no absolute morality",
>> above.
>
> That is the problem of Friendliness definition, which
> Eli knows a lot better than I do. A hard problem, I admit.

I suspect that for many reasons such a definition is impossible, or at
least beyond the reach of man.

Among other problems, we have the fact that no one has ever come up
with a universally acceptable morality, and we have Rice's Theorem
staring down the barrel at us too.
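
To make the Rice's Theorem point concrete: the theorem says that no
algorithm can decide any non-trivial property of a program's
*behavior*, and "behaves Friendly" is exactly such a property. Here
is a minimal sketch of the standard reduction, in Python; every name
in it (is_friendly, simulate, and so on) is hypothetical, purely for
illustration:

    # Hypothetical decider for the semantic property "behaves
    # Friendly". Rice's Theorem says no such total decider can exist
    # for any non-trivial behavioral property.
    def is_friendly(source: str) -> bool:
        raise NotImplementedError("no such decider can exist")

    # If is_friendly existed, it would let us decide the halting
    # problem, contradicting Turing's result. (This assumes a program
    # that loops forever doing nothing is not Friendly, and that at
    # least one Friendly program exists, i.e. the property is
    # non-trivial.)
    def halts(program: str, data: str) -> bool:
        # Build a wrapper that simulates `program` on `data` and, only
        # if that simulation ever finishes, goes on to act like some
        # known-Friendly program.
        wrapper = (
            f"simulate({program!r}, {data!r})  # may run forever\n"
            "act_like_known_friendly_program()\n"
        )
        # The wrapper behaves Friendly exactly when `program` halts on
        # `data`, so a Friendliness decider doubles as a halting
        # decider.
        return is_friendly(wrapper)

The upshot is that any would-be Friendliness verifier has to settle
for conservative approximations -- proving the property for some
restricted class of programs -- rather than offering a general
yes/no test.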

-- 
Perry E. Metzger		perry@piermont.com

