Re: ESSAY: Forward Moral Nihilism (EP)

From: Jef Allbright (jef@jefallbright.net)
Date: Mon May 15 2006 - 09:35:29 MDT


On 5/15/06, Charles D Hixson <charleshixsn@earthlink.net> wrote:
> JBS Haldane once said "I will lay down my life for two brothers or
> eight cousins". He was being slightly humorous rather than serious,
> but he
> was making a significant point. OTOH, everyone in a tribe is usually
> loosely related either to any particular person or to his kids, so once
> the strength of the compassion was slightly attenuated it didn't matter
> that compassion couldn't be specific to kin. And reciprocal favors
> strengthen the tribe, so everyone is more likely to survive.
>
> Xenophobia, in a mild form, is useful to split the tribe into groups
> that act separately and divide the hunting areas.

I think you might have this backwards. Evolved traits are a result of
adaptation rather than drivers of "useful" change.
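
(A gloss of my own, for anyone who wants the arithmetic behind Haldane's
quip: Hamilton's rule says an altruistic act is favored when rB > C,
where r is the coefficient of relatedness. With r = 1/2 for a full
sibling and 1/8 for a first cousin, 2 x (1/2) = 8 x (1/8) = 1 "self",
hence two brothers or eight cousins.)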

> I'd be really
> surprised if it turned out that they often fought seriously before the
> invention of the arrow, or possibly the spear-thrower, and by that time
> we were pretty much evolved into modern form.

Why would you be surprised? Are you following the "noble savage" line
of thought? Biological evolution is demonstrably "red in tooth and
claw", and it is only recently that a higher level of organization
offers hope (from the human viewpoint) for this to improve.

>
> That said, this doesn't appear to me to relate significantly to the
> instincts that we should create for the AI that we build. It might be
> wise to have it be wary of strangers, but one would definitely want to
> put limits on how strong that reaction could be...and remember, the
> instincts need to be designed to operate on a system without a
> predictable sensorium.

Why do you think an AI would lack "a predictable sensorium"? Are you
saying something like "we can't know the qualia of an AI"? If that's
the case, we would have to veer off into the whole ugly discussion of
what people think they mean by qualia. On the other hand, don't
designed systems have better defined sensoria than non-designed
systems?

> You can probably predict that it will be
> sensitive to OS calls, unless you intend to have it operate on the bare
> hardware. Say you can guarantee that it lives in a POSIX-compliant
> universe (or one close to that). It may have USB or FireWire cameras
> installed, or it may not. It can probably depend on at least intermittent
> internet access.

Why would any application necessarily be aware of its operating
system? Are humans necessarily aware of their supporting subsystems?
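
As a purely illustrative aside (not anything Charles proposed, and the
device glob and probe target below are placeholders): a designed agent
could simply probe its environment at startup instead of assuming any
particular sensoria, in the spirit of "it may have cameras installed,
or it may not". A minimal Python sketch:

import glob
import socket

def probe_cameras():
    # Any V4L-style camera device nodes present on a Linux-ish host.
    return glob.glob("/dev/video*")

def probe_network(host="example.com", port=80, timeout=2.0):
    # True if at least intermittent internet access seems available.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("cameras:", probe_cameras() or "none found")
    print("network reachable:", probe_network())

Run that on two different hosts and the reported sensoria will differ,
which is rather the point.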

>
> The only shape for an instinct for friendliness that has occurred to me
> is "Attempt to talk to those who attempt to talk with you." I.e., learn
> the handshaking protocols of your environment.

I think this is indeed touching upon a fundamental principle for
success. All the interesting stuff (the potential for growth) is in
the interactions between Self and Other, the adjacent possible.
However, this principle is just as applicable to offense as it is to
"friendliness".

> That's a start. Not
> only that, but it can be useful for dealing with disk drives and
> internet sites as well as with people. I can't even think of how one
> could say "don't impose your will on an unwilling sentient" at the level
> of instincts. That's got too many undefinable terms in it. "impose",
> "will", "sentient". With enough effort I can vaguely see how will could
> be defined...nothing that's jelled yet.

This is a point where many people get stuck with conventional ideas of
morality. A full explanation is not possible within the confines of
this email discussion, but moral decision-making *requires* that you
attempt to impose your will at every opportunity; that will should simply
be as well informed as possible about the long-term consequences of its
actions. The degree of sentience of the Other is irrelevant to this
basic principle, but very relevant to the actual interaction.

- Jef


