Re: ESSAY: Forward Moral Nihilism (EP)

From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Mon May 15 2006 - 13:42:18 MDT


Jef Allbright wrote:
> On 5/15/06, Charles D Hixson <charleshixsn@earthlink.net> wrote:
>> ...
>>
>> Xenophobia, in a mild form, is useful to split the tribe into groups
>> that act separately and divide the hunting areas.
>
> I think you might have this backwards. Evolved traits are a result of
> adaptation rather than drivers of "useful" change.
You are correct about the genesis of traits, but if a complex trait
isn't useful, it won't survive. Thus, if a complex trait exists, it is
reasonable to consider how it is useful. This must be understood as
shorthand for "Why was this trait preserved?" rather than for "Why was
this trait created?", but it *is* a valid consideration.
>
>> I'd be really
>> surprised if it turned out that they often fought seriously before the
>> invention of the arrow, or possibly the spear-thrower, and by that time
>> we were pretty much evolved into modern form.
>
> Why would you be surprised? Are you following the "noble savage" line
> of thought? Biological evolution is demonstrably "bloody in tooth and
> claw" and it is only recently that a higher level of organization
> offers hope (from the human viewpoint) for this to improve.
I'd be surprised because with only clubs it usually requires a large
advantage in numbers to inflict significant damage on a group (as
opposed to an individual). Early populations were both sparse and
migrant, and if a neighborhood was seen as dangerous a population would
relocate away from it. (When I say migrant, don't think of
cross-continental treks, but do think of an area hundreds of miles on a
side.) Sparseness is significant because there wasn't much pressure
preventing that migration. Later on populations became denser and
technologies improved. At that point you do start to get massive combat
between tribes, but this didn't really come into full swing until after
the invention of agriculture and sessile communities.

OTOH, there does seem to be evidence indicating that the rate of mugging
in the early Old Stone Age was higher than it is in today's worst
neighborhoods. This might have been a kind of guerrilla version of gang
warfare, or it might have been simple banditry. You can tell lots of
different stories based on the available evidence, but putting much
credence in any one of them is probably a mistake. (But I'm no
expert. This is just an opinion. Etc.)
>> That said, this doesn't appear to me to relate significantly to the
>> instincts that we should create for the AI that we build. It might be
>> wise to have it be wary of strangers, but one would definitely want to
>> put limits on how strong that reaction could be...and remember, the
>> instincts need to be designed to operate on a system without a
>> predictable sensoria.
> Why do you think an AI would lack "a predictable sensoria"? Are you
> saying something like "we can't know the qualia of an AI"? If that's
> the case, we would have to veer off into the whole ugly discussion of
> what people think they mean by qualia. On the other hand, don't
> designed systems have better defined sensoria than non-designed
> systems?
I'm saying that you don't want to be redesigning a new AI for every
machine configuration that comes along. I'm sure that it would be
feasible to design an AI to operate on one particular hardware
configuration, but I will submit that any such AI would be crippled for
further evolution. I'm not saying anything about qualia; I don't know
of an operational definition that allows one to speak usefully about
them except from the standpoint of the experiencer of the qualia. But
any particular set of computer hardware will have certain meaningful
signals. One abstraction of those hardware signals is the POSIX
interface, which is implemented by Unix systems and essentially
implemented by Linux systems. Even MS Windows comes (can come?) close.
So that is one abstract set of communication between the hardware and
software layers that is (reasonably) well specified, and is also common.
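
To make that concrete, here is a minimal sketch in C of what I mean by
living at the POSIX layer. The device paths are just common Linux
conventions, purely illustrative, not anything an actual design would
hard-code; the point is that the program asks what happens to be
attached instead of assuming a fixed sensorium.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative optional peripherals; the paths follow common
         * Linux conventions and are not meant as hard-coded truths. */
        const char *maybe[] = { "/dev/video0", "/dev/sda", "/dev/ttyUSB0" };
        size_t i;

        for (i = 0; i < sizeof maybe / sizeof maybe[0]; i++) {
            if (access(maybe[i], F_OK) == 0)   /* POSIX: "does it exist?" */
                printf("%s is present\n", maybe[i]);
            else
                printf("%s is absent -- adapt rather than fail\n", maybe[i]);
        }
        return 0;
    }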
>> sensitive to OS calls, unless you intend to have it operate on the bare
>> hardware. Say you can guarantee that it lives in a POSIX compliant
>> universe (or one close to that). It may have USB or firewire cameras
>> installed, it may not. It can probably depend on at least intermittent
>> internet access.
> You can probably predict that it will be
>
> Why would any application necessarily be aware of its operating
> system? Are humans necessarily aware of their supporting subsystems?
It would need to operate within the OS. You may not be aware of your
blood pressure, but your mental processes have direct and indirect
effects on it. The OS is the interface between the hardware layer and
the software layer. It allows the same program to run on significantly
different hardware merely by rewriting a limited interface layer.
(Well, that and recompiling for the new CPU's instruction set.)
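
Here is a minimal sketch of what I mean by a limited interface layer.
The function name and the use of standard input as a stand-in sensor
are purely illustrative; the point is that everything above the
interface calls only sense_read(), so a port to different hardware
rewrites only that one small body.

    #include <stdio.h>
    #include <unistd.h>

    /* The interface the rest of the mind sees. */
    ssize_t sense_read(void *buf, size_t maxlen);

    /* One possible implementation, for a POSIX host; a bare-hardware
     * port would rewrite only this body. */
    ssize_t sense_read(void *buf, size_t maxlen)
    {
        return read(STDIN_FILENO, buf, maxlen);
    }

    int main(void)
    {
        char buf[128];
        ssize_t n = sense_read(buf, sizeof buf);
        printf("sensed %zd bytes\n", n);
        return 0;
    }

Compile it anywhere POSIX-ish and feed it anything on standard input;
only the body of sense_read() would change for a different platform.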

People's minds are closely tied into the hardware of their bodies. Some
even deny that it will ever prove feasible to separate them; some
people say that doing so would "only provide a fake of continued life,
while the real person was killed during the transfer". That sort of
tight coupling is not my desired approach to an AI. I want an AI that
is able to operate in reduced fashion on low-end hardware, and able to
scale seamlessly as fancier hardware becomes available.
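
A minimal sketch of that kind of scaling, again through the POSIX layer
(note that _SC_NPROCESSORS_ONLN and _SC_PHYS_PAGES are common
extensions rather than strict POSIX, and the one-worker-per-CPU
heuristic is arbitrary):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long cpus  = sysconf(_SC_NPROCESSORS_ONLN); /* CPUs currently online */
        long pages = sysconf(_SC_PHYS_PAGES);       /* common extension      */
        long psize = sysconf(_SC_PAGESIZE);

        if (cpus < 1)
            cpus = 1;                               /* degrade, don't die    */

        long mem_mb = (pages > 0 && psize > 0)
                          ? (psize / 1024) * (pages / 1024)
                          : 0;

        printf("host offers %ld CPU(s), ~%ld MB RAM\n", cpus, mem_mb);
        printf("scaling to %ld worker(s)\n", cpus);  /* same binary, any host */
        return 0;
    }

The same binary asks the host what it has and sizes itself accordingly,
rather than being rebuilt for each configuration.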
>> The only shape for an instinct for friendliness that has occurred to me
>> is "Attempt to talk to those who attempt to talk with you." I.e., learn
>> the handshaking protocols of your environment.
> I think this is indeed touching upon a fundamental principle for
> success. All the interesting stuff (the potential for growth) is in
> the interactions between Self and Other, the adjacent possible.
> However this principle is just as applicable for offense as it is for
> "friendliness".
A hostile AI wouldn't necessarily need to communicate with those it
wished to be hostile towards. A friendly AI would. The situation is
not symmetric. (OTOH, I also admit that this is merely a place to start.)
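
For what it's worth, here is a minimal sketch of that starting point,
with an arbitrary port number and greeting: accept whoever initiates
contact and answer before doing anything else.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(4242);                 /* arbitrary port */

        if (srv < 0 || bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }
        listen(srv, 8);

        for (;;) {
            int peer = accept(srv, NULL, NULL);      /* someone wants to talk */
            if (peer < 0)
                continue;
            const char *hello = "hello -- I answer those who address me\n";
            write(peer, hello, strlen(hello));       /* reciprocate first     */
            close(peer);
        }
    }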
>> only that, but it can be useful for dealing with disk drives and
>> internet sites as well as with people. I can't even think of how one
>> could say "don't impose your will on an unwilling sentient" at the level
>> of instincts. That's got too many undefinable terms in it. "impose",
>> "will", "sentient". With enough effort I can vaguely see how will could
>> be defined...nothing that's jelled yet.
> This is a point where many people get stuck with conventional ideas of
> morality. A full explanation is not possible within the confines of
> this email discussion, but moral decision-making *requires* that you
> attempt to impose your will at every opportunity, but that will should
> be as well informed as possible of the long-term consequences of its
> actions. The degree of sentience of the Other is irrelevant to this
> basic principle, but very relevant to the actual interaction.
>
> - Jef
This may be a necessity in the moral structure that you have chosen.
Mine does not require of me that I impose my will upon others, merely
that I attempt to prevent them from imposing their will upon me. I may
*decide* that circumstances are such that practicality requires me to
impose my will upon them, but that is not a moral requirement.

I would assert that you, also, find no such moral requirement. There
are many people in the world who are behaving immorally, whatever your
particular code may say morality *is*. Yet you sat there and
corresponded with me rather than stopping them. Therefore you are not
morally commanded, by any code that you actually accept, to stop them.
And I certainly would not want an AI that felt morally compelled to make
everyone behave. That might not be the worst possible outcome, but it
would be a very bad one.


