Re: Immoral optimisation - killing AIs

From: H C (lphege@hotmail.com)
Date: Wed Nov 16 2005 - 12:19:38 MST


>From: "Olie L" <neomorphy@hotmail.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Immoral optimisation - killing AIs
>Date: Wed, 16 Nov 2005 16:16:33 +1100
>
>I've searched through the archives, and noticed that there is very little
>said here about the meta-ethics of the value of AIs' existences. This is
>kinda important stuff, and although I respect that 1) it's a hard area 2)
>it's not as much fun as probability theory, it's kinda important as far as
>friendly-AI is concerned. Without a decent understanding of why an entity
>(humans) shouldn't be destroyed, all the practicalities of how to ensure
>their continued existence are kinda castles-in-the-air.
>
>*Warning! Warning! Typical introductory paragraph for a nutbag to prattle
>on about "Universal Morality"*
>
>Yeah, we don't want a Sysop to convert us into processing units, or to
>decide that the best way to solve our problems is to dope us up to the
>eyeballs with sedatives, but what's the reasoning for "vim" not to do so
>from ver perspective? Note that asking "why shouldn't?" is an entirely
>different question from "why wouldn't," which some of the people here are
>doing admirable work on (cue applause).
>
>The "would" can be addressed by goal-system examination etc.
>Unfortunately, the "should" issue can /only/ be addressed with Morality, which
>attracts people to spout exceptional amounts of drivel. I'm going to
>venture out, and try to make a few meaningful statements about meta-ethics.
>
>First, a primer on what I mean by "meta-ethics". Ethics studies tend to
>fall into three categories: Meta, normative and practical.
>*Meta-ethics* looks at what moral statements are: what does "good" mean; can
>one derive "ought" from "is"; problems of subjectivity and whether or not
>moral propositions can even have any truth-value.
>*Normative ethics* looks at systems for going about achieving good - these
>tend to be focussed around variations of utilitarianism, justice or
>rule-based systems. Typical concerns: Is it ok to do some bad stuff to
>achieve lots of good stuff?
>*Practical ethics* is... you guessed it... applying normative ethics in
>practice. "We've got $10B to spend on healthcare. What do we do with it?"
>or "Under what conditions are 5th trimester abortions permissible?"
>
>One would expect that non-singularity AIs - I'm thinking advanced weak AIs
>- would still be useful/good at dealing with practical ethical problems;
>however, it will take some pretty savvy intelligences to gain much ground on
>the meta-ethics end. Nb: don't get cocky with meta-ethics. Many of the
>best philosophers have taken a plunge and sunk. Anyhoo:
>
>*Puts on Moral Philosopher hat*
>
>Why might it be wrong to turn off an AI?
>
>Most moral philosophers have a pretty hard time dealing with the
>meta-ethics of killing people (contrastively, the meta-ethics of making
>living people suffer is pretty straightforward - suffering is bad, bad is
>Bad, avoid causing suffering).
>
>Apart from issues of making friends and family suffer, the meta-ethical
>grounding for proscribing killing usually comes down to (1) sanctity, which
>doesn't hold for non-religious types (2) Divine command - same problem (3)
>Rights-based approach (4) Actions-based approach. There are also a few
>others, such as social order considerations, but... I can't be stuffed
>wading through that. I'll focus on 3 and 4.
>
>The main idea behind these is that people have plans and intentions, and
>that disrupting these plans is "bad". The rights-approach says that there
>are certain qualities that give an entity a "Right" that shouldn't be
>violated - the qualities often stem back to the plans and intentions, so an
>examination of these is relevant...
>
>A key question for the issue of killing / turning off an AI is whether or
>not the AI has any plans, any desire to continue operating.
>
>There are a few senses of the phrase "nothing to do" - if an intelligence is
>bored but wants to do stuff, that's a desire that off-ness (death)
>interferes with. If, on the other hand, an intelligence has no desire to
>do anything, no desire to think or feel, feels quite content to desist
>being intelligent, then death does not interfere with any desires. An
>intelligence that has no desire to continue existing won't mind being
>"offed"
>
>/I'm not doing a fantastic job of justifying any of these positions,
>largely because I disagree with most of these meta-ethical approaches. For
>the others, I lack a complete technical understanding. I'll therefore
>resort to the customary ethical technique of providing analogies, reductio
>ad absurdum, and relying on intuitions to invent rules (sigh)./
>
>Imagine someone wishing to commit suicide. Is this an ethically acceptable
>course of action? I think so, particularly if they're generally having a
>rough time (terminal illness etc). Just imagine they've put their affairs
>in order, said goodbye to their family, are about to put a plastic bag full
>of happy gas over their head... when somebody else shoots them in the back
>of the head, killing them instantly. Is the assassin here doing something
>ethically unacceptable? Are the intentions/ actions bad? Is the result
>bad? If the assassin is aware of the suicide-attempter's plans, does that
>make a difference?
>
>I would suggest that although the killer's intentions could be immoral, the
>result ain't bad. Whether the means of death is self-inflicted,
>other-inflicted or nature-inflicted, the would-be suicide's wish is granted.
>Killing a person with no desire to live is not necessarily such a terrible
>thing.
>
>Drag the analogy across to AIs: if the AI has no desire to live, is
>killing them/ turning them off bad? Not really. An AI with no desire to
>continue operating would seem to be, necessarily, an AI with no intentions
>and no purpose. One can imagine this happening if the AI has a purpose
>that is completed.
>
>The interesting counter to this is: would it be extremely immoral to
>influence an entity to abandon its intention to live? By whatever means,
>causing a person to give up their desire to do, achieve and continue to
>live? How about comparing the goodness of creating a person with a high
>likelihood of will-to-live-cessation against creating a person more likely
>to want to keep living? My intuition says this is dancing around a very
>fine line between OK and hideous.

Morality is probably more difficult to understand than anything else. As
such, I will describe the frame of reference that came to mind while
reading this.

"Intention" and "Purpose" are very loaded words. Our intentions our derived
from intentions which are derived from beliefs which are derived from
reality.

Ultimately you have to reference some pre-programmed goal system to really
refer to any *absolute* intentions or purposes. In terms of creating an AGI,
there are three different characteristics that must be accounted for: a
complicated probabilistic process, a knowledge representation schema (which
includes a goal system schema), and a seed (which is the preprogrammed
probabilistic process, the actual scheme of knowledge representation, some
initial preprogrammed values within these, and any information that is
added to modify them).
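
To make that three-way split concrete, here's a rough, purely illustrative
Python sketch. The names (Seed, KnowledgeBase, GoalSystem, step) are just my
own placeholders, not anything from a real AGI design:

import random
from dataclasses import dataclass, field

@dataclass
class GoalSystem:
    # initial, preprogrammed value weights over possible actions
    weights: dict

@dataclass
class KnowledgeBase:
    # knowledge representation schema: here just labelled facts plus goals
    facts: dict = field(default_factory=dict)
    goals: GoalSystem = field(default_factory=lambda: GoalSystem({}))

@dataclass
class Seed:
    # the "seed": the initial bundle of process state, representation
    # scheme, preprogrammed values, and any information that modifies them
    knowledge: KnowledgeBase
    rng_state: object        # state of the toy "probabilistic process"
    modifications: list = field(default_factory=list)

def step(seed: Seed) -> Seed:
    # One update of the toy probabilistic process: sample an action
    # weighted by the goal system, then fold it back into the knowledge.
    rng = random.Random()
    rng.setstate(seed.rng_state)
    goals = seed.knowledge.goals.weights or {"noop": 1.0}
    action = rng.choices(list(goals), weights=list(goals.values()))[0]
    seed.knowledge.facts[f"event_{len(seed.knowledge.facts)}"] = action
    seed.modifications.append(action)
    return Seed(seed.knowledge, rng.getstate(), seed.modifications)

# Example: a seed with a preprogrammed two-way goal split.
seed = Seed(KnowledgeBase(goals=GoalSystem({"explore": 0.7, "exploit": 0.3})),
            random.Random(0).getstate())
for _ in range(3):
    seed = step(seed)
print(seed.knowledge.facts)

A real design would obviously be vastly richer; the only point is that "seed"
names the whole initial bundle, not just the goal system.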

The big question for the Singularity is whether or not the *true*
optimization of some "seed" within reality can be convergent for more than
one seed. The big question for Friendly AI is exactly what a "seed" would
have to look like so that it could **account** (in its RPOP) for all
possible seeds in existence (or at least within the realm of human seeds).
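dh
The convergence question can at least be pictured with a toy analogy (my own,
and nothing like a real RPOP): run the same optimizer from several different
starting "seeds" and see whether the outcomes agree.

def objective(x: float) -> float:
    # A single-peaked toy "value" function; its maximum is at x = 2.0.
    return -(x - 2.0) ** 2

def hill_climb(start: float, step: float = 0.01, iters: int = 10_000) -> float:
    # Greedy local search from a given starting point.
    x = start
    for _ in range(iters):
        if objective(x + step) > objective(x):
            x += step
        elif objective(x - step) > objective(x):
            x -= step
        else:
            break
    return x

starts = [-5.0, 0.0, 3.7, 10.0]              # different initial "seeds"
results = [hill_climb(s) for s in starts]
convergent = max(results) - min(results) < 0.05
print(results, "convergent" if convergent else "divergent")

With a single-peaked objective every start lands on the same optimum; with a
multi-peaked one, different starts can lock into different optima - that is
the analogue of non-convergent seeds.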

Or something like that.

Th3Hegem0n

>
>
>--Olie
>
>H C wrote:
>
>>"You've specified an AGI which feels desire, and stated it doesn't mimic
>>human desires"
>>
>>It wants to be Friendly, but it doesn't want to have sex with people or
>>eat food.
>>
>>
>
>Or get jealous when you ask for a second opinion, or react
>aggressively or violently to actions it finds threatening?
>
>>>From: Phillip Huggan <cdnprodigy@yahoo.com>
>>>Reply-To: sl4@sl4.org
>>>To: sl4@sl4.org
>>>Subject: Re: Immorally optimized? - alternate observation points
>>>Date: Fri, 9 Sep 2005 11:15:04 -0700 (PDT)
>>>
>>>H C <lphege@hotmail.com> wrote:
>>> >Imagine (attempted) Friendly AGI named X, who resides in some computer
>>> >simulation. X observes things, gives meaning, feels desire,
>>>hypothesizes,
>>> >and is capable of creating tests for vis hypotheses. In other words,
>>>AGI X
>>> >is actually a *real* intelligent AGI, intelligent in the human sense
>>>(but
>>> >without anthropomorphizing human thought procedures and desires).
>>>
>>> >Now imagine that AGI X has the capability to run "alternate observation
>>> >points" in which ve creates another "instance" of the [observation
>>>program -
>>> >aka intelligence program] and runs this intelligence program on one
>>> >particular problem... and this instance exists independently of the X,
>>> >except it modifies the same memory base. In other words "I need a
>>>program to
>>> >fly a helicopter" *clicks in disk recorded where an alternate
>>>observation
>>>point already learned/experienced flying a helicopter* "Ok, thanks."
>>>
>>> >Now if you optimize this concept, given some problem like "Program this
>>> >application", X could create several different AOPs and solve 5
>>>different
>>> >parts of the problem at the same time, shut them down, and start
>>>solving the
>>>main problem of the application with all of the detailed trial and error
>>>learning that took place in creating the various parts of the application
>>>already done.
>>>
>>> >The problem is, is it *immoral* to create these "parallel
>>>intelligences" and
>>> >arbitrarily destroy them when they've fulfilled their purpose? Also, if
>>>you
>>>decide to respond, try to give explanation for your answers.
>>>
>>>
>>>You've specified an AGI which feels desire, and stated it doesn't mimic
>>>human desires. Which is it? If the AGI itself cannot answer this moral
>>>dilemma, it is not friendly and we are all in big trouble. I suspect
>>>the answer depends upon how important the application is you are telling
>>>the AGI to solve. If solving the application requires creating and
>>>destroying 5 sentient AIs, we are setting a precedent for computronium.
>>>
>>
>>
>>
>>Good point. You could focus on suboptimal performance while waiting for
>>the AGI to Singularitize itself and tell you the answer.
>>
>>
>>>
>
>


