RE: All sentient have to be observer-centered! My theory of FAI morality

From: Tommy McCabe (rocketjet314@yahoo.com)
Date: Sun Feb 29 2004 - 05:58:34 MST


--- Marc Geddes <marc_geddes@yahoo.co.nz> wrote:
> --- Rafal Smigrodzki <rafal@smigrodzki.org> wrote:
> > Marc wrote:
> > > I don't regard the evolutionary arguments as very
> > > convincing. They're based on observation, not experiment.
> > > Besides, it's only very recently in evolutionary history
> > > that the first sentients (humans) appeared. It's the class
> > > of sentients that is relevant to FAI work. Evolutionary
> > > observations about non-sentients are not likely to say
> > > much of relevance.
> >
> > ### You might wish to read some evolutionary psychology
> > texts.
>
> Um...well, OK, sure, I don't doubt that evolutionary psychology
> is very relevant to HUMAN psychology, but is it of much
> relevance to the general class of SENTIENT psychology? I'm not
> sure evolutionary psychology says much one way or the other.

Is evolutionary psychology really relevant to sentient
psychology in general? No!!! And that's why you can't go tagging
sentients-in-general with purely evolutionary traits, like
hardwired observer-centered moralities.
 
> >
> > -----------------------------
> > > In any event, I don't regard non-observer-based sentients
> > > as even desirable (see my other replies). If you strip out
> > > all observer-centered goals, you're left with normative
> > > altruism. All sentients would converge on this, and all
> > > individual uniqueness would be stripped away. You'd be
> > > left with bland uniformity. An empty husk. Universal
> > > Morality is probably just a very general set of
> > > constraints, and FAIs following this alone would be quite
> > > unable to distinguish between the myriad of interesting
> > > personal goals that are consistent with it. Everything
> > > that didn't hurt others (assuming that Universal Morality
> > > is volition-based) would be equally 'Good' to such an FAI.
> > > There would be no possibility of anything uniquely human
> > > or personal. For instance, the two outcomes 'Rafal kills
> > > himself' and 'Rafal doesn't kill himself' would be
> > > designated as morally equivalent under Volitional
> > > Morality.
> >
> > ### I don't understand the first part of your paragraph. As
> > to your claim about what would and would not be equivalent
> > under volitional morality, I have to disagree. Since I am
> > opposed to killing myself, all else being equal, one of the
> > outcomes is regarded as inferior in any moral system
> > striving to fulfill the wishes of sentients, including mine.
> >
> > Rafal
> >
>
> Well, let me try to explain the first part of the paragraph.
> As I understand it, Eliezer believes that there exists a
> morality which is normative (all ethical sentients would
> converge on it if they thought about it for long enough).
> That's why I called it a 'Universal Morality' (it's morally
> symmetric). And he's trying to come up with an FAI which
> converges on this morality. But if all sentient morality was
> this Universal Morality alone, then all sentient moralities
> would be identical (because the Universal Morality is
> normative and morally symmetric). So I'm asking why it's
> desirable to build an FAI which just follows this morality
> alone. Why shouldn't FAIs have some personal goals on top of
> the Universal Morality (so long as these personal goals didn't
> contradict the Universal Morality)? Do you see what I'm
> saying?

Perhaps there is room for differentiation (maybe a lot of
differentiation) among sentients, but you would really want the
Sysop (or whatever you call it) to not be observer-centered at
all. And the Sysop, or the first AI to initiate the Singularity,
or whatever you want to call it, is the being we're trying to
develop.

> As regards the second part of what I was saying, in the
> example given, of course IF you think that killing yourself
> would not be desirable, then Eliezer's FAI agrees to designate
> your choice as 'good'. But the FAI can't morally distinguish
> between any of the choices you do in fact make. For instance,
> IF you did decide that you wanted to kill yourself one day,
> then the FAI would see this as 'just another choice', no
> better or worse than your previous choices that you wanted to
> live (it would in general see all requests consistent with
> 'volition' as equal). In order to have an FAI which valued
> transhumanist goals, you'd probably have to directly program
> some 'Personal Values' into the FAI, in addition to having an
> FAI which could reason about Universal Morality (the class of
> morally symmetric interactions). You see what I'm saying?

I just don't get how Yudkowskian Friendliness = Unable
To Distinguish Event A From Event B.



