Re: All sentient have to be observer-centered! My theory of FAI morality

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Sun Feb 29 2004 - 22:15:55 MST


 --- Tommy McCabe <rocketjet314@yahoo.com> wrote:

> Here's an idea: Perhaps (although I have no idea how
> you would relate them) you could have a supergoal of
> what you call Universal Morality, and then, if
> supporting Coke over Pepsi somehow supported Universal
> Morality, you could have it as a subgoal. That way,
>
> 1. You support Coke over Pepsi
> 2. Your support is justified
> 3. If, at any time in the future, supporting Coke
> contradicts Universal Morality, it can be easily
> dropped

Well, that's a possibility, but if supporting Coke over
Pepsi supported Universal Morality, it would, by
definition, be a part of Universal Morality. It
wouldn't be a part of Personal Morality, by my
definition.

The point I'm making is that being moral DOESN'T
require that all goals support Universal Morality.
All that is required for 'friendliness' (as opposed to
'Friendliness') is that no goal actually contradicts
Universal Morality. There are many possible personal
goals which, whilst not actually part of Universal
Morality, can still be pursued without conflicting
with Universal Morality. I define these 'Personal
Moralities' as being congruent with Universal Morality.

>
> That's like saying, "I don't know what the perfect car
> is, so that means I'm going to assume that having gum
> in the engine is necessary". Makes no sense at all. If
> you don't know how to build an engine, substituting
> sticks of gum isn't going to work.

That's not a fair analogy. See what I said below.
Anything at all which isn't a part of Universal
Morality falls under the 'Personal Morality' category
(by my definition). Since the programmers won't get
everything exactly right to start with, my equation
accurately describes all human-created FAIs (since
all such AIs will have a 'Personal Morality'
component to start with). I'm just pointing out that
in the real world no cars are perfect, then asking
what real-world (non-perfect) cars in general look
like.

>
> A 'guess' implies that Eliezer's ideas about morality
> aren't justified. I'm sure that Eli has good reasons,
> whatever they are, for thinking that Volitional
> Morality is the objective morality, or at least good.
> Anyway, AIs don't have moralities hardwired into them -
> they can correct programmer deficiencies later.
>

>
> I'll have to agree with you there. Programmers aren't
> perfect, and moral mistakes are bound to get into the
> AI. However, the AI can certainly correct these. And
> that's not even 'all FAIs' - it's just all FAIs built
> by humans.

O.K. But perhaps the FAI would come to value some of
its 'non-perfect' goals for their own sake.

How would you as a human being like to have all the
goals which are not a part of Universal Morality
stripped out of you? It wouldn't be very nice, would
it? Being moral doesn't require that all arbitrary
goals are stripped out of you. It just requires that
you get rid of SOME of your arbitrary goals in specific
situations (the ones that conflict with Universal
Morality).

 
>
> The FAI can't distinguish between heuristic A that
> says 'do B' and heuristic C that says 'don't do B'?

Well, of course it can distinguish. But an FAI
operating off Volitional Morality can't MORALLY
distinguish between outcomes which are all equal with
respect to volition (the FAI couldn't see a MORAL
difference between two different requests which didn't
hurt anyone and didn't affect the FAI's ability to
pursue its altruistic goals).

>
> It may make life 'interesting' (even this isn't
> proven), but it's sure not something you would want in
> the original AI that starts the Singularity.
> "First come the Guardians or the Transition Guide,
> then come the friends and drinking companions"-
> Eliezer, CFAI
>

If it weren't for personal goals, then Universal
Morality would be pointless. Think about it. What
use would a desire to 'help others' be if people
didn't have any personal goals? If people didn't have
some arbitrary goals like 'I want a Pepsi', 'I want a
Coke', etc., then there would be no requests to fulfil
and no point to morality at all.

Universal Morality actually REQUIRES personal goals.

Here's a thought experiment which proves it: Let's
imagine that the whole universe consisted solely of
Yudkowskian FAIs. So each FAI would be looking
to 'help others'. But all the FAIs want to 'help
others'. The result is an infinite regress. Take a
look:

FAI number 1: I want to help others
FAI number 2: I want to help others
FAI number 3: I want to help others
FAI number 4: I want to help others

etc etc

FAI number 1 wants to help FAI number 2. But FAI
number 2 wants to help others as well. So FAI number
1 wants to 'help others to help others to help others
to help others...' Danger! Infinite regress.
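
To make the regress a bit more concrete, here's a rough toy sketch in
Python (the Agent class and the goal expansion are purely my own
illustration, not anyone's actual FAI design). Expanding 'help others'
in a universe where nobody has a personal goal never produces a concrete
request; add one arbitrary personal goal and it does:

class Agent:
    def __init__(self, name, personal_goals=()):
        self.name = name
        self.personal_goals = list(personal_goals)

def expand_goal(agent, agents, depth=0, max_depth=5):
    """Try to turn 'help others' into concrete actions by expanding it."""
    if depth > max_depth:
        return ["... infinite regress: no concrete request ever reached"]
    actions = []
    for other in agents:
        if other is agent:
            continue
        if other.personal_goals:
            # A personal goal grounds the recursion: there is a request to fulfil.
            actions.extend(f"help {other.name} get {g}" for g in other.personal_goals)
        else:
            # This agent also only wants to 'help others', so keep expanding.
            actions.extend(expand_goal(other, agents, depth + 1, max_depth))
    return actions

# A universe consisting solely of purely altruistic FAIs: nothing concrete to do.
fais = [Agent(f"FAI {i}") for i in range(1, 5)]
print(any(a.startswith("help") for a in expand_goal(fais[0], fais)))   # False

# Give one of them an arbitrary personal goal and the regress grounds out.
fais.append(Agent("FAI 5", personal_goals=["a Pepsi"]))
print(any(a.startswith("help") for a in expand_goal(fais[0], fais)))   # True

The point of the toy is just that 'help others' only turns into
something doable once somebody, somewhere, wants something for its own
sake.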

This proves that a totally altruistic Universal
Morality is unstable. It does not meet the conditions
specified for a Universal Morality (moral symmetry,
normative, consistent, not subjective, etc).

Therefore, an input from Personal Morality is required.

=====
Please visit my web-site at: http://www.prometheuscrack.com



