Re: Defining Right and Wrong

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Nov 24 2002 - 17:21:35 MST


Michael Roy Ames wrote:

>Dear Ben Goertzel (and SL4)
>
>You wrote:
>
>
>>Michael, as I see it, ethical/moral values cannot be tested
>>according to "how usefully they describe reality."
>>
>>They are prescriptive rather than descriptive.
>>
>
>There are two separate 'questions' here that can become confused if
>mixed. Firstly, there is the question of whether ethical/moral values
>can be tested against reality. Secondly, there is the question of
>whether they are prescriptive or descriptive.
>
>For the first... I am positing that moral Rightness is an *absolute*
>quality of the universe, therefore if you are correct in saying that
>ethical/moral values cannot be tested against reality, then my whole
>argument falls.
>
Testability is a separate issue from moral absoluteness. On what
grounds would you posit that Rightness is a universal absolute? Such a
claim is exceedingly arbitrary, inherently unprovable, and of
questionable meaning.

>Indeed, if you are correct, then *any* argument that
>suggests there is absolute right and wrong will fall, and all
>morality is arbitrary - merely a random side-effect of human social
>consciousness.
>
False dichotomy.

> I suggest that any action taken by a sentience can be
>judged against this absolute value, by the sentience itself, viewed
>through a window of ver own intelligence and understanding of the
>situation. This implies that a variety of Rightness judgements made
>by any given set of independent sentients, starting off with
>differing 'windows', will gradually converge as intelligence and
>situational-knowledge increase. This 'convergence' is mentioned in
>Eliezer's work (peripherally) and I have been uncomfortable with the
>idea. However, if we are to create a self-enhancing machine that will
>outstrip us in every way, including ethical and moral values, then it
>is simply due diligence to attempt to understand in what direction it
>will develop *eventually*, and give it a good head-start in that
>direction. Thus: Friendly AI. The thing is, Friendly AI has to be
>Right... not just for us, but for whatever we might become in the
>future as well.
>
No. It does not require such absolute knowledge, any more than we
require it to make good ethical decisions.

>It has to be Right, not only for our biosphere, but
>for any other life we find in the universe also. It's a tall order, to
>*have* to be Right.
>
It is inherently impossible by this sort of definition. It is not at
all obvious that a brand-spanking-new FAI has to be [R]ight for all
sentients for all time. I will be quite satisfied if it maximizes the
local intelligence quotient and ensures room for us to survive and
grow. That is quite sufficient for now. This having to be universally
Right is simply a formula for paralysis or worse.

>But there is no point shying away from the
>question... that won't help anyone. Sorry for the melodrama... but
>sometimes things have to be said - so they are not forgotten.
>
>
It is melodramatic and incorrect, imho. There is no need to worry our
small brains about such questions at this time, nor to greatly burden
the FAI with them.

- samantha


