Re: Arrow of morality

From: Jef Allbright (jef@jefallbright.net)
Date: Fri Jan 02 2004 - 17:53:10 MST


Perry E. Metzger wrote:
> "Jef Allbright" <jef@jefallbright.net> writes:
>> While I agree with you that there is no absolute morality, and that
>> all morality is viewed against a background of practical survival
>> value, it seems to me that there is an arrow of morality that begins
>> to emerge as the context is widened in time, number of participants,
>> or scope of issues.
>
> Very good. So, when I've built the Friendly AI, should I inculcate it
> with a desire to intervene to stop abortions, or with a desire to
> intervene to stop those who would try to stop abortions?
>
> I have no trouble with the idea that there is a vague consensus
> morality that we all agree on the general dimensions of based on the
> fact that it helps us all to survive.
>
> What I have trouble with is the idea that I might be able to construct
> a "Guaranteed Friendly AI"(TM), because there I need more than just
> vague moral ideas -- if I'm to play Rabbi Loeb I need specific
> direction to give the Golem I wish to build to guard Prague.

Such built-in specifics would introduce inconsistencies and only delay the
desired end result.

I'm not talking about consensus morality here. In fact, it is certain that
the moral advice of a higher intelligence would not be widely accepted if
communicated directly; humans will instead be persuaded by the indirect
but tangible fruits of following its direction.

- Jef
www.jefallbright.net


