Re: Arrow of morality

From: Perry E. Metzger (perry@piermont.com)
Date: Fri Jan 02 2004 - 17:03:56 MST


"Jef Allbright" <jef@jefallbright.net> writes:
> While I agree with you that there is no absolute morality, and that all
> morality is viewed against a background of practical survival value, it
> seems to me that there is an arrow of morality that begins to emerge as the
> context is widened in time, number of participants, or scope of issues.

Very good. So, when I've built the Friendly AI, should I inculcate it
with a desire to intervene to stop abortions, or with a desire to
intervene to stop those who would try to stop abortions?

I have no trouble with the idea that there is a vague consensus
morality whose general dimensions we all agree on, grounded in the
fact that it helps us all to survive.

What I have trouble with is the idea that I might be able to construct
a "Guaranteed Friendly AI"(TM), because there I need more than just
vague moral ideas -- if I'm to play Rabbi Loew I need specific
direction to give the Golem I wish to build to guard Prague.

Perry
