Re: Singularitarian Principles

From: Jef Allbright (jef@jefallbright.net)
Date: Fri Mar 23 2007 - 11:56:45 MDT


On 3/23/07, Mikko J Rauhala <mjrauhal@cc.helsinki.fi> wrote:
> On pe, 2007-03-23 at 14:22 +0000, BillK wrote:
> > This seems to indicate that FAI must be designed to be 'emotional'.
> > Otherwise it will make decisions that humans regard as 'ruthless',
> > 'morally unacceptable', etc. You may argue that this would be a
> > 'good' thing, but probably most humans would disagree with you.
>
> Indeed most humans would much prefer decisions that benefit them and
> their inner circle the most, never mind the "others".

Yes, and necessarily so, if one's actions are to have a perceptible
effect on one's environment, thus completing a feedback loop that
enables subjective progress. Extending this thinking leads to the idea
of an expanding sphere of ethical inclusion and consequences, with
such expansion corresponding to increasing moral wisdom.

> This is what the
> emotional level "moral" process optimizes.

I agree with the intent of this statement, although I don't see
emotionality as a level of organization so much as a functional
description, and I would rather say it satisfices a moral solution
than optimizes one.

> I'd much rather have a relentlessly, indeed even ruthlessly ethical AI
> than a self-servingly "ethical" one, even if wannabe self-servers
> disagree. Which they would be rather stupid to do, by the way, since
> they're highly unlikely to be the inner circle and very likely to be
> "others", if the AI does emulate human emotionality in making the
> distinction.

This study's results were well presented, IMO, showing the key role
of emotion in *normal* human moral judgment. Unfortunately, most readers
will accept the implied dichotomy between emotional and utilitarian
moral judgment without realizing that these are each special cases of
following principles of cooperative advantage -- the former encoded
more at the level of the organism, the latter encoded more at the
level of culture.

My motivation for engaging in these discussions of morality is to sow
the idea that humanity is now facing the possibility--and the
necessity--of raising ethical judgment to a higher level, with
cognitive capability greater than that of humans or culture, based on
an overarching cognitive framework supported by evolving human values.

Yes, ethical actions must be ruthless, in the sense of focusing on
principles rather than ends. And such actions are assessed as moral
to the extent that they are seen to promote shared human values that
work over increasing scope.

- Jef