Re: ESSAY: Forward Moral Nihilism (EP)

From: Jef Allbright (jef@jefallbright.net)
Date: Mon May 15 2006 - 15:41:14 MDT


On 5/15/06, Charles D Hixson <charleshixsn@earthlink.net> wrote:
> Jef Allbright wrote:

> > This is a point where many people get stuck with conventional ideas of
> > morality. A full explanation is not possible within the confines of
> > this email discussion, but moral decision-making *requires* that you
> > attempt to impose your will at every opportunity, though that will
> > should be as well informed as possible about the long-term consequences
> > of its actions. The degree of sentience of the Other is irrelevant to
> > this basic principle, but very relevant to the actual interaction.

> This may be a necessity in the moral structure that you have chosen.

I'm not talking about any chosen moral code or structure. I am saying
that logical consistency requires that, to the extent your will is
based on an understanding of the extended consequences of your
actions, it is morally imperative that you act in accordance with that
will. Looking at it from the other direction, it is impossible by
definition to act against your own will, and your actions (including
intentional inaction) will tend to lead to good to the extent that
they are based on awareness of those extended consequences.

> Mine does not require of me that I impose my will upon others, merely
> that I attempt to prevent them imposing their will upon me.

I would expect that your will includes a healthy respect for the
autonomy of others, and that you wholeheartedly impose that will upon
them.

> I may
> *decide* that circumstances are such that practicality requires me to
> impose my will upon them, but this is not a moral requirement.

If you think it is the right thing to do, based on your understanding
of the extended consequences of your actions, then such action is
moral.

>
> I would assert that you, also, find no such moral requirement. There
> are many people in the world who are behaving immorally, whatever your
> particular code may say morality *is*. Yet you sat there and
> corresponded with me rather than stopping them. Therefore you are not
> morally commanded by any code that you actually accept to stop them.

Moral decision-making is necessarily done from the subjective point of
view of the actor. We should not confuse this with ethical codes,
which may more or less correspond with subjective moral judgement.
Note also that it would be clearly immoral for me to take the position
that I must directly and immediately address all the wrongs in the
world, as you suggest, because such an attempt on my part would be
ineffective and its consequences therefore undesirable.

> And I certainly would not want an AI that felt morally compelled to make
> everyone behave. That might not be the worst possible outcome, but it
> would be a very bad one.

See the inconsistency? What basis do you assume the AI would use for
making everyone behave, if it can be seen in advance that the outcome
would be bad?

But some actions do indeed deliver better results than others. With
increasing awareness of the extended consequences of our actions over
increasing scope, we tend to make more effective decisions. Apply this
increasing awareness of effective methods to an increasing awareness
of our inter-subjective values (those which have been tested and seen
to work), and the result is decision-making that is seen as
increasingly moral.

Now if an AI were implemented as an engine for such moral
decision-making, its decision-making process could easily be more
effective than any human's, given our limited capability for
awareness. This "AI" could be implemented as a social framework using
actual humans as input, providing outputs at a higher level of wisdom
than any individual human. That might be a very good start, but it
would be limited by human speed and capacity.
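
As a toy sketch only (the names and the simple weighting rule below
are arbitrary illustrations of the idea, not a design): such a
framework might collect each participant's evaluation of a candidate
action along with a rough measure of how far out they traced its
consequences, and favor the action with the strongest weighted
support.

    # Toy illustration: aggregate human evaluations of candidate actions,
    # weighting each evaluation by the breadth of consequences the
    # evaluator claims to have considered. Names and the weighting rule
    # are arbitrary assumptions, not a proposal.
    from collections import defaultdict

    def choose_action(evaluations):
        # evaluations: (action, score, scope) tuples, where score is the
        # evaluator's judgment in [-1.0, 1.0] and scope is a rough measure
        # of how widely the consequences were traced.
        totals = defaultdict(float)
        for action, score, scope in evaluations:
            totals[action] += score * scope  # wider awareness counts more
        return max(totals, key=totals.get)

    # Example: three evaluators, two candidate actions.
    print(choose_action([
        ("act_a", 0.8, 1.0),
        ("act_a", -0.2, 3.0),
        ("act_b", 0.5, 2.0),
    ]))  # -> "act_b"

The point of the sketch is only the shape of the loop: many limited
human viewpoints go in, and a single decision weighted toward broader
awareness comes out. It says nothing about how scope or value would
actually be measured.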

Note that applying this metaethical thinking gives you increasingly
moral decision-making--it facilitates discovery of increasingly
effective principles that promote shared values which work over
increasing scope--but it says nothing directly about the ends. In
fact, two such moral engines, started in two separate environments,
could possibly diverge for quite some time, with each seen as becoming
increasingly moral.

- Jef


