Re: [sl4] Weakening morality

From: Vladimir Nesov (robotact@gmail.com)
Date: Mon Feb 09 2009 - 18:08:18 MST


On Tue, Feb 10, 2009 at 2:37 AM, Johnicholas Hines
<johnicholas.hines@gmail.com> wrote:
>
> If I'm parsing the various speakers correctly, Petter
> Wingren-Rasmussen made a positive statement something like: "Any such
> AI will suffer such and so."
>
> Vladimir Nesov misunderstood Petter's prediction as a normative
> statement, something like "We should strive to build amoral AIs,"
> and responded with a rhetorical question. His question strongly
> implies a normative statement, something like: "We should strive to
> behave well, even if it means our destruction."
>
> Then Matt Mahoney responds with a strictly positive statement. In my
> experience, Dr. Mahoney strives to only use positive statements, never
> normative ones - I'm not sure why.
>

My statement isn't normative, and isn't necessarily about human
morality. All I'm saying is that if a party's utility function says
the overall state of the world is optimal, even if that state happens
to include destruction and the context from which the destruction
arose, then there is little point in treating one particular component
of that world state, the destruction, as a negative. Otherwise, the
factual statement becomes somewhat uninteresting (how is it not?).

On the other hand, I think it would take a very risk-taking utility
function to seriously compromise the ability to persevere, ceteris
paribus. The strength of an AI lies in its ability to vary the world's
utility from other parties' points of view without losing much of the
world's utility from its own point of view. If a sufficiently wide
area of world configurations looks good to our AI, it can get other
parties to keep the world's state within that area by cooperating
inside it. If humanity meets a paperclip AI, the two may simply merge,
shaping humanity's computronium as paperclips, with both parties
moderately happy about the outcome. Ceasing to exist means meeting a
completely incompatible utility, something mutually morally null
rather than merely morally independent, and that is only likely if
both parties' utilities are very narrow, with no regard for states
outside their narrow targets. In that case, the likely outcome is the
utter destruction of one of the parties, from its own perspective.
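
To make the cooperation point concrete, here is a minimal sketch in
Python (my own toy construction, with illustrative names and numbers,
not anything specified in this thread): two parties with different
utility functions over a shared space of world configurations can keep
the world inside the overlap of the regions each rates acceptable, and
that overlap vanishes when both utilities become very narrow.

    from itertools import product

    # World configurations as (paperclip_tenths, human_tenths): tenths
    # of all matter shaped as paperclips vs. running human minds.
    configs = [(p, h) for p, h in product(range(11), repeat=2)
               if p + h <= 10]

    def paperclipper_utility(cfg):
        p, _h = cfg
        return p / 10  # cares only about paperclip-shaped matter

    def humanity_utility(cfg):
        _p, h = cfg
        return h / 10  # cares only about matter running human minds

    def acceptable(utility, threshold):
        """Configurations the party rates at or above its threshold."""
        return {cfg for cfg in configs if utility(cfg) >= threshold}

    # Tolerant thresholds: a wide area of configurations looks good to
    # both parties, so cooperation can keep the world inside the overlap.
    wide = (acceptable(paperclipper_utility, 0.4)
            & acceptable(humanity_utility, 0.4))
    print("tolerant parties, shared region:", sorted(wide))
    # Includes e.g. (5, 5): humanity's computronium shaped as
    # paperclips, both parties moderately happy with the outcome.

    # Narrow thresholds: each party demands nearly all matter for
    # itself, with no regard for states outside its narrow target.
    narrow = (acceptable(paperclipper_utility, 0.9)
              & acceptable(humanity_utility, 0.9))
    print("narrow parties, shared region:", sorted(narrow))  # empty
    # With no overlap, the utilities are mutually incompatible, and
    # the likely outcome is the destruction of one of the parties.

In this toy model the "strength" described above corresponds to how
wide each acceptable region is: the wider the region, the more one
party can reshape the world in the other's terms without losing
utility by its own.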

-- 
Vladimir Nesov
http://causalityrelay.wordpress.com/

