Re: Friendliness and blank-slate goal bootstrap

From: Mark Waser (mwaser@cox.net)
Date: Sat Jan 10 2004 - 10:13:27 MST


> That sounds to me like you're agreeing with Metaqualia that death in
> itself is morally neutral.

Except that I also posited that "death is innately minorly bad when it
reduces diversity," and unless you're talking about exact clones with
identical memory states, etc., death will always reduce diversity.

> Also, I think that by "killing off everyone" he means "killing off all
> humans" not "killing off all beings and leaving the universe a completely
> blank void". So the scenario he was hypothesizing was one in which a
> superhuman, highly moral AI came to a rational decision that killing off
> all humans is the best thing to do. Presumably because this action would
> indirectly lead to some other benefit.

I took it as killing off all humans. I think that would be a shame (and
bad) because you would lose the diversity point occupied by humanity.
This criterion could conceivably be overwhelmed by some massive evilness
of humanity or by some other criterion, but I would argue that there are
most probably other actions that would also fulfill those criteria
(changing humans, etc. - yes, which moves the diversity point) without
having to wipe them out.

BTW, I personally could countenance the destruction of humanity if that
were the ONLY way to save a more numerous, more advanced race, but those
are pretty much the only circumstances under which I could see a
superhuman, highly moral AI rationally doing so.

    Mark

----- Original Message -----
From: "Ben Goertzel" <ben@goertzel.org>
To: <sl4@sl4.org>
Sent: Saturday, January 10, 2004 11:07 AM
Subject: RE: Friendliness and blank-slate goal bootstrap

>
> You're arguing that death can be either good or bad, depending on who dies
> and what replaces them.
>
> That sounds to me like you're agreeing with Metaqualia that death in
> itself is morally neutral.
>
> I.e., "can be morally good or morally bad" ==> "is morally neutral"
>
> [If you argue that it's bad more often than it's good, then you're arguing
> against neutrality...]
>
> Also, I think that by "killing off everyone" he means "killing off all
> humans" not "killing off all beings and leaving the universe a completely
> blank void". So the scenario he was hypothesizing was one in which a
> superhuman, highly moral AI came to a rational decision that killing off
> all humans is the best thing to do. Presumably because this action would
> indirectly lead to some other benefit.
>
> -- Ben G
>
>
>
>
> > -----Original Message-----
> > From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Mark Waser
> > Sent: Saturday, January 10, 2004 10:58 AM
> > To: sl4@sl4.org
> > Subject: Re: Friendliness and blank-slate goal bootstrap
> >
> >
> > >> Death is morally neutral. Only suffering is evil.
> >
> > I will argue this to my last breath. Death is NOT morally neutral.
> > Death is the END of something, and that something is either good or
> > bad. I would argue that a death is as good or as bad as the opposite of
> > the thing that it ends -- tempered by the good or the bad of the thing
> > that replaces it. I would also posit that death is innately minorly bad
> > when it reduces diversity. Given this premise, I would strongly argue
> > that there is no way that killing off everyone could possibly be a good
> > idea. Like abortion in many instances, it might be the best idea in a
> > series of bad trade-offs, but it will never be a good idea.
> >
> > > I take the moral law I have chosen to its logical extreme, and won't
> > > take it back when it starts feeling uncomfortable.
> >
> > You sound like a hardcore fanatic. Maybe your feelings of discomfort are
> > telling you something valuable. Looks to me like a case of righteousness
> > over integrity.
> >
> > Mark
> >
> > ----- Original Message -----
> > From: "Metaqualia" <metaqualia@mynichi.com>
> > To: <sl4@sl4.org>
> > Sent: Saturday, January 10, 2004 2:06 AM
> > Subject: Re: Friendliness and blank-slate goal bootstrap
> >
> >
> > > > Be very careful here! The easiest way to reduce undesirable qualia
> > > > is to kill off everyone who has the potential for experiencing them.
> > >
> > > I want someone who is superintelligent, who takes my basic premises
> > > as temporary truths, who recursively improves himself, and who
> > > understands qualia in and out, to decide whether everyone should be
> > > killed. If you consider this eventuality (global extermination) and
> > > rule it out based on your current beliefs and intelligence, you are
> > > not being modest in front of massive superintelligence. I do not rule
> > > out that killing everyone off could be a good idea. Death is morally
> > > neutral. Only suffering is evil. Of course a transhuman AI could do
> > > better than that by keeping everyone alive and happy, which would
> > > reduce negative qualia and also create huge positive ones, so I do
> > > have good hopes that we won't be killed. What if the universe were
> > > really an evil machine and there was no way of reversing this truth?
> > > What if, in every process you care to imagine, all interpretations of
> > > the process in which conscious observers were contained were real to
> > > these observers, just like the physical world is real to us? What if
> > > there existed infinite hells where ultrasentient, ultrasensitive
> > > beings were kept enslaved without the possibility of dying? Is this
> > > not one universe that can be simulated, and by virtue of this
> > > interpreted out of any sufficiently complex process (or simpler
> > > processes: read Moravec's "Simulation, Consciousness, Existence")?
> > >
> > > I take the moral law I have chosen to its logical extreme, and won't
> > > take it back when it starts feeling uncomfortable. If the universe is
> > > evil overall and unfixable, it must be destroyed together with
> > > everything it contains. I'd need very good proof of this, obviously,
> > > but I do not discount the possibility.
> > >
> > > > It seems to me that a person's method for determining the desirable
> > > > morality is based partially on instincts, partially on training, and
> > >
> > > We are talking about different things; I have answered this previously.
> > >
> > > > ... are you sure about that? Just how heavily do you want the AI to
> > > > weigh its self-interest? Do you want it to be able to justify
> > >
> > > Its self-interest? At zero, obviously, other than the fact that the
> > > universe is likely to contain a lot more positive qualia than negative
> > > ones if the moral transhuman AI stays alive, so in the end its own
> > > survival would be more important than the survival of humans, if you
> > > consider the million worlds with biologically evolved beings that may
> > > be out there and in need of salvation. So at a certain point the best
> > > thing it could do morally, to work toward the goals we have agreed on,
> > > could be exactly to exterminate humans.
> > >
> > > > > Remember, friendliness isn't Friendliness. The former would involve
> > > > > something like making an AI friend; the latter is nothing like it.
> > > > > Where he says "Friendliness should be the supergoal" it means
> > > > > something more like "Whatever is really right should be the
> > > > > supergoal". Friendliness is an external
> > >
> > > Is Friendliness creating a machine that wouldn't do something we
> > > wouldn't like? Or is Friendliness creating a machine that wouldn't do
> > > something we wouldn't like if we were as intelligent and altruistic as
> > > it is?
> > >
> > > > This is assuming that "right" has some absolute meaning, but this is
> > > > only true in the context of a certain set of axioms (call them
> > >
> > > I am proposing qualia as universal parameters to which every sentient
> > > (at least evolved ones) can relate. That was the whole purpose, so we
> > > don't get into this "relativity" argument which seems to justify
> > > things that I am not ready to accept because they just feel very
> > > wrong at a level of introspection that is as close as it could be to
> > > reality and cannot be further decomposed (negative qualia).
> > >
> > >
> > > mq
> > >
> > >
> > >
> >
> >
> >
>
>
>


