RE: Friendliness and blank-slate goal bootstrap

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jan 11 2004 - 07:25:27 MST


Good morning fellow SL4-oids...

I think Metaqualia is raising an important perspective, and I find the
reactions to his posts a bit severe overall.

We need to guard against this list becoming a philosophical orthodoxy!

On this list, it tends to be assumed that "Friendliness to humans" is an
immensely important value to be transmitted to the superintelligent AIs we
create...

Metaqualia is questioning this "orthodoxy" -- which should be permitted,
right? -- and proposing that, perhaps, we humans aren't so all-important
after all ... that, perhaps, we should seek to inculcate a more *abstract*
form of morality in our AGIs, and then let them, with their deep abstract
morality and powerful intelligence, make their own judgment about the
particular configurations of matter known as "humans"...

I note that, in my own AGI work, I intend to basically follow the SL4
orthodoxy and inculcate "Friendliness to humans and other sentients" as a
core value in my own AGI systems (once they get advanced enough for this to
be meaningful).

However, I also intend to remain open to the questioning of all values, even
those that seem extremely basic and solid to me -- even the SL4
orthodoxy....

One problem I have with Metaqualia's perspective is the slipperiness of this
hypothesized abstract morality. Friendliness to humans is slippery enough.
His proposed abstract morality ---- about the balance between positive and
negative qualia ---- is orders of magnitude slipperier, since it relies on
"qualia," which we don't really know how to quantify ... nor do we know if
qualia can be reliably categorized as positive vs. negative, etc.

Even if IN PRINCIPLE it makes sense to create AGIs with the right abstract
morality rather than a concrete Friendly-to-humans-and-sentients morality,
this seems in practice very hard because of the difficulty of formalizing
and "pinning down" abstract morality....

I also note that the gap between Metaqualia and the SL4 orthodoxy may not be
as big as it appears.

If you replace "Friendly to humans" with "Friendly to humans and sentients"
in the SL4 orthodox goal system, then you have something a bit closer to
Metaqualia's "increase positive qualia" -- IF you introduce the hypothesis
that sentients have more qualia or more intense qualia than anything else.
Right?

And when you try to quantify what "Friendly to X" means, you have two
choices:

-- encouraging positive qualia on the part of X
-- obeying what X's volition requests, insofar as possible

But these need to be balanced in any case, because human volition is famous
for requesting more than is possible. In choosing which of the
mutually-contradictory requests of human volition to fulfill, our
hypothetical superhuman AI must make judgments based on some criterion other
than volition, e.g. based on which of a human's contradictory volitions will
lead to more positive qualia in that human or in the cosmos...

THIS, to me, is a subtle point of morality ---- balancing the desire to
promote positive qualia with the desire to allow sentients to control their
destinies. I face this point of morality all the time as a parent, and a
superhuman AGI will face it on a vastly greater scale....
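
Just to make that balancing act a bit more concrete, here is a toy sketch
in Python -- purely my own illustration, with made-up names, and with a
"qualia_delta" number that we have, at present, no idea how to actually
measure:

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Volition:
    description: str
    feasible: bool        # can the request be satisfied at all?
    qualia_delta: float   # hypothetical estimate of the net positive
                          # qualia produced if this request is fulfilled


def choose_volition(requests: List[Volition]) -> Optional[Volition]:
    """Among mutually contradictory requests, fulfill the feasible one
    whose estimated positive-qualia effect is largest."""
    feasible = [r for r in requests if r.feasible]
    if not feasible:
        return None
    return max(feasible, key=lambda r: r.qualia_delta)


if __name__ == "__main__":
    conflicting = [
        Volition("be left entirely alone", True, 0.2),
        Volition("be cured of a painful illness", True, 0.9),
        Volition("have both of the above at once", False, 1.0),
    ]
    chosen = choose_volition(conflicting)
    print(chosen.description if chosen else "nothing feasible")

The only point of the sketch is that SOME criterion beyond volition itself
has to be plugged in at the "max" step -- whether it's estimated positive
qualia, as Metaqualia would have it, or something else entirely.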

Note that I have spoken about "abstract morality," not "objective morality."
About "objective morality" -- I guess there could emerge something in the
future that would seem to superintelligent AIs to be an "objective
morality." But something that appears to rational, skeptical *humans* as an
objective morality -- well, that seems very, very doubtful to ever emerge.
The very concept of "morality" appears to contain subjectivity within
itself -- when analyzed as a part of human psychology.... Even if a
superintelligent AI discovers an "objective morality" (in its view), we
skeptical rationalist humans won't be able to fully appreciate why it thinks
it's so "objective." We have a certain element of irrepressible
anti-absolutist skepticism wired into our hearts; it's part of what makes us
"human." Just ask the "Underground Man" ;-)

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Samantha Atkins
> Sent: Sunday, January 11, 2004 2:59 AM
> To: sl4@sl4.org
> Subject: Re: Friendliness and blank-slate goal bootstrap
>
>
> On Sat, 10 Jan 2004 16:06:59 +0900
> "Metaqualia" <metaqualia@mynichi.com> wrote:
>
> > > Be very careful here! The easiest way to reduce undesirable
> > > qualia is to kill off everyone who has the potential for
> > > experiencing them.
> >
> > I want someone who is superintelligent, who takes my basic premises
> > as temporary truths, who recursively improves himself, and who
> > understands qualia inside and out, to decide whether everyone should
> > be killed. If you consider this eventuality (global extermination)
> > and rule it out based on your current beliefs and intelligence, you
> > are not being modest in front of massive superintelligence.
>
> No, it is not "modest". However, as an evolved sentient being, I
> have an overwhelming supergoal of survival and the survival of my
> kind. So I will ask to be excused from lining up for the no
> doubt "humane" extermination of humanity if the ever so
> inscrutable SAI decides it must be so.
>
> > I do not rule out that killing everyone off could be a good idea.
>
> In that case I will not help you with any project you may
> undertake. By this statement you are a potential major enemy to
> much I hold dear. Please take your high-falutin intellectual
> stance down to the level of real people if you don't mind. Such
> things profit from periodic grounding.
>
>
> > Death is morally neutral. Only suffering is evil.
>
> So you would run humane death camps? I am sooo relieved!
>
> <snip>
>
> > I take the moral law I have chosen to its logical extreme, and won't
> > take it back when it starts feeling uncomfortable. If the universe
> > is evil overall and unfixable, it must be destroyed together with
> > everything it contains. I'd need very good proof of this obviously,
> > but I do not discount the possibility.
> >
>
> If reality is "evil" (whatever *you* take that to mean) and
> "unfixable," then you will work to destroy it? Exactly how the
> heck do you think you, a part of reality, can go about destroying
> reality itself??
>
> - s
>
>


