RE: SL4 meets "Pinky and the Brain"

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jul 16 2002 - 17:01:32 MDT


James wrote:
> Eliezer S. Yudkowsky wrote:
> > James Higgins wrote:
> > > This isn't the place to get into the details but having conversed
> > > with you and Ben for awhile I believe he is significantly wiser than
> > > you (not necessarily more intelligent).
> >
> > If that's true, it doesn't change the fact that Ben Goertzel has
> > posted a mathematical definition of how he would either ask an AI to
> > optimize the world according to his goal system C, or else create a
> > population of entities with goal systems N such that the
> > population-level effect would be to optimize C. Now this can be argued
>
> You know, actually, I don't remember ever having seen such a post by
> Ben. For a period of some months I didn't read many of the posts on SL4
> so I'm guessing this is why. Could you refer me to the post/thread in
> question, I would very much like to read that thread...

I don't remember posting anything exactly like that.

What I did say is that I would create an AGI with a goal system embodying
some approximation to my own morality, which is similar to the "generic
morality" of the transhumanist community, and different from the morality of
(e.g.) the Christian Scientist or Zoroastrian communities.

I asserted that it was going to be necessary to teach a baby AGI an
approximation to some particular human moral system. Eli seemed to disagree
with this, arguing that the baby AGI should be taught to treat all human
moral systems equally (or something like that). I don't think this makes
sense, since there are some human moral systems that say AGI and uploading
are evil, others that say AGI is a waste of resources, etc.

I do not intend to ask any AGI system to optimize the world, and I doubt if
I ever said anything like that.

I do think that once an AGI is created, it is going to affect the world.
But I think that an AGI should be taught a "live and let live" value, not a
"manipulate every molecule of the universe" value. (Yes, these are very
crude terms, and should be clarified, but not today, I don't have time.)

I think that, as human-level AGI gets nearer, we'll need to do a lot of
research aimed at figuring out how to increase the odds that the AGIs we
create, when they become superhuman, will NOT forcibly reprogram the brains
or molecules of lower life-forms.

So my view is:

1) In creating an AGI, we have no choice but to instill it with a particular
initial moral system, which not all humans will agree with

2) Part of this moral system should be a value that causes it to respect the
freedom and autonomy of other sentient and living beings

If I did not state this clearly enough before, I'm sorry; as I've emphasized
repeatedly, I have never written a manifesto on this stuff, just some
unsystematic e-mails and notes. I will address these issues seriously in
writing once I'm done with the mathematical treatment of the Novamente
design I'm now in the middle of revising...

The "Sysop Scenario" is a little ambiguous to me. If you have an AI with
Sysop-level powers, it may still decide to grant humans and other lower
life forms the autonomy to live freely in their regions of space. This is
something to work towards; I'd consider it a very positive scenario given
many of the other alternatives...

-- Ben
