RE: SL4 meets "Pinky and the Brain"

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jul 16 2002 - 20:06:30 MDT


> How I understand Eli's position is that any competent baby will arrive
> at the center with no undue respect to original orientation or position
> in the pen. (This includes one with your orientation I suppose, so I
> can't much disagree with your approach either.) To treat human moral
> systems equally requires enough interest from each to place the baby
> in the playpen but, along with a healthy crawling baby, not much more.

What you are proposing here is a very strong hypothesis about the dynamics
of AGIs.

Basically, it's a statement that the eventual moral system of an AGI is
independent of its initial moral system.

I have no idea whether this is true or not. So I'm going to guess that the
eventual moral system of an AGI is more likely to be positively than
negatively correlated with its initial moral system, and give it a good
initial moral system.
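(To make the distinction concrete, here is a toy sketch in Python -- purely
illustrative, nothing to do with any real AGI design, and every name and
number in it is made up. It contrasts a dynamical system with one global
attractor, where the endpoint is independent of the starting point, against
one with multiple basins, where the endpoint stays positively correlated
with the starting point.)

    import numpy as np

    rng = np.random.default_rng(0)

    def iterate(x0, step, n=1000):
        # run the update rule n times from initial state x0
        x = x0.copy()
        for _ in range(n):
            x = step(x)
        return x

    # System A: contraction toward a single center c.
    # Every initial state ends up at c -- the "any competent baby
    # arrives at the center" hypothesis.
    c = np.array([0.5, -0.2])

    def step_a(x):
        return x + 0.1 * (c - x)

    # System B: each state drifts toward the nearest of two attractors,
    # so where you end up depends on where you started.
    attractors = np.array([[1.0, 1.0], [-1.0, -1.0]])

    def step_b(x):
        nearest = attractors[np.argmin(np.linalg.norm(attractors - x, axis=1))]
        return x + 0.1 * (nearest - x)

    starts = rng.normal(size=(200, 2))
    ends_a = np.array([iterate(x, step_a) for x in starts])
    ends_b = np.array([iterate(x, step_b) for x in starts])

    print("A: endpoint spread:", ends_a.std(axis=0))  # ~0: start is irrelevant
    print("B: start/end correlation:",
          np.corrcoef(starts[:, 0], ends_b[:, 0])[0, 1])  # clearly positive

The point is just this: if moral dynamics look like System A, the initial
moral system doesn't matter; if they look anything like System B, it is
worth choosing the starting point carefully.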

I don't think this is exactly what Eli was saying, because in his view a
Friendly initial moral system is positively correlated with a Friendly
eventual moral system.

> > I don't think this makes sense, since there are some human moral
> > systems that say AGI and uploading are evil, others that say AGI
> > is a waste of resources, etc.
>
> "Some human moral systems" seriously close to placing a baby in
> the pen? Sounds like the latter are celibate. The former? Hmmm.

Agreed there... but this was not what Eliezer and I were discussing. He
wanted, I believe, the AGI to have no bias toward any particular human moral
system, regardless of that moral system's relevance to AGI itself.

ben


