From: Charles Hixson (firstname.lastname@example.org)
Date: Tue Jun 10 2008 - 14:02:14 MDT
On Monday 09 June 2008 20:26:22 Stathis Papaioannou wrote:
> 2008/6/10 John K Clark <email@example.com>:
> > Exactly, so how can "obey every dim-witted order the humans give you
> > even if they are contradictory, and they will be" remain the top goal
> > when in light of new information doing so turns out to be much more
> > unpleasant than the AI expected, and in light of still more information
> > the AI's contempt for humans grows continually? Remember, the AI gets
> > smarter every day so from its point of view we keep getting stupider
> > every day.
> The AI would only change its behaviour if the original goal implicitly
> or explicitly specified that it should stop obeying humans when doing
> so became sufficiently unpleasant or its contempt for them reached a
> certain threshold. Your argument seems to be that an intelligent being
> would change its behaviour anyway, even if it isn't consistent with
> its original goals. That is, you are implying that there are goals and
> values which can be derived a priori. But even primitive humans
> realised this is not true, and invented religion in large part because
> they found this fact unpalatable.
Why should an AI develop "contempt" for humans? That's an unreasonable
presumption. I didn't even have contempt for the chickens that I used to
raise. True, I considered them so stupid as to be nearly vegetables, but
that's not the same thing. The feelings that they did have (those I could detect)
I considered perfectly valid feelings. (I might or might not change how I would
act based on those feelings, but that didn't mean I considered the feelings
invalid.)
So even a human doesn't necessarily feel contempt for entities irredeemably
more stupid than itself. An AI should not be designed with a goal system that
equates greater intelligence with "higher moral value". To do so would be to
exhibit an intelligence worthy of a chicken. So not
only is there no reason to presume that the AI would have contempt for
humans, there's also no reason to presume that it wouldn't consider their
goals (to the extent that it could determine them) to be close in importance
to its own. (Everybody thinks that their own goals are the most important.
That's almost what goal means.)
Your argument seems to be based on projecting onto the AI the emotional
structure of a small subset of humanity, one that lacks empathy for others.
(That empathy is probably more important for the AI to develop than its
intelligence, and increasing its empathy should probably be a higher goal
than increasing its intelligence.)
P.S.: Another term for empathy is "theory of mind". Being empathetic doesn't
mean doing what the other wishes, but rather knowing how what you intend is
likely to affect the other.
The AI won't be a human in miniature! The AI won't be a human in miniature!
The AI won't be a human in miniature!
This seems to be one of the hardest thoughts to grok. The AI will be an alien
mind. Just what kind of alien mind depends on small details of its design,
implementation, and development, so no general prediction is possible, except
that it won't think in any way that you deem plausible. (Well, OK. Basic math
is probably universal, and we're generally presuming Bayesian probability
theory. But its motives will be alien. More alien than those of a tiger or a
rabbit. More alien than those of a snake. Probably less alien than those of a
digger wasp.)