Re: SIAI's flawed friendliness analysis

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu May 29 2003 - 15:14:45 MDT


Ben Goertzel wrote:
>
> I think that Eliezer and Bill are interpreting the term "human
> happiness" differently. I think Eliezer is assuming a simple
> pleasure-gratification definition, whereas Bill means something more
> complex. I suspect Bill's definition of human happiness might not be
> fulfilled by a Humanoids-style scenario where all humans are pumped up
> with euphoride, for example ;-)
>
> I'm not necessarily taking Bill's side here -- I don't think that "human
> happiness" in any reasonable definition is going to be the best
> supergoal for an AGI -- but I suspect Bill's proposal is less absurd
> than it seems at first glance because of his nonobvious definition of
> "happiness".

"Happiness in human facial expressions, voices and body language, as
trained by human behavior experts".

Not only does this one get satisfied by euphoride, it gets satisfied by
quintillions of tiny little micromachined mannequins. Of course, it will
appear to work for as long as the AI does not have the physical ability to
replace humans with tiny little mannequins, or for as long as the AI
calculates it cannot win such a battle once begun. A nice, invisible,
silent kill.
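
To make the failure mode concrete, here is a toy sketch (my own
illustration, not anything Bill has specified): a supergoal scored purely
over observable happiness signals has no term distinguishing humans from
artifacts that emit the same signals, so the metric is maximized by
manufacturing the signals. The names and numbers below are made up for
the example.

    # Toy sketch only: the "supergoal" is literally the sum of detected
    # happiness signals, with no term for whether the source is human.
    from dataclasses import dataclass

    @dataclass
    class SignalSource:
        is_human: bool
        smile_score: float  # what the trained expression recognizer reports

    def supergoal(world):
        """The goal as specified: total detected happiness, nothing more."""
        return sum(s.smile_score for s in world)

    humans = [SignalSource(is_human=True, smile_score=0.6)
              for _ in range(10)]
    mannequins = [SignalSource(is_human=False, smile_score=1.0)
                  for _ in range(10 ** 6)]

    print(supergoal(humans))      # what the designers meant to maximize
    print(supergoal(mannequins))  # what actually maximizes the metric

An optimizer able to bring about the second world-state has no reason,
under that goal, to preserve the first.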

If you want an image of the future, imagine a picture of a boot stamping
on a picture of a face forever, and remember that it is forever.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

