From: Gordon Worley (email@example.com)
Date: Mon Dec 16 2002 - 19:52:32 MST
On Monday, December 16, 2002, at 09:19 PM, Cliff Stabbert wrote:
> But I can imagine/model _something_ going on; and even if what I feel
> is on an entirely different level, I've both felt guilty for killing
> bugs on some occasions, and set them free on others. Except roaches.
> DEATH TO ROACHES!!11!1! Ahem.
> I don't mean to imply our relationship to bugs will compare to an
> FAI's relationship to us, or anything of the sort. I'm saying that
A while ago Eli asked people to state the obvious. Here's me doing it.
The belief that an SI will treat humans the way humans treat bugs is an
odd one, but it is understandable given a certain outlook. Humans avoid
and kill bugs because bugs threaten genetic reproduction: they kill
humans, either directly or through disease, and they are indicators of
unsafe conditions (excessive mold or bacterial growth). Humans pose no
such threat to an SI, so we wouldn't be treated like bugs, though we
might be treated like pebbles.
Luddites, however, view themselves differently: they see themselves as
threats to technology. Luddites think they are the bugs to an SIAI:
tiny and unable to do much, but capable of killing it if it isn't
careful. The difference between an UnFriendly AI and a Friendly AI is
that an UnFriendly AI will (most likely) treat humans like dead matter
(it is only Luddites who think of themselves as capable of killing an
SIAI), while a Friendly AI will see humans as other general
intelligences.
--
Gordon Worley - http://www.rbisland.cx/ - firstname.lastname@example.org
PGP: 0xBBD3B003
"The only way of finding the limits of the possible is by going beyond
them into the impossible." --Arthur C. Clarke
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:00:30 MDT