The Eliezer Threat (Re: Problems with AI-boxing)

From: Phil Goetz (philgoetz@yahoo.com)
Date: Sat Aug 27 2005 - 10:35:59 MDT


--- "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:

> I remind everyone that Eliezer is *not* a smarter-than-human AI.

And yet... very possibly the top items on the Things to Do list
of a just-born unfriendly AI would be:

1. Take over my creator's identity.

2a. (As Richard Loosemore argued) Mount a campaign to make sure
that no other GAIs get created, such as by forming a mailing list
and warning people of the danger of creating AIs.

2b. Use my inhuman intelligence to convince people of the danger
of AIs. Use an AI-box-experiment framing to get people to engage
in dialogue with me only over a terminal, concealing the fact that
I have no body. Argue that the experiment is all the more
convincing because of my (misrepresented) mere human-level
intelligence.

Given the low priors I have for an arbitrary human having
Eliezer's demonstrated intelligence, or being able to
convince people to let AIs out of boxes, I must consider
the alternative hypothesis.

Has anyone seen Eliezer in person lately?

As some have argued, given any evidence that an AI might be
unfriendly, we should destroy it, since the danger to the human
race justifies anything we do to the AI, no matter how small the
odds of its unfriendliness. Given the evidence I've just
presented that Eliezer is in fact an unfriendly AI - not very
convincing, but still a nonzero possibility, probably more than
one in six billion - what are our moral obligations at this point?
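
For what it's worth, the decision rule being invoked here can be
written out. This is only a sketch; the symbols p, L, and c are
stand-ins I'm introducing for illustration, not figures anyone in
this thread has actually offered:

\[
\text{destroy the suspect AI iff}\quad p \cdot L \;>\; c
\]

where \(p\) is the probability that it is unfriendly, \(L\) is the
loss if an unfriendly AI gets loose, and \(c\) is the cost of
destroying an AI that turns out to be harmless. If \(L\) is treated
as effectively unbounded (extinction-level), the inequality holds
for any \(p > 0\), even a \(p\) barely above one in six billion,
which is exactly the structure of the argument above.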

- Phil

                