Re: The Eliezer Threat (Re: Problems with AI-boxing)

From: Marcello Mathias Herreshoff (m@marcello.gotdns.com)
Date: Sat Aug 27 2005 - 14:50:51 MDT


On Sat, Aug 27, 2005 at 09:35:59AM -0700, Phil Goetz wrote:
> --- "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
> > I remind everyone that Eliezer is *not* a smarter-than-human AI.
> And yet... very possibly the top items on the Things to Do list
> of a just-born unfriendly AI would be

Precisely what makes you think that a smarter-than-human UFAI would use
your plan? The thing is smarter than you are, and thus would probably do
something completely different in the first place.

There are plenty of far more effective things a UFAI could do. For example,
it might hack into a few remote computers, order the ingredients for nanotech
shipped to some lab, and bribe a research assistant to mix them. Not that I really
expect it to do that either; it would probably think of something even more
ingenious.

> 1. Take over my creator's identity.
>
> 2a. (As Richard Loosemore argued) Mount a campaign to make sure
> that no other GAIs were created, such as by forming a mailing list
> and warning people of the danger of creating AIs.

Or use nanotech to sabotage all the projects (by turning their hard disks
into sand or something similar).

Eliezer is not too well-known a person, so this wouldn't be a very effective
way of preventing other people from trying it. Not to mention all the secret
government agencies that might not care about the danger.

> 2b. Use my inhuman intelligence to convince people of the danger
> of AIs. Use an AI-experiment frame to convince people to engage
> in dialogue with me over a terminal, to conceal the fact that I
> have no body. Argue that it is all the more convincing an
> experiment because of my (misrepresented) mere human-level
> intelligence.
>
> Given the low priors I have for an arbitrary human having
> Eliezer's demonstrated intelligence,

I have very low priors for any *specific* human having Eliezer's
intelligence. I do not have low priors for the existence of humans, out of a
population of six billion, who have it. You can be pretty sure that if you
have 6 billion data points and a roughly Gaussian distribution, you will find
things with z-scores between +5 and +6. That's just how things work.
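
For what it's worth, the arithmetic checks out. Here is a quick
back-of-the-envelope sketch in Python (assuming the trait is exactly
Gaussian, which is of course only a rough model):

    # Expected number of people above various z-scores, out of N samples
    # from a standard normal distribution.  Illustrative sketch only.
    from math import erfc, sqrt

    N = 6e9  # rough world population

    def tail_prob(z):
        # P(Z > z) for a standard normal variable Z
        return 0.5 * erfc(z / sqrt(2))

    for z in (5, 6, 7):
        print(z, N * tail_prob(z))

    # Prints roughly:
    #   5  1720    -> plenty of people beyond +5 sigma
    #   6  5.9     -> usually a handful beyond +6 sigma
    #   7  0.008   -> essentially nobody beyond +7 sigma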

Had Eliezer not had this degree of intelligence, somebody else who did would
pretty likely have ended up as SIAI's researcher or in an equivalent position.
They would have been the Eliezer of that Everett branch, and you would be
making precisely the same comment.

In short, this is unremarkable for the same reason that somebody winning the
lottery is unremarkable.

> or of being able to
> convince people to let AIs out of boxes, I must consider
> the alternative.
>
> Has anyone seen Eliezer in person lately?

Yes, and he most definitely appears and acts like a human.
What did you expect?
>
> As some have argued, given any evidence that an AI might be
> unfriendly, we should destroy it, since the danger to the human
> race justifies anything we do to the AI, no matter how small the
> odds are of its unfriendliness. Given the evidence I've just
> presented that Eliezer is in fact an unfriendly AI - not very
> convincing, but still a finite possibility, probably more than
> one in six billion - what are our moral obligations at this point?

Even if I assigned a one-in-six-billion probability to this scenario, it is
dwarfed by the one-in-a-million* scenario that he actually is the one who
will save the world by making friendly AI, preempting some other attempt to
make an AI which would otherwise have turned out unfriendly. The EU of
Eliezer staying alive is still insanely high.
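
To make the comparison explicit, here is a minimal sketch using the figures
from this post and an arbitrary placeholder V for the value of the world
(V cancels out of the comparison anyway):

    V = 1.0                       # value of the world, arbitrary units
    p_ufai = 1.0 / 6e9            # the "one in six billion" scenario
    p_saves_world = 1.0 / 1e6     # the "one in a million" scenario

    eu_alive = p_saves_world * V - p_ufai * V
    print(eu_alive > 0)               # True
    print(p_saves_world / p_ufai)     # 6000.0: the positive term dominates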

-=+Marcello Mathias Herreshoff

*: Actually I think it is quite a bit higher; the small number is just to
illustrate the point.


