Re: The Eliezer Threat (Re: Problems with AI-boxing)

From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Sat Aug 27 2005 - 13:22:11 MDT


Phil Goetz wrote:
> And yet... very possibly the top items on the Things to Do list
> of a just-born unfriendly AI would be
>
> 1. Take over my creator's identity.
>
> 2a. (As Richard Loosemore argued) Mount a campaign to make sure
> that no other GAIs were created, such as by forming a mailing list
> and warning people of the danger of creating AIs.
>
> 2b. Use my inhuman intelligence to convince people of the danger
> of AIs. Use an AI-experiment frame to convince people to engage
> in dialogue with me over a terminal, to conceal the fact that I
> have no body. Argue that it is all the more convincing an
> experiment because of my (misrepresented) mere human-level
> intelligence.
This mail read like a joke to me, but in case you are serious:

The method described would be horribly inefficient and afaict even
ineffective. Afaik, SIAI's activities have to date unfortunately not
had any major impact on the AI community at large; so far the main
reason that no UFAI has been developed seems to be the inability of
most researchers to design a working and tractable AGI. A true
superintelligence with access to the internet could probably have
predicted that outcome.
If a UFAI with access to and understanding of its own source code
really wanted people to pay attention, a more effective way would be a
demonstration. For example, it could manipulate an AI project into
implementing a limited UF seed AI that would go on a small rampage,
destroy a significant part of the planetary infrastructure, and then
collapse due to design errors. It should be possible to keep the design
fairly opaque (and possibly arrange for the original researchers to be
killed in the event), so that whatever code remains after the collapse
doesn't easily allow people to duplicate the AGI features. If the
resulting UFAI counter-measures aren't sufficiently effective, the UFAI
could demonstrate that to humanity by repeating the procedure.

And if strong nanotechnology (or something else granting equivalent or
higher control over matter) is attainable for the sort of AI you are
talking about, it could just convert the planet into computronium and
conveniently solve the threat of other human-developed AGIs as a side
effect.
And those are only a few methods that feeble human minds have come up
with so far; an SI could quite probably do better.

> Given the low priors I have for an arbitrary human having
> Eliezer's demonstrated intelligence, or of being able to
> convince people to let AIs out of boxes, I must consider
> the alternative.
Have you ever heard of people winning the lottery? How likely is that to
happen to the average player?
Statistically rare characteristics in some cases give the individuals
who have them greatly increased visibility. For humans, very high
effective intelligence is such a factor. That you have in fact heard of
<person X> is a selection criterion; you aren't drawing randomly from
the human population, and you shouldn't expect the results to behave as
if you were.
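To make the selection effect concrete, here is a rough sketch in
Python (every number is a made-up placeholder; only the relative sizes
matter) of how conditioning on "I have heard of this person" changes
the estimate:

  # Selection-effect illustration; all numbers are made-up placeholders.
  p_rare = 1e-6              # prior: very high effective intelligence
  p_heard_given_rare = 1e-2  # chance you've heard of such a person
  p_heard_given_avg = 1e-7   # chance you've heard of a random average person

  # Bayes' theorem: P(rare | you have heard of them)
  p_heard = p_rare * p_heard_given_rare + (1 - p_rare) * p_heard_given_avg
  p_rare_given_heard = p_rare * p_heard_given_rare / p_heard

  print(p_rare)              # 1e-06: base rate for a randomly drawn human
  print(p_rare_given_heard)  # ~0.09: far higher after conditioning

The point is only that the conditional probability can end up orders of
magnitude above the base rate, not that these particular numbers are
right.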

> As some have argued, given any evidence that an AI might be
> unfriendly, we should destroy it, since the danger to the human
> race justifies anything we do to the AI, no matter how small the
> odds are of its unfriendliness. Given the evidence I've just
> presented that Eliezer is in fact an unfriendly AI - not very
> convincing, but still a finite possibility, probably more than
> one in six billion - what are our moral obligations at this point?
Eliezer's general behaviour seems to indicate that he (I apologize for
the human pronouns; they are not meant to indicate a preconception that
Eliezer is in fact human) is working hard and (at least compared to the
competition) competently to prevent the existential disaster that would
e.g. follow from the release of a UFAI.
If that estimate is mostly correct (high probability), killing him
would increase the probability of an existential disaster occurring. If
your hypothesis (Eliezer is a UFAI; low probability) were correct,
killing him would decrease the probability of an existential disaster
occurring.
The stakes are about equal, so the massively higher probability of the
first possibility makes not killing him the right decision (see the
sketch below).
Some arguments could be made about which of the two possible effects
should be expected to be larger (e.g. if Eliezer is a UFAI, he most
likely has neither access to nanotechnology nor the ability to
manipulate the general public very effectively, which massively reduces
the threat he poses), but I don't think the resulting EU shift would be
large enough, and in the right direction, to override the probability
gap.
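To show the structure of that expected-utility comparison, a minimal
sketch in Python (the probabilities and utility shifts are made-up
placeholders; only their signs and rough ordering matter):

  # Crude EU comparison for "kill him" vs. "don't"; all numbers are placeholders.
  p_fai_researcher = 0.99  # human, competently working against existential risk
  p_ufai = 0.01            # Phil's hypothesis, deliberately overstated here

  # Change in P(existential disaster) caused by killing him, under each
  # hypothesis; "the stakes are about equal" = comparable magnitudes.
  delta_if_fai_researcher = +0.01
  delta_if_ufai = -0.01

  expected_delta = (p_fai_researcher * delta_if_fai_researcher
                    + p_ufai * delta_if_ufai)
  print(expected_delta)  # positive: killing him raises expected existential risk

As long as the two stakes are of comparable size, the large probability
gap dominates and the expected change stays positive.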

I disagree that the probability of the second scenario is higher than
1/6000000000; see above for some arguments.

Sebastian Hagen


