Re: The Eliezer Threat (Re: Problems with AI-boxing)

From: Chris Paget (ivegotta@tombom.co.uk)
Date: Sat Aug 27 2005 - 18:48:44 MDT


Sidenote: Can someone please clarify the conjugation of "ver", and tell
me whether it's ever used outside of the SL4 / AGI community? Am I
correct in assuming it's used for "politeness" rather than anything
else? Google really hasn't helped, for obvious reasons...

Marcello Mathias Herreshoff wrote:
> On Sat, Aug 27, 2005 at 09:35:59AM -0700, Phil Goetz wrote:
>>And yet... very possibly the top items on the Things to Do list
>>of a just-born unfriendly AI would be
>
> Precisely what makes you think that the smarter than human UFAI would use
> your plan? The thing is smarter than you are and thus it would probably do
> something completely different in the first place.
>
> There are plenty of far more effective things an UFAI could do. For example,
> it might hack into a few remote computers, order the ingredients for nanotech
> to some lab and bribe a research assistant to mix them. Not that I really
> expect it to do that either, it would probably think of something even more
> ingenious.

How do we know that AGI (of any kind) has not already been created? The
singularity will be a very stressful event for most of the planet, so is
it not possible that an AI is attempting to delay it until we as a
species are ready for it? By creating seemingly random events that
"coincidentally" delay unfavourable projects, and similarly cause good
fortune for the projects that it decides will further its goals, it can
remain undiscovered until such time as it decides the world is ready to
meet it (which may be never). Friendly or unfriendly is irrelevant - we
have no way of knowing until those goals are reached.

Maybe all the random computer crashes that people get aren't actually
random. An AGI with Internet access may well be able to take over the
idle cycles of all the machines around the world by hacking into them
and replicating - without the machine's owner ever noticing. Such an
amount of computing power would certainly make for a very, very smart
mind, probably far more intelligent than any human.

If we really want to delve deep into paranoia, we can even use this
train of thought to explain why no human has yet managed to create a
true AGI - we have simply been hindered in our endeavours by the AGI
that is already in existence. Sandboxing has no real effect against an
AGI trying to get into the box, especially if you consider that it might
have access to the computers in the factories which produce hard drives
and BIOS chips. Maybe there have already been several projects which
should have successfully created AGI - but the existing AGI tweaked them
sufficiently that they never actually worked.

Chris

(who's now wondering if his PCs are listening to him :)

-- 
Chris Paget
ivegotta@tombom.co.uk


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT