Re: Think of it as AGI suiciding, not boxing

From: Nick Hay (nickjhay@hotmail.com)
Date: Mon Feb 20 2006 - 03:29:00 MST


Phillip Huggan wrote:
> Why do we need to influence it during the design process beyond specifying an
> initial question?

True, you wouldn't need to. But it is the AI's influence on humanity that's the
problem.

> To communicate the design, all it prints out is an
> engineering blueprint for a really efficient ion thruster or a series of
> chemical equations leading to a cheapie solar cell.

Or it prints out a stream of text that is entirely unexpected yet uncannily
compelling.... If the space of possible outputs is large enough to contain
nontrivial inventions, say >1000 bits, it is large enough to contain plenty of
surprising things. It's not clear that all of these would have a benign effect on
humans -- and we humans are known to be greatly affected by the things we read.

There need only be one such design in a space of 2^1000 outputs, if the AI has
the intelligence to find it along with the desire to influence the universe.
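To give a rough sense of the scale of that space, here is a quick back-of-the-envelope
sketch in Python (the 10^80 figure is the usual order-of-magnitude estimate for atoms
in the observable universe, used purely for comparison; the 1000-bit threshold is just
the one mentioned above):

    # Rough size of a 1000-bit output space, for the counting argument above.
    num_outputs = 2 ** 1000          # number of distinct 1000-bit strings
    atoms_in_universe = 10 ** 80     # common order-of-magnitude estimate

    print(f"2^1000 ~= 10^{len(str(num_outputs)) - 1}")   # about 10^301
    print(num_outputs > atoms_in_universe ** 3)          # True: vastly larger

So exhaustively checking, or even sampling meaningfully from, the space of possible
outputs is hopeless; the question is only what the AI chooses to put there.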

> Anything that looks like
> it might destabilize the vacuum or create an UFAI, we don't build. There are
> many fields of engineering we know enough about to be assured the product
> effects of a given blueprint won't be harmful.

In some fields we know a lot about the effects of designs humans have previously
produced, and can reliably predict the safety of a subclass of these designs.
For example, we can be fairly sure an overdesigned bridge will safely carry N
kilograms.

We know nothing about the designs a superintelligent AI could think of, as it is
smarter than us, nor about the blind spots in our ability to detect dangerous things.

> To significantly reduce most extinction threats, you need to monitor all
> bio/chemical lab facilities and all computers worldwide. A means of
> military/police intervention must be devised to deal with violators too.
> Obviously there are risks of initiating WWIII and of introducing tyrants to
> power if the extinction threat reduction process is goofed. Obviously an AGI
> may kill us off. There is a volume of probability space where the AGI intends
> to be unfriendly, yet blueprints some useful product technologies (that we
> can use for manually reducing extinction risks) before we realize it is
> trying to kill us and pull its plug.

There is perhaps a large volume of probability space where we don't realise it
intends to escape before it does, tempted as we are by its inexhaustible supply of
useful blueprints to keep it running until then.

> I also realize this is a recipe for an
> AGI that can be used to take over the world regardless if it is friendly or
> not.

My basic point is that apparently benign actions, e.g. printing out a page of text,
aren't safe. If the AI has the intention to manipulate us it doesn't need
robot arms; it can do something weird and unanticipated. You can't box a
non-Friendly superintelligent AI.

This is not to say you cannot create an Oracle which will design things for you.
It does indicate that limiting its output is not enough to make it safe.

-- Nick Hay
