Re: AI Boxing

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Jul 27 2002 - 09:14:04 MDT


Dani Eder wrote:
> Given the propensity for people to let the AI out
> of its box, for safety's sake the seed AI hardware
> design will need a requirement of "no way for a
> human to let the AI out". For example, the computer
> hardware contained within a sealed box which when
> opened will release acid to destroy the hardware.
> Output devices (like monitor screens) behind glass
> windows. The room they are in full of nerve gas,
> so any attempt to hack the output devices is very
> difficult, etc. We discussed this at length
> previously, but it seems like we need to include
> the safety provisions in a seed AI installation.

Ok, so if it is completely impossible to ever let the Seed AI out, why
build it at all?

Also, putting the monitors behind glass, etc., serves little (if any)
purpose. The programmers are going to need electronic access to the
machine and, almost certainly, will have to tweak and reprogram various
components. If they can do that, then it doesn't matter how much
physical protection is in place. A computer inside NORAD can be just as
vulnerable as one in my house if both are connected to the Internet.
Physical security works great for gold, nuclear missiles, etc., but not
for computers...

Besides, the sample size for these experiments is too small to draw any
conclusions from. And I don't think the tests done by Justin Corwin
carry any significance, since the AI researchers are not going to grab
some random person off the street and give them the power to release the AI.

James Higgins


