Re:

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Sep 16 2000 - 18:04:07 MDT


Josh Yotty wrote:
>
> Nuke the sucker. Make sure it isn't mobile. Riddle it with bullets. Melt it. EVAPORATE IT. Unleash nanobots. Include a remote fail-safe shutoff the AI can't modify. Don't give it access to nanotech. Make it human (or upgraded human) dependent in some way so it doesn't eradicate us.
>
> Or am I just not getting it? ^_^

Ya ain't getting it. One, let's suppose we do what you say, which we won't,
and that it works, which it wouldn't. So you keep the AI locked in the
basement. Two years later, the Second Singularity Institute comes along with
their AI, which isn't locked in the basement. The point of all this escapes
me.

Besides which, a real superintelligence, or maybe any random transhuman, can
probably escape through the surrounding humans. One line of text, printed on
a screen, is enough. As long as it's physically possible for the reader's
brain to assume a state such that the SI is let out of the box...

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
