From: Eliezer S. Yudkowsky (email@example.com)
Date: Sat Jul 28 2001 - 13:34:22 MDT
Gordon Worley wrote:
> At all times, such code should be under a
> red button, if you will, so it can be destroyed easily if it looks
> like the wrong people might get ahold of it (by the wrong people I
> mean ignorant, well-meaning researchers and evil corporations and
Well, maybe. On the other hand, the more you talk about the red button,
the more likely it is to be disabled in advance by the bad guys.
If, however, I can honestly say that a moderately mature seed AI
cannot be modified or even reverse-engineered without vis permission, then
this acts as a deterrent against anyone attempting to steal the code; and
moreover, even a successful theft won't help them. The fallout of an attack
will be strictly limited to the collateral damage done to the research
project and will not actually involve a risk of a corrupted Singularity.
Or rather, the risk will be very limited.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence