Re: Dangers of Superintelligence

From: Eugen Leitl (eugen@leitl.org)
Date: Sun Aug 29 2004 - 02:36:48 MDT


On Sun, Aug 29, 2004 at 02:19:09AM +0200, fatherjohn@club-corsica.com wrote:

> One safeguard that might be tried is to make sure that any AI has a
> very limited connection to the "real world", perhaps just one computer
> screen and a keyboard. That way the computer won't just grow out of
> control.

Dream on. There's a huge market for AIs, and it's precisely for real-world
control. Any AI on the network is uncontainable, period.
Any SI can eventually break out by manipulating people, causing
nonobvious side effects that accumulate over time.

-- 
Eugen* Leitl leitl
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net



