RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Jun 24 2002 - 14:13:52 MDT


When the AI is sufficiently advanced, IT will stop hackers from turning it
into a monster.

Of course, the risk of hackers co-opting the AI will be real during an
intermediate period.

In the Novamente plan, however, even when massively distributed computing is
used to enhance the system's intelligence, there will still be "central
cognitive cores" running on dedicated clusters, which carry out intensive
real-time thinking and parcel out background jobs to the millions of
distributed machines on the Net. So a hacker couldn't do much damage
unless they broke into the central cores themselves. Of course, this network
architecture assumes relatively near-future deployment, not the computing
technology of 2020 ...
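
A minimal Python sketch of that core-and-workers pattern. Everything in it
is illustrative: the names (CentralCore, Worker, BackgroundJob) and the
majority-vote check are assumptions made for the sketch, not actual
Novamente code.

    import hashlib

    # The central core keeps all real-time cognition local and hands only
    # self-contained background jobs (data, not core state) to untrusted
    # machines out on the Net.

    class BackgroundJob:
        def __init__(self, job_id, payload):
            self.job_id = job_id
            self.payload = payload   # plain data -- no access to core state

    class Worker:
        def run(self, job):
            return sum(job.payload)  # stand-in for a real background computation

    class CentralCore:
        def __init__(self):
            self.results = {}

        def dispatch(self, job, workers):
            # Send the same job to several untrusted workers and accept a
            # result only if a majority agree, so one compromised machine
            # cannot poison the core's knowledge.
            answers = [w.run(job) for w in workers]
            digests = [hashlib.sha256(repr(a).encode()).hexdigest()
                       for a in answers]
            majority = max(set(digests), key=digests.count)
            if digests.count(majority) > len(workers) // 2:
                self.results[job.job_id] = answers[digests.index(majority)]
            # Otherwise discard: the core never trusts a lone remote result.

    core = CentralCore()
    core.dispatch(BackgroundJob(1, [1, 2, 3]), [Worker(), Worker(), Worker()])
    print(core.results)   # {1: 6}

The point of the redundant dispatch is that the distributed machines see
only self-contained data, so a single compromised worker can at worst waste
cycles; corrupting the system's actual knowledge would require breaking
into the central core itself.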

-- ben g

  -----Original Message-----
  From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf Of
Smigrodzki, Rafal
  Sent: Monday, June 24, 2002 1:10 PM
  To: 'Ben Goertzel'; 'sl4@sysopmind.com'
  Subject: RE: How hard a Singularity?

  Ben Goertzel wrote:

  Before human-level AI is achieved, government won't care about the
  pertinent AI research; after it's achieved,

  ### After HL-AI is achieved and just one copy gets on the net, the horse
is out of the barn - nothing short of dismantling the Internet and using the
full force of the world government will stop hackers from turning your nice
AI into a monster (just for the heck of it). Then the AI can copy itself,
multiplying its capabilities by orders of magnitude without even a bit of
self-enhancement, and overpower humanity by sheer numbers in a few hours
(assuming transmissibility over the net).
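
A back-of-envelope version of that arithmetic (the doubling time and the
time window are assumed figures, chosen only to make the point concrete):

    # If one copy can commandeer enough hosts to double its population
    # every 30 minutes (an assumed figure, not a measurement), a few
    # hours suffice for orders-of-magnitude growth with no
    # self-enhancement at all.
    doubling_minutes = 30   # assumption
    hours = 6               # "a few hours"
    copies = 2 ** (hours * 60 // doubling_minutes)
    print(copies)           # 4096 copies from a single seed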

  The only way to avoid it (AISI) is to install millions of friendly AIs to
take over the living space where unfriendly AI could undergo Darwinian or
Lamarckian evolution.

  Rafal


