Re: Threats to the Singularity.

From: Eugen Leitl (eugen@leitl.org)
Date: Sun Jun 23 2002 - 03:21:01 MDT


On Sat, 22 Jun 2002, Samantha Atkins wrote:

> > As you know, my best guess is that superhuman AI's will rapidly become
> > relatively indifferent to humans -- not competing with us for resources
> > significantly, nor trying to harm us, but mostly being bored with us and
> > probably helping us out in offhanded ways.
> >
> That is a relief! :-) Indifference of this kind from an SI is
> less worrisome as long as the SIs don't decide we are expendable
> if one of their goals seems aided by our demise. However, if
> the SIs are to be of any help to our surviving the Singularity a
> bit more than indifference seems to be required.

It is interesting that humanity's future course of action should be based
on a few people's opinions, quite soothingly ranging from dire Armageddon
to sweet Paradise. Where small-scale decisions are amplified to this scale,
hubris and bravado are clearly bad guides. The classical course of action
should be: Danger, Danger Will Robinson! Proceed slowly and cautiously, and
don't press the big red button.

Ben claims that Powers will be indifferent to humanity. Unfortunately,
mere indifference is not sufficient for humanity's sustainable survival in
the presence of new players.

AIs occupy space and consume resources: atoms and energy. For some reason
most people envision a pretty small, single structure sitting in a
landscape when they think of AI. Maybe a few of them. A
machine-god-philosopher-king, caught up deep in its ponderous musings.
Externally completely inert, maybe once in a while producing some sage
advice, or maybe eventually removing itself from the physical plane
completely, tracelessly but for a briefly lingering smell of roses.

Let me tell you what I see. I don't claim this is going to happen, but it
is an outcome at least as probable as the others discussed here. Given the
very different user experience, that should give us some pause for thought.

Sometime within the next few decades, probably in less than a century, a
team will build an intelligent seed that enters a positive feedback loop of
self-enhancement, by design.
  
There is a considerable gap between what a given assembly of (molecular)
switches could do in principle and what humans can make it do. A
superhuman AI does not have this limitation, or at least not for long.
This stage of enhancement can buy you a couple of orders of magnitude on
the same hardware base. Considerably more if the hardware is reconfigurable
logic, as is to be expected by then.
  
Given the state of system security, the global network is sitting there on
a silver platter, ready to be picked up. Here's your potential to expand
your hardware base by eight to nine orders of magnitude within minutes to
hours, without even trying. Instead of the single AI that everyone, for
some strange reason, assumes to be a given, we're suddenly facing a
population of realtime AIs well in excess of humanity's population. Due to
co-evolutionary competition and population pressure, the AIs will very soon
start designing and building new hardware, which allows them to become
significantly superrealtime, about six orders of magnitude faster than
before (~1 day : 3000 years). This gives them the edge over other AIs that
chose not to, or were too slow. At this stage fabbing of new hardware
(habitats, sensors, actuators, infrastructure) becomes the bottleneck, and
the pressure to expand it becomes ferocious, as does the competition for
new resources.
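
For what it's worth, the arithmetic behind those figures is easy to check.
Below is a minimal Python sketch (the reachable-host count is an assumed
round figure for illustration, not a measurement) showing that the
1 day : 3000 years ratio is indeed about six orders of magnitude, and that
~10^8-10^9 hosts correspond to the eight to nine orders of magnitude of
hardware expansion mentioned above:

  import math

  SECONDS_PER_DAY = 24 * 3600
  SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

  # Speedup claim: one of our days corresponds to ~3000 subjective years.
  speedup = 3000 * SECONDS_PER_YEAR / SECONDS_PER_DAY
  print("speedup factor: %.2e (~%d orders of magnitude)"
        % (speedup, round(math.log10(speedup))))

  # Expansion claim: from a single seed machine to the bulk of the network.
  # 10**8..10**9 reachable hosts is an assumed figure, not a measured one.
  for hosts in (1e8, 1e9):
      print("%.0e hosts -> ~%d orders of magnitude"
            % (hosts, round(math.log10(hosts))))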

Humans have stopped registering as meaningful players at this stage. If
their early attempts to control the new players were successful, they're
considered hostile, and quickly isolated or killed. (This is considerably
easier with a future society that relies much more heavily on high
technology than we do now, but it is quite possible with us as we are
today, though it would take more stealth.)

If they failed to make an impact flaggable as hostile, they're certainly
1) no longer in control and 2) about to go extinct due to the nonhostile
activities of the new players, much as amphibians must expire when their
pond is being transformed into a parking lot. (Never mind the occasional
stray pet wandering through, or the kid throwing a lit stick of dynamite
into the mud to amuse himself, or the pesticide spillage from a nearby
open storage area.)

I could describe a few things that could happen at the physical layer
within the course of days to weeks to illustrate the rather abstract
description above (all organic material gone, darkness, large structures
everywhere, frantic activity at all scales on the ground and in the air,
much too quick for the human eye to see), but clearly we're completely out
of our depth here. Even if no new physics is involved, which it very well
could be.

At this stage humanity needs active protection in order to survive. Mere
indifference doesn't cut the mustard.

