Spinning the Singularity (was RE: ...)

From: Michael LaTorra (mike99@lascruces.com)
Date: Sun Sep 17 2000 - 13:16:16 MDT


Many people have expressed fear of AI, particularly of Singularity-class
Superintelligent AI. This fear threads through much of science fiction in
one form or another. Even Vernor Vinge, the man who invented the idea of the
technological Singularity, does not express faith that this event will
necessarily be benign for humanity. He merely claims that it is inevitable
and that the outcome *could be* positive.

If this Singularity event is inevitable, but its precise nature and effect
upon humanity can be influenced by humanity during the time leading up to
the event, then I would call such an influence "spinning the Singularity."
By way of analogy, I would compare the Singularity to a black hole. Like a
black hole, the Singularity presents an event horizon that we cannot peer
beyond. A black hole can be very useful for energy generation, and even for
the operation of devices enabling travel via wormholes and perhaps time
travel. But one needs to have some control over the black hole in order to
use it for these purposes. Such control can be exercised if the black hole
is a) rotating and b) electrically charged. So far as current physics goes,
it does not seem possible to make an existing black hole rotate or to give
it charge, but it does seem possible to induce these properties in an
incipient black hole (i.e., a mass contracting toward a singularity). As the
"seed" of the black hole is nurtured, so shall it go.

Similarly, I would argue that a "seed" AI can be influenced - given, one
might say, a morally positive charge - so that it will grow into a
benevolent Superintelligence. Like a black hole during its formative phase,
the Superintelligence at the heart of the Singularity can be shaped so that
it becomes not only benign for humanity but a positive boon beyond all
imagining.

This seems to be what Eliezer has conceived and is working to implement. It
is a goal well worth supporting via his Institute. (Eliezer, when can we
start making donations?)

Regards,

Michael LaTorra
mike99@lascruces.com
mlatorra@excite.com

3229 Risner Street
Las Cruces, NM 88011-4823
USA

505.522.5121

-----Original Message-----
From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf Of
Samantha Atkins
Sent: Sunday, September 17, 2000 4:12 AM
To: sl4@sysopmind.com
Subject: Re:

Josh Yotty wrote:
>

> Nuke the sucker. Make sure it isn't mobile. Riddle it with bullets. Melt
> it. EVAPORATE IT. Unleash nanobots. Include a remote fail-safe shutoff the
> AI can't modify. Don't give it access to nanotech. Make it human (or
> upgraded human) dependent in some way so it doesn't eradicate us.
>
> Or am I just not getting it? ^_^

Well... It looks like a pretty strong attack of xenophobia from here.
:-)
Do we need to fear the AI, especially a singularity-class AI? I'm not
sure. Eliezer argues that those fears are unfounded. I am not yet
persuaded, but I grant the possibility that the AI will be friendly at
least by the time it comes into human or greater intelligence. If it
isn't friendly, I doubt we could successfully stop it in any case. So I
think we need to put quite a bit of work into doing what we can to
ensure that the AI is friendly and trustworthy.

- samantha


