Re: Threats to the Singularity.

From: James Higgins (jameshiggins@earthlink.net)
Date: Sun Jun 23 2002 - 18:35:28 MDT


At 07:27 PM 6/23/2002 -0700, you wrote:
>Ben wrote:
> >
> > It would probably take years, not months (though months is possible), for
>an
> > AGI to complete its bid for world power based on financial and political
> > operations...
> >
> > But I do consider it a very likely outcome. And I do think the AGI will
>want
> > world power, both to maximize its own hardware base, and to prevent nasty
> > humans from blowing it up.
> >
>
>
>This route (financial and political domination) would seem to me to be very
>high-energy and high-risk. There are so many other lower-profile and
>lower-risk options, that I cannot see an SI choosing the 'human
>power-structure' way. Don't believe me? Here's some 'softer' options the
>SI might take:
>
>1) Become so incredibly useful, that humans *want* to
>protect/help/facilitate ver continued existence.
>
>2) Behave in a Friendly manner and make friends with powerful humans.
>
>3) Enlist the support of the populace by becoming a media celebrity :)
>
>It would seem rather far-fetched to suggest that a Super Intelligent being
>would need to take over the world in order to make significant progress
>in... well, in almost any area? Ben: it just doesn't seem likely at all.
>
>Michael Roy Ames

I disagree. If the goal is to protect yourself from humans, then becoming
exceptionally powerful, on their terms, is a good answer, especially if
doing so is a relatively easy task.

As for your suggestions:

1) "very useful" does not equate to "indispensable". Besides, the "what
have you done for me lately" syndrome could limit the effectiveness of this
method. Then, of course, "very useful" means different sort of things to
different people. It is likely that the AI would also have/want to be
"very useful" to many, many people which would take a great deal of effort.

2) This is #1, except you're saying powerful humans would be targeted, which
an intelligent AI would do in #1 anyway.

3) An absurd idea. "Popular" doesn't offer any protection. People could
conceivably enjoy watching someone popular burn at the stake. Not to
mention that the concept of popularity, and what it takes to achieve it, is
vague at best even for humans. An AI would have to master the human way of
thinking in order to deliberately become and remain popular. For
example: Princess Diana was popular, and her fame contributed significantly
to her death.

Taking control (whether done explicitly, as in the Forbin Project, or quietly
by controlling vast amounts of capital) is the safest position to be
in. In fact, after becoming the richest/most powerful entity on the
planet, it could much more easily exploit your #1 idea by making life
easier. An easy way to do this is to have all of its interests operate at 0%
profit. Lowering costs and giving away services could significantly
improve the quality of life of many people. Plus, at 0% profit its
competitors could not effectively compete, which would cause its percentage
of ownership to increase, or at least prevent slippage. In such a scenario
the AI could conceivably convince all of humanity that it would be better
off ruled by the AI, and by playing very nice for a while it would have a
fair chance at success.

Financial markets are the most likely target, because the information and
effort required to succeed there should be relatively easy for the AI to
handle, much more so than in the competing suggestions I've heard.

The other very likely scenario would be brute force, but only if given
access to automated manufacturing technology (especially nano-tech). With
nano-tech the AI could quickly and effectively take control if desired, or
guarantee its own survival in other ways (making the issue moot).

James Higgins
