RE: Threats to the Singularity.

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 23 2002 - 12:18:30 MDT


Hi James,

> Well, we would hope that the team who created this AI doesn't give it
> access to the global network! Or, if it does, it would be so highly
> restrictive as to (at least for the near term, pre-super intelligence)
> prevent such uncontrolled expansion. However, they could conceivably be
> ignorant enough to grant such access or have a flaw in their security.

This is a tricky issue. The global network is an important source of info
for any AI in its learning phase. One wants to give one's young AI net
access via a kind of "reverse firewall" that allows it to gather data but
not to cause damage.
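To make that concrete, here's a rough sketch of what I mean -- purely
illustrative, the hosts and port are made up, and a real setup would need
much more than this. The idea is an HTTP proxy that forwards only GET
requests to an allowlisted set of data sources, so the AI can pull
information in but can't push anything out:

    import urllib.request
    from urllib.parse import urlparse
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical allowlist of read-only data sources -- illustrative only.
    ALLOWED_HOSTS = {"www.gutenberg.org", "en.wikipedia.org"}

    class ReadOnlyProxy(BaseHTTPRequestHandler):
        """Lets the client fetch data from approved hosts, but refuses
        any request that could write to the outside world."""

        def do_GET(self):
            # Proxy-style clients put the full URL in the request line;
            # fall back to the Host header otherwise.
            url = (self.path if self.path.startswith("http")
                   else "http://" + self.headers.get("Host", "") + self.path)
            if urlparse(url).hostname not in ALLOWED_HOSTS:
                self.send_error(403, "host not on the read-only allowlist")
                return
            try:
                with urllib.request.urlopen(url) as upstream:
                    body = upstream.read()
            except Exception:
                self.send_error(502, "upstream fetch failed")
                return
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def do_POST(self):
            # Every write-capable method is refused outright.
            self.send_error(403, "outbound writes are blocked")

        do_PUT = do_DELETE = do_PATCH = do_CONNECT = do_POST

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), ReadOnlyProxy).serve_forever()

Of course, a real reverse firewall would have to worry about covert
channels -- even a plain GET leaks bits in its choice of URLs -- but the
basic asymmetry (read yes, write no) is the point.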

> > co-evolutionary competition and population pressure the AIs will very
> > soon start designing and building new hardware, which allows them to
> > become
>
> And just how would they make the leap from running on silicon to
> building silicon? I'm almost certain that there is no capacity to do
> this today. They would have to be able to perform 100% of the
> manufacturing and assembly operation completely via computer, assemble
> the working technology and connect it to the net. All without human
> intervention. Even if there was a facility which had all of this
> capability computer controlled (which I don't believe is the case, much
> of it is manual - moving pieces between manufacturing workstations,
> etc.) the operators would have to sit there while the machines spent
> hours (days?) "doing their own thing".

Your point is an excellent one. However, there are plenty of scenarios one
can conjure to counter it.

For instance, suppose the AI finds a way to threaten a lot of people with
death, and then basically *blackmails* humans into creating a fully
automated computer-and-robot-manufacturing facility for it....

Or, more probably, suppose it finds some group of humans and promises them
lots of goodies if they build it the right automated manufacturing
facilities.... It's almost inconceivable that an AGI, capable of predicting
financial markets and hence getting lots of $$, couldn't find *some* group
of humans to build it whatever it wanted for cash payment...

I can see a future gov't wasting a lot of effort protecting against
military-style attacks by AGI, and then finding that an AGI actually takes
over the world via financial & political machinations...

Remember how the Brits took over New Zealand. They never conquered the
Maori militarily. They just wheeled and dealed the land out from under
them, in deals with varying degrees of crookedness....

All a superhuman AGI needs is to be able to outsmart us in financial and
business situations, and it will effectively own the world within a matter
of years. And this can happen behind the scenes; large corporations will
allow it to happen if it's in their own short-term financial interest,
i.e. if it boosts their quarterly profits, regardless of the potential
long-term consequences.... And if US corporations won't (yah, right), some
foreign corporations will...

-- ben G


