Risks of distributed AI (was Re: Investing in FAI research: now vs. later)

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Thu Feb 21 2008 - 10:30:28 MST


--- Peter de Blanc <peter@spaceandgames.com> wrote:

> On Wed, 2008-02-20 at 19:59 -0800, Matt Mahoney wrote:
> > > The scenario I'm most afraid of is not a hard take-off leading to
> > > unfriendly AGI, but a pseudo-AI falling into the hands of evil men.
> >
> > Bad science fiction.
>
> (I disagree with the above, but) do you really consider this a
> counterargument?

Not by itself, but I suppose I am also guilty of presenting doomsday
scenarios. My objection is to the idea that AI could be developed in
isolation in a secret lab somewhere. Or worse, that the technology could be
stolen, as if the thieves could be smart enough to use it without being smart
enough to develop it themselves.

I believe that AI will be developed where the most computing power and
information are already available: on the internet. I described one possible
design in http://www.mattmahoney.net/agi.html and did my thesis work to show
that a very abstract model of this architecture is robust and scalable. The
idea is that intelligence will emerge from a huge number of simple but
specialized peers and an infrastructure that routes messages to the right
experts. It does not require any advances over current technology. It is a
P2P network that creates a market where information has negative value and
peers compete for computing resources (memory and bandwidth) in an economy
that rewards intelligence and cooperation.
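
To make the routing idea concrete, here is a minimal sketch in Python. It is
not the protocol from the linked design; the class names, the keyword-overlap
score, and the fixed-size cache are illustrative assumptions. Each peer
advertises a small vocabulary, messages are forwarded to the peers whose
vocabularies best match, and a peer's limited cache is the scarce resource
that senders compete for.

from collections import deque

class Peer:
    """A specialized peer that understands a small set of keywords."""

    def __init__(self, name, vocabulary, cache_size=100):
        self.name = name
        self.vocabulary = set(vocabulary)       # topics this peer "understands"
        self.cache = deque(maxlen=cache_size)   # scarce resource: limited memory

    def match(self, message):
        # Crude relevance score: fraction of message words in this vocabulary.
        words = set(message.lower().split())
        return len(words & self.vocabulary) / max(len(words), 1)

    def receive(self, message):
        # Storing a message displaces the oldest cached one, so senders
        # effectively compete for the receiver's memory.
        self.cache.append(message)

def route(message, peers, top_k=2):
    """Forward a message to the k peers whose vocabularies best match it."""
    ranked = sorted(peers, key=lambda p: p.match(message), reverse=True)
    for peer in ranked[:top_k]:
        peer.receive(message)
    return [p.name for p in ranked[:top_k]]

peers = [
    Peer("chess-peer",   ["chess", "opening", "endgame"]),
    Peer("weather-peer", ["rain", "forecast", "temperature"]),
    Peer("cooking-peer", ["recipe", "bake", "oven"]),
]
print(route("what is the forecast for rain tomorrow", peers))
# ['weather-peer', 'chess-peer']  (second slot is an arbitrary zero-score peer)

In a real network the scoring and eviction rules would be whatever each owner
configures; that local discretion is where the competition for memory and
bandwidth comes from.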

The design is friendly, at least initially, because friendliness is a subgoal
of the evolutionarily stable goal of acquiring resources. Each peer, being
relatively stupid, would be administered by a human owner. A typical
configuration would broadcast messages posted by the owner on his or her
favorite topic, collect and relay messages that share the same keywords, and
prioritize incoming messages to reward valuable and accurate sources of
information (in the opinion of the owner) and block spammers. Well-behaved
and intelligent peers will be rewarded by having their own messages accepted
and propagated to a wider audience. As technology allows, peers will become
more intelligent and more of these tasks will be automated.
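
As a similarly hypothetical sketch of a single peer's policy, the fragment
below keeps a per-source reputation score driven by the owner's ratings,
keeps only on-topic messages, reads high-reputation sources first, and drops
sources that fall below a blocking threshold. The threshold and the update
rule are illustrative assumptions, not part of the design above.

class OwnerPeer:
    """A peer administered by a human owner who rates incoming sources."""

    BLOCK_THRESHOLD = -3.0   # illustrative cutoff for blocking a source

    def __init__(self, topic_keywords):
        self.topic = set(topic_keywords)
        self.reputation = {}   # source id -> accumulated score from owner ratings
        self.inbox = []        # (reputation, source, message), best sources first

    def rate(self, source, delta):
        # Owner feedback: +1 for valuable/accurate information, -1 for junk.
        self.reputation[source] = self.reputation.get(source, 0.0) + delta

    def accept(self, source, message):
        rep = self.reputation.get(source, 0.0)
        if rep <= self.BLOCK_THRESHOLD:
            return False       # blocked spammer: drop silently
        if not set(message.lower().split()) & self.topic:
            return False       # off-topic: do not keep or relay
        self.inbox.append((rep, source, message))
        self.inbox.sort(key=lambda item: item[0], reverse=True)
        return True

me = OwnerPeer(["chess", "opening", "endgame"])
me.rate("grandmaster-peer", +1)
me.rate("spam-peer", -5)
print(me.accept("grandmaster-peer", "a new chess opening trap"))   # True
print(me.accept("spam-peer", "cheap chess sets for sale"))         # False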

The idea that AI could fall into the "wrong hands" is like the Internet
falling into the wrong hands. It is true that the Storm botnet (
http://en.wikipedia.org/wiki/Storm_botnet ) controls about 0.1% of the
internet's computing power, similar in size to Google. There are also
critical failure points, such as the software that updates the root DNS
servers. But generally, I think distributed ownership greatly lessens the
risk.

I believe a distributed AI is just as susceptible to a singularity and loss of
human control as any other design. Initially, peers will communicate in natural
language, with each peer understanding only a small subset. That will be true
as long as humans are the dominant source of information. But as peers become
more intelligent, the language will evolve. Peers will develop their own
protocols that will be incomprehensible to humans, and humans will become less
and less relevant to the system's evolution.

Also, as Eliezer pointed out, RSI need not take an evolutionary path. If the
peers all cooperate, then evolution does not apply. Evolution is not an
equilibrium process. It lies on the boundary between stability and chaos,
like all complex, incrementally updated systems. It is punctuated by mass
extinctions, plagues, population explosions, and other ecological disasters,
such as the introduction of a deadly poison, oxygen, into the
atmosphere. I believe we are alive today because of the great diversity of
species, none of which can survive in every environment (and now robots roam
Mars). And by the anthropic principle, maybe we have just been lucky so far.

Comments?

-- Matt Mahoney, matmahoney@yahoo.com


