Re: Complexity tells us to maybe not fear UFAI

From: Phil Goetz (philgoetz@yahoo.com)
Date: Thu Aug 25 2005 - 09:57:19 MDT


Mikko Särelä <msarela@cc.hut.fi> wrote:
> On Thu, 25 Aug 2005, Chris Paget wrote:
> > Phil Goetz wrote:
> > > By limiting the computational power available to an AI to be one or
> > > two orders of magnitude less than that available to a human, we can
> > > guarantee that it won't outthink us - or, if it does, it will do so
> > > very, very slowly.
> >
> > You're assuming that the human brain is operating at more than 1% of
> > its theoretical computational power here (and I'd be interested to
> > see how you plan to calculate or prove that).

Evolution doesn't construct machines that operate at < 1% of the
efficiency possible with the given materials. Photosynthesis is
about 12% efficient, comparable to solar cells, which are made
out of human-selected materials rather than proteins.
Fat stores energy more densely than our best batteries.
Birds are probably much more efficient flyers than airplanes.
Bicycles enable a human to travel more efficiently than walking
animals, but only on roads. (There is a case to be made that
a wheel could not evolve, except at microscopic sizes.)

> > It is at least possible that the AI will be able to self-optimise
> > to such a degree that it could function effectively within any
> > computational limits.

No. That is exactly what I was claiming is not possible.
I'm not proving it, but I think I made a pretty good argument.
Provide me with a counter-argument, not a mere denial.

> And you are assuming that many of the problems the AGI needs to solve
> have computationally tractable solutions. This makes the question of
> whether P = NP highly relevant to such a hypothetical situation.

(I think the latest "you" also refers to me?)

> If P = NP and the AGI is the first to discover this, then it will be
> able to do things a lot faster than otherwise would be expected.
> Also, if the truly interesting problems have good polynomial (or
> rather linear, or sublinear) approximation algorithms, then taking
> away computational power does not really help that much.

This is a point worth making. I don't think it ultimately
matters, unless P = NP.

The problems that may have polynomial, or even linear
algorithms, are specialized problems. An AGI could construct
a subroutine that it could call to solve these problems for it.
In exactly the same way, a human could write a program to solve
these problems for him/her in polynomial or linear time. This
might enable said human to make a lot of money, say by cracking
Internet commerce traffic, or by simulating protein folding,
but SL4 is not worried about that person being a threat to humanity.
In exactly the same way, an AGI might write all sorts of
high-speed subroutines that can solve problems at higher rates
than we expect, but the AGI's "conscious" general-purpose
intelligence is NOT going to be one of those things that can
be converted into a polynomial algorithm. Unless P = NP,
the AGI will be of the same order of magnitude of threat as
a human who develops some surprisingly efficient new
algorithms, and considerably less of a threat than a human who
develops a quantum computer.
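
Here's a toy sketch of what I mean (my own illustration, in Python, with
a made-up example formula): the inner check runs in time linear in the
size of the formula, but the general-purpose search that calls it still
walks through 2^n candidate assignments in the worst case. A fast
subroutine doesn't make the outer search polynomial unless P = NP.

from itertools import product

def check(clauses, assignment):
    # Fast subroutine: verifies one candidate assignment in time linear
    # in the size of the formula.
    return all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses)

def brute_force_sat(clauses, n_vars):
    # General-purpose search: still exponential in n_vars, no matter how
    # fast check() is, unless P = NP gives us something better.
    for assignment in product([False, True], repeat=n_vars):
        if check(clauses, assignment):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x3), clauses written as signed variable indices
print(brute_force_sat([[1, 2], [-1, 3]], 3))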

Not to say that a human with a quantum computer isn't a
considerable threat. But that doesn't invoke the same fear
factor here on SL4.

BTW, I suspect that the NSA will be the
first organization to develop quantum computers, and that it
will have them for several years before anyone finds out about it.
I have no inside information about this, but it makes sense.
They were the original main client for supercomputers, along
with Los Alamos; nowadays Los Alamos is much less important,
and quantum computing is much more relevant to the NSA than to
Los Alamos, AFAIK.

Am I willing to bet the future of humanity that P != NP?
I'm confident enough that P != NP to consider the expected
benefits worth the gamble.

> Final note, I am not speaking for AGI-boxing, nor do I consider it a
> good strategy.

[Digression:
The debate about AI-boxing is useless unless you have criteria
for when an AI is smart enough that it needs to be boxed.
We aren't going to even try boxing the AIs that we're working
on for a very long time, because we believe they're irreparably
stupid. Even getting to the point where people believe
that AI-boxing is a better idea than just building and running
the thing on a computer attached to the internet with no
security precautions will take decades.
I myself take no precautions.
Telling the general public that AI-boxing is a bad idea
skips one or two shock levels.
/Digression]

> Then going to another topic I've been thinking about for a while. If
> I've understood correctly, one of the reasons a spike, a singularity,
> is predicted soon after the development of AGI is that it could devise
> itself better hardware in consecutive cycles and thus each time halve
> the time it takes to develop the next generation.
>
> I would like to counter-argue against this proposition. The whole
> proposition assumes that developing the next generation of hardware is
> computationally no more complex than developing the current generation
> was, or at least that the complexity does not go up fast.
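
(For concreteness, the arithmetic behind the "spike" version of that
argument, as I understand it: if the first design cycle takes time T
and each subsequent cycle takes half as long, the total time for all
future generations converges to 2T. A quick sanity check in Python:)

T = 1.0
# Sum the first 50 terms of T + T/2 + T/4 + ...; the partial sums
# approach 2*T, which is why "infinitely many" generations would fit
# into a finite span under the halving assumption.
print(sum(T / 2**k for k in range(50)))   # ~2.0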

Yes, and this proposition is wrong; we already know that our
increase in computational power requires an exponential increase
(I think; pretty close, anyway) in monetary investment. Plot
dollars invested vs. transistors per square cm, and Moore's law
looks a lot less impressive. This relationship between scientific
investment in a field and payback is a general rule, expressed
as a law in the 1980s by a man whose name I can't remember but
starts with an 'R'; it was in my Transvision 2004 presentation on
the myth of accelerating change.
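
A toy illustration of the point (my own made-up numbers, not real
data): suppose transistor density doubles every 2 years, but the
investment needed to develop each generation doubles every 3 years.
Density per dollar then grows far more slowly than the headline curve.

for year in range(0, 21, 2):
    density = 2 ** (year / 2)      # doubles every 2 years (Moore's law)
    investment = 2 ** (year / 3)   # assumed: doubles every 3 years
    # After 20 years, density is up ~1000x but density per dollar only ~10x.
    print(year, round(density, 1), round(density / investment, 1))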

> For the past decades we have lived with approximately exponential
> growth, doubling the computational capacity of a chip every two years.
> At the same time, the computational effort we have put into generating
> each next generation has also grown exponentially in two ways.
> Firstly, we spend more computer time designing the next generation
> chip, and secondly, we spend much, much more brainpower to solve the
> problems each new chip generation brings. As there are several problem
> fields in computer hardware design that can be run in parallel, having
> lots of humans working on the problems does not seem like a solution
> that loses much to the overhead.

Right, exactly!

- Phil Goetz



