Re: Is a theory of hard take off possible? Re: Investing in FAI research: now vs. later

From: William Pearson (
Date: Wed Feb 20 2008 - 20:01:49 MST

On 20/02/2008, Nick Tarleton <> wrote:
> On Wed, Feb 20, 2008 at 4:19 PM, William Pearson <> wrote:
> > If you accept that the rate of improvement of a learning system is
> > bounded by the information bandwidth into it,
> I can't see why this would be the case. Processing limitations
> (including memory bandwidth) and algorithm efficiency seem much more
> important.

I'm going to taboo* myself from using the word "learning", and I would
appreciate it if you attempted the same. Instead I will say system
evolution bifurcation, or bifurcating for short.

What do I mean by this? Imagine a deterministic computer shut in a
box: it will only ever go down one path. If you had a bunch of
identical copies, each pre-programmed to answer a question at a set
time, they would all give the same answer. Such a system cannot
increase its predictive power over the outside world. That is not to
say that it cannot answer a question wrongly at one point and
correctly later, just that its predictive power is limited to what is
already there.

Now imagine you added a link to the outside, through which one bit
could enter. Depending upon that bit, the system's evolution could
bifurcate, or it could ignore the bit and stay singular. Bifurcation
creates the potential for growing the ability to predict the world, if
parts of the world happen to correlate with the bit that was
bifurcated on.
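To make this concrete, here is a toy sketch in Python (the update
rules and names are made up by me purely for illustration): every copy
of a boxed deterministic system follows the same trajectory, while a
single external input bit is enough for trajectories to diverge.

```python
def boxed_system(steps):
    """Deterministic and sealed: same trajectory on every run."""
    state = 0
    history = []
    for _ in range(steps):
        state = (state * 3 + 1) % 17  # arbitrary fixed update rule
        history.append(state)
    return history

def open_system(steps, input_bits):
    """One external bit per step; the trajectory can depend on it."""
    state = 0
    history = []
    for bit in input_bits[:steps]:
        state = (state * 3 + 1 + bit) % 17  # update now depends on input
        history.append(state)
    return history

# Every copy of the boxed system answers identically:
assert boxed_system(10) == boxed_system(10)

# The open system's evolution bifurcates on the first differing bit:
a = open_system(10, [0] * 10)
b = open_system(10, [1] + [0] * 9)
assert a != b
```

The point is not the particular update rule; any deterministic rule
would do. Only the presence of the input bit allows two otherwise
identical copies to end up in different states.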

More bits given to the system lead to more possible bifurcations.
Exponentially increasing numbers of bifurcations are needed for
exponential increases in predictive power.
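One way to count this (my own illustration, not a quote from anywhere):
a deterministic system fed n input bits has at most 2^n possible input
histories, and therefore at most 2^n distinct trajectories it can
bifurcate among.

```python
from itertools import product

def run(bits):
    """A deterministic update driven by a sequence of input bits."""
    state = 0
    for b in bits:
        state = (2 * state + b) % 1000003  # arbitrary fixed rule
    return state

# With n input bits there are at most 2**n input histories, so at
# most 2**n distinct final states -- the bound on bifurcations.
for n in (1, 2, 8):
    finals = {run(bits) for bits in product((0, 1), repeat=n)}
    assert len(finals) <= 2 ** n
```

For this particular rule the bound is achieved exactly (every input
history lands in a distinct state), but in general a system may ignore
some bits and realise fewer bifurcations than the input allows.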

A concrete example: let us say we are trying to get an AI to evolve to
a state where it can consistently use my IP address to reach my
computer. It follows its programming, let us say by incrementing a
counter and using that counter in the destination field of TCP/IP
packets. I give it no feedback, nor any other bits of information
about the world. It will never be able to bifurcate to a state where
it always uses the correct IP address; it is just throwing packets
into the void. At some points it will even get the IP address right,
but it won't know that it has.
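Here is a sketch of that blind guesser (names and the particular
target address are my own inventions): it steps through addresses
deterministically, and because not a single bit comes back from the
world, its final state is fixed in advance, whatever the target is.

```python
TARGET = 0x5DB8D822  # some fixed 32-bit address, unknown to the guesser

def blind_guesser(max_guesses):
    """Deterministic counter; receives zero bits from the world."""
    for counter in range(max_guesses):
        packet_dst = counter  # put the counter in the IP field and "send"
        # The packet goes into the void: no reply, no feedback.
        # Even when packet_dst == TARGET, the guesser has no way to
        # mark that guess as special, so its state never bifurcates.
    return counter  # the final state is the same on every run

# Two runs are identical regardless of what TARGET is:
assert blind_guesser(1000) == blind_guesser(1000) == 999
```

It passes through the correct address at some point, but nothing in
its state records that event.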

Now say I give it one bit per second of input bandwidth: if it used
the correct IP address in the previous second it gets a 1, otherwise a
0. It might guess the correct IP address first time (and guess only
that one address), get a 1, and thereby gain 4 bytes of information.
But as we said, the AI is deterministic and the IP address is fixed,
so we might be forgiven for thinking the game is rigged and that the
system already had that information. If we randomise the IP address
and set it off again then, on average, no matter how much computing
power the system had, it would gain only 7.45x10^-9 bits of the IP
address per second. This isn't the most efficient coding of our input
to it; we could simply give it the IP address directly, one bit per
second. The system would still be limited to bifurcating once per
second.
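The 7.45x10^-9 figure can be checked with a couple of lines (this
assumes, as I read the setup, one guess per second at a uniformly
random 32-bit address, where a hit reveals the full address):

```python
# Each second, one guess has probability 2**-32 of being right, and a
# hit is worth the full 32 bits of the address, so the expected
# information gained per second is 32 * 2**-32 bits.
ADDRESS_BITS = 32
p_hit = 2.0 ** -ADDRESS_BITS            # chance a single guess is right
bits_per_second = ADDRESS_BITS * p_hit  # expected bits learned per second

print(f"{bits_per_second:.3g} bits/second")  # prints 7.45e-09 bits/second
```

By contrast, simply transmitting the address one bit per second
delivers all 32 bits in 32 seconds, which is what makes the input
coding, not the computing power, the binding constraint.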

See section 5.2 of on AIXI for some
discussion of the importance of giving more information to the system
to get it to learn, sorry, bifurcate, quickly.

Since these two situations are impossible for us to distinguish in a
running AI, we shall assume the higher bound and count the number of
potential bifurcations, and thus the number of input bits, when
determining the potential rate of change of the system.

Do not confuse the number of bits of state changed with the number of
input bits. It is possible for the system to change terabytes of state
without bifurcating.

> > When people start positing new physics that they tend to lose me. Yep,
> > I know our physics isn't perfect. But reasoning using the possibility
> > of new physics is a bit too much of a leap of faith for me.
> Positing any *specific* new physics is a bad idea, but it's not
> unlikely that our physics is incomplete in *some* significant way,
> perhaps even one relevant to a hard takeoff.

Physics may be incomplete in a way relevant to the changing of
Eliezer's arm into a blue tentacle. He would argue that he shouldn't
expect it. And similarly, if we can get some science behind the
possibility of hard take off, we will be in a better position to
determine whether we should expect it or not.

Will Pearson


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT