Re: AI timeframes

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Sat Apr 10 2004 - 00:39:54 MDT


On Apr 9, 2004, at 5:42 PM, Elias Sinderson wrote:
> My reflecting on the idea, however, has brought me to another,
> potentially more important, question: Is it reasonable to assume that
> a successful SAI project can be controlled? ... James made the
> assertion [2] that one could not control the results and I imagine
> that this may be a road well traveled for him - if you have the time,
> could you expand some more on this point (or provide some appropriate
> references)?

Actually, all that the arguments and discussions over time have shown
is that "control" means different things to different people, though
certain things are generally agreed upon. The limits of control are
highly arguable, depending in part on what one means by "control".
However, in the context of my comment that you were responding to:

AI, like money, doesn't have any value unless you use it. And the
stronger the AI, the more uses and the more value it has. At one end
of the spectrum there is the argument that a sufficiently intelligent
AI is not controllable at all, which is a pretty strong argument from
theory. But even if you exclude this case, you end up with an
interesting and complicated scenario.

The complication comes from competing forces: You exploit your AI by
using it, with a stronger AI being more valuable, yet the stronger the
AI and the more you use it, the more likely you will find yourself
competing with someone else's implementation. It is no more
controllable than any other technology, and in this case, it is a
particularly cheap advanced technology to implement and relatively easy
to steal.

You end up generating an AI arms race. Someone with more resources can
crush your AI advantage by brute force, simply capitalizing a stronger
AI, unless you grow the strength of your own AI as fast as possible.
It is a first-mover advantage writ large, and because the barrier to
entry is very low as such things go, you can be leapfrogged very
quickly if you do not follow an aggressive growth curve. All the
while, you are rapidly approaching the cliff of general capability
where the AI is no longer even controllable because it has grown too
strong.

In this scenario, there are two modes of losing control, depending on
your strategy. You can keep things very secret and controlled, but you
are only delaying the inevitable take-off as other implementations
eventually get built, and you will be leapfrogged as more aggressive
implementations come online. On the other hand, you can put the pedal
to the metal and grow the AI as fast as possible so that no other
implementation can catch you, but with the enormous caveat that you
will rapidly hit the point where your AI is uncontrollable.

In both cases, you lose control sooner rather than later, and the
average outcome is almost the same either way. This is what I was
referring to in my previous comment.
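
To make the dynamic concrete, here is a toy sketch of the two
strategies. All of the numbers here, the growth rates, the rival's
lag, and the "control threshold", are made-up assumptions chosen only
to illustrate the shape of the argument, not estimates of anything:

CONTROL_THRESHOLD = 100.0   # assumed capability past which the AI is uncontrollable
RIVAL_LAG = 3               # assumed years before a rival implementation comes online

def years_until_loss(own_rate, rival_rate):
    """Simulate both loss modes and report whichever one happens first."""
    own, rival = 1.0, 0.0
    for year in range(1, 51):
        own *= own_rate
        if year >= RIVAL_LAG:
            # The rival starts at the same baseline, then grows on its own curve.
            rival = 1.0 if rival == 0.0 else rival * rival_rate
        if own >= CONTROL_THRESHOLD:
            return year, "own AI grew past the control threshold"
        if rival >= own:
            return year, "leapfrogged by a more aggressive rival"
    return 50, "neither loss mode within the horizon"

# Secret and controlled: slow growth, and an aggressive rival overtakes you.
print(years_until_loss(own_rate=1.2, rival_rate=2.0))
# Pedal to the metal: no rival catches you, but you cross the threshold quickly.
print(years_until_loss(own_rate=2.0, rival_rate=1.5))

In this toy model the cautious strategy loses by being leapfrogged
within a few years, and the aggressive one loses by crossing the
threshold not much later; either way, control is gone.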

There is a more technically rigorous argument about the limits of
control, but that is a lot drier and, as you surmised, has been
discussed at some length in the past. I wasn't making a technical case
here, though you will find that, as such things go, I generally believe
AI is theoretically more controllable than some others on this list do,
if only by a matter of degree.

j. andrew rogers


