Re: Artificial Intelligence will Kill our Grandchildren

From: Aleksei Riikonen (aleksei@iki.fi)
Date: Sat Jun 14 2008 - 06:41:28 MDT


On Sat, Jun 14, 2008 at 11:22 AM, Vladimir Nesov <robotact@gmail.com> wrote:
> On Sat, Jun 14, 2008 at 6:11 AM, Anthony Berglas <anthony@berglas.org> wrote:
>> So all comments most welcome, especially as to what the paper does not need
>> to say.
>
> "So this paper proposes a moratorium on producing faster computers. "
>
> Even if it works now (it won't), in the future, when nanotechnology
> matures, you won't be able to ban it anyway, just as you can't ban
> information now.

One can't ban information right now -- the technical and political
structures that would be required are not in place -- but high-tech
societies where the flow of information is strictly controlled and/or
monitored are quite feasible. It is mostly just a matter of having a
huge amount of surveillance. A society where the use of advanced
technology, such as freely programmable computers, is banned
everywhere except in a relatively small number of strictly controlled
facilities is also not an impossibility.

If in the next few decades we get destructive near-human-intelligence
computer viruses running around on the net, or something like that, it
is also politically realistic that we'll see things like the outlawing
of all computers that don't have advanced government spyware on them,
including e.g. the ability to send a live screen capture to government
agents whenever they feel like it. Administrations such as those
currently in power in major countries like China, the U.S. and Russia
would rub their hands with glee at any political excuse to implement
such systems of control. And I for one would not even object if they
proposed such things, provided the only alternative was to have no
defenses against powerful AI technologies. (In reality, there is the
alternative of a Transparent Society, where all surveillance
information is available to everyone and not just to government
agencies.)

This is actually what I consider the highest-probability scenario for
the pre-singularity future: we will see extremely pervasive
surveillance, of either the Transparent Society or the Big Brother
variety. Friendly AI advocates will probably start to lobby for that
as the necessary Plan A, once the difficulty of solving FAI has become
sufficiently apparent. I certainly expect a solution to FAI to take
more time than we would have available if we didn't build pervasive
surveillance.

(On the paper that started this thread, I'll also note here the core
error that earlier comments touched on: the paper assumes that AIs
would necessarily have human-unfriendly motivations. Friendly AI is
very difficult, but not impossible.)

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT