Re: General summary of FAI theory

From: Daniel Burfoot (daniel.burfoot@gmail.com)
Date: Tue Nov 20 2007 - 21:50:07 MST


On Nov 21, 2007 7:27 AM, Tom McCabe <rocketjet314@yahoo.com> wrote:
> rehashing the basics. Some of the things which have already been
> covered years ago, and are therefore ineligible for rehashing:

One question/concern that I have, which is not covered by your list,
is the possibility of a powerful but sub-AGI-level AI falling into
the "wrong hands".

I believe that AI will come in several iterations. The first couple
of versions will be very powerful reasoning/statistical inference
systems, but they will not have goals of their own. In other words,
the AI will act like an oracle, answering very difficult questions
accurately but not taking actions on its own.

For an example of how this could be a problem, consider the following
scenario. Dr. Evil invents such an AI system and uses it to predict
the stock market, becoming enormously wealthy. He then constructs
some simple robot soldiers and uses them to take over the world.

Given the existence of a Dr. Evil who can construct this kind of
non-self-willed AI, the above scenario doesn't seem so implausible.

I consider this type of problem to be somewhat more realistic than
the "super-AGI turns the world into computronium" scenario. Even more
realistic is the possibility of the government obtaining a pseudo-AI
and using it for tyrannical purposes. Almost all the obvious
applications of limited AI point in this direction (robot weapons,
computer vision for surveillance, speech recognition so the NSA can
automatically transcribe all your phone calls).

This list isn't for political debate, but I think everyone will agree
that there is not even a popsicle barricade standing between us and
AI-aided government tyranny. I consider this a rather substantial
existential risk, and one we are psychologically biased against
thinking about (it requires criticism of one's own group and
government). So in this sense, discussion of AGI must unfortunately
collide with politics.

The question I'm seriously asking myself now is: should AI research
be put on hold until stronger political safeguards are in place?

Perhaps some people have discussed this at length and provided a
compelling answer. If so, please send me a link.

thanks,
Dan


