Re: General summary of FAI theory

From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Nov 20 2007 - 22:20:04 MST


>
> For an example of how this could be a problem, consider the following
> scenario. Dr. Evil invents such an AI system. He then uses it to
> predict the stock market, so he becomes enormously wealthy. He then
> constructs some simple robot soldiers, and uses them to take over the
> world.

It is straightforward (not easy, but straightforward) for any
dedicated, reasonably intelligent person to become rich in modern-day
America. All you have to do is start a company and get millions of
users for a product or service; no AI is necessary. See the essays at
http://www.paulgraham.com/, in particular
http://www.paulgraham.com/hiring.html and
http://www.paulgraham.com/start.html.

> The question I'm seriously asking myself now is: should AI research be
> put on hold until more political safeguards can be put in place?

No. For that to have a reasonable chance of success, you would have to
get competent transhumanists (if not professional AI researchers)
writing the regulations, and the bureaucrats aren't going to let that
happen. Otherwise, you just end up having to fill out meaningless AI
Safety Permit Application Form #581,215,102.

> Perhaps some people have discussed this at length and provided a
> compelling answer. If so, please send me a link.
>
> thanks,
> Dan
>

 - Tom


