From: P K (email@example.com)
Date: Thu Dec 08 2005 - 16:57:34 MST
>From: "David Picon Alvarez" <firstname.lastname@example.org>
>Materially speaking, the problem with the approach you propose is that
>any sufficiently advanced process of thinking requires action. Even theorem
>provers require a strategy module to decide what paths to follow. Thinking
>is just a subset of acting and, from the viewpoint of an AI whose world
>is data, not a particularly useful subset to consider.
I disagree. It is true that PAI would not be able to start the Singularity
alone, or do anything alone for that matter, but that is what makes it safe.
Think of it as enhancing human intelligence the way MS Word or Google
enhances human intelligence, only much more so. PAI is but a stepping stone
to AGI.
Why do we need a stepping stone?
There seem to be some peculiarities of AGI that don't usually appear when
developing other software projects. AGI is a highly interdependent system:
the smallest miscalculation can cause huge delays or derail the entire
project. Therefore, the farther ahead we try to predict, the more likely we
are to get things wrong (even more so with AGI). Also, AI development lets
you increase your own intelligence as you progress, by using your work to
improve the work itself. (By analogy, perhaps the people at Microsoft use
Word to spell-check their own documentation.) So if we move forward just
slightly, to see what is over the horizon, we are less likely to be thrown
back to the drawing board. PAI systems will almost certainly be used in a
complete AGI. And PAI is safe. This means that work can start on PAI even
if we don't yet know the ultimate "Friendly" goal system. The right goal
system will also be more obvious from PAI than from our present position.
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:00:48 MDT