From: Phil Goetz (firstname.lastname@example.org)
Date: Wed Aug 24 2005 - 15:02:27 MDT
The fear of UFAIs is based on the idea that they'll be able
to outthink us, and to do so quickly.
"More intelligent" thinking is gotten
by adding another layer of abstraction onto a representational
system, which causes the computational tractability of reasoning
to increase in a manner that is exponential in the number
of things being reasoned about. Or, by adding more knowledge,
which has the same effect on tractability.
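As a toy illustration of that kind of blow-up (my own example, using the crudest possible reasoner: exhaustive model checking over Boolean propositions), the number of cases to examine doubles with every proposition added:

```python
from itertools import product

def count_models(n_props):
    """Count the truth assignments an exhaustive reasoner must
    examine over n_props Boolean propositions: 2**n_props of them."""
    return sum(1 for _ in product([False, True], repeat=n_props))

for n in (5, 10, 20):
    print(n, count_models(n))  # count_models(5) == 32, and so on
```

Real reasoners prune far better than this, but the exponential shape of the search space is the point.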
By limiting the computational power available to an AI to
one or two orders of magnitude less than a human's, we can
guarantee that it won't outthink us - or, if it does, that it
will do so very, very slowly.
There are many cases where someone has come up with a new
algorithm with lower computational complexity than the
previously known one, but I don't think any algorithm for
general intelligence will be found that escapes this property:
exponential increases in resources are needed for a linear
increase in some IQ-like measure.
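Taking that conjecture literally (this is my reading, not a formula from the post), an IQ-like measure would scale roughly as the logarithm of resources, so each doubling of hardware buys only a constant increment:

```python
import math

def iq_like(resources, k=1.0):
    """Hypothetical IQ-like measure that increases linearly only
    when resources increase exponentially: k * log2(resources)."""
    return k * math.log2(resources)

# Doubling resources adds the same constant k at any starting point.
print(iq_like(4) - iq_like(2))            # gain from 2 -> 4 units
print(iq_like(2_000_000) - iq_like(1_000_000))  # same gain at scale
```

Under this (assumed) scaling law, a box with 100x less hardware than a human sits a fixed number of "IQ-like" units below us, and closing that gap requires multiplying its resources, not merely adding to them.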
If the AI gets out and is able to harness the computational
power on the internet, that would be different. But within
its box, it's going to remain at or less than the order of
magnitude of intelligence dictated by its computational capacity.
- Phil Goetz
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:00:58 MDT