From: Ben Goertzel (firstname.lastname@example.org)
Date: Wed May 08 2002 - 09:08:20 MDT
> Finally, even without deconstructing the rest of the parable, you can flip
> its moral lesson quite easily by assuming that the mountain *is*
> much larger
> than Y thinks. And historically this would appear to be the heuristic
> lesson of AI.
There are lots of lessons to be drawn from the history of AI, obviously.
I think that
-- those who say the problem has been a lack of good ideas for AGI software, and
-- those who say the problem has been a lack of adequately powerful hardware,
are BOTH largely right.
Powerful hardware will not allow totally wrong theories to produce AGI. On
the other hand, it will accelerate the learning process by which totally
wrong theories are empirically refuted, and hence accelerate the production
and acceptance of better ideas.
And I think that there are a LOT of different approaches that are going to
yield AGIs, given adequately powerful hardware. These different approaches
will yield different kinds of AGIs, different "species" of artificial
minds, so to speak.
Exactly when our hardware becomes "adequately powerful" for human-level AGI
is another question, which we've speculated on before on this list. Our
rough estimate, in the case of Novamente, is that a cluster of, say, 1000
powerful Linux boxes might do it today.
So, anyway, the terms of the parable aside, the failure of AI historically
has lots of different causes, as we both know...
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT