From: Ben Goertzel (firstname.lastname@example.org)
Date: Mon Jan 24 2005 - 09:15:45 MST
> It's interesting that you are using the exact same flawed analogy class
> as Ben did in his response. The answer is the same as I gave him: your
> analogy is incorrect because we know with great detail using existing
> physics whether a given car design will actually result in a working car
> or not. With a given AGI design we do not know at all (you may think you
> know, but if I ask you to prove it you will be unable to provide me
> anything near the level of concrete proof of design-worthiness that a
> car designer could provide using a physics model).
Hmmm.... Brian, IMO, this is not quite correct.
We know with great detail that Cyc, SOAR and GP (to name three AI
systems/frameworks) will not result in an AI system capable of hard takeoff.
And, we know this with MORE certainty than we know that no one now knows how
to build a ladder to Andromeda.
IMO, it's more likely that next year some genius scientist will come up with
a funky self-organizing nano-compound creating a self-growing ladder to
Andromeda, than that one of these simplistic so-called AI systems will
self-organize into a hard takeoff. Not all scientists would agree
with me on this, but I guess most would.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT