From: Ben Goertzel (email@example.com)
Date: Sat Nov 18 2000 - 08:43:53 MST
> You'll have to forgive me for not having time to look properly into your
> work, but basically I find myself in broad agreement with what I
> read on the
> webmind philosophy page about AI & wish you luck. :)
> However, if it works, what's to stop it becoming too successful & turning
> all the matter of Earth into webmind nodes?
> For instance if your fitness function rewards successful algorithms, what
> happens if one of them comes up with the bright idea of tricking a human
> into giving it access to nanotech, then proceeds to build more computer
> power for itself?
> I'm not one for disasterbation but the stakes are high. We can't
> afford to
> get it wrong.
Once AI systems are smart enough to restructure all the matter of Earth into
their own mind-stuff, we won't be ABLE to guide their development via
intelligent tinkering, one would suspect...
So, your intuition is that by carefully guiding our AI systems in the very
early stages, we can mold the early phases of the next generation of
intelligence so that the later phases will be unlikely to go awry.
I tend to doubt it.
Based on practical experience, it seems to me that even the AI systems we
now are experimenting
with at Webmind Inc. -- which are pretty primitive and use only about half
of the AI code
we've written -- are fucking HARD to control. We're already using
evolutionary means to adapt
system parameters... as a complement to, not a substitute for, experimental
re-engineering of various
components, of course.
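For concreteness, here is a minimal sketch of what evolutionary adaptation of
system parameters looks like in general. This is an illustrative toy, not
Webmind's code: the fitness function, parameter vector, and all names here are
invented for the example.

```python
import random

def fitness(params):
    """Toy stand-in for a real fitness function: reward parameter
    vectors near a hand-chosen target (invented for illustration)."""
    target = [0.5, 0.2, 0.9]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=20, generations=50, mutation=0.1, seed=0):
    """Evolve a population of 3-element parameter vectors:
    keep the fitter half each generation, refill with mutated
    copies of survivors, and return the best vector found."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [
            [p + rng.gauss(0, mutation) for p in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The point of pairing this with hand re-engineering is that the evolutionary
loop tunes parameters a human would find tedious to search, while humans still
redesign the components themselves.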
So the idea that we can proceed with more advanced AI systems based
primarily on conscious human engineering rather than evolutionary
programming seems pretty unlikely to me. It directly goes against the
complex, self-organizing, (partially) chaotic nature of intelligent
systems.
> Have you compared random mutation to intelligent design?
> We have aircraft that fly faster, further, higher and for longer than any
> bird. It didn't take us millions of years either.
yeah, but an aircraft is not a mind, or a body. Minds are intrinsically
complex and hard to predict.
If minds could be built like airplanes, then rule-based AI would be a lot
further along than it is!
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT