Re: Evolving minds

From: Dale Johnstone (DaleJohnstone@email.com)
Date: Sat Nov 18 2000 - 11:33:39 MST


----- Original Message -----
From: "Ben Goertzel" <ben@intelligenesis.net>
To: <sl4@sysopmind.com>
Sent: Saturday, November 18, 2000 3:43 PM
Subject: RE: Evolving minds

>
> hi,

hiya,

>
> > You'll have to forgive me for not having time to look properly into your
> > work, but basically I find myself in broad agreement with what I
> > read on the
> > webmind philosophy page about AI & wish you luck. :)
> >
> > However, if it works, what's to stop it becoming too successful &
> > turning all the matter of Earth into webmind nodes?
> >
> > For instance if your fitness function rewards successful algorithms,
> > what happens if one of them comes up with the bright idea of tricking
> > a human into giving it access to nanotech, then proceeds to build more
> > computer power for itself?
> >
> > I'm not one for disasterbation but the stakes are high. We can't
> > afford to get it wrong.
>
> Once AI systems are smart enough to restructure all the matter of Earth
> into their own mind-stuff, we won't be ABLE to guide their development
> via intelligent tinkering, one would suspect...

Granted, once they're replicating it will be pretty much impossible to stop
them, intelligent or not. Hence my concern.

>
> So, your intuition is that by carefully guiding our AI systems in the very
> early stages,
> we can mold the early phases of the next generation of intelligence so
> carefully that
> later phases will be unlikely to go awry
>
> I tend to doubt it.

Not really. My intuition is to sandbox AIs tightly (especially the smart
ones) until we can be sure they aren't going to do something stupid in the
real world (which is highly likely to begin with).

I get very nervous when I hear about an AI improving itself. It's like doing
brain surgery on yourself. What if you damage an area that affects your
judgement & then start making random modifications? Instability is the most
likely result, to say the least.

You could have some nice trustworthy AI's perception suddenly change because
of some unforeseen modification, and because the AI is modifying itself,
its new perception will further affect its modifications.

I prefer an architecture without that kind of internal feedback.

I'm well aware that thinking is itself a form of self-modification, but on a
different level. We can't modify major parts of our neural architecture by
thought alone.

Actually this stability problem is a large part of the AI problem itself.
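
To make the worry concrete, here's a rough sketch of the kind of closed
loop I mean (Python-ish, every name and number here is mine and purely
illustrative, not anyone's actual architecture): the same judgement that
gets modified is the judgement deciding the next modification.

    # Illustrative only: a self-modifier whose own (damageable) judgement
    # drives its next round of modifications -- no external check anywhere.
    import random

    class SelfModifier:
        def __init__(self):
            self.judgement = 1.0             # quality of its self-evaluation
            self.parameters = [0.5, 0.5, 0.5]

        def propose_change(self):
            # Proposals are scaled by its *current* judgement -- once that
            # is damaged, proposals get wilder rather than better.
            noise = (1.0 - self.judgement) + 0.1
            return [p + random.gauss(0, noise) for p in self.parameters]

        def apply(self, change):
            self.parameters = change
            # The modification can also hit the judgement machinery itself.
            self.judgement = min(1.0, max(0.0,
                self.judgement + random.gauss(0, 0.1)))

    ai = SelfModifier()
    for step in range(100):
        ai.apply(ai.propose_change())   # nothing outside the loop vets a change

One bad step degrades the judgement, the degraded judgement produces worse
steps, and nothing outside the loop ever notices.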

>
> Based on practical experience, it seems to me that even the AI systems
> we now are experimenting with at Webmind Inc. -- which are pretty
> primitive and use only about half of the AI code we've written -- are
> fucking HARD to control. We're already using evolutionary means to adapt
> system parameters... as a complement to, not a substitute for,
> experimental re-engineering of various components, of course.

I'm not a big fan of evolutionary methods.
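
For concreteness, the sort of parameter evolution I take you to be
describing would look roughly like this (my own sketch with invented names
and a made-up fitness function, nothing to do with Webmind's actual code):

    # Toy evolutionary adaptation of a parameter vector against a fitness
    # function; the fitness function is a stand-in for real system tests.
    import random

    def fitness(params):
        # Stand-in for "how well the system behaves with these settings".
        return -sum((p - 0.7) ** 2 for p in params)

    def evolve(pop_size=20, n_params=5, generations=50):
        population = [[random.random() for _ in range(n_params)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            children = [[p + random.gauss(0, 0.05)
                         for p in random.choice(survivors)]
                        for _ in range(pop_size - len(survivors))]
            population = survivors + children
        return max(population, key=fitness)

    best = evolve()

Tuning knobs this way isn't really my worry; my worry starts when the thing
being evolved can reach back and change the fitness function, or the
evaluator, or the world it's being evaluated in.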

>
> So the idea that we can proceed with more advanced AI systems based
> primarily on conscious human engineering rather than evolutionary
> programming seems pretty unlikely to me. It directly goes against the
> complex, self-organizing, (partially) chaotic nature of intelligent
> systems.

I'm not advocating proceeding primarily with conscious human engineering. At
the moment I'm advocating that people think more carefully about how their
AI projects could turn nasty. We all wear rose-tinted glasses about our own
projects, and about the future in general, but we're playing with powerful
juju here that deserves more care.

My current thoughts are to have an independent peer intelligence in the
design feedback process, be it human or not. And thorough sandboxing.

If you don't, then what's to stop it self-organising into something simpler
that just copies itself once it thinks up the idea?
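
Concretely, the gate I have in mind looks something like this (a sketch,
all names hypothetical): nothing the system proposes about itself goes live
until it survives a sandboxed trial and an independent reviewer -- human or
otherwise, but not the system itself -- signs off.

    # Sketch of a modification gate: sandbox trial first, then approval
    # from a reviewer that sits outside the system proposing the change.
    import copy

    class System:
        def __init__(self):
            self.parameters = [0.5, 0.5, 0.5]

        def apply(self, change):
            self.parameters = change

        def stable(self):
            # Stand-in for whatever behavioural/stability tests we trust.
            return all(0.0 <= p <= 1.0 for p in self.parameters)

    class HumanReviewer:
        def approves(self, change):
            answer = input("Apply %r ? [y/N] " % (change,))
            return answer.strip().lower() == "y"

    def apply_with_oversight(system, change, reviewer):
        sandbox = copy.deepcopy(system)   # isolated copy, no real effectors
        sandbox.apply(change)
        if not sandbox.stable():
            return False                  # failed in the sandbox, never goes live
        if not reviewer.approves(change):
            return False                  # the reviewer sits outside the loop
        system.apply(change)
        return True

The point isn't the details; it's that the judgement deciding whether a
change is safe is never itself one of the things being changed.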

>
> > Have you compared random mutation to intelligent design?
> >
> > We have aircraft that fly faster, further, higher and for longer than
> > any bird. It didn't take us millions of years either.
> >
>
> yeah, but an aircraft is not a mind, or a body. Minds are intrinsically
> complex, self-organizing
> and hard to predict.

Again you underline my concern.

>
> If minds could be built like airplanes, then rule-based AI would be a lot
> further along than it is!

Urgh! Are you equating intelligent design with rule-based AI? If so, I
don't subscribe to that point of view. Rule-based 'AI' doesn't even qualify
as AI in my view! :P

>
> Ben
>

Cheers,
Dale.


