RE: Evolving minds

From: Ben Goertzel (ben@webmind.com)
Date: Sat Nov 18 2000 - 14:20:58 MST


Hi

> Not really, my intuition is to sandbox AIs tightly (especially the smart
> ones) until we can be sure they aren't going to go do something stupid
> (which is highly likely to begin with) in the real world.

Unfortunately, "sandboxing" of AI is not a viable commercial strategy

Right now, we have a very stupid, sandboxed "Baby Webmind" which is only
in the testing/debugging/algorithm-tweaking phase, and some less flexible,
less adaptive WM systems that are doing real things in the world
(predicting markets, categorizing texts).

There is a lot of pressure, naturally, to get the more adaptive,
intelligent WM system involved in real-world revenue-generating
activities as well.

Hence my prediction that as soon as an AI system works reasonably well,
it will NOT be sandboxed, but will rather be aggressively commercialized
(whether by me or by someone else).

>
> I get very nervous when I hear about AI improving itself. It's like doing
> brain surgery on yourself. What if you damage an area that affects your
> judgement & you start making random modifications? Instabilities are most
> likely to result, to say the least.
>
> You could have some nice trustworthy AI's perception suddenly change
> because of some unforeseen modification, and because the AI is
> modifying itself, its new perception will further affect its
> modifications.
>
> I prefer an architecture without that kind of internal feedback.

But if self-modification leads to greater intelligence, then the
self-modifying systems are ultimately going to be the ones more
thoroughly commercialized, and hence the ones having more real-world
effect.

> I'm well aware that thinking is itself a form of self-modification,
> but on a different level. We can't modify major parts of our neural
> architecture by thought alone.
>
> Actually this stability problem is a large part of the AI problem itself.

Sure, but in evolving a population of AIs it's OK if some small
percentage is "nuts." If this isn't true, chances are the evolutionary
process is being too conservative, and evolution will be slow.

>
> I'm not a big fan of evolutionary methods.
>

Are you a big fan of proposing viable alternatives in detail?

I'm open to suggestions, but general and vague suggestions are not useful.

Evolutionary methods are slow, so I wouldn't mind replacing them with
something else; but as global optimization methods go, they are in my
experience incomparably robust at getting you to a "reasonably good"
solution (though not the best at getting a precise solution in contexts
where that is required).
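
To make this concrete, here's a toy example of the kind of robustness I
mean: a bare-bones genetic algorithm minimizing the Rastrigin function,
a standard multimodal test problem. Everything in it (the test function,
population size, mutation scale) is an arbitrary illustrative choice --
none of it is Webmind's actual code or settings -- but a crude loop like
this reliably lands in a good region of a very bumpy landscape, even
though polishing the final decimal places is better left to a local
method.

import math
import random

DIM = 10

def rastrigin(x):
    # Highly multimodal test function; global minimum is 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def random_individual():
    return [random.uniform(-5.12, 5.12) for _ in range(DIM)]

def tournament(pop, k=3):
    # Select the best of k randomly chosen individuals (lower is better).
    return min(random.sample(pop, k), key=rastrigin)

def crossover(a, b):
    # Uniform crossover: each coordinate comes from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.1, scale=0.3):
    # Gaussian perturbation of a random subset of coordinates.
    return [xi + random.gauss(0, scale) if random.random() < rate else xi
            for xi in x]

pop = [random_individual() for _ in range(100)]
for generation in range(200):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(100)]

best = min(pop, key=rastrigin)
print("best fitness:", rastrigin(best))  # "reasonably good", rarely exact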

> > yeah, but an aircraft is not a mind, or a body. Minds are intrinsically
> > complex, self-organizing
> > and hard to predict.
>
> Again you underline my concern.

Sure. All intelligences are unpredictable, hence dangerous. It occurs to
me every time I drive down the highway. One human brain could go psycho,
swerve the car it's driving, and kill me...

I just don't think this is a solvable problem.

>
> >
> > If minds could be built like airplanes, then rule-based AI would be
> > a lot further along than it is!
>
> Urgh! Are you equating intelligent design with rule-based AI? If so, I
> don't subscribe to that point of view. Rule-based 'AI' doesn't even
> qualify as AI in my view! :P

Rule-based AI is the primary existing AI paradigm that attempts to build
a mind in which the overall behavior of the whole can easily be predicted
from the behavior of the parts -- and this kind of prediction seems to me
to be a prerequisite for the non-evolutionary approach to AI improvement
that you're (or I thought you were, when I typed that reply)
suggesting...

Of course, we all try to design our AI systems intelligently, but even
so, one requires an evolutionary (or other automatic optimization)
approach to determine the finer details of system structure (the
alternative would be a detailed mathematical, quantitative theory of
complex systems, which is nowhere near at hand...).

In other words, we can use intelligent design to create a system that is
in principle capable of intelligent behavior. But to actually make it
work, a lot of evolution, or something like it, is required to tune the
numerous parameters of the system. Parameter tuning is the difference
between sanity and psychosis, in AIs as well as in humans (drugs
basically "parameter-tune or mis-tune" the brain).
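
To illustrate the kind of tuning loop I have in mind: treat the system's
numeric parameters as a vector and let a simple (1+lambda) evolution
strategy climb whatever behavioral score you can measure. The parameter
names and the score() function below are invented stand-ins for
illustration, not Webmind's actual parameters or fitness measure.

import random

params = {"activation_decay": 0.5, "link_strength": 1.0,
          "attention_spread": 0.2}

def score(p):
    # Stand-in for "run the system and measure how sane/useful it is."
    # Here it's just a made-up smooth target with a single peak.
    return -((p["activation_decay"] - 0.9) ** 2
             + (p["link_strength"] - 2.0) ** 2
             + (p["attention_spread"] - 0.05) ** 2)

def mutate(p, scale=0.05):
    # Small Gaussian tweak to every parameter.
    return {k: v + random.gauss(0, scale) for k, v in p.items()}

best, best_score = params, score(params)
for step in range(500):
    # (1+lambda): generate 10 mutants, keep the parent unless beaten.
    champ = max((mutate(best) for _ in range(10)), key=score)
    if score(champ) >= best_score:
        best, best_score = champ, score(champ)

print(best, best_score)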

ben


