Loosemore's Proposal [Was: Re: Agi motivations]

From: Richard Loosemore (rpwl@lightlink.com)
Date: Mon Oct 24 2005 - 10:32:34 MDT


[Consolidated reply to Michael Vassar, Michael Wilson, Woody Long,
mungojelly and Olie Lamb...]

[See also the separate sidetracks on "Uploading" and "Neuromorphic
Engineering"].

Please forgive the dense nature of this post. Long, but dense.

1) "Prove" that an AGI will be friendly? Proofs are for mathematicians.
I consider the use of the word "proof" about the behavior of an AGI to
be on the same level of validity as its use in statements about
evolutionary proclivities, for example "Prove that no tree could ever
evolve, naturally, in such a way that it had a red smiley face depicted
on every leaf." Talking about proofs of
friendliness would be a fundamental misunderstanding of the role of the
word "proof". We have enough problems with creationists and intelligent
design freaks abusing the word, without us getting confused about it too.

If anyone disagrees with this, it is important to answer certain
objections. Do not simply assert that proof is possible; give some
reason why we should believe it to be so. To do this, you have
to give some coherent response to the arguments I previously set out (in
which the Complex Systems community asked you to explain why AGI systems
would be exempt from the empirical regularities they have observed).

2) Since proof is impossible, the next best thing is a solid set of
reasons to believe in friendliness of a particular design. I will
quickly sketch how I think this will come about.

First, many people have talked as if building a "human-like" AGI would
be very difficult. I think that this is a mistake, for the following
reasons.

I think that what has been going on in the AI community for the last
couple of decades is a prolonged bark up the wrong tree, and that this
has made our lives more difficult than they should be.

Specifically, I think that we (the early AI researchers) started from
the observation of certain *high-level* reasoning mechanisms that are
observable in the human mind, and generalized to the idea that these
mechanisms could be the foundational mechanisms of a thinking system.
The problem is that when we (as practitioners of philosophical logic)
get into discussions about the amazing way in which "All Men Are Mortal"
can be combined with "Socrates is a Man" to yield the conclusion
"Socrates is Mortal", we are completely oblivious to two things. First,
a huge piece of cognitive apparatus is sitting there, under the surface,
allowing us to relate words like "all" and "mortal" and "Socrates" and
"men" to things in the world, and to one another. Second, that same
cognitive apparatus arrives at vast numbers of other conclusions, on a
moment by moment basis, that are extremely difficult to squeeze into the
shape of a syllogism. In other words, you have this enormous cognitive
mechanism coming to conclusions about the world all the time; just
occasionally it comes to a conclusion using *one*, particularly clean,
little subcomponent of its array of available mechanisms, and we naively
seize upon that subcomponent and think that *that* is how the whole
thing operates.
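
To make that concrete, here is a toy sketch, in Python, of what that one
clean little subcomponent looks like when you implement it directly.
The predicates and the trivial forward-chaining routine are invented
purely for this illustration:

    # Purely illustrative: the syllogism above, encoded as the kind of
    # clean, self-contained rule that logic-based AI takes as foundational.
    # Note how much is missing: nothing here relates "man", "mortal" or
    # "socrates" to anything in the world.

    facts = {("is_a", "socrates", "man")}

    # "All men are mortal": anything that is_a man is_mortal.
    rules = [(("is_a", "man"), "is_mortal")]

    def forward_chain(facts, rules):
        """One naive pass of forward chaining over ground facts."""
        derived = set(facts)
        for (pred, cls), conclusion in rules:
            for f in facts:
                if f[0] == pred and f[2] == cls:
                    derived.add((conclusion, f[1]))
        return derived

    print(forward_chain(facts, rules))
    # e.g. {('is_a', 'socrates', 'man'), ('is_mortal', 'socrates')}

Twenty lines, and it looks intelligent right up until you ask where the
facts came from and what the symbols are connected to.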

By itself, this argument against the "logical" approach to AI might only
be a feeling, so we would then have to divide into two camps and each
pursue our own vision of AI until one of us succeeded.

However, the people on my side of the divide have made our arguments
concrete enough that we can now be more specific about the problem, as
follows.

What we say is this. The logic approach is bad because it starts with
presumptions about the local mechanisms of the system and then tries to
extend that basic design out until the system can build its own new
knowledge and relate its fundamental concepts to the sensorimotor
signals that connect it to the outside world. From our experience with
complex systems we know that this kind of backwards design approach
usually means that the systems you design will partially work, but they
will always get into trouble the further out you try to extend them.
Because of the complex-systems disconnect between local and global, each
time you start with preconceived notions about the local, you will find
that the global behavior never quite matches up with what you want it to
be.

So in other words, our criticism is *not* that you should be looking for
nebulous or woolly-headed "emergent" properties that explain cognition
-- that kind of "emergence" is a red herring -- instead, you should be
noticing that the hardest part of your implementation is always the
learning and grounding aspect of the system. Everything looks good on a
small, local scale (especially if you make your formalism extremely
elaborate, to deal with all the nasty little issues that arise) but it
never scales properly. In fact, some who take the logical approach will
confess that they still haven't thought much about exactly how learning
happens ... they have postponed that one.

This is exactly what has been happening in AI research. And it has been
going on for, what, 20 years now? Plenty of theoretical analysis. Lots
of systems that do little jobs a little tiny bit better than before. A
few systems that are designed to appear, to a naive consumer, as though
they are intelligent (all the stuff coming out of Japan). But overall,
stagnation.

So now, if this analysis is correct, what should be done?

The alternative is to do something that has never been tried.

Build a development environment that allows rapid construction of large
numbers of different systems, so we can start to empirically study the
effects of changing the local mechanisms. We should try cognitively-
inspired mechanisms at the local level, but adapt them according to what
makes them globally stable. The point is not to presuppose what the
local mechanisms are, but to use what we know of human cognition to get
mechanisms that are in the right ballpark, then experimentally adjust
them to find out under what conditions they are both stable and doing
the things we want them to.
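
To give a flavor of what I mean, here is a deliberately trivial sketch,
in Python, of that experimental loop. The "local mechanism", its two
parameters, and the stability statistic are all placeholders invented
for this example, standing in for real cognitively-inspired mechanisms:

    # An invented toy, not a real design: a "local mechanism" reduced to
    # two parameters, many system variants built from it, and a crude
    # global stability statistic computed for each.  The point is the
    # methodology (adjust the local, measure the global), not this
    # particular mechanism.

    import itertools
    import random

    def run_system(gain, decay, steps=200):
        """Stand-in for one candidate system: a single noisy activation
        value updated by a local rule.  Returns its trajectory."""
        activation, history = 1.0, []
        for _ in range(steps):
            activation += gain * random.uniform(-1.0, 1.0)
            activation *= (1.0 - decay)
            history.append(activation)
        return history

    def instability(history):
        """Crude global measure: variance over the second half of the
        run (lower means the variant settled into stable behavior)."""
        tail = history[len(history) // 2:]
        mean = sum(tail) / len(tail)
        return sum((x - mean) ** 2 for x in tail) / len(tail)

    # Sweep the local parameters and see which region of the space is
    # globally stable -- something you find out by running, not by proof.
    for gain, decay in itertools.product([0.01, 0.1, 0.5], [0.0, 0.05, 0.2]):
        score = instability(run_system(gain, decay))
        print(f"gain={gain} decay={decay} instability={score:.4f}")

The real mechanisms would of course be vastly richer than a single
scalar, but the shape of the experiment is the same: vary the local,
observe the global.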

I have been working on a set of candidate mechanisms for years. And also
working on the characteristics of a software development environment
that would allow this rapid construction of systems. There is no hiding
the fact that this would be a big project, but I believe it would
produce a software tool that all researchers could use to quickly create
systems that they could study, and by having a large number of people
attacking it from different angles, progress would be rapid.

What I think would happen if we tried this approach is that we would
find ourselves not needing enormous complexity after all. This is just
a hunch, I agree, but I offer it as no more than that: we cannot
possibly know, until we try such an approach, whether we will find a
quagmire or an easy sail to the finish.

But I can tell you this: we have never tried such an approach before,
and the one thing that we do know from the complex systems research (you
can argue with everything else, but you cannot argue with this) is that
we won't know the outcome until we try.

(Notice that the availability of such a development environment would
not in any way preclude the kind of logic-based AI that is now the
favorite. You could just as easily build such models. The problem is
that people who did so would be embarrassed into showing how their
mechanisms interacted with real sensory and motor systems, and how they
acquired their higher level knowledge from primitives.... and that might
be a problem because in a side by side comparison I think it would be
finally obvious that the approach simply did not work. Again, though,
this is just a hunch. I want the development environment to become
available so we can do such comparisons, and stop philosophizing about it.)

Finally, on the subject that we started with: motivations of an AGI.
The class of system I am proposing would have a motivational/emotional
system that is distinct from the immediate goal stack. The two are
related, but not to be confused.

I think we could build small-scale examples of cognitive systems, insert
different kinds of M/E systems in them, and allow them to interact
with one another in simple virtual worlds. We could study the stability
of the systems, their cooperative behavior towards one another, their
response to situations in which they faced threats, etc. I think we
could look for telltale signs of breakdown, and perhaps even track their
"thoughts" to see what their view of the world was, and how that
interacted with their motivations.
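
To make the proposal concrete, here is a deliberately toy sketch, in
Python, of the kind of setup I have in mind. The two example M/E
systems, the world dynamics, and all the names are invented for this
illustration only:

    # Invented for illustration only: agents whose motivational/emotional
    # (M/E) system is a separate module from the immediate goal stack,
    # dropped into a trivially simple shared world so that cooperation
    # and stability can be watched directly.

    import random

    class CooperativeME:
        def propose_goal(self, others):
            return "share" if others else "gather"

    class AggressiveME:
        def propose_goal(self, others):
            return "take" if others else "gather"

    class Agent:
        def __init__(self, name, me_system):
            self.name = name
            self.me = me_system    # the M/E system sets the bias...
            self.goal_stack = []   # ...the goal stack holds immediate goals
            self.resources = 5

        def step(self, others):
            self.goal_stack.append(self.me.propose_goal(others))
            goal = self.goal_stack.pop()
            if goal == "gather":
                self.resources += 1
            elif goal == "share" and others:
                random.choice(others).resources += 1
                self.resources -= 1
            elif goal == "take" and others:
                random.choice(others).resources -= 1
                self.resources += 1
            return goal

    # A tiny "virtual world": run the agents together and log what happens.
    agents = [Agent("coop", CooperativeME()), Agent("aggr", AggressiveME())]
    for t in range(5):
        for a in agents:
            others = [b for b in agents if b is not a]
            print(f"t={t} {a.name}: goal={a.step(others)} resources={a.resources}")

Scaled up enormously, with real cognitive machinery in place of these
stubs, that is the kind of testbed in which the stability and
cooperativeness of different M/E designs could be measured rather than
argued about.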

And what we might well discover is that the disconnect between M/E
system and intellect is just as it appears to be in humans: humans are
intellectual systems with aggressive M/E systems tacked on underneath.
They don't need the aggression (it was just useful during evolution),
and without it they become immensely stable.

I think that we could also understand the nature of the "attachment"
mechanisms that give human beings an irrational fondness for one
another, and for the species as a whole, and incorporate that in a
design. I think we could study the effects of that mechanism, and come
to be sure of its stability.

And, at the end of the day, I think we will come to understand the
nature of M/E systems so well that we will be able to say with a fair
degree of certainty that the more knowledge an AGI has, the more it
tends to understand the need for cooperation. I think we might (just
might) discover that we could trust such systems.

But we have to experiment to find out, and experiment in a way that
nobody has ever done before.

Richard Loosemore.


