Re: Fwd: We Can Understand Anything, But are Just a Bit Slow

From: Richard Loosemore (rpwl@lightlink.com)
Date: Fri Apr 28 2006 - 13:49:17 MDT


Ben Goertzel wrote:
> To give just a hint of how these distinctions manifest themselves in a
> fleshed-out AGI design, in Novamente:
>
> 1) items in all memory units are associated with various importance
> indices, e.g. "short term importance", "medium term importance", "long
> term importance" -- and these indices are continuous-valued not
> discrete
>
> 2) in addition to the main memory unit, there is also a specialized
> unit of the memory devoted only to items with very high short or
> medium term importance [but note that a memory, if represented as a
> distributed pattern across many elementary knowledge-items, may
> sometimes be present in this specialized unit but only to a lesser
> degree than in the main memory unit]
>
> 3) there are units of the memory corresponding to particular
> "interaction channels", containing memory items recently useful for
> dealing with that interaction channel (this covers perception/action
> specific STM)
>
> 4) in all units of the memory, there are caches that enable rapid
> access to recently accessed items [another kind of "STM", one could
> say]
>
> We have not tried to imitate the way the human brain handles the
> STM/LTM distinction (which no one really understands) but have tried
> to handle it in a rational and workable manner.
>
> -- Ben G
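To make Ben's four points concrete, here is a minimal sketch in Python — all class names, field names, and threshold values are my own illustrative inventions, not Novamente's actual API:

```python
from collections import OrderedDict

class MemoryItem:
    """A knowledge item tagged with continuous-valued importance
    indices (point 1); the attribute names are illustrative only."""
    def __init__(self, content, sti=0.0, mti=0.0, lti=0.0):
        self.content = content
        self.short_term_importance = sti
        self.medium_term_importance = mti
        self.long_term_importance = lti

class MemoryUnit:
    """A memory unit with a small cache of recently accessed items
    (point 4: rapid access to recently touched items)."""
    def __init__(self, cache_size=4):
        self.items = {}             # key -> MemoryItem
        self.cache = OrderedDict()  # most recently accessed items
        self.cache_size = cache_size

    def store(self, key, item):
        self.items[key] = item

    def access(self, key):
        item = self.items[key]
        self.cache[key] = item
        self.cache.move_to_end(key)
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict least recently used
        return item

def high_importance_view(unit, threshold=0.5):
    """Point 2: the items eligible for the specialized unit devoted to
    very high short- or medium-term importance (threshold is arbitrary)."""
    return {k: it for k, it in unit.items.items()
            if max(it.short_term_importance,
                   it.medium_term_importance) > threshold}
```

Point 3 (per-interaction-channel units) would just be several such `MemoryUnit` instances keyed by channel; the continuous importance indices mean the "high importance" view is a matter of degree, not a discrete STM/LTM boundary.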

Okay: I can find a mapping between this and my architecture, although
with (what sound like) subtle variations in many places.

The one big distinction that I enforce, but which you might not do, is
between what is in the background (not being thought about or perceived
or deployed in any way) and what is in the foreground .... the latter is
the sum total of what the system is thinking about (contents of
consciousness). Why the distinction? Because I believe in putting the
latter in a place where there is very rapid and extensive connectivity,
so these things can link together and interact. What is backgrounded
right now must not be allowed to interfere with the stuff in the
foreground, because the latter is busy interrelating and engaging in
various relaxation processes, so the less clutter, the better.
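A toy sketch of that foreground/background separation — assuming a simple named-element representation, with all identifiers my own hypothetical choices — where interaction (linking) is only permitted among foregrounded elements, so backgrounded material cannot interfere:

```python
class Element:
    def __init__(self, name):
        self.name = name
        self.links = set()  # connectivity, built up only in the foreground

class Workspace:
    """Background holds everything the system knows; the foreground is
    the subset currently being 'thought about', where rapid, extensive
    interconnection is allowed."""
    def __init__(self):
        self.background = {}   # name -> Element
        self.foreground = set()  # names of currently foregrounded elements

    def add(self, name):
        self.background[name] = Element(name)

    def foreground_element(self, name):
        self.foreground.add(name)

    def link(self, a, b):
        # Backgrounded elements must not interfere with the foreground's
        # relaxation processes, so only foreground-foreground links form.
        if a not in self.foreground or b not in self.foreground:
            raise ValueError("backgrounded elements may not interact")
        self.background[a].links.add(b)
        self.background[b].links.add(a)
```

In this sketch the transiently activated, "might be needed" elements mentioned below would simply be foregrounded at low cost, which is why priming effects would show up: they are in the foreground, just not strongly engaged.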

However, having said that there should not be too much clutter, the
foreground does still contain many elements (concepts) that are
transiently activated because there is a chance they might be needed ...
so there is actually a lot of extraneous stuff hanging around. So when
people do priming experiments and look for effects on (say) word/nonword
discriminations due to associate distractors, this is what they are
picking up.

Now, when I talked about STM having perhaps thousands of things in it, I
was referring to the thing I have called the "foreground" here.
(Obviously in a short sketch like this I cannot bring in all the
background justifications, but I hope I can convey the general idea.)

The foreground is not necessarily an undifferentiated module: it is
likely to have several semi-specialized subcompartments (various aspects
of language processing, for example), so this sounds a bit like what you
call "units of the memory corresponding to particular 'interaction
channels'".

As to the chunking issue (the magic number seven), here is what I see
happening: the foreground is allowed only a ration of configurations of
items that are not strongly grounded by their connections to immediate
sensory input or motor output, but which are instead floating free and
independent of one another. Why is it rationed? There may be several
explanations, but here is one possibility.

The foreground is always creating new elements (element = unit that
captures one concept) to cope with the situations that it encounters. A
lot of these new elements are purely episodic and low level (they will
end up sitting in memory as trace memories of the situation you
experienced or thought about at that time), but some
will be genuine new "concepts" that the system is supposed to remember
because they might be the beginning of new ideas. So if you see a
conjunction of two things and those two things have not been seen
together before, make an element to capture their conjunction.

But there are limits on the number of different things that it is
sensible to combine to make a new concept: the system knows that the
best way to deal with the world is to build new concepts out of small
numbers of components .... just a practical matter of experience here,
because perhaps evolution taught it
that if you go around sticking groups of 27 new things together, you get
bogus concepts (like the concept of watching all the movies of Tuesday
Weld in one week after eating a hamburger that tasted more like a hot
dog, just after your brother fell over and hurt his knee, on the same
day that Lancashire beat Derbyshire at Lords) that will never amount to
anything ever again.

However, the process of making new concepts can be forced if the
attention system puts some effort into it, knowing (for some high level
reason) that it would be good to cram a lot of things into an episode.
So if you push it, you can remember about seven unrelated things.
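The rationing idea above can be sketched as a single rule. The limit value, the `attention_boost` mechanism, and all names here are my own illustrative assumptions; the source only suggests that the forced ceiling lands around seven:

```python
def maybe_make_concept(components, known_concepts, limit=4, attention_boost=0):
    """Create a new concept element for an unseen conjunction of
    components, but only if the conjunction is small enough -- unless
    the attention system spends effort (attention_boost) to force a
    larger grouping into a single episode."""
    conj = frozenset(components)
    if conj in known_concepts:
        return None  # these things have been seen together before
    if len(conj) > limit + attention_boost:
        return None  # refuse bogus 27-item conjunctions
    known_concepts.add(conj)
    return conj
```

With a baseline limit of a few components, ordinary experience builds small, reusable concepts; pushing `attention_boost` up lets the system deliberately cram about seven unrelated things into one remembered episode.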

Now, if someone asked me what is the size of STM, I would have to ask
what they meant by STM, and in particular I would get frustrated if they
(not you, Ben, but old-time cognitive psychology people) insisted that
all those "partially activated" concepts are not really part of STM,
because they would be defining the different cognitive modules purely in
terms of the effects that can be observed in different kinds of
experiments, and that seems a dumb way of doing it if you are actually
interested in working mechanisms, as I am. To me, STM is the
foreground, and there are probably thousands of things in it.

In Eliezer's comment I saw shades of the approach I mentioned just now,
which leaves open questions about why the seven things were just sitting
in a box having comparisons made between them: the main unanswered
questions being things like "What cognitive architecture is this part
of?" and "Why would it help anything to do the comparisons?".

Richard Loosemore.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT