RE: FW: DGI Paper

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Apr 13 2002 - 18:42:55 MDT


hi,

> > 1)
> > I think your characterization of concepts is in places too narrow. Your
> > statement “concepts are patterns that mesh with sensory imagery” seems
> > to me to miss abstract concepts and also concepts related to action
> > rather than perception. I realize that later on you do mention abstract
> > concepts.
>
> Concepts can generalize over the perceptual correlates of realtime skills
> and generalize over reflective percepts. The same "kernel" idiom applies.

By a "reflective percept" you mean a perception of something inside the mind
rather than something in the external world?

It wasn't clear to me that this was contained within your definition of a
"percept" -- but if it is, that clarifies a lot.

> "Differential operator" is abstract but that doesn't mean it's
> non-perceptual. It means that its important perceptual correlates are
> abstract perceptual models and realtime skills in abstract
> models,

I don't think I understand your use of the terms "percept" and
"perception". Could you tell me how you define them? You seem to be using
them much more broadly than I do, which may be the source of much of my
confusion.

> For example, you might recognize the operator "d/dx" visually, apply it
> to a symbol with the auditory tag "x squared", and end up with a symbol
> with the auditory tag "two x". Of course this is more of a perceptual
> correlate than the perception itself.

Sure, but when one comes up with a NEW mathematical concept, sometimes it is
not associated with ANY visual, auditory or otherwise "imagistic" stuff.
It's purely a new math concept, which then has to be, through great labor,
associated with appropriate symbols, pictures, names, or what have you.

> Far as I know, they're all perceptual in the end. It's just that the
> perceptual idiom - modalities, including feature structure,
> detector/controller structure, and occasionally realtime motor
> structure - extends far beyond things like vision and sound, to include
> internal reality as well.

This is getting to the crux of my issue, I think. You define "perception"
as a kind of abstract structure/process, but in the paper I don't think it's
entirely clear that this is how you're defining "perception". At least it
wasn't that clear to me. I generally think of perception as having to do
with the processing of stimuli from the external world.

Based on your very broad definition of perception, I'm not sure how to
distinguish it from cognition. I guess in your view perception serves two
roles:

1) processing external-world data
2) acting as one among many cognitive structures/processes

I don't think this is the standard use of the term "perception", though
there's nothing particularly wrong with it once it's understood.

I'm still not sure however that a new abstract math concept that I conceive
in the bowels of my unconscious is "perceptual in the end." I think that
its conception may in some cases NOT involve feature structures and
detector/controller structures. A new math concept may arise thru
combinatory & inferential operations on existing math concepts, without any
of the perceptual/motor hierarchy-type structures you're describing.

Math concepts are not the only example of this, of course; they're just a
particularly clear example because of their highly abstract nature.

> > When you say “A thought is a specific structure of combinatorial symbols
> > which builds or alters mental imagery” - I am not sure why “imagery”
> > comes into it. It seems that you are using this word in a way that is
> > not necessarily related to visual imagery, which is a little bit
> > confusing. I’d like to see a definition of “mental imagery” as you use
> > it here.
>
> I need to emphasize more that when I say "imagery" I am referring to
> generalized working memory in all perceptual modalities, not just the
> visual modality.

The key point is still, however, whether by "perceptual modalities" you mean
modalities for sensing the external world, or something more abstract.

I don't think that a new math concept I cook up necessarily has anything to
do with imagery derived from any of the external-world senses. Of course
connections with sensorimotor domains can be CREATED, and must be for
communication purposes. But this may not be the case for AIs, which will
be able to communicate by direct exchange of mindstuff rather than via
structuring physicalistic actions & sensations.

> > Don’t you believe some thoughts are entirely non-imagistic, purely
> > abstract without any reliance on sensory metaphors?
>
> I think some thoughts rely on reflective imagery or imagery which is not
> visualized all the way down to the sensory level.

Again this same language. You're talking about some kind of "visualizing"
at a non-sensory level. I'm not sure what you mean by "visualizing" then.

> > But “smooth” always means continuous (or differentiable) in mathematics,
> > and the cognitively & evolutionarily relevant fitness landscapes
> > definitely are NOT.
>
> "Smooth" in fitness landscapes means that similar things are separated by
> short distances, and especially that incremental improvements are short
> distances. In the case of a modality smoothing a raw scene, you can think
> of distance as being the distance between feature detectors instead of the
> distance between raw pixels, or "distance" as being inversely proportional
> to the probability of that step being taken within the system.

This is just a terminology point, but I still think that your terminology is
not the standard one.

I still believe that a fitness landscape with local minima and maxima at
all perceivable scales is not "smooth" in the standard terminology. It's
fractal.

The processing done in visual & auditory cortex often resembles
windowed-Fourier or wavelet transforms, and this does result in a kind of
smoothing, in that high-frequency components are omitted.
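
To make that concrete, here's a toy Python sketch of "smoothing by
discarding high-frequency components" -- purely illustrative, using a plain
FFT rather than the windowed/wavelet transforms cortex actually resembles:

import numpy as np

def lowpass_smooth(signal, keep_fraction=0.1):
    # Transform to the frequency domain, zero out the high-frequency
    # components, and transform back. A crude cousin of what early
    # sensory processing does to a raw scene.
    spectrum = np.fft.rfft(signal)
    cutoff = int(len(spectrum) * keep_fraction)
    spectrum[cutoff:] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# A "fractal-ish" landscape, with structure at every scale:
x = np.linspace(0, 1, 1024)
landscape = sum(np.sin(2**k * np.pi * x) / 2**k for k in range(10))
smoothed = lowpass_smooth(landscape)  # far fewer local optima

The smoothed version retains local optima only at coarse scales, which is
the sense in which such a transform makes a fractal landscape tractable.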

Anyway it would be good if you just clarified in the text what you meant by
"smooth" -- it's certainly no big deal.

> Coopting premotor neurons sounds like coopting the sensorimotor modality
> to support Lakoff & Johnson's sensorimotor metaphors, not to support
> reflective realtime skills per se.

Might be so, or they could serve both roles. These
perceptual-cognitive-active loops are complex and go beyond my knowledge of
neurobiology.

> > Do you think we need to go to robot eyes and such, or do you think the
> > Net will suffice?
>
> Billiards and mini-Go and code are definitely not rich enough because they
> can't easily support the classic Lakoff and Johnson schema such as
> line-connection, part-whole, center-periphery, container-contained, and so
> on. But I can't see any good way to do that without gritting teeth and
> starting on a 3D pixel/voxel world, which may be too ambitious for a first
> AI.
>
> The Net can't help you here. You can't have a modality with a
> computationally tractable feature structure unless your target environment
> *has* that kind of structure to begin with. If you're going to put a baby
> AI in a rich environment, the richness has to be the kind that the baby AI
> can learn to see incrementally.

I don't understand why you think a baby AI can't learn to see the Net
incrementally.

> > 6)
> > In 2.5.2, what do you mean by “verify the generalization”? Could you
> > give a couple examples?
>
> What I mean is that noticing a perceptual cue that all the billiards in
> the "key" group are red, and that all the billiards in the "non-key"
> group are not red, is not the same as verifying that this is actually the
> case. The cognitive process that initially delivers the perceptual cue,
> the suggestion saying "Hey, check this out and see if it's true", may not
> always be the one that does the verification.

So the verification is just done by more careful study of the same perceived
scene, in this case?

>
> > 7)
> > You say “concepts are learned, thoughts are invented.” I don’t quite
> > catch the sense of this.
> >
> > Complex concepts are certainly “invented” as well, under the normal
> > definition of “invention.” ...
> >
> > The concept of a pseudoinverse of a matrix was invented by Moore and
> > Penrose, not learned by them. I learned it from a textbook.
> >
> > The concept of "Singularity" was invented as well...
>
> Well, you can learn a concept from the thoughts that you invent -
> generalize a kernel over the reflective perceptual correlates of the
> thoughts. But the concept-creating cognitive process will still reify
> ("learn") a perception, and the deliberative thought process that created
> the abstract/reflective perceptions being reified will still be
> inventive.

I don't understand this. If I create a silly concept right now, such as,
say,

"Differential functions on [-5,5] whose third derivative is confined to the
interval [0,1]"

then how is this concept LEARNED? I didn't learn this, I just INVENTED it.
It's a concept. A damn useless one (though maybe some thinking will
discover that it's useful after all...), but a concept nonetheless. There
also seems to be no PERCEPTION involved here, in any way I can understand.
No external sensations, directly or metaphorically, and also no process of
hierarchical feature detection & control. Rather, a simple process of
*combining known concepts*. I guess you can say that my mind had to
"perceive" these known concepts in some sense in order to combine them, but
this "perception" isn't perception in any very strong sense -- it's
perception only in the sense that any mental schema "perceives" the
arguments that are fed into it...
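
To put the same point in code, here's a toy Python sketch -- my own
illustration, nothing to do with either of our architectures -- in which a
"new concept" is just a new membership predicate composed from existing
ones, with no perceptual machinery anywhere:

import numpy as np

def conjoin(*concepts):
    # "Invent" a new concept by conjoining existing ones -- pure
    # combination of known concepts, no imagery or feature hierarchy.
    return lambda f: all(c(f) for c in concepts)

def third_derivative_in(lo, hi, a=-5.0, b=5.0):
    # Concept: functions on [a,b] whose third derivative (estimated
    # numerically by finite differences) stays within [lo, hi].
    def check(f):
        x = np.linspace(a, b, 2001)
        h = x[1] - x[0]
        y = np.array([f(t) for t in x])
        d3 = np.diff(y, n=3) / h**3
        return bool(np.all((d3 >= lo) & (d3 <= hi)))
    return check

nonneg_at_zero = lambda f: f(0.0) >= 0
silly_concept = conjoin(third_derivative_in(0.0, 1.0), nonneg_at_zero)

print(silly_concept(lambda t: t**3 / 12))  # f''' = 0.5 everywhere: True
print(silly_concept(np.cos))               # f''' = sin(t), leaves [0,1]: False

Of course, whether this kind of predicate-composition deserves the name
"perception" is exactly what's at issue.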

> > Self-organizing mental processes acting purely on the thought level
> > seem to play a big role as well.
>
> "Is embodied by" might be a better term than "arises". Even so, which
> specific self-organizing processes?

Evolutionary & hypothetically-inferential combination of existing concepts
(& parts thereof) into new ones, guided by detected associations between
concepts, with a complex dynamic of attention allocation guiding the
control of the process.
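
Schematically -- and this is just my own toy Python, not Novamente code --
the kind of loop I have in mind looks something like:

import random

def combine(a, b):
    # Stand-in for crossover / inferential blending of two concepts.
    return ("blend", a, b)

def evolve_concepts(concepts, fitness, salience, generations=100):
    # Repeatedly combine existing concepts into new candidates, sampling
    # parents in proportion to their salience (a crude stand-in for
    # attention allocation), and keep the fittest at a fixed pool size.
    pool = list(concepts)
    for _ in range(generations):
        a, b = random.choices(pool, weights=[salience(c) for c in pool], k=2)
        pool.append(combine(a, b))
        pool.sort(key=fitness, reverse=True)   # selection pressure
        pool = pool[:len(concepts)]
    return pool

The real dynamic is far messier, of course; the point is only that nothing
in the loop requires a perceptual hierarchy.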

> What I mean is that the way humans perceive confidence, quantitatively,
> may not be the *best* way to perceive confidence. It may not even be the
> way humans perceive confidence. As you said, in Novamente you work with
> triples.

Sure, there are lots of ways to measure truth value, and tradeoffs with all
of them. No doubt an advanced AI will rewrite whatever we initially insert
in this slot of its design.
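
For concreteness, one such representation might look like the following --
my own illustrative sketch in Python, not Novamente's actual data
structure, and the constant K is purely hypothetical:

from dataclasses import dataclass, field

K = 10.0  # hypothetical constant scaling evidence into confidence

@dataclass
class TruthValue:
    strength: float    # probability-like value in [0, 1]
    count: float       # amount of evidence behind the estimate
    confidence: float = field(init=False)

    def __post_init__(self):
        # More evidence -> confidence asymptotically approaches 1.
        self.confidence = self.count / (self.count + K)

Each choice of K, and each rule for combining such values under inference,
involves exactly the sort of tradeoffs I mean.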

> > 10)
> > You say “ ‘one thought at a time’ is just the human way of doing things
> > ....”
> >
> > Actually it isn’t, I often have more than one thought at a time.
>
> No, you often have mental imagery that depicts ongoing cognition within
> more than one train of thought, and you switch around the focus of
> attention,

I feel that my focus of attention can span two or three different thoughts
at once, sometimes.

> which means that more than one deliberative track can coexist. You still
> think only one thought at a time. Or do you mean that you pronounce more
> than one mental sentence at a time? You've got to keep the thought level
> and the deliberation level conceptually separate; I said "one thought at a
> time", not "one deliberation at a time".

I don't understand how you define "thought", then. Could you give me a
clearer definition?

And please don't use a variant of the "there can only be one at a time"
restriction in the definition! ;)

So far as I know, the physiology of human consciousness indicates that
humans can have multiple perceptual-cognitive-active loops of conscious
awareness running at once.

Consciousness often has a subjective "unity" to it, but in my experience,
not always.

> As discussed in the section on seed AI, I think that splitting up
> available brainpower into separate entities is less productive than
> agglomerating it.

Well, this will be a fun issue to explore empirically!! There's no real
need to resolve it now, in my view; I think it doesn't make a big difference
for AI engineering.

> But an "action binding" that doesn't involve a feedback
> loop, just a
> direct correspondence between a patterned variable in cognition and a
> patterned variable in motor reality, is just another kind of
> sensory mapping
> - albeit one where causality flows in the opposite direction.

I guess that if you count kinesthetic sensation as a sense, then all motor
actions can be mapped into the domain of sensation and considered that way.
In practice of course, these particular "sensory mappings" (that are really
motor mappings ;) will have to be treated pretty differently than the other
sensory mappings.

> > 13)
> > You say that “evolution… cannot boast general intelligence.”
> ["cannot invoke"]
> > This is not so clear to me. Why? It seems to me that evolution in fact
> > does display a pretty generalized kind of intelligence.
>
> Because there is a difference between genericity and generality.
>
> Evolution, like search trees and artificial neural networks, is a fully
> generic process. But it works much better for some things than others,
> and some things it can't handle at all. It can't do many of the things
> that human intelligence does. You can apply a generic process to anything
> but it won't necessarily work. Usually it only solves a tiny fraction of
> special cases of the problem (which AI projects usually go on to mistake
> for having solved the general case of the problem; this is one of the
> Deadly Sins). Evolution uses an unreasonable amount of computational
> power to overcome this handicap.

I think that any black-box global optimization algorithm -- including
evolution, and some NN and search-tree based algorithms -- has a kind of
"general intelligence." The problem is that it uses too many resources.
Human brains achieve far more general intelligence per unit of space and
time resources than evolutionary systems.

What I mean by "general intelligence" is roughly "the ability to solve a
variety of complex problems in a variety of complex environments." As you
know I've tried with moderate success to quantify and formalize this
definition. I'm not sure exactly what you mean by "general intelligence",
maybe it's something different.
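
Roughly, the formalization I've played with amounts to a weighted average
of performance over environments -- the Python sketch below is a cartoon of
the idea, not the actual formal definition:

def general_intelligence(system, environments, complexity, performance):
    # Cartoon of "the ability to solve a variety of complex problems in a
    # variety of complex environments": average performance across
    # environments, weighted toward the more complex ones. A serious
    # version would also normalize for space and time resources consumed.
    total = sum(complexity(e) for e in environments)
    return sum(complexity(e) * performance(system, e)
               for e in environments) / total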

-- Ben

p.s. Well, at least this thread is counteracting the recent mini-trend of
SL4 becoming a chatty list ;>


