Re: FW: DGI Paper

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Apr 13 2002 - 17:17:42 MDT


Ben Goertzel wrote:
>
> Hey Eliezer,
>
> Here are some fairly fine-grained comments on the theory of mind embodied in
> your excellent paper on Deliberative General Intelligence.
>
> Overall I find the paper to be most outstanding, and I think the general
> ideas you lay out there are highly consistent with the thinking underlying
> my own AI work. Of course, due to the generality of your ideas, they are
> bound to be consistent with a great variety of different concrete AI
> systems.

Heck, most of it is supposed to be consistent with humans! So I'd have to
agree with you there.

> In these comments I'll focus on areas of disagreement & areas where your
> statements confuse me, although these are relatively minor in the grand
> scheme of the paper....

Thanks. Incidentally, I did say at the end that the paper wasn't complete.
I meant it. I'd originally planned a 200K paper in one month; instead I
wound up doing a roughly 400K paper in three months. I was already worried
that you were going to have a heart attack and insist that I chop half of it
out, which I'm not sure I could've. So I had to omit a few things. In
fact, I had to omit everything that could reasonably be left out without
crippling the theory.

In this case I mentioned, but did not go into detail on, realtime skills and
the reflective modality. Of course this is roughly equivalent to talking
about the brain and "mentioning but not explaining" the cerebellum and
prefrontal cortex. Sorry about that, but as I said I was already worried
you were going to have a heart attack about the length. If you seriously
think there's room for the chapter to become substantially longer, I may be
able to find time to expand it (and maybe not; I've already spent far too
much time).

Anyway, most of the things you mention below need to be explained with
reference to reflectivity, realtime skills, or even realtime reflective
skills.

> 1)
> I think your characterization of concepts is in places too narrow. Your
> statement “concepts are patterns that mesh with sensory imagery” seems to
> me to miss abstract concepts and also concepts related to action rather than
> perception. I realize that later on you do mention abstract concepts.

Concepts can generalize over the perceptual correlates of realtime skills
and generalize over reflective percepts. The same "kernel" idiom applies.

> Mathematical and spiritual concepts are examples of concepts that are
> meaningful but not produced in any direct way by generalization from
> perception or action. You can say that "5" is produced by generalization
> from perception, but what about "differential operator"? What about "God"?
> I'm afraid that to view these as generalizations from perception, you need a
> terribly general definition of generalization.

"Differential operator" is abstract but that doesn't mean it's
non-perceptual. It means that its important perceptual correlates are
abstract perceptual models and realtime skills in abstract models, although
this does not exclude interlacing with the visual and auditory modalities.
For example, you might recognize the operator "d/dx" visually, apply it to a
symbol with the auditory tag "x squared", and end up with a symbol with the
auditory tag "two x". Of course this is more of a perceptual correlate than
the perception itself. Abstract imagery is also handled by the modality
level, as discussed, but it tends to be handled by the higher levels of the
modality in a way that's pretty far from what we think of as "sensory"
imagery. The abstract imagery is simpler than fully visualized sensory
imagery, as discussed, but there's also an added layer of complexity that
comes from generalizing over reflective perceptual correlates such as the
goal context, and from generalizing over perceptual correlates of the
realtime reflective skills that manipulate abstract imagery.

Hope that made sense, or at least enough to give you a general idea of where
I'm coming from...

> I think some concepts arise
> through self-organizing mind-processes into which perception is just one of
> many inputs, not always a very significant one.

Far as I know, they're all perceptual in the end. It's just that the
perceptual idiom - modalities, including feature structure,
detector/controller structure, and occasionally realtime motor structure -
extends far beyond things like vision and sound, to include internal reality
as well.

> Overall, I think that you overemphasize perception as compared to action.
> Pragmatically, in Novamente, I’ve found that the latter is at least as hard
> a problem to deal with, in terms of implementation and in terms of
> working out the conceptual interface with cognition.

Yep, like I said I left the cerebellum out due to length constraints. You
can give a complete account of something that is "intelligence" even if the
realtime skills are very crude - they'll just get done with goals and
subgoals instead. You can think of realtime skills as being a conceptually
simplified but computationally intensive version of goal processing that
involves limited-complexity targets and a limited number of dynamic
achievement processes within a modality workspace.

> Along these same lines, the definition of a “concept kernel” you give seems
> to apply only to perceptually-derived concepts; I think it should be
> generalized.

Kernels go everywhere; it's perception that gets generalized.

> 2)
> Next, about Thoughts:
>
> You omit to mention that thoughts as well as concepts can be remembered.
> Much of episodic memory consists of memories of thoughts! Thus, thoughts
> are not really “disposable one-time structures” as you say. Most but not
> all are disposed of.

Thoughts have perceptual correlates. The correlates get remembered.
Another reflective modality issue. I did mention this briefly, I think,
while I was discussing the role of the stream of consciousness in human
intelligence.

> When you say “A thought is a specific structure of combinatorial symbols
> which builds or alters mental imagery” – I am not sure why “imagery” comes
> into it. It seems that you are using this word in a way that is not
> necessarily related to visual imagery, which is a little bit confusing. I’d
> like to see a definition of “mental imagery” as you use it here.

I need to emphasize more that when I say "imagery" I am referring to
generalized working memory in all perceptual modalities, not just the visual
modality.

> Don’t you
> believe some thoughts are entirely non-imagistic, purely abstract without
> any reliance on sensory metaphors?

I think some thoughts rely on reflective imagery or imagery which is not
visualized all the way down to the sensory level.

> Some of mine appear to be,
> introspectively -- and I am a highly visual thinker, more so than many
> others.
>
> 3)
> The discussion of “smooth fitness landscapes” is confusing.
>
> Actually, the fitness landscapes confronting intelligent systems are almost
> all what are called “rugged fitness landscapes” (a term I got from a whole
> bunch of Santa Fe Institute papers on rugged fitness landscapes, some by
> biologist Alan Perelson).

Yes, the fitness landscapes confronting intelligent systems are all very
sharp and rough. Sensory modalities and learned categories smooth them so
that they are computationally tractable for thought. A visual scene is
extremely rough when handled by a process that sees it as a field of raw
pixels with no edges, textures, shading, etc.

> But “smooth” always means continuous (or differentiable) in mathematics, and
> the cognitively & evolutionarily relevant fitness landscapes definitely are
> NOT.

"Smooth" in fitness landscapes means that similar things are separated by
short distances, and especially that incremental improvements are short
distances. In the case of a modality smoothing a raw scene, you can think
of distance as being the distance between feature detectors instead of the
distance between raw pixels, or "distance" as being inversely proportional
to the probability of that step being taken within the system.
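
Just to make the "short distances" reading concrete, here's a toy sketch in
Python (my illustration, not anything from the paper; the 16x16 "scene" and the
crude edge-feature map are made up for the example). Two nearly identical
scenes are far apart when distance is measured over raw pixels, and close
together when distance is measured over feature-detector outputs - which is the
sense in which the modality smooths the landscape:

    import numpy as np

    def edge_features(img):
        # crude "feature detectors": total horizontal and vertical contrast
        dx = np.abs(np.diff(img, axis=1)).sum()
        dy = np.abs(np.diff(img, axis=0)).sum()
        return np.array([dx, dy])

    rng = np.random.default_rng(0)
    scene = rng.random((16, 16))
    shifted = np.roll(scene, 1, axis=1)  # the "same" scene, nudged one pixel sideways

    raw_distance = np.abs(scene - shifted).sum()  # large: the raw-pixel landscape is rough
    feature_distance = np.abs(edge_features(scene) - edge_features(shifted)).sum()  # small

    print(raw_distance, feature_distance)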

> 4)
> About feature controllers & feature detectors. There is evidence for the
> involvement of premotor neurons in conscious perception in the brain. I
> reference some old work along these lines in my online paper “Chance and
> Consciousness.” I’m sure there’s more recent work too. This doesn’t
> directly give evidence for feature controllers but it is a big step in that
> direction.

I would expect concept substrate to govern feature controllers. If visual
concept kernels are identified with inferior temporal areas; and if the
higher-level concepts that bind multiple kernels together are identified
with the association areas in the posterior, superior temporal areas; then
I'd expect mental imagery to take place through the invocation of a
higher-level associative concept (posterior superior temporal) that invokes
the kernels (inferior temporal) that feed back through the ventral
processing stream and create depictive imagery in the actual visual areas.

Feature controllers are the inverses of feature detectors; they do not
appear as internal actions.

What you're attributing to premotor neurons sounds more like reflective
internal actions (i.e., realtime skills within the reflective modality)
rather than feature controllers. I would tend to put reflective feature
detectors and feature controllers in prefrontal cortex (of course) and
realtime reflective skills in the cerebellum (of course). Coopting premotor
neurons sounds like coopting the sensorimotor modality to support Lakoff &
Johnson's sensorimotor metaphors, not to support reflective realtime skills
per se.

> 5)
> About your suggested modalities: billiards, super-Go, and interpreted code…
> it’s not clear that the former two have the richness to support the
> education of a non-brittle intelligence. Of course, you don't say they do,
> wisely enough. But what do you suggest for modalities for a slightly more
> advanced AI? Do you think we need to go to robot eyes and such, or do you
> think the Net will suffice?

Billiards and mini-Go and code are definitely not rich enough because they
can't easily support the classic Lakoff and Johnson schemas such as
line-connection, part-whole, center-periphery, container-contained, and so
on. But I can't see any good way to do that without gritting teeth and
starting on a 3D pixel/voxel world, which may be too ambitious for a first
AI.

The Net can't help you here. You can't have a modality with a
computationally tractable feature structure unless your target environment
*has* that kind of structure to begin with. If you're going to put a baby
AI in a rich environment, the richness has to be the kind that the baby AI
can learn to see incrementally.

> 6)
> In 2.5.2, what do you mean by “verify the generalization”? Could you give
> a couple examples?

What I mean is that noticing a perceptual cue that all the billiards in the
"key" group are red, and that all the billiards in the "non-key" group are
not red, is not the same as verifying that this is actually the case. The
cognitive process that initially delivers the perceptual cue, the suggestion
saying "Hey, check this out and see if it's true", may not always be the one
that does the verification.
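
If it helps, here's a minimal sketch of that split in Python (my toy example,
not from the paper; the billiard list and function names are invented). One
cheap process notices the suggestive regularity from a glance at a couple of
billiards, and a separate process does the actual exhaustive check:

    billiards = [
        {"group": "key", "color": "red"},
        {"group": "key", "color": "red"},
        {"group": "non-key", "color": "blue"},
        {"group": "non-key", "color": "green"},
    ]

    def perceptual_cue(sample):
        # cheap heuristic pass over a small sample: "key iff red" looks plausible
        return all((b["color"] == "red") == (b["group"] == "key") for b in sample[:2])

    def verify_generalization(population):
        # separate, exhaustive check over the whole group, not just the cueing sample
        return all((b["color"] == "red") == (b["group"] == "key") for b in population)

    if perceptual_cue(billiards):
        # "Hey, check this out and see if it's true"
        print("verified:", verify_generalization(billiards))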

> 7)
> You say “concepts are learned, thoughts are invented.” I don’t quite catch
> the sense of this.
>
> Complex concepts are certainly “invented” as well, under the normal
> definition of “invention.” …
>
> The concept of a pseudoinverse of a matrix was invented by Moore and
> Penrose, not learned by them. I learned it from a textbook.
>
> The concept of "Singularity" was invented as well...

Well, you can learn a concept from the thoughts that you invent - generalize
a kernel over the reflective perceptual correlates of the thoughts. But the
concept-creating cognitive process will still reify ("learn") a perception,
and the deliberative thought process that created the abstract/reflective
perceptions being reified will still be inventive.

> 8)
> You say “the complexity of the thought level … arises from the cyclic
> interaction of thoughts and mental imagery.”
>
> I think this is only one root of the complexity of thoughts.
>
> Self-organizing mental processes acting purely on the thought level seem to
> play a big role as well.

"Is embodied by" might be a better term than "arises". Even so, which
specific self-organizing processes?

> 9)
> You say, “in humans, the perception of confidence happens to exhibit a
> roughly quantitative strength….”
>
> I don’t think this is something that “just happens” to be the case in
> humans. I think that this quantification of the qualitative (the creation
> of numerical truth values corresponding to mental entities) is a key part of
> intelligence. I bet it will be part of ANY intelligence. There is a huge
> efficiency to operating with numbers.

What I mean is that the way humans perceive confidence, quantitatively, may
not be the *best* way to perceive confidence. It may not even actually be the
way humans perceive confidence. As you said, in Novamente you work with
triples.

> 10)
> You say “ ‘one thought at a time’ is just the human way of doing things ….”
>
> Actually it isn’t, I often have more than one thought at a time.

No, you often have mental imagery that depicts ongoing cognition within more
than one train of thought, and you switch around the focus of attention,
which means that more than one deliberative track can coexist. You still
think only one thought at a time. Or do you mean that you pronounce more
than one mental sentence at a time? You've got to keep the thought level
and the deliberation level conceptually separate; I said "one thought at a
time", not "one deliberation at a time".

> 11)
> Regarding your discussion of the need (or otherwise) for language in digital
> mind.
>
> I am guessing that AI will come most easily via building a community of
> intercommunicating AI’s. Communication between AI’s need not proceed using
> linear sequences of characters or sounds, though. It can proceed via
> various sorts of brain-to-brain transfer. But this requires some art; in
> Novamente we’ve designed a language called Psynese for brain-to-brain
> transfer between different Novamente instances.

As discussed in the section on seed AI, I think that splitting up available
brainpower into separate entities is less productive than agglomerating it.

> 12)
> When you discuss the various kinds of binding, I’m not sure why you have a
> sensory binding but not an action binding.
>
> You deal with actions in the context of decisive bindings, but, I think
> sometimes actions can be bound in a way that has nothing to do with goals.

I suppose you could call an "action binding" the correspondence between
muscle commands and mental pictures of muscle commands, but I think you
would need to complete the picture with a feedback loop from proprioception
before it became a cognitively real binding. And in that case what you have
is a realtime manipulative binding, which is of course one of the coolest
kinds. But an "action binding" that doesn't involve a feedback loop, just a
direct correspondence between a patterned variable in cognition and a
patterned variable in motor reality, is just another kind of sensory mapping
- albeit one where causality flows in the opposite direction.

> 13)
> You say that “evolution… cannot boast general intelligence.”
["cannot invoke"]
> This is not so clear to me. Why? It seems to me that evolution in fact
> does display a pretty generalized kind of intelligence.

Because there is a difference between genericity and generality.

Evolution, like search trees and artificial neural networks, is a fully
generic process. But it works much better for some things than others, and
some things it can't handle at all. It can't do many of the things that
human intelligence does. You can apply a generic process to anything but it
won't necessarily work. Usually it only solves a tiny fraction of special
cases of the problem (which AI projects usually go on to mistake for having
solved the general case of the problem; this is one of the Deadly Sins).
Evolution uses an unreasonable amount of computational power to overcome
this handicap.
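
A toy way to see the genericity/generality distinction (my example, nothing to
do with how evolution actually searches): the same generic hill-climber can be
pointed at any fitness function, but it only "works" on the special cases whose
structure happens to match it.

    def hill_climb(fitness, x=0, steps=1000):
        # fully generic local search over the integers
        for _ in range(steps):
            best = max((x - 1, x, x + 1), key=fitness)
            if best == x:
                break
            x = best
        return x

    smooth = lambda x: -abs(x - 50)                    # one gentle peak at 50
    deceptive = lambda x: 100 if x == 50 else -abs(x)  # isolated spike at 50

    print(hill_climb(smooth))     # 50: the generic process happens to fit this case
    print(hill_climb(deceptive))  # 0: applied just as easily, solves nothing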

> You could say perhaps that evolution is relatively unintelligent because it
> requires too many space and time resources to achieve its goals.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


