FW: DGI Paper

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Apr 13 2002 - 15:35:48 MDT


Hey Eliezer,

Here are some fairly fine-grained comments on the theory of mind embodied in
your excellent paper on Deliberative General Intelligence.

Overall I find the paper to be most outstanding, and I think the general
ideas you lay out there are highly consistent with the thinking underlying
my own AI work. Of course, due to the generality of your ideas, they are
bound to be consistent with a great variety of different concrete AI
systems.

In these comments I'll focus on areas of disagreement & areas where your
statements confuse me, although these are relatively minor in the grand
scheme of the paper....

-- ben

*******************

1)
Firstly, about Concepts:

I think your characterization of concepts is in places too narrow. Your
statement “concepts are patterns that mesh with sensory imagery” seems to
me to miss abstract concepts and also concepts related to action rather than
perception. I realize that later on you do mention abstract concepts.

I define a concept as “a relationship between concepts, percepts and actions”.
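
To make the recursion in that definition concrete, here is a toy type sketch
(purely my own illustration, not code from Novamente or from your design):

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Percept:
    label: str

@dataclass(frozen=True)
class Action:
    label: str

@dataclass(frozen=True)
class Concept:
    # A concept is a named relationship among other mind-entities,
    # which may themselves be concepts -- so the definition is
    # recursive and need not bottom out in perception at all.
    relation: str
    args: Tuple["Entity", ...]

Entity = Union[Percept, Action, Concept]

# "5" can plausibly be grounded in percepts...
five = Concept("generalization", (Percept("|||||"),))
# ...but "differential operator" is built entirely from other concepts.
diff_op = Concept("mapping", (Concept("function", ()), Concept("function", ())))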

Mathematical and spiritual concepts are examples of concepts that are
meaningful but not produced in any direct way by generalization from
perception or action. You can say that "5" is produced by generalization
from perception, but what about "differential operator"? What about "God"?
I'm afraid that to view these as generalizations from perception, you need a
terribly general definition of generalization. I think some concepts arise
through self-organizing mind-processes into which perception is just one of
many inputs, not always a very significant one.

Overall, I think that you overemphasize perception as compared to action.
Pragmatically, in Novamente, I’ve found that the latter is at least as hard a
problem to deal with, both in terms of implementation and in terms of working
out the conceptual interface with cognition.

Along these same lines, the definition of a “concept kernel” you give seems
to apply only to perceptually-derived concepts; I think it should be
generalized.

2)
Next, about Thoughts:

You omit to mention that thoughts as well as concepts can be remembered.
Much of episodic memory consists of memories of thoughts! Thus, thoughts
are not really “disposable one-time structures” as you say. Most but not
all are disposed of.

I like dredging up old thoughts from memory and reliving them. Of course,
one can never tell how much change has taken place in the process ;>

When you say “A thought is a specific structure of combinatorial symbols
which builds or alters mental imagery” – I am not sure why “imagery” comes
into it. It seems that you are using this word in a way that is not
necessarily related to visual imagery, which is a little bit confusing. I’d
like to see a definition of “mental imagery” as you use it here. Don’t you
believe some thoughts are entirely non-imagistic, purely abstract without
any reliance on sensory metaphors? Some of mine appear to be,
introspectively -- and I am a highly visual thinker, more so than many
others.

3)
The discussion of “smooth fitness landscapes” is confusing.

Actually, the fitness landscapes confronting intelligent systems are almost
all what are called “rugged fitness landscapes” (a term I got from a whole
bunch of Santa Fe Institute papers on rugged fitness landscapes, some by the
biologist Alan Perelson).

I.e., they are fractal, not smooth.

They have a relatively small average difference quotient, meaning that
|f(x)-f(y)|/|x-y| is on average not too big for pairs (x,y) reasonably close
together. (Strictly speaking, a Lipschitz constant bounds the worst case of
this ratio; what’s small here is the average.)

But in mathematics “smooth” means differentiable (indeed, usually infinitely
differentiable), and the cognitively & evolutionarily relevant fitness
landscapes definitely are NOT.
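
To make this concrete, here is a throwaway numerical sketch (my own
illustration, not anything from your paper or from Novamente). A
midpoint-displacement fractal is jagged at every scale, yet its average
difference quotient stays modest at any reasonable separation -- it only
blows up as the separation shrinks, which is exactly the sense in which it
fails to be smooth:

import random

def rugged_landscape(depth=12, roughness=0.7, seed=1):
    """Midpoint-displacement fractal over [0,1]: jagged at every scale."""
    rng = random.Random(seed)
    f = [0.0, 0.0]
    scale = 0.5
    for _ in range(depth):
        nxt = []
        for a, b in zip(f, f[1:]):
            nxt += [a, (a + b) / 2 + rng.uniform(-scale, scale)]
        nxt.append(f[-1])
        f = nxt
        scale *= roughness  # displacements shrink, but slowly: fractal roughness
    return f

f = rugged_landscape()
n = len(f) - 1  # 2**depth intervals

def avg_ratio(step):
    """Average |f(x)-f(y)| / |x-y| over all pairs separated by step/n."""
    diffs = [abs(f[i + step] - f[i]) for i in range(n - step + 1)]
    return (sum(diffs) / len(diffs)) / (step / n)

# Modest average slope at "reasonable" separations, but the ratio keeps
# growing as the separation shrinks -- the landscape is nowhere smooth.
for step in (n // 4, n // 16, n // 64, 1):
    print(f"separation {step / n:.4f}: average ratio {avg_ratio(step):7.2f}")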

4)
About feature controllers & feature detectors. There is evidence for the
involvement of premotor neurons in conscious perception in the brain. I
reference some old work along these lines in my online paper “Chance and
Consciousness.” I’m sure there’s more recent work too. This doesn’t
directly give evidence for feature controllers but it is a big step in that
direction.

5)
About your suggested modalities: billiards, super-Go, and interpreted code…
it’s not clear that the first two have the richness to support the
education of a non-brittle intelligence. Of course, you don't say they do,
wisely enough. But what do you suggest for modalities for a slightly more
advanced AI? Do you think we need to go to robot eyes and such, or do you
think the Net will suffice?

6)
In 2.5.2, what do you mean by “verify the generalization”? Could you give
a couple examples?

7)
You say “concepts are learned, thoughts are invented.” I don’t quite catch
the sense of this.

Complex concepts are certainly “invented” as well, under the normal
definition of “invention.” …

The concept of a pseudoinverse of a matrix was invented by Moore and
Penrose, not learned by them. I learned it from a textbook.

The concept of "Singularity" was invented as well...

8)
You say “the complexity of the thought level … arises from the cyclic
interaction of thoughts and mental imagery.”

I think this is only one root of the complexity of thoughts.

Self-organizing mental processes acting purely on the thought level seem to
play a big role as well.

9)
You say, “in humans, the perception of confidence happens to exhibit a
roughly quantitative strength….”

I don’t think this is something that “just happens” to be the case in
humans. I think that this quantification of the qualitative (the creation
of numerical truth values corresponding to mental entities) is a key part of
intelligence. I bet it will be part of ANY intelligence. There is a huge
efficiency to operating with numbers.

Western science only developed via the advent of widespread quantification –
it’s valuable in science for the same reason that it is in the brain.

Also, in fact, I believe the human notion of truth value is not a single
number but involves at least 2-3 numbers; in Novamente we work minimally
with triples (probability, confidence, weight of evidence).
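
For concreteness, here is a stripped-down sketch of what I mean by a
multi-component truth value (the field names and merge rule are illustrative
simplifications of my own, not Novamente’s actual internals):

from dataclasses import dataclass

@dataclass
class TruthValue:
    probability: float     # estimated likelihood the relation holds
    confidence: float      # in [0,1]: how much to trust that estimate
    evidence: float        # total weight of evidence seen so far

def merge(a: TruthValue, b: TruthValue) -> TruthValue:
    """Pool two independent estimates: evidence-weighted average of the
    probabilities; more total evidence yields higher confidence."""
    total = a.evidence + b.evidence
    p = (a.probability * a.evidence + b.probability * b.evidence) / total
    k = 10.0  # arbitrary "personality parameter", for illustration only
    return TruthValue(p, total / (total + k), total)

# "Birds fly": fairly probable, but the two estimates rest on different
# amounts of evidence, and a lone probability can't express that.
print(merge(TruthValue(0.9, 0.23, 3.0), TruthValue(0.8, 0.50, 10.0)))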

10)
You say “ ‘one thought at a time’ is just the human way of doing things ….”

Actually it isn’t; I often have more than one thought at a time.

Right now, I am simultaneously thinking about thoughts, and thinking about
getting up to get something to eat ... and thinking about something
deliciously X-rated.... This particular kind of simultaneity usually
indicates to me that I've been working too long ;>

Usually one thought is in the foreground and the others are vying for
attention but dimmer. But sometimes 2 thoughts share the foreground, e.g.
when chatting on the phone with my loquacious grandfather while
simultaneously answering e-mails...

I do agree though that digital minds will be able to multitask actions and
thoughts much more thoroughly than humans can. None of us can pursue more
than, say, 5 trains of thought simultaneously (I can only handle 2-3), but
an AI should be able to be much more flexible by diverting more resources
from unconscious processing as situationally necessary.

11)
Regarding your discussion of the need (or otherwise) for language in digital
minds:

I am guessing that AI will come most easily via building a community of
intercommunicating AIs. Communication between AIs need not proceed using
linear sequences of characters or sounds, though. It can proceed via
various sorts of brain-to-brain transfer. But this requires some art; in
Novamente we’ve designed a language called Psynese for brain-to-brain
transfer between different Novamente instances.
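
Purely as a hypothetical illustration of the general idea -- this is NOT
Psynese’s actual format -- structure-level transfer might look like shipping
a fragment of one mind’s semantic network directly, rather than flattening
it into a sentence:

import json

fragment = {
    "nodes": [
        {"id": "cat", "type": "concept"},
        {"id": "mat", "type": "concept"},
    ],
    "links": [
        {"rel": "on", "args": ["cat", "mat"],
         "truth": {"probability": 0.9, "confidence": 0.6}},
    ],
}

wire = json.dumps(fragment)  # sender: no linearization into words

# Receiver merges the fragment into its own network. The "art" is in
# mapping the sender's node ids onto the receiver's own concepts, since
# no two minds' vocabularies line up exactly.
for link in json.loads(wire)["links"]:
    print(link["rel"], link["args"], link["truth"])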

12)
When you discuss the various kinds of binding, I’m not sure why you have a
sensory binding but not an action binding.

You deal with actions in the context of decisive bindings, but I think
actions can sometimes be bound in a way that has nothing to do with goals.

13)
You say that “evolution… cannot boast general intelligence.”

This is not so clear to me. Why? It seems to me that evolution in fact
does display a pretty generalized kind of intelligence.

This is a deep philosophical point of course, and not directly relevant to
digital mind design.

You could say perhaps that evolution is relatively unintelligent because it
requires such vast space and time resources to achieve its goals.

OK, 13 is my lucky number, so I'll stop here.

BTW, I'll be out of town Sunday afternoon thru Wednesday night, so my e-mail
responses will likely come daily rather than hourly ;-)

-- ben g


