RE: Curriculum for AI

From: Ben Goertzel (
Date: Tue Dec 31 2002 - 19:20:13 MST

Colin Hales wrote:
> I have had a look at your doc. Two issues, IMO:
> Issue a. Learner Type
> -------------------------------------
> The issue of 'intuition' and other comments in the doc about
> pre-configured
> knowledge indicates that there is something in need of more
> attention. Can I
> suggest explicitly recognising it? The choices are roughly classes like
> 'shock levels' :-)
> 1) Automaton. Fixed learning about X,Y,Z...No training.
> 2) Learner. Learns X,Y,Z.....
> 3) Meta Learner. Learns to learn X,Y,Y....
> 4) Meta-Meta Learner. This machine is Eliezer's SL4 self-modifying
> subliming beastie! Performs brain surgery on itself. Too hard for my poor
> brain.

These very closely resemble the "levels of learning" introduced by Gregory
Bateson in his book "Mind and Nature" and his earlier writings.

He discusses learning, learning how to learn, and learning how to learn how
to learn. He gives examples of each in human thinking, and posits that the
human brain doesn't extend to "learning how to learn how to learn how to
learn" in significant ways.

> The assumption in the given training course is class 2). At least
> it appears
> to be.

Well, it's true that Michael's tests do not explicitly seek to distinguish
learning from metalearning etc.

However, I think they can be used to test metalearning ability, with a
little cleverness...

If a system is able to "learn how to learn", then it should be able to carry
over learning from one test to the next in the test suite.

So, one could test metalearning as follows, using his suite of tests
T(1),...,T(N). Given a particular AI system S, for each test T(K) one could
either

a) test S on the test T(K) directly, or

b) test S on a number of prior tests drawn from {T(1),...,T(K-1)}, and then
test it on T(K)

[One would want to choose the prior tests T(i) so that knowledge of how to
pass T(i) does not obviously provide material assistance in knowing how to
pass T(K).]
If the system can learn how to learn, then it should pass the test T(K)
*significantly faster* in case b) than in case a), because it should have
learned how to learn, to some extent, from its experience with the prior
tests.
One could construct a similar regime to test learning how to learn how to
learn, though it would be a lot more elaborate and time-consuming.

> Like Peter, I see assumptions (in the intro) that belie an implicit and
> specific philosophical/design position.

Peter Voss did not say anything nearly this strong, as I interpreted his
comments.
Of course, he can clarify his opinion for himself if he so wishes ;)

> Renamed, say, "Training
> for a Class
> 2) , Human-grounded unembodied AI",

The problem with making a test suite for an embodied AI is that the test
suite is inevitably very body-dependent.

Humans would fail a test suite created for dolphins, and vice versa -- etc.
etc. etc.

I note that IQ tests, SAT tests, and the like, do not take explicit account
of embodiment.

I think that tests involving embodiment are interesting. But I don't think
that a test NOT involving embodiment is intrinsically inapplicable to
embodied AI's.

Indeed, if the "embodiment is necessary for AI" theory is correct, then
embodied AI's should do far better on tests NOT involving embodiment in any
explicit way. No?

I do not think Michael's tests are anywhere near adequate as a training
regime for a baby AI. I tend to agree with Peter on one point: I think that
learning in a noisy, rich environment is probably necessary for the easy
development of robust cognition. I also think that learning in a context
encouraging spontaneous non-goal-directed behavior is important -- not just
testable, goal-oriented behavior.

However, I think that tests like Michael's can serve as an important
component of a baby AI training regimen. And they do have the advantage of
being relatively AI-approach-independent. Whereas spontaneous behaviors and
rich-environment-oriented behaviors are going to be a lot more dependent on
the sort of sensors and actuators and body that an AI has...

-- Ben Goertzel

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT