RE: Goertzel's _PtS_

From: Ben Goertzel (ben@webmind.com)
Date: Thu May 03 2001 - 07:20:27 MDT


Patrick,

You make a lot of really good points here...

Clearly, one is going to have to take what I call an "experiential
interactive learning" approach. One is going to have to teach the baby mind
AI theory step by step, giving it simple problems to solve, correcting it
when it's wrong, and so forth, incrementally. At each stage in its
education, appropriate knowledge in a Mizar-variant can be given to it,
along with practical use cases of this knowledge, and it can be posed
problems to solve using this knowledge. Just as when one learns something
in school. (Not that I ever learned much math or computer science in
school; I always got more from reading the texts directly. But you get the
point: reading texts, one is still getting a carefully structured
combination of new information and exercises that force one to integrate
that new information with one's previous store of information.)
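
To make that concrete, here is a toy sketch (in Python, though the language
hardly matters) of the shape such a teaching loop might take. Every name in
it (Lesson, ingest, solve, receive_feedback) is hypothetical, made up for
illustration; none of this is an actual Webmind interface.

# Hypothetical sketch of the "experiential interactive learning" loop.
# None of these names come from any real system; the "ai" object is
# assumed to expose hooks for ingesting knowledge and attempting problems.

from dataclasses import dataclass, field

@dataclass
class Lesson:
    # one curriculum step: formal knowledge plus grounding material
    theorems: list    # statements in some Mizar-like formal language
    use_cases: list   # worked examples showing the theorems in action
    problems: list = field(default_factory=list)  # (problem, answer) pairs

def teach(ai, curriculum, max_attempts=3):
    # feed lessons one at a time, correcting the student when it errs
    for lesson in curriculum:
        ai.ingest(lesson.theorems)   # the formal knowledge itself...
        ai.ingest(lesson.use_cases)  # ...plus practical uses of it
        for problem, reference in lesson.problems:
            for attempt in range(max_attempts):
                answer = ai.solve(problem)
                if answer == reference:  # crude; a real tutor grades semantically
                    break
                # corrective feedback, the analogue of a teacher's hint
                ai.receive_feedback(problem, answer, reference)

Obviously, real grading and feedback would be much subtler than a pass/fail
check; the point is just the loop structure: knowledge, use cases, problems,
correction, repeated at every stage of the curriculum.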

Creating a "CS and AI curriculum" for an AI will be a really interesting
job. This curriculum will bear some resemblance to the curriculum we use to
teach these subjects to humans, but it certainly won't be identical.

In short, I guess that encoding CS knowledge in a Mizar-ish way is one part
of the curriculum we need to make, but not the only part. So you're right:
my previous post oversimplified the matter a bit in the way it phrased
things.

In order for education to proceed reasonably smoothly, the AI will have to
be able to carry out basic human-language chat (so we can tell it when it's
wrong, give it suggestions as to different approaches, etc.). However, it
doesn't need to be able to read and fully understand arbitrarily complicated
human-language texts.

ben

> Moving on, the next area of difficulty is semantics. Theorems in
> Mizarish may be nice, but if you don't know how to use them to
> accomplish tasks, they're useless. Attaching use-case information to
> each theorem, and generating metatheorems that connect the dots, will
> take even more time.
>
> Then there's the vast majority of CS knowledge, which has to do not
> with abstract CS problems but with how we actually build software. A
> complete self-improving system must be able to analyze a software
> system in whole and in parts, often without any kind of useful
> documentation. This is not covered by existing theorems, and I doubt
> that appropriate theorems could be designed by humans.
>
> Surmounting all these obstacles, you have a system which ought to be
> able to A) write software that can solve some given problems, and B)
> optimize or improve existing software. However, there's still a
> gigantic gap between B and improving an AI system, and that gap is
> Domain-Specific Knowledge. Optimizing the pieces of an AI system will
> make it run faster, but not better. To make it run better, the system
> must rewrite itself, perhaps from the very ground up, and in order to
> do that it must understand how its code relates to artificial
> intelligence in very specific ways. That is, it must understand
> intelligence.
>
> Let's repeat that for those who may not have heard: the AI must
> understand AI in order to improve itself. It must understand, very
> well, both the theory of artificial intelligence that was used to
> create it and the specifics of its implementation. Then it must go
> about improving both the design and the implementation of its
> intelligence, a very complex task, as Ben Goertzel is aware.


