RE: Goertzel's _PtS_

From: Patrick McCuller (patrick@kia.net)
Date: Wed May 02 2001 - 22:05:06 MDT


> I think that
>
> Java supercompiler description
> + computer science background (e.g. useful CS
> theorems encoded in Mizar)
> + WM reasoning system
>
> is a framework that will allow WM to gradually learn to rewrite code to be
> more intelligent, starting with simple code and eventually working up to
> itself

        I worry that this is an understatement. Self-improvement, for an AI, involves
programming - no doubt about that, and lots and lots of it. But I worry that
between '+ computer science background' and '+ WM reasoning system' a lot of
knowledge and work is being hidden or left out.

        Start with '+ computer science background (e.g. useful CS theorems encoded in
Mizar)'. This is such a gigantic task that it's hard to even know where to
begin. Mizar is extremely difficult to read and write, at least for people. A
couple of people I know, whom I consider math geniuses (and who have the CVs to
back it up), get headaches looking at Mizar. It's complex and tedious, and math
theorems that we can express in a few pages with standard notation and comments
become, in Mizar, thousands and thousands of lines long.
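
        To give a rough feel for the flavor (not the scale) of full formality, here
is what even a trivial arithmetic fact looks like when every step has to be
spelled out. This is only an illustrative sketch in the syntax of a modern
proof assistant (Lean), not actual Mizar, and a real Mizar library article
runs far, far longer:

    -- Illustrative only: a fully formal proof that zero is a left identity
    -- for addition on the naturals, done by explicit induction.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih =>
        -- 0 + (k + 1) unfolds to (0 + k) + 1; then the hypothesis applies.
        rw [Nat.add_succ, ih]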

        Using this language to formally express CS theorems will be even more
difficult - it stretches Mizar beyond its original purpose. It would be more
appropriate to build a new Mizarish language, one just as capable of formal
expression but more directly tied to the CS domain.

        Even if you solve this difficulty, you must still find people who can select
CS theorems, translate them into Mizarish, and verify the translations. Are
there many people capable of and willing to do such a thing? I honestly do not
know the answer to this one.

        Selecting which 'CS theorems' to translate is itself an important task. Even
with software assistance, translating even a small portion of the body of
formalizable CS theorems is going to take serious time, so the order in which
they are translated could be very, very important.
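
        One concrete reason order matters (my illustration, not anything from Ben's
design): formal theorems lean on previously formalized ones, so a translation
queue has to respect dependencies. A minimal sketch in Java, with invented
names and no handling of circular dependencies:

    import java.util.*;

    public class TranslationOrder {
        // theorem name -> names of theorems its proof depends on
        public static List<String> order(Map<String, List<String>> deps) {
            List<String> result = new ArrayList<String>();
            Set<String> visited = new HashSet<String>();
            for (String theorem : deps.keySet()) {
                visit(theorem, deps, visited, result);
            }
            return result; // dependencies come before the theorems using them
        }

        private static void visit(String theorem,
                                  Map<String, List<String>> deps,
                                  Set<String> visited, List<String> result) {
            if (!visited.add(theorem)) {
                return; // already scheduled
            }
            for (String dep : deps.getOrDefault(theorem,
                                  Collections.<String>emptyList())) {
                visit(dep, deps, visited, result);
            }
            result.add(theorem);
        }
    }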

        Moving on, the next area of difficulty is semantics. Theorems in Mizarish may
be nice, but if you don't know how to use them to accomplish tasks, they're
useless. Attaching use case information to each theorem, and generating
metatheorems that connect the dots, will take even more time.
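
        Purely as an illustration of what 'attaching use case information' might mean
mechanically (the class and field names below are invented, not anything from
Webmind or Mizar), something along these lines would have to exist for every
theorem - and the hard part is filling in the annotations, not writing the
class:

    // Hypothetical sketch: a formally verified theorem bundled with the
    // informal knowledge a reasoning system needs to actually apply it.
    public class AnnotatedTheorem {
        private final String name;         // a label for the theorem
        private final String formalText;   // the Mizar-style formal statement
        private final String[] useCases;   // descriptions of tasks it helps with
        private final String[] relatedTo;  // names of connected theorems

        public AnnotatedTheorem(String name, String formalText,
                                String[] useCases, String[] relatedTo) {
            this.name = name;
            this.formalText = formalText;
            this.useCases = useCases;
            this.relatedTo = relatedTo;
        }

        // Crude relevance test: does any recorded use case appear in the task?
        public boolean mightHelpWith(String taskDescription) {
            String task = taskDescription.toLowerCase();
            for (int i = 0; i < useCases.length; i++) {
                if (task.contains(useCases[i].toLowerCase())) {
                    return true;
                }
            }
            return false;
        }
    }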

        Then there's the vast majority of CS knowledge, having to do not with
abstract CS problems but with how we actually build software. A complete
self-improving system must be able to analyze a software system as a whole and
in its parts, often without any kind of useful documentation. This is not
covered by existing theorems, and I doubt that appropriate theorems could be
designed by humans.

        Even after surmounting all these obstacles, you have a system that ought to
be able to (A) write software that can solve some given problems, and (B)
optimize or improve existing software. However, there's still a gigantic gap
between (B) and improving an AI system, and that gap is domain-specific
knowledge. Optimizing the pieces of an AI system will make it run faster, but
not better. To make it run better, the system must rewrite itself, perhaps
from the very ground up, and in order to do that it must understand how its
code relates to artificial intelligence in very specific ways. That is, it
must understand intelligence.

        Let's repeat that for those who may not have heard: the AI must understand AI
in order to improve itself. It must understand, very well, both the theory of
artificial intelligence that was used to create it and the specifics of its
implementation. Then it must go about improving both the design and the
implementation of its intelligence - a very complex task, as Ben Goertzel is
aware.

Patrick McCuller

>
>
> > The Friendliness-topped goal system, the causal goal system, the
> > probabilistic supergoals, and the controlled ascent feature are the main
> > things I'd want Webmind to add before the 1.0 version of the AI Engine.
> >
>
> Causal goal system is basically there, we never finished our detailed
> conversation on that.
>
> Probabilistic supergoals are definitely there.
>
> Controlled ascent is absent so far; I have no objection to it, but it's just
> not time to worry about it yet.
>
> Friendliness-topped goal system is a delusion ;>
>
> ben
>


