Re: AGI research methodology

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Fri Sep 16 2005 - 05:12:14 MDT


Ben Goertzel wrote:
> What is my research methodology for my AI project, going forward?
> I will now describe it briefly...

Although this isn't what I meant (I'd call this 'development
methodology'), it is an interesting area.

> For the next phase of the Novamente project it is as follows. We are
> connecting Novamente to a 3D simulation world called AGI-SIM, where it
> controls a little agent that moves around and sees and does stuff. We
> have then identified a series of progressively more complex
> tasks for Novamente to carry out in this environment, based loosely on
> Piaget's theory of cognitive development.

I am doing something broadly similar myself with my prototype (which
isn't intended to be the basis for an AGI, merely to demonstrate the
viability of some SIAI-relevant theory). However, at present I am
focusing on inductive-deductive tasks rather than planning and control
tasks, and as such I have neither the same level of embodiment nor
complete consistency of I/O modalities across tasks. So far I have
been working with 2D environments only.
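
To make that concrete, here is a toy sketch of the kind of 2D
inductive task I mean; it is purely illustrative (hypothetical names,
far simpler than my actual prototype): recover a hidden predicate
over a grid from labelled cells.

    # Illustrative sketch only, not my actual prototype: a toy
    # inductive task over a 2D grid. The learner must recover a
    # hidden predicate from labelled cells.
    import itertools

    GRID = 8  # 8x8 world

    def hidden_rule(x, y):
        """The target concept the learner must induce."""
        return (x + y) % 2 == 0  # a 'checkerboard' predicate

    # Candidate hypotheses, ordered roughly by logical complexity.
    HYPOTHESES = [
        ("always-true",  lambda x, y: True),
        ("x-even",       lambda x, y: x % 2 == 0),
        ("y-even",       lambda x, y: y % 2 == 0),
        ("checkerboard", lambda x, y: (x + y) % 2 == 0),
    ]

    def induce(observations):
        """Return the simplest hypothesis consistent with the data."""
        for name, h in HYPOTHESES:
            if all(h(x, y) == label for (x, y), label in observations):
                return name
        return None

    obs = [((x, y), hidden_rule(x, y))
           for x, y in itertools.product(range(GRID), repeat=2)]
    print(induce(obs))  # -> 'checkerboard'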

> So, our methodology for the next phase is simply to try to get Novamente to
> carry out these tasks, one by one, in roughly the order we've articulated
> based on Piaget.

I'm ordering by logical difficulty rather than by similarity to human
cognitive development, since I'm looking primarily at tractability of
various kinds of complex but fairly transparent hypothesis generation.
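
As a rough sketch of that ordering principle (the task names and
rules below are hypothetical stand-ins, not my actual task set), one
crude proxy for logical difficulty is the description length of the
target hypothesis:

    # Hedged sketch: rank tasks by the length of the shortest known
    # expression of their target rule, a crude stand-in for logical
    # complexity. Tasks and rules are hypothetical illustrations.
    TASKS = {
        "constant-output": "True",
        "parity":          "(x + y) % 2 == 0",
        "bounded-region":  "2 <= x <= 5 and 2 <= y <= 5",
        "conditional":     "(x > y) if x % 2 == 0 else (x < y)",
    }

    curriculum = sorted(TASKS, key=lambda t: len(TASKS[t]))
    print(curriculum)
    # -> ['constant-output', 'parity', 'bounded-region', 'conditional']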
 
> If we succeed, we should have an "artificial child" of a sort that can
> communicate using complex if not humanlike English and that can solve
> commonsense reasoning problems, posed in the context of its world, on the
> level of an average 10-year-old.

What do you make of the AAII group's 'animal-level cognition first'
manifesto? Does Novamente already have this, or do you consider it
irrelevant or skippable? (My position is that it's the wrong way of
approaching the generation of a relevant competence set.)

> I believe there is a strong argument that the step from this kind
> of artificial child to a superhuman AI is not as big as the step
> from right here, right now to the artificial child.

That argument depends on the opacity of the design. Using more brute
force and emergence lowers the design complexity bar for producing
an AGI, at the cost of making self-improvement much harder, since
deliberative self-modification becomes difficult or impossible (and
incidentally making it a UFAI and hence an extinction event).

> The big error to ward off in this kind of approach is overfitting: one
> doesn't want to make a system that fulfills the exact goals one has laid
> out by somehow "cheating" and being able to do essentially nothing but
> fulfill those goals.

At present I can simply enter the relevant hypotheses to solve the
problem directly, and then progressively switch to less and less
direct specification in a kind of constraint generalisation.
Overfitting isn't (generally) a problem when you know how the
'fitting' mechanisms will perform and can see immediately how closely
the generated model and planning metamodel match the world and task.
Though again, direct comparison is a bit unfair, as I'm not working
on AGI proper; right now I couldn't directly oppose a claim that your
system is a better compromise between generality and tractability.
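
A minimal sketch of what I mean by constraint generalisation (all
names hypothetical): enter the solving hypothesis directly at stage
zero, then re-run the search over progressively wider hypothesis
spaces, checking at each stage how closely the best generated model
still fits the task.

    # Hedged sketch: stage 0 is the hand-entered hypothesis; later
    # stages widen the space the system must search itself, so any
    # loss of fit is immediately visible rather than hidden.
    def fit(hypothesis, observations):
        """Fraction of observations the hypothesis gets right."""
        return sum(hypothesis(x) == y for x, y in observations) / len(observations)

    target = lambda x: x * 2 + 1
    observations = [(x, target(x)) for x in range(10)]

    stages = [
        [lambda x: x * 2 + 1],                         # direct entry
        [lambda x, a=a: x * a + 1 for a in range(4)],  # relax the slope
        [lambda x, a=a, b=b: x * a + b                 # relax both terms
         for a in range(4) for b in range(4)],
    ]

    for i, space in enumerate(stages):
        best = max(space, key=lambda h: fit(h, observations))
        print(f"stage {i}: |space|={len(space):2d}  fit={fit(best, observations):.2f}")

Because the space is widened deliberately, any divergence between the
generated model and the task shows up at once instead of being hidden
inside an opaque learner.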

 * Michael Wilson

        
        
                


