RE: Loosemore's Proposal

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Oct 27 2005 - 11:35:47 MDT


I too suspect that the system will deviate more and more from what is
expected as the work progresses! However, unlike you, I am not so
confident that the deviations will be a matter of inadequate
functionality ;-) But I am sure there will be a lot to learn in
the course of teaching and experimentation. And I do agree that
a better environment for this kind of experimentation would make
things a LOT easier. But building such an environment is a big job,
and I've opted to proceed with my best guess at how to build the
AI instead...

-- Ben

>
> This is good. I look forward to seeing what happens.
>
> Here is my prediction for how things will evolve, though. The first
> set of learning mechanisms may work so long as their scope is
> limited, but if they aim for very general (cross-domain) learning, or
> if they are used over a developmentally extended period (i.e. if the
> system is supposed to learn some basic concepts, then use these to
> learn more advanced ones, and so on for a long time), they will start
> to bog down. The more ambitious the learning mechanism and the longer
> it is expected to survive without handholding, the more the result
> will deviate from what is expected. And it will "deviate" in the
> sense that the quality of what is learned will simply not be adequate
> to make the system function well.
>
> Of course, this is *in no way* meant to be a comment on the quality
> of Novamente; I am just trying to anticipate how things would go if
> the complex systems problem turned out to be exactly as I have
> suggested.
>
> I'd be the happiest person around if it did not.
>
> Richard Loosemore
>
>
> Ben Goertzel wrote:
> >
> > Richard,
> >
> > Your comments pertain directly to our current work with Novamente,
> > in which we are hooking it up to a 3D simulation world (AGISIM) and
> > trying to teach it simple tasks modeled on human developmental
> > psychology. The "hooking up" is ongoing (involving various changes
> > to our existing codebase, which has been tuned for other things),
> > and the teaching of simple tasks probably won't start till December
> > and January.
> >
> > I agree it is possible that, after we teach the system for a while
> > in the environment, it will reach a point where it can't learn what
> > we want it to. We don't have a rigorous proof that the system will
> > learn the way we want it to, but we have put a lot of thought,
> > analysis, and discussion into trying to ensure that it *will*. I
> > believe we can foresee the overall course of the learning and the
> > sorts of high-level structures that will emerge during it, even
> > though the details of what will be learned are, of course,
> > unpredictable in advance (in practice).
> >
> > -- Ben G
> >


