RE: AGI research methodology

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Sep 16 2005 - 05:53:05 MDT


> > If we succeed, we should have an "artificial child" of a sort that can
> > communicate using complex if not humanlike English and that can solve
> > commonsense reasoning problems, posed in the context of its world,
> > on the level of an average 10-year-old.
>
> What do you make of the AAII group's 'animal-level cognition first'
> manifesto? Do you already have this, or do you think it's irrelevant
> or skippable (my position is that it's the wrong way of approaching
> the generation of a relevant competence set)?

I think it is a reasonable approach in principle, but there is a huge
risk of making a design that is "overfit" to the problem of animal-level
cognition.

In other words, just because some biological animal minds (not most!) seem
to be tweakable/extensible to yield human-level intelligence, it doesn't
follow that all animal-level minds will be thus tweakable/extensible...

And I think that the vast majority of minds that can achieve animal-level
intelligence will NOT be tweakable/extensible into human-level
intelligences...

So, I only like the animal-level-first approach IF the animal-level-cognition
work is carried out within a framework that is clearly theoretically
extensible to human-level intelligence.

Based on the limited information I have about their work, I am not
confident that the A2I2 approach fulfills this criterion...

> > I believe there is a strong argument that the step from this kind
> > of artificial child to a superhuman AI is not as big as the step
> > from right here, right now to the artificial child.
>
> That argument depends on the opacity of the design. Using more brute
> force and emergence lowers the design complexity bar for producing
> an AGI, at the cost of making self-improvement much harder due to
> the difficulty or total lack of deliberative self-modification (and
> incidentally making it a UFAI and hence an extinction event).

It's true that one could design an artificial child in such a way that
it would be difficult for this child to become progressively more
intelligent and sophisticated.

However, within the Novamente framework, a lot of thought has already
been put into how this latter transition would take place, so I don't
think this kind of "overfitting" to the artificial-child level will
occur. The Novamente design should allow for a child-level Novamente
system to gradually decrease its opacity level as it improves its
intelligence.

Given the specifics of Novamente, I am extremely confident that IF
we can achieve an artificial child, then we can go far beyond that
level... There is more uncertainty in achieving the artificial
child in the first place...

-- Ben


