RE: Designing AGI

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Oct 25 2005 - 19:42:38 MDT


Michael,

A comment on this:

> > Representations are also critical to
> > accurately defining goals, and if you surrender the ability to specify
> > representational structure and/or allow them to be mutated in an
> > unknown, uncontrolled fashion by learning mechanisms, the design will
> > be incompatible with FAI (or any narrow, purposeful application).

Of course, it is possible to make a system in which learning mechanisms
construct detailed representations within explicit constraints imposed by the
designer.

For instance, we haven't gotten this far yet, but in principle one could
give Novamente hard-coded specifications regarding "what a good
representation of Friendliness should be like", and then let it learn the
detailed representation (and refine that representation over time as it
learns and grows...).

To take this approach, one needs to do what Loosemore doesn't want, which is
to specify aspects of the AI's knowledge representation scheme up front.

But one can still let the detailed representations evolve and adapt (and
emerge ;) within the "environmental constraints" one has explicitly wired
in.
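
To make the idea concrete, here is a minimal sketch of what "learning within
designer-imposed representational constraints" might look like. This is not
Novamente code; the names (Schema, Representation, mutate) and the numeric
slot/bounds encoding are purely illustrative assumptions, standing in for
whatever constraint structure a real system would use:

    # Hypothetical sketch (not Novamente code): a designer-fixed schema
    # constrains what a learned representation may look like, while the
    # detailed contents are filled in and revised by a learning process.

    from dataclasses import dataclass, field
    import random

    @dataclass(frozen=True)
    class Schema:
        """Designer-imposed constraints: which slots exist and their legal ranges."""
        slots: tuple   # fixed set of named features
        bounds: dict   # per-slot (low, high) ranges the learner may not exceed

    @dataclass
    class Representation:
        """Detailed content the learner is free to adjust within the schema."""
        values: dict = field(default_factory=dict)

    def mutate(rep: Representation, schema: Schema, step: float = 0.1) -> Representation:
        """Propose a revised representation, clipped to the schema's bounds."""
        new_values = {}
        for slot in schema.slots:
            lo, hi = schema.bounds[slot]
            proposed = rep.values.get(slot, (lo + hi) / 2) + random.uniform(-step, step)
            new_values[slot] = min(hi, max(lo, proposed))  # constraint enforced here
        return Representation(new_values)

    # Usage: the schema never changes at runtime; only the learned values do.
    schema = Schema(slots=("benevolence", "caution"),
                    bounds={"benevolence": (0.5, 1.0), "caution": (0.2, 1.0)})
    rep = Representation()
    for _ in range(100):
        rep = mutate(rep, schema)  # the learning loop reshapes details freely
    print(rep.values)              # always within the designer's wired-in bounds

The point of the sketch is only the division of labor: the schema is wired in
up front, and the learning mechanism explores freely but never steps outside
it.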

Which is just one more way in which the reality of AGI design is a lot
subtler than the simple distinctions Loosemore seems to be drawing...

-- Ben G


