RE: Designing AGI

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Tue Oct 25 2005 - 20:16:30 MDT


Ben Goertzel wrote:
> Of course, it is possible to make a system in which learning mechanisms
> will construct detailed representations within explicit constraints
> that are imposed by the designer.

Your use of the word 'explicit' conceals a very interesting and relevant
area: the use of indirect yet still reliable constraints on the basis
and organisation of AI-generated cognitive content.

> For instance, we haven't gotten this far yet, but in principle one could
> give Novamente hard-coded specifications regarding "what a good
> representation of Friendliness should be like", and then let it learn
> the detailed representation (and change the detailed representation over
> time as it learns and grows...).

I agree that filling in representational detail without causing referent
drift is highly desirable in principle (as I understand it, essential
for Yudkowsky-style FAI), and difficult in practice. Indeed, testing a
candidate solution to this problem is a major goal of the implementation
project I am currently engaged in, though from past conversations I get
the impression that this solution differs significantly from your approach.
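
To make this concrete, here is a minimal sketch (Python, entirely my
own illustration; none of the names below come from Novamente or from
my own project) of one way the split might look: the designer wires in
constraint predicates that are never learned, and the system may only
rewrite its detailed representation when every such predicate still
holds. It assumes the constraints can be expressed as executable
checks, which is of course a large part of the real difficulty.

    # Hypothetical sketch: designer-fixed constraints over learned,
    # mutable detail. All names here are illustrative only.
    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict, List

    Constraint = Callable[[Dict[str, Any]], bool]  # wired in, never learned

    @dataclass
    class ConstrainedRepresentation:
        """Detail may be rewritten by learning; the constraints may not."""
        constraints: List[Constraint]
        detail: Dict[str, Any] = field(default_factory=dict)

        def propose_refinement(self, candidate: Dict[str, Any]) -> bool:
            """Adopt a learned refinement only if every fixed constraint
            still holds, so detail can grow without the referent drifting."""
            if all(check(candidate) for check in self.constraints):
                self.detail = candidate
                return True
            return False

    # Illustrative designer-imposed constraints:
    def keeps_external_referent(rep: Dict[str, Any]) -> bool:
        # The representation must keep pointing at something outside itself.
        return bool(rep.get("external_referent"))

    def keeps_approval_predicate(rep: Dict[str, Any]) -> bool:
        # Some designer-specified predicate must survive every refinement.
        return callable(rep.get("approval_predicate"))

    if __name__ == "__main__":
        rep = ConstrainedRepresentation(
            constraints=[keeps_external_referent, keeps_approval_predicate])
        # A refinement that respects the wired-in constraints is adopted...
        adopted = rep.propose_refinement({
            "external_referent": "observed human preferences",
            "approval_predicate": lambda action: True,  # placeholder detail
            "learned_structure": {"features": [0.3, 0.7]},
        })
        # ...while one that silently drops the referent is rejected.
        drifted = rep.propose_refinement({"learned_structure": {}})
        print(adopted, drifted)  # True False

The interesting (and unsolved) part is, of course, making such checks
reliable when the detailed representation, unlike this toy, is not
something the designer can simply enumerate over.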

> In order to take this approach one needs to do what Loosemore doesn't
> want, which is to specify aspects of the nature of the AI's knowledge
> representation scheme up front.

Again, a single word ('aspects') represents the tip of a complicated
and subtle iceberg :)

> But, one can still let the detailed representations evolve and adapt
> (and emerge ;) within the "environmental constraints" one has explicitly
> wired in.

Done correctly, that would be the actual valid use of 'emergence' in
AGI. But you'd never want to use something this fuzzy and subjective
as a design goal or basic principle. You don't set out saying 'we're
going to build an AGI using emergence!'; you design an AGI using the
best mechanisms you can find to achieve specific effects, such that
they work together to produce all the key aspects of intelligence,
and then, when you finally get to summarising all that complexity for
the press release, you /might/ be justified in saying 'ah, this
actually works by letting Xs emerge in the context of Y...'.

> Which is just one more way in which the reality of AGI design is a lot
> subtler than the simple distinctions Loosemore seems to be drawing...

He does appear to have a counterproductive us-versus-them mystique
going on. Once again, this is not uncommon in academic or
'non-mainstream' AGI, and indeed the SIAI needs to be careful to
avoid this attitude with regard to FAI theory.
 
> I didn't say "complex systems theory" btw I said "complex systems
> thinking." This is because there isn't really any general "complex
> systems theory" out there.... there are theories of particular
> classes of complex systems, and then there is mathematical theory
> relevant to complex systems (e.g. dynamical systems theory) and
> then there is complex systems *philosophy* (which I do find useful
> for guiding my thinking in many cases)

OK, that's an important distinction, though at this point I'd say that
understanding 'complex systems philosophy' is primarily useful for
recognising and avoiding unwanted dynamics and failure modes in
AGI/FAI.

 * Michael Wilson

                


