RE: Loosemore's Proposal

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Oct 25 2005 - 18:30:25 MDT


Richard,

> For anyone who reads the below explanation and still finds no spark of
> understanding, I say this: go do some reading. Read enough about the
> world of complex systems to have a good solid background, then come back
> and see if this makes sense. Either that, or go visit with the folks at
> Santa Fe, or bring those folks in on the discussion. I am really not
> going to beat my head against this any more.

I've been to SFI and spent more time at Los Alamos Labs (from which SFI
was spawned) talking to "complex systems" people there. I understand
complex systems science and philosophy pretty well. It doesn't
necessarily have the implications you say it does. It merely
provides conceptual guidance for AI work; it doesn't really tell you
anything definite about AI.

> First, I need to ask you to accept one hypothetical... you're going to
> have to work with me here and not argue against this point, just accept
> it as a "what if". Agreed?

I agree with your hypothesis about the nature of knowledge representations
and their growth, up till the point where you say:

> But now the complex systems theorist is really worried. "Hang on: if
> you build learning mechanisms with the kind of power you are talking
> about (with all that interaction and so on), you are going to be
> creating the Mother of all complex systems. And what that means is, to
> get your learning systems to actually work and stably generate the right
> content, you will eventually have to change the design of the elements.
> Why? Because all our experience with complex systems indicates that if
> you start by looking at the final adult form of a system of interacting
> units like that, and then try to design a set of local mechanisms
> (equivalent to your learning mechanisms in this case) which could
> generate that particular adult content, you would get absolutely
> nowhere. So in other words, by the time you have finished the learning
> mechanisms you will have completely thrown away your initial presupposed
> design for the structure and content of the adult elements. So why
> waste time working on the specific format of the element-structure now?
> You would be better off looking for the kinds of learning mechanisms
> that might generate *anything* stable, never mind this presupposed
> structure you have already set your heart on."

I don't agree with the above, and you haven't demonstrated it.

I think that the emergent adult knowledge structures and the learning
mechanisms need to be considered together.

Thinking only about the learning mechanisms and not about the final emergent
structures is not going to get you to AGI very quickly. Nor is thinking
only about the final emergent structures and not about the learning
mechanisms.

In short, both traditional cog sci and complex systems thinking are needed,
and need to be carefully coordinated.

Finding learning mechanisms that will generate "anything stable" is exactly
as useless for AI as finding knowledge representations divorced from
learning mechanisms.

> The development environment I suggested would be a way to do things in
> that (to some people) "backwards" way. It addresses that need, as
> expressed by my hypothetical complex systems theorist, to look at what
> happens when different kinds of learning mechanisms are allowed to
> generate adult systems. And it would not, as some people have
> insultingly claimed, be just a matter of doing some kind of random
> search through the space of all possible cognitive systems ... nothing
> so crude.

I see why the dev. environment you suggested would be great for AI
development; what I don't see is why you think it's necessary.

We can let different learning mechanisms generate adult systems NOW;
it just takes a long time and a lot of work. This means that success
depends on having very good intuitive foresight regarding which learning
mechanisms are going to give rise to good emergent knowledge structures.

I have worked a long time to develop this kind of intuitive foresight.
Whether I've really succeeded, time will tell ;-)

> You can dispute that the above characterization of cognitive systems is
> correct. All power to you if you do: you will never get what I am
> trying to say here, and there would be no point me talking about the
> structure of the development environment.

Well, I accept many but not all parts of your characterization of
cognitive systems.

And I think your proposed dev. env. would be fantastic for AI, but it is
not necessary.

-- ben
