Re: Loosemore's Proposal

From: Russell Wallace (russell.wallace@gmail.com)
Date: Tue Oct 25 2005 - 09:50:48 MDT


On 10/25/05, Richard Loosemore <rpwl@lightlink.com> wrote:
>
> Everyone wants to see a draft specification. Under other circumstances
> this might be understandable, but if nobody understands my *reason* for
> suggesting the type of development environment that I outlined, then it
> would be a waste of time for me to lay out the spec, because it would be
> judged against all the wrong criteria. And if, on the other hand,
> people did completely get the motivation for the environment, then the
> details would be much less important.

It's the other way around; nobody will understand the reasons _until_ you
give the details. If you don't want to, okay; if you do choose to write up a
draft spec, I'll try to evaluate it against your motives for creating it.

> I am distressed, because the common thread through all the replies so
> far has been an almost total miscomprehension of the basic reason why I
> suggested the environment. And this is not entirely my fault, because I
> have looked back over my writing and I see that the information was
> clearly stated in the posts. I suspect that many people have too little
> time to do more than skim-read the posts on this list, and as a result
> they get incredibly superficial ideas about what was said.

No, that's not the reason. The fault is the inadequacy of the language. You
may have clearly stated your points by _your_ understanding of the terms you
used, but that doesn't communicate them to anyone else. Think about it:
Michael Wilson had basically the same complaint with you as you have with
him; were you just skim-reading his posts? You could say he's being stupid
or ignorant, but he's a pretty sharp guy and there aren't a lot of people in
the world who know more about this stuff than he does; and I will immodestly
claim the same description applies to me; and neither of us really
understands what you're getting at. I suggest you at least acknowledge
the plausibility of the hypothesis that the language just isn't up to
the job.

> I am going to split this message at this point, because I am getting
> close to the end of my tether.

Sorry to hear that, but I'll observe that most of what you've been doing so
far is arguing with the SIAI guys in a debate where both sides are just
talking past each other due to the inadequacy of the language used. I don't
think that says much about the chances of success with a different approach.

> For anyone who reads the below explanation and still finds no spark of
> understanding, I say this: go do some reading. Read enough about the
> world of complex systems to have a good solid background, then come back
> and see if this makes sense. Either that, or go visit with the folks at
> Santa Fe, or bring those folks in on the discussion. I am really not
> going to beat my head against this any more.

Did that, years ago, everything from popular science writing to a big stack
of Santa Fe's technical papers. Was fun bedtime reading, but not a great
deal of concrete relevance to AGI.

> First, I need to ask you to accept one hypothetical... you're going to
> have to work with me here and not argue against this point, just accept
> it as a "what if". Agreed?

Sure.

> Here is the hypothetical. Imagine that cognitive systems consist of a
> large number of "elements" which are the atoms of knowledge
> representation (an element represents a thing in the world, but that
> "thing" can be concrete or abstract, a noun-like thing or an action or
> process .... anything whatsoever).

And here was me thinking intelligence was based on an indivisible, ineffable
soul :)

Seriously, this is the sort of thing I mean. Everyone agrees that of course
cognitive systems must consist of a large number of elements - what else
could they be? (At least everyone on SL4; I doubt anyone who believes in the
ineffability of the soul will have bothered to read this far.)

Presumably you mean something more than just that, of course, but this is
where we need details.

> Elements are simple computational
> structures, we will suppose. They may have a bit of machinery inside
> them (i.e. they are not passive data structures, they are active
> entities), and they have connections to other elements (a variety of
> different kinds of connections, including transient and long-term). For
> the most part, all elements have the same kind of structure and code
> inside them, but different data (so, to a first approximation, an
> element is not an arbitrary piece of code, like an Actor, but more like
> an elaborate form of connectionist "unit").

Okay. Connectionism typically uses a few simple equations with large vectors
of floating point coefficients, plus an optimized hill climbing algorithm to
tweak the coefficients for a training data set. What do you propose to use
instead?
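For concreteness, here's the sort of thing I mean by "a few simple
equations plus a hill climbing algorithm" - a toy sketch of my own, not
anyone's actual proposal (the function names, the logistic squashing, the
random-perturbation climber and the tiny OR training set are all just
illustration):

```python
import math
import random

def unit_output(weights, inputs):
    """One connectionist 'unit': a weighted sum of the inputs,
    squashed into (0, 1) by the logistic function."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def hill_climb(training, n_weights, steps=2000, step_size=0.1):
    """Tweak the coefficients by random perturbation, keeping any
    change that reduces squared error on the training set."""
    random.seed(0)  # deterministic for the sake of the example
    weights = [0.0] * n_weights

    def error(ws):
        return sum((unit_output(ws, x) - y) ** 2 for x, y in training)

    best = error(weights)
    for _ in range(steps):
        trial = [w + random.uniform(-step_size, step_size) for w in weights]
        e = error(trial)
        if e < best:
            weights, best = trial, e
    return weights, best

# Toy training set: learn logical OR of two inputs
# (the constant third input acts as a bias term).
training = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w, err = hill_climb(training, n_weights=3)
```

That is the whole of the standard machinery: a fixed equation, a big
vector of coefficients, and an optimizer to fit them. The question is what
goes in the corresponding slots of your elements.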

> The most important aspect of an element's life is its history. When it
> first comes into being, its purpose is to capture a particular
> regularity (the co-occurrence of some other elements, perhaps), and from
> then on it refines itself so as to capture more precisely the pattern
> that it has made its own. So, when I first encounter a dog, I might
> build an element that represents "frisky thing with tail that barks and
> tries to jump on me", and then as my experience progresses, this concept
> (aka element) gets refined in all the obvious ways and becomes
> sophisticated enough for me to have a full blown taxonomy of all the
> different types of dogs.

How? Forget the me-vs-them business, this is what it comes down to: _how_
exactly would your proposed architecture accomplish the above?
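To make the question concrete, here is one naive way the "capture a
regularity, then refine it" story could be cashed out in code. This is
purely my own toy guess, not your architecture: the `Element` class, the
match threshold, and the feature sets are all invented for illustration.

```python
from collections import Counter

class Element:
    """Hypothetical 'element' that captures a regularity and refines
    itself as further episodes arrive (my guess, not Richard's design)."""
    def __init__(self, features):
        self.prototype = Counter(features)  # feature -> accumulated count
        self.episodes = 1

    def refine(self, features):
        # Accumulate counts, so frequently co-occurring features
        # come to dominate the prototype.
        self.episodes += 1
        for f in features:
            self.prototype[f] += 1

    def match(self, features):
        # Fraction of the prototype's weight the new episode accounts for.
        total = sum(self.prototype.values())
        hit = sum(self.prototype[f] for f in features)
        return hit / total

memory = []
THRESHOLD = 0.5

def observe(features):
    """Assign an episode to the best-matching element, or spawn a new one."""
    best = max(memory, key=lambda e: e.match(features), default=None)
    if best is not None and best.match(features) >= THRESHOLD:
        best.refine(features)
        return best
    e = Element(features)
    memory.append(e)
    return e

# First dog: "frisky thing with tail that barks and jumps on me".
observe({"frisky", "tail", "barks", "jumps-on-me"})
# Later dogs refine the same element instead of creating new ones.
observe({"frisky", "tail", "barks", "small"})
```

Even this trivial version immediately raises the questions your
description leaves open: what counts as a regularity worth an element,
how the matching is scored, and where the threshold comes from. Those are
the details a spec would have to pin down.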

- Russell



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT