Re: Loosemore's Proposal

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Oct 25 2005 - 12:26:43 MDT


Russell,

Why did you stop your analysis there?

Why, after I asked you to take that section as a premise, so I could
make my argument, did you stop halfway through and (once again) start
criticising the wrong thing? What you criticised, right down at the
bottom there, at the end of this post, was something so general it
covered a vast multitude of different cognitive systems. THAT WAS WHAT
IT WAS MEANT TO DO. It was meant to be general, so why blast it for not
being specific enough?

Isn't that exactly what I meant by not paying attention to my point?

So let me see if I have got this right...

I complain that the responses are coming from people who are not paying
attention to the point I am making.

You complain back at me that, no, it is just that I am not being clear
enough.

And then you go and do what you just did: stopping halfway through
the argument, firing off senseless criticisms at a section that I
labelled "treat this as a hypothetical", and along the way making
comments like:

> Seriously, this is the sort of thing I mean. Everyone agrees that of
> course cognitive systems must consist of a large number of elements -
> what else could they be?

Everyone was *supposed* to agree with that characterization! You were
supposed to accept that as our common ground and go on to the point of
the argument, not make cheap remarks about it.

Richard Loosemore

P.S. And if you have read "a big stack of Santa Fe's technical papers",
how would you like to do us all a favor and summarize the complex
systems argument that appears at the end of my post in your own words,
so we can get a second perspective on it? Explain to the group, if you
would, the way in which low level mechanisms can become disconnected
from high level regularities. Darned if I can say the words to make
anyone understand that point, so maybe you will have better luck. Just
a few illustrations of the effect would do.
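[One possible illustration, sketched by the editor rather than taken from either poster: an elementary cellular automaton. The low-level mechanism is a trivial 8-entry lookup table, yet the high-level regularities it produces (the chaotic triangular patterns of rule 30, say) cannot be read off from that table by inspection. The choice of rule 30 and all names below are the editor's, not Loosemore's.]

```python
# Editor's sketch: an elementary cellular automaton as a minimal case of
# low-level rules whose global behaviour is not apparent from the rules.

def step(row, rule=30):
    """Apply an elementary CA rule to one row of 0/1 cells (wrapping edges)."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, steps=15, rule=30):
    """Run the automaton from a single live cell and return all generations."""
    row = [0] * width
    row[width // 2] = 1  # one live cell in the centre
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history
```

The update rule fits in one integer, yet predicting the pattern fifteen generations out effectively requires running the system: the high-level regularity is disconnected from the low-level mechanism in exactly the sense at issue.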

Russell Wallace wrote:
> On 10/25/05, *Richard Loosemore* <rpwl@lightlink.com
> <mailto:rpwl@lightlink.com>> wrote:
>
> Everyone wants to see a draft specification. Under other circumstances
> this might be understandable, but if nobody understands my *reason* for
> suggesting the type of development environment that I outlined, then it
> would be a waste of time for me to lay out the spec, because it would be
> judged against all the wrong criteria. And if, on the other hand,
> people did completely get the motivation for the environment, then the
> details would be much less important.
>
>
> It's the other way around; nobody will understand the reasons _until_
> you give the details. If you don't want to, okay; if you do choose to
> write up a draft spec, I'll try to evaluate it against your motives for
> creating it.
>
> I am distressed, because the common thread through all the replies so
> far has been an almost total miscomprehension of the basic reason why I
> suggested the environment. And this is not entirely my fault, because I
> have looked back over my writing and I see that the information was
> clearly stated in the posts. I suspect that many people have too
> little
> time to do more than skim-read the posts on this list, and as a result
> they get incredibly superficial ideas about what was said.
>
>
> No, that's not the reason. The fault is the inadequacy of the language.
> You may have clearly stated your points by _your_ understanding of the
> terms you used, but that doesn't communicate them to anyone else. Think
> about it: Michael Wilson had basically the same complaint with you as
> you have with him; were you just skim-reading his posts? You could say
> he's being stupid or ignorant, but he's a pretty sharp guy and there
> aren't a lot of people in the world who know more about this stuff than
> he does; and I will immodestly claim the same description applies to me;
> and neither of us really understands what you're getting at. I will
> suggest you should at least acknowledge the plausibility of the
> hypothesis that the problem is that the language just isn't up to the job.
>
> I am going to split this message at this point, because I am getting
> close to the end of my tether.
>
>
> Sorry to hear that, but I'll observe that most of what you've been doing
> so far is arguing with the SIAI guys in a debate where both sides are
> just talking past each other due to the inadequacy of the language used.
> I don't think that reflects on the chances of success using a different
> approach.
>
> For anyone who reads the below explanation and still finds no spark of
> understanding, I say this: go do some reading. Read enough about the
> world of complex systems to have a good solid background, then come back
> and see if this makes sense. Either that, or go visit with the folks at
> Santa Fe, or bring those folks in on the discussion. I am really not
> going to beat my head against this any more.
>
>
> Did that, years ago, everything from popular science writing to a big
> stack of Santa Fe's technical papers. Was fun bedtime reading, but not a
> great deal of concrete relevance to AGI.
>
> First, I need to ask you to accept one hypothetical... you're going to
> have to work with me here and not argue against this point, just accept
> it as a "what if". Agreed?
>
>
> Sure.
>
> Here is the hypothetical. Imagine that cognitive systems consist of a
> large number of "elements" which are the atoms of knowledge
> representation (an element represents a thing in the world, but that
> "thing" can be concrete or abstract, a noun-like thing or an action or
> process .... anything whatsoever).
>
>
> And here was me thinking intelligence was based on an indivisible,
> ineffable soul :)
>
> Seriously, this is the sort of thing I mean. Everyone agrees that of
> course cognitive systems must consist of a large number of elements -
> what else could they be? (At least everyone on SL4; I doubt anyone who
> believes in the ineffability of the soul will have bothered to read this
> far.)
>
> Presumably you mean something more than just that, of course, but this
> is where we need details.
>
> Elements are simple computational
> structures, we will suppose. They may have a bit of machinery inside
> them (i.e. they are not passive data structures, they are active
> entities), and they have connections to other elements (a variety of
> different kinds of connections, including transient and long-term). For
> the most part, all elements have the same kind of structure and code
> inside them, but different data (so, to a first approximation, an
> element is not an arbitrary piece of code, like an Actor, but more like
> an elaborate form of connectionist "unit").
>
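[An editor's sketch of the kind of "element" just described, purely to make the description concrete; every name and the toy update rule are assumptions, not Loosemore's design. The point being illustrated: an active entity, not a passive record, where all elements share the same code but differ in data and connections, with both long-term and transient links.]

```python
# Editor's sketch (illustrative names): an element as an active unit with
# shared machinery, per-element data, and two kinds of connections.

class Element:
    def __init__(self, data):
        self.data = data          # what varies between elements
        self.long_term = set()    # stable connections to other elements
        self.transient = set()    # short-lived connections
        self.activation = 0.0

    def connect(self, other, transient=False):
        (self.transient if transient else self.long_term).add(other)

    def tick(self):
        """The shared 'machinery inside': one step of the element's own
        processing. Toy rule: activation leaks halfway toward the average
        activation of long-term neighbours."""
        if self.long_term:
            avg = sum(e.activation for e in self.long_term) / len(self.long_term)
            self.activation += 0.5 * (avg - self.activation)
```

Under this reading an element is more constrained than an arbitrary Actor (all elements run the same `tick`) but richer than a classic connectionist unit (structured data and typed connections rather than a single weight vector).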
>
> Okay. Connectionism typically uses a few simple equations with large
> vectors of floating point coefficients, plus an optimized hill climbing
> algorithm to tweak the coefficients for a training data set. What do you
> propose to use instead?
>
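[For readers unfamiliar with the recipe Russell is gesturing at, an editor's sketch of it: a simple parametric unit, a vector of floating point coefficients, and hill climbing that tweaks the coefficients against a training set. The linear unit, Gaussian perturbation, and all names are illustrative choices, not anyone's proposal in this thread.]

```python
import random

# Editor's sketch of the standard connectionist recipe: tweak a coefficient
# vector by random perturbation, keeping only tweaks that reduce training
# error (a plain hill climber, not an optimized one).

def predict(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def error(weights, data):
    return sum((predict(weights, x) - y) ** 2 for x, y in data)

def hill_climb(data, dim, steps=2000, scale=0.1, seed=0):
    rng = random.Random(seed)
    weights = [0.0] * dim
    best = error(weights, data)
    for _ in range(steps):
        candidate = [w + rng.gauss(0, scale) for w in weights]
        e = error(candidate, data)
        if e < best:  # keep the tweak only if it helps
            weights, best = candidate, e
    return weights, best
```

On a toy set generated by y = 2a - b, this converges toward weights near [2, -1]; the open question Russell is raising is what replaces this machinery in the proposed architecture.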
> The most important aspect of an element's life is its history. When it
> first comes into being, its purpose is to capture a particular
> regularity (the co-occurrence of some other elements, perhaps), and from
> then on it refines itself so as to capture more precisely the pattern
> that it has made its own. So, when I first encounter a dog, I might
> build an element that represents "frisky thing with tail that barks and
> tries to jump on me", and then as my experience progresses, this concept
> (aka element) gets refined in all the obvious ways and becomes
> sophisticated enough for me to have a full blown taxonomy of all the
> different types of dogs.
>
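[An editor's sketch of the life-history idea in the passage above, with the dog example: an element is created to capture a co-occurrence of features and then refines its own pattern as instances accumulate. The class name, the majority-vote prototype rule, and the overlap-based match score are all assumptions made for illustration.]

```python
# Editor's sketch (illustrative design): an element born from one
# co-occurrence regularity that refines itself with experience.

class ConceptElement:
    def __init__(self, trigger_features):
        # The regularity that caused this element to be built, e.g.
        # {"frisky", "tail", "barks", "jumps"} on first meeting a dog.
        self.prototype = set(trigger_features)
        self.history = [set(trigger_features)]
        self.counts = {f: 1 for f in trigger_features}
        self.n = 1

    def observe(self, features):
        """Refine the element with a new instance of its pattern."""
        features = set(features)
        self.history.append(features)
        self.n += 1
        for f in features:
            self.counts[f] = self.counts.get(f, 0) + 1
        # Keep in the prototype only features seen in most instances so far.
        self.prototype = {f for f, c in self.counts.items() if c / self.n > 0.5}

    def match(self, features):
        """Crude degree of fit between an observation and this concept."""
        if not self.prototype:
            return 0.0
        return len(set(features) & self.prototype) / len(self.prototype)
```

After a few observations the incidental features ("frisky", "jumps on me") fall away and the stable ones ("tail", "barks") remain, which is the crude beginning of the refinement Loosemore describes; Russell's question below is how the real architecture would do this.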
>
> How? Forget the me-vs-them business, this is what it comes down to:
> _how_ exactly would your proposed architecture accomplish the above?
>
> - Russell



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT