Re: Non-black non-ravens etc.

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Sep 13 2005 - 08:53:04 MDT


Chris Capel wrote:
> On 9/12/05, Richard Loosemore <rpwl@lightlink.com> wrote:
>
>>Ben Goertzel wrote:
>>
>>>I don't think that logical reasoning can serve as the sole basis for an AGI
>>>design, but I think it can serve as one of the primary bases.
>>
>>You raise an interesting question. If you were assuming that "logical
>>reasoning" (in a fairly general sense, not committed to Bayes or
>>whatever) was THE basic substrate of the AGI system, then I would be
>>skeptical of it succeeding. If, as you suggest, you are only hoping to
>>give logic a more primary role than it has in humans (but not exclusive
>>rights to the whole show), then that I am sure is feasible.
>
> [...]
>
>>Lastly, you say: "However, I suggest that in an AGI system, logical
>>reasoning may exist BOTH as a low-level wired-in subsystem AND as a
>>high-level emergent phenomenon, and that these two aspects of logic in
>>the AGI system may be coordinated closely together." If it really did
>>that, it would (as I understand it) be quite a surprise (to put it
>>mildly) ... CAS systems do not as a rule show that kind of weird
>>reflection, as I said in my earlier posts.
>
>
> I'm not sure I can reconcile these two opinions. If you think it's
> feasible to use some sort of logical reasoning, (whether rational
> probability analysis or something else,) as part of the basic
> substrate of a generally intelligent system, and given that any
> successful AI project would necessarily result with a system that
> *does* exhibit logical reasoning at a high level, how could you find
> it unlikely that a system would combine both features? I probably
> misunderstand you.
>
> Oh, and do fractal patterns not emerge in many complex systems? (Curious.)
>
> Chris Capel

Chris,

You are right to point this out: when I wrote those words I knew I
would risk obscuring my point by saying it that way.

In the first part I was imagining a logic engine sitting side by side
with some other system - let's call it the 'symbol engine' - that is
able to find the 'things' out of which the world is made and represent
them as internal symbols. The symbol engine is assumed to be a complex
system, while the logic engine is not. The symbol engine has low-level
mechanisms that may look nothing like symbols (to take a very crude
example, one I don't want to imply I am committed to: consider the raw
neural signals in a distributed-representation connectionist net, which
are low-level and very non-symbolic), and high-level symbols that emerge
out of those low-level mechanisms (the way that distributed patterns can
act like whole symbols in the neural net example). So the symbol engine
has layers to it. The logic engine, on the other hand, is somehow
independent of that layering and the basic components of the logic
engine *are* the high-level symbols created by the symbol engine. Do we
call the logic engine high-level or low-level? I am not sure: as I say,
it operates on the high-level symbols of the symbol engine, not the low
level mechanisms. But it is kind of a "basic substrate" because it is
implemented at just the one level.
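
To make that picture a little more concrete, here is a toy sketch in
Python. Every name and number in it is mine and purely illustrative -
it is not anybody's actual design - and the only point is the layering:
the logic engine takes the symbol engine's high-level output as its
atoms, and never sees the raw signals underneath.

    import numpy as np

    class SymbolEngine:
        # Two layers: low-level distributed activations, and high-level
        # symbols read off as whichever stored pattern the current
        # activation most resembles (standing in for the emergent layer).
        def __init__(self, patterns):
            self.patterns = {name: np.asarray(p, dtype=float)
                             for name, p in patterns.items()}

        def perceive(self, raw_activation):
            raw = np.asarray(raw_activation, dtype=float)  # raw "neural" signals
            return max(self.patterns,
                       key=lambda name: float(raw @ self.patterns[name]))

    class LogicEngine:
        # Operates only on the high-level symbols, nothing lower.
        def __init__(self, rules):
            self.rules = rules            # {premise symbol: conclusion symbol}

        def infer(self, symbol):
            return self.rules.get(symbol)

    se = SymbolEngine({"raven": [1.0, 0.0, 1.0], "shoe": [0.0, 1.0, 0.0]})
    le = LogicEngine({"raven": "black-thing"})
    symbol = se.perceive([0.9, 0.1, 0.8])    # the symbol engine says "raven"
    print(symbol, "->", le.infer(symbol))    # the logic engine reasons with it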

Now the only thing I was saying was that the logic engine would not
"emerge" from that system: it was there from the beginning.

But of course, another logic engine could arise as an even higher level
of the symbol engine (the way it does in our own minds). Then there
would be two of them: one emergent, the other part of the basic substrate.

You know what? On reflection this looks like *my* misunderstanding of
Ben's original point, because I think he was only saying exactly what I
just said. Apologies. I had thought he was implying that the first
logic engine would be somehow responsible for the emergence of the
second one. I don't think he meant to say that, so I was tilting at a
ghost.

But NOW here is an interesting question.

If that basic-substrate logic engine were to interact with the symbols
created by the symbol engine, how would it do it?

I am referring now to Ben's comment:

> In the human mind, arguably, abstract logical reasoning exists ONLY as a
> high-level emergent phenomenon. However, I suggest that in an AGI system,
> logical reasoning may exist BOTH as a low-level wired-in subsystem AND as a
> high-level emergent phenomenon, and that these two aspects of logic in the
> AGI system may be coordinated closely together.

Let's overlook the deficiencies of connectionism (aka Neural Nets) for a
moment and push my previous example a little further.

The (Neural Net) symbol engine generates these distributed patterns that
correspond to symbols. The logic engine uses these to reason with. Now
imagine that the logic engine does something (I am not sure what) to
cause there to be a need for a new symbol. This would be difficult or
impossible, because there is no way for you to impose a new symbol on
the symbol engine; the symbols emerge, so to create a new one you have
to set up the right pattern of connections across a big chunk of the
network; you can't just write another symbol to memory the way you would
in a conventional system. The logic engine doesn't know about neural
signals, only high-level symbols.
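
To make the awkwardness vivid, here is another toy sketch, this time of
a Hopfield-style associative memory. I am not suggesting anyone would
build an AGI out of one; it is just the simplest distributed memory I
can write down. The thing to notice is that "adding a symbol" means
adjusting connections all over the network, not appending a record:

    import numpy as np

    N = 8
    weights = np.zeros((N, N))          # the "big chunk of network"

    def imprint(pattern):
        # Hebbian outer-product storage: every connection gets touched.
        global weights
        p = np.asarray(pattern, dtype=float)
        weights += np.outer(p, p)
        np.fill_diagonal(weights, 0.0)

    def recall(cue, steps=5):
        # Settle from a noisy cue toward the nearest stored pattern.
        state = np.asarray(cue, dtype=float)
        for _ in range(steps):
            state = np.sign(weights @ state)
        return state

    imprint([1, -1, 1, -1, 1, -1, 1, -1])   # an existing "symbol"

    # If the logic engine decides it needs a brand-new symbol, nothing
    # short of another global weight update will do; there is no empty
    # slot for it to write into.
    imprint([1, 1, -1, -1, 1, 1, -1, -1])   # the "new" symbol

    print(recall([1, 1, -1, -1, 1, -1, -1, -1]))  # settles onto the new symbol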

This question hinges on my suggestion that a logic engine would somehow
need to create or otherwise modify the symbols themselves. So tell me,
folks: can we guarantee that the logic engine can get along without
ever touching any symbols? You know more about this than I do. Is
there going to be a firewall between the logic engine and whatever
creates and maintains symbols? You can look but you can't touch, so to
speak? All of this bears on the question of what, exactly, such a
built-in logic engine would be for.
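
If the answer is that it can, then the interface might amount to
nothing more than a read-only view - look but don't touch in one line
of code, so to speak. Again, this is purely an illustration:

    from types import MappingProxyType

    # The symbol engine owns the table; the logic engine only ever gets
    # a read-only view, so it can reason over symbols but has no way to
    # create or change them.
    symbol_table = {"raven": "black-thing", "shoe": "wearable-thing"}
    logic_engine_view = MappingProxyType(symbol_table)

    try:
        logic_engine_view["new-symbol"] = "whatever"
    except TypeError:
        print("the firewall holds: only the symbol engine can add symbols")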

I could stand to be enlightened on this point. In my world, I wouldn't
try to connect them, so I have not yet considered the problem.

Richard Loosemore

P.S. About fractals in CAS: that was what was in the back of my mind
as I wrote .... I don't think they do. If they do, I suspect the CAS
would be a weird one. I'll try to do a little research on that one.


