Re: Non-black non-ravens etc.

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Tue Sep 13 2005 - 10:37:03 MDT


Richard Loosemore wrote:
> So the symbol engine has layers to it. The logic engine, on the other
> hand, is somehow independent of that layering and the basic components
> of the logic engine *are* the high level symbols created by the symbol
> engine. Do we call the logic engine high-level or low-level?

This isn't what I'm proposing and I don't think it's what Ben is
proposing either. I am proposing that the 'symbol engine' is implemented
and processed using the 'logic engine' (though I haven't used 'symbols'
in the fuzzy sense for some time; the term is misleading); they are not
separate. The 'logic engine' doesn't need to 'access' the 'symbol engine'
because it is running the 'symbol engine', and the 'symbol engine'
doesn't have to jump through hoops to 'access' the logic engine because
the relevant logical operations are available as cognitive primitives.
I do not have a good understanding of Ben's design, but I get the
impression that though he does have separate logical inference and
'emergence-promoting' (e.g. GAs) mechanisms, they do not constitute
well-separated modules, but instead form the basis of agents in something
like an extremely distributed, adaptive-topology blackboard architecture.
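
To make the shape of that concrete, here is a minimal sketch of the
general blackboard-with-agents pattern. The class and agent names are my
own illustrative inventions; this is emphatically not a description of
Ben's actual system.

    class Blackboard:
        """Shared store that every agent reads from and posts to."""
        def __init__(self):
            self.items = []

        def post(self, item):
            if item not in self.items:
                self.items.append(item)

        def matching(self, predicate):
            return [i for i in self.items if predicate(i)]


    class InferenceAgent:
        """Stand-in for a logical-inference agent: derives new items from old ones."""
        def step(self, bb):
            for rule in bb.matching(lambda i: i.get("type") == "implication"):
                if {"type": "fact", "value": rule["if"]} in bb.items:
                    bb.post({"type": "fact", "value": rule["then"]})


    class EvolutionAgent:
        """Stand-in for an 'emergence-promoting' agent (e.g. a GA) that proposes
        new candidate structures rather than deducing them."""
        def step(self, bb):
            pass  # mutate/recombine existing items, post promising variants


    def run(bb, agents, cycles=10):
        # Trivial round-robin scheduler; a real system would use an adaptive,
        # distributed control policy rather than a fixed loop.
        for _ in range(cycles):
            for agent in agents:
                agent.step(bb)


    bb = Blackboard()
    bb.post({"type": "fact", "value": "wet"})
    bb.post({"type": "implication", "if": "wet", "then": "slippery"})
    run(bb, [InferenceAgent(), EvolutionAgent()])
    print(bb.matching(lambda i: i.get("type") == "fact"))

The point is just that 'inference' and 'emergence promotion' show up as
agents operating over shared content, not as walled-off modules that have
to 'access' one another.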

> If that basic-substrate logic engine were to interact with the symbols
> created by the symbol engine, how would it do it?

Were you to do this, the 'basic-substrate' logic engine would act like
a 'logic modality'. Logical systems would be imagery in the 'logic
modality', and logical operations would be opaque imagery transforms.
In a system with such a crufty 'symbol engine', presumably this would
be a relatively efficient way to apply brute-force logical deductive
power. For an AI with a logic modality, a complex 100-step proof may,
once the axioms are defined, seem as obvious as the question of whether
a tennis ball will fit through a donut does to a human with a visual
modality. An integrated design can achieve the effects of a 'logic
modality' via reflection, if it needs to at all.
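
If it helps, here is a rough sketch of what a 'logic modality' might look
like from the rest of the system's point of view. The class and the naive
forward-chaining prover inside it are purely illustrative assumptions on
my part, not a proposal.

    class LogicModality:
        def __init__(self, axioms):
            # 'Imagery': the currently presented logical system.
            self.facts = {a for a in axioms if isinstance(a, str)}
            self.rules = [a for a in axioms if isinstance(a, tuple)]  # (premises, conclusion)

        def transform(self):
            """Opaque operation: saturate by forward chaining and return
            what became derivable. The caller never sees the steps."""
            changed = True
            while changed:
                changed = False
                for premises, conclusion in self.rules:
                    if set(premises) <= self.facts and conclusion not in self.facts:
                        self.facts.add(conclusion)
                        changed = True
            return self.facts

    # Example: a 100-step chain looks 'obvious' to the modality's user.
    axioms = ["p0"] + [((f"p{i}",), f"p{i+1}") for i in range(100)]
    print("p100" in LogicModality(axioms).transform())  # True

The caller hands over axioms as 'imagery' and gets back whatever became
derivable; the proof search in between is as invisible as the intermediate
stages of visual processing.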
 
> Let's overlook the deficiencies of connectionism (aka Neural Nets) for a
> moment and push my previous example a little further.

Your description appears to match that of the 'hybrid systems' subfield
of AI, which in a well-meaning but misguided attempt at synthesis
advocates combining symbolic and connectionist approaches as the answer
to AI's problems. This mostly means the crude gluing together of a
production system 'central cognition' engine and NN input/output
processors, though there has been some more interesting work done on
systems that use two-way NN/rule-set conversion and which combine NNs
and production systems in an agent network. Regardless, it still doesn't
work and is based on the fundamentally flawed approach of gluing things
together without understanding why the component techniques didn't
work on their own, how the combination will rectify the flaws, or indeed
in most cases the end-to-end functional mechanisms the overall system is
supposed to implement.
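
For concreteness, here is a deliberately crude caricature of that gluing;
all names are hypothetical and no particular hybrid system is being
described. A connectionist front end is bolted onto a production-rule
'central cognition' with nothing but a thin translation layer between them.

    import numpy as np

    def nn_perception(pixels, weights):
        """Connectionist input processor: maps raw input to a symbol label."""
        activation = np.tanh(weights @ pixels)
        labels = ["ball", "donut", "nothing"]
        return labels[int(np.argmax(activation))]

    PRODUCTION_RULES = [
        # Production-system 'central cognition': condition -> action symbols.
        ({"sees": "ball"},  "reach-for-ball"),
        ({"sees": "donut"}, "ignore"),
    ]

    def central_cognition(percept_symbol):
        working_memory = {"sees": percept_symbol}
        for condition, action in PRODUCTION_RULES:
            if all(working_memory.get(k) == v for k, v in condition.items()):
                return action
        return "do-nothing"

    def hybrid_agent(pixels, weights):
        # The 'integration' is just function composition.
        return central_cognition(nn_perception(pixels, weights))

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(3, 16))
    pixels = rng.normal(size=16)
    print(hybrid_agent(pixels, weights))

Neither side knows anything about how the other represents or processes
information, which is exactly the objection above.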
 
> This would be difficult or impossible, because there is no way for you
> to impose a new symbol on the symbol engine; the symbols emerge, so to
> create a new one you have to set up the right pattern of connections
> across a big chunk of network, you can't just write another symbol to
> memory the way you would in a conventional system.

This is actually less difficult than you might think. I find it highly
amusing that while connectionists are awing themselves with the opacity
and inscrutability of their own networks, as if that were an /advantage/,
other researchers are demonstrating how NNs operate as function
approximators and how compact symbolic rule sets can be mined from
trained NNs (admittedly at some information loss, but connectionists
don't expect reliability anyway), or rule sets turned into NNs. More
sophisticated recurrent, spiking, dynamic-topology or otherwise
unconventional connectionist systems pose more of a challenge, but in
general connectionist networks are far less opaque to appropriate
data-mining algorithms than they appear to unaided human perception.
While still not useful for (well-designed) AGI, this field does at least
offer tools that may one day be useful for understanding human wetware,
as well as puncturing some of the connectionist mystique.
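
As a toy illustration of the 'pedagogical' flavour of rule extraction
(treat the trained network as a black box, sample its behaviour, and
induce compact rules from the samples), here is a sketch. The tiny
hand-built 'network' and the single-threshold rule format are illustrative
assumptions; real algorithms such as TREPAN are considerably more
sophisticated.

    import numpy as np

    def network(x, w=np.array([4.0, -3.0]), b=-0.5):
        """Stand-in 'trained' network: a single sigmoid unit over two inputs."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    def extract_threshold_rules(net, n_samples=2000, n_inputs=2):
        """Sample the network and, for each input, find the threshold that
        best reproduces its decisions: rules like 'IF x0 > 0.37 THEN fire'."""
        rng = np.random.default_rng(0)
        xs = rng.uniform(-1, 1, size=(n_samples, n_inputs))
        ys = np.array([net(x) > 0.5 for x in xs])
        rules = []
        for i in range(n_inputs):
            best = max(
                ((t, np.mean((xs[:, i] > t) == ys)) for t in np.linspace(-1, 1, 101)),
                key=lambda p: p[1],
            )
            rules.append((i, best[0], best[1]))  # (input index, threshold, fidelity)
        return rules

    for i, threshold, fidelity in extract_threshold_rules(network):
        print(f"IF x{i} > {threshold:.2f} THEN fire   (fidelity {fidelity:.2f})")

The fidelity figure makes the information loss explicit: the extracted
rule reproduces the network's decisions only approximately, which is the
'admittedly at some information loss' caveat above.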

 * Michael Wilson

                


