Re: Control theory, signals, dynamics (was Re: Retrenchment)

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon Aug 22 2005 - 13:02:19 MDT


Phil Goetz wrote:
> That's not at all what I meant. I'm trying to figure out
> the connection between control theory and signal
> processing, and peripheral vs. core complexity, and not
> getting it.

Interpreting sensory data from the real world requires signal
processing, and generating actions for an embodied agent
requires at least some control theory. I'm a strong critic
of the whole peripheral/core divide when (GOFAI) researchers
declare the bulk of perception 'peripheral', but I accept
that the specifics of the early stages of the information
extraction process or the late stages of action generation
are noncritical parts of an architecture specification.
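
For concreteness, here is a minimal sketch (in Python, with
toy dynamics and gains that are purely illustrative
assumptions on my part) of the 'at least some control theory'
I have in mind: a textbook discrete PID loop driving a
one-joint plant toward a setpoint.

    # Textbook discrete PID controller; the gains below satisfy the
    # usual stability condition for a double-integrator plant.
    class PIDController:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    # Drive a toy joint (unit inertia) toward 1.0 radian.
    pid = PIDController(kp=2.0, ki=0.5, kd=1.0, dt=0.01)
    angle, velocity = 0.0, 0.0
    for _ in range(2000):
        torque = pid.step(1.0, angle)
        velocity += torque * 0.01      # toy dynamics: acceleration = torque
        angle += velocity * 0.01       # angle converges near the setpoint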

> Look at Chris Eliasmith's book, "Neural engineering".
> This is an excellent start on constructing modular systems
> out of neural networks.

This is much closer to Loosemore's position, though I'm not
/sure/ that he's proposing to use emergence to build an AGI
(it just seems likely). As I've doubtless made clear, I
consider this silly.

> You can begin to see how you might solve the 100-step
> problem, when you see how to construct neural networks to
> implement each of those 100 steps, where each step is a
> signal processing step such as performing a Fourier
> transform, or mapping deltas into absolute values.

In engineering we do this by designing the appropriate
algorithms. Clearly the relevant specialist theory is
helpful whenever one designs a specialist algorithm, be it
signal processing, edge detection or physics simulation.
But this is learned complexity, which we want an AGI to
induce from an environment or possibly from some processed
knowledge source. I assume you're claiming that we need
control theory and signal processing to design the basic
substrate of the AI, before it learns anything. I am highly
dubious of this claim; you do need information theory, which
is relevant in defining the goals and feasibility of signal
processing, but I do not see the relevance of control theory
or signal processing to the design of a rational inference
substrate. I can see why you'd think you needed them, if you
thought that connectionism and emergence were a good idea.
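
As an illustration of what I mean by engineered specialist
complexity, here is a minimal sketch of two of the kind of
steps you list, written as ordinary designed algorithms; the
test signal and sizes are arbitrary assumptions.

    import numpy as np

    def fourier_step(signal):
        """Signal processing step: magnitude spectrum via the FFT."""
        return np.abs(np.fft.rfft(signal))

    def deltas_to_absolute(deltas, start=0.0):
        """Map a stream of deltas back into absolute values."""
        return start + np.cumsum(deltas)

    t = np.linspace(0.0, 1.0, 256, endpoint=False)
    signal = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(256)
    spectrum = fourier_step(signal)                  # peak near the 10 Hz bin
    absolute = deltas_to_absolute(np.diff(signal), start=signal[0])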

> Control theory by itself is not going to make an AGI, but
> control theory techniques may well be applied to the cyclic
> or chaotic attractors stored in memory in order to
> stabilize them, to make use of them as a pattern generator
> to structure movement OR "thought".

Ok, so we have at least two people sharing this view,
possibly more if the AAII people are taking this view of
pattern processing.
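
To make that shared view concrete, here is a minimal, purely
illustrative sketch: a Hopf-style oscillator whose limit
cycle (a cyclic attractor) is held at a target amplitude by a
simple negative feedback term, and whose output could be used
as a rhythmic pattern generator. Parameter values are my own
assumptions, not anyone's actual design.

    import numpy as np

    mu, omega, dt = 1.0, 2 * np.pi, 0.001   # target radius^2, frequency, step
    x, y = 0.1, 0.0                          # start well inside the cycle
    output = []
    for _ in range(5000):
        r2 = x * x + y * y
        # (mu - r2) is the stabilising feedback: it pushes the state
        # back onto the circle of radius sqrt(mu) from either side.
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x, y = x + dx * dt, y + dy * dt
        output.append(x)                     # rhythmic signal for movement/"thought"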

>> The only paper I can think of off hand that makes a
>> semi-reasonable case for dynamic systems theory as the
>> basis for AGI is 'Dynamics and Cognition' by Timothy van
>> Gelder. And he didn't have any good reasons why one would
>> /want/ to use that as a basis for AGI design, only some
>> vague arguments as to why it should be possible and how
>> it might be useful for analysing the brain.
>
> Now I don't understand the phrase, "the basis for AGI".
> Nothing from any of these communities would be "the basis".

The paper I mentioned describes an architecture in which
dynamic systems theory is the most important descriptive and
predictive design tool, to the extent that I'd call it the
'basis'. Clearly you would not go as far in simplification,
but in my view you have inverted the sensible layering of
complexity and introduced some pointless and poorly
understood holdovers from human cognitive design.

> We have evidence that, at least in some cases, brains use
> attractors, possibly chaotic ones, as memory elements.

I agree. I've read plenty of papers on the subject. We can
do better. That's not even a statement of hubris, because
the task of designing an AGI to run on a (pretty good and
fast approximation of a) Turing machine is much easier,
in the absolute sense, than designing one to run on mammal
neurons.
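
For reference, the standard engineering rendering of
'attractors as memory elements' is something like a Hopfield
network, where stored patterns are fixed points of the update
dynamics; a minimal sketch, with arbitrary example patterns,
follows.

    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    n = patterns.shape[1]
    # Hebbian outer-product weights, zero diagonal.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    state = np.array([1, -1, 1, -1, 1, -1, -1, 1])  # corrupted first pattern
    for _ in range(10):                             # synchronous updates
        state = np.sign(W @ state)
        state[state == 0] = 1
    # 'state' has now fallen into the attractor of the first stored pattern.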

> AS WELL AS fuzzy operators to work on those representations,
> in order to have flexible categorization, analogical
> reasoning, metaphor, creativity, etc.

I do agree that you need fuzzy logic to do those things (it
is, at the risk of sounding like a broken record, necessary
but not sufficient). Fuzzy logic is of course a subset of
Bayesian/probabilistic reasoning.
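
For the record, the fuzzy machinery in question is nothing
exotic; a minimal sketch with made-up membership functions:

    def tall(height_cm):
        """Graded membership in 'tall', ramping from 160 cm to 190 cm."""
        return min(1.0, max(0.0, (height_cm - 160.0) / 30.0))

    def fast(speed_kmh):
        """Graded membership in 'fast runner', ramping from 10 to 25 km/h."""
        return min(1.0, max(0.0, (speed_kmh - 10.0) / 15.0))

    def fuzzy_and(a, b):    # Zadeh t-norm
        return min(a, b)

    def fuzzy_or(a, b):     # Zadeh t-conorm
        return max(a, b)

    # Degree to which a 180 cm person running at 18 km/h is 'tall and fast'.
    degree = fuzzy_and(tall(180), fast(18))   # min(0.67, 0.53) -> 0.53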

> In addition, the time aspect of representation and action
> is very poorly accounted-for by non-dynamical approaches.

Classically, this has been true, and this is an argument
for making very sure that dynamic analysis is efficient and
easily available within the AGI. But again, it's a technique
that a rational reasoner can use where appropriate, not a
technique you need in order to design a rational reasoner in
the first place.
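
A minimal sketch of the kind of dynamic analysis a reasoner
might call on as a tool where appropriate: linearise a toy
system at a fixed point and read stability off the Jacobian
eigenvalues. The system itself is an arbitrary example.

    import numpy as np

    def f(state):
        """Toy dynamics with a fixed point at the origin: damped rotation."""
        x, y = state
        return np.array([-0.5 * x - y, x - 0.5 * y])

    def jacobian(f, state, eps=1e-6):
        """Numerical Jacobian of f at 'state' by central differences."""
        n = len(state)
        J = np.zeros((n, n))
        for i in range(n):
            d = np.zeros(n)
            d[i] = eps
            J[:, i] = (f(state + d) - f(state - d)) / (2 * eps)
        return J

    eigvals = np.linalg.eigvals(jacobian(f, np.zeros(2)))
    stable = bool(np.all(eigvals.real < 0))   # True: the fixed point is stable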

 * Michael Wilson
