Re: Control theory, signals, dynamics (was Re: Retrenchment)

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon Aug 22 2005 - 15:49:26 MDT


> Something like that - I would say that, if you want to
> replicate what brains do, it would be good to think about
> functions that map high-dimensional inputs into
> high-dimensional outputs, rather than about arithmetic
> or logic operators. Signal processing has accumulated
> a large toolkit of useful functions of that kind.

True. You still have to work out a scheme for identifying
which algorithms to use, how to implement them and how to
integrate them to solve any particular problem, but I agree
that a database of algorithms (including, say, the entire
contents of 'The Art of Computer Programming') would be
useful if you can solve those problems.
 
> If you're thinking of programming an AI by creating
> a rational inference substrate - e.g., a 1970s-style
> logic engine - I'm surprised.

Considering how early and often I have derided '1970s-style
logic engines', or rather classic symbolic AI, it should be
apparent that whatever I'm doing, it's not going to be
closely related. I agree that such methods have been
exhaustively tested and found to be useless for AGI.

> You're still speaking as if I were the one advocating
> emergence, but you're the one advocating creating a
> "basic substrate" and letting learning do the rest,
> which is a more emergence-friendly viewpoint.

Actually, I'm relatively keen on having a decent-sized
programmer-supplied knowledge base, as long as everything
in that knowledge base could theoretically have been
learnt and can be understood in retrospect (I apologise
for leaving 'understanding' undefined, but that's a very
long discussion that has been thrashed out before). The
'emergence' issue isn't about that, though. It's about two
things: (a) the causal relationships between the mechanisms
specified by the programmers and the mechanisms created by
the AI, and (b) whether you understand how the AI implements
basic cognitive operations, before and after it starts
solving nontrivial problems. Core mechanisms such as
action selection based on the expected utility of outcomes
and Bayesian adjustment of beliefs strongly determine the
structure of all AI cognition, such that certain constraints
will reliably hold and can be made stable under
self-modification; the causal flow is one-way, from the
core mechanisms to the structure of the rest of the AI to
actions to reality. Designers who propose 'emergence' don't
have a good idea of how cognition can actually be
implemented, so they propose to search solution space until
they find something that seems to act intelligently. Such a
solution will not be well understood and will almost
certainly not have a clean causal structure. It may actually
be possible to specify a fitness function (or a validator,
for iterative AGI-building algorithms that don't have an
explicit fitness function) that would only generate causally
clean systems, but I have yet to see anyone other than
myself (temporarily, a while ago) propose this.
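
To make that concrete, here is a minimal sketch (in Python;
the states, numbers and names are invented for illustration,
not a claim about any particular architecture) of those two
core mechanisms: Bayesian adjustment of beliefs over world
states, followed by action selection on expected utility:

    # Hedged illustration only; all values are invented.
    def bayes_update(prior, likelihood, evidence):
        """Posterior over states given P(evidence | state)."""
        unnorm = {s: prior[s] * likelihood[s][evidence]
                  for s in prior}
        z = sum(unnorm.values())
        return {s: p / z for s, p in unnorm.items()}

    def select_action(posterior, utility, actions):
        """Pick the action maximising expected utility."""
        def eu(a):
            return sum(posterior[s] * utility[(a, s)]
                       for s in posterior)
        return max(actions, key=eu)

    prior      = {'rain': 0.3, 'dry': 0.7}
    likelihood = {'rain': {'clouds': 0.9},
                  'dry':  {'clouds': 0.2}}
    utility    = {('umbrella', 'rain'):  1.0,
                  ('umbrella', 'dry'):  -0.1,
                  ('none',     'rain'): -1.0,
                  ('none',     'dry'):   0.2}

    posterior = bayes_update(prior, likelihood, 'clouds')
    print(select_action(posterior, utility,
                        ['umbrella', 'none']))
    # -> 'umbrella'; P(rain) rose from 0.3 to ~0.66

The point of the sketch is the one-way causal flow: the
update and selection rules are fixed, and everything
downstream of them (beliefs, choices, actions) is
determined by those rules plus the evidence.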

> > Ok, so we have at least two people sharing this view,
> > possibly more if the AAII people are taking this view of
> > pattern processing.
>
> What is AAII?

http://www.adaptiveai.com/

> I'm guessing that you are thinking that pure logic is the
> way to go,

Probably not with the same definition of logic as you. The
last time I checked, Bayesian networks were hip and hot in
mainstream AGI, yet no-one is calling them '1970s reasoning
systems', despite the fact that they're just as 'logical'.
I think that Bayesian networks are a good start, probably
the most promising widely used technique, but a beginning
only.
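
For concreteness about what 'just as logical' means: here
is a toy two-node network (Disease -> Test), with structure
and numbers invented for the example, queried by exact
enumeration over the factored joint:

    # Hedged illustration only; network and numbers invented.
    p_disease = {True: 0.01, False: 0.99}
    p_test_given_disease = {True:  {True: 0.95, False: 0.05},
                            False: {True: 0.10, False: 0.90}}

    def p_disease_given_test(test_result):
        """P(disease | test) by enumerating the joint."""
        joint = {d: p_disease[d] *
                    p_test_given_disease[d][test_result]
                 for d in (True, False)}
        return joint[True] / sum(joint.values())

    print(p_disease_given_test(True))
    # -> ~0.088: a positive test is still mostly false alarms

Every step there is a deduction from the axioms of
probability, which is the sense in which such networks are
no less 'logical' than a 1970s theorem prover.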

> How is your viewpoint different from the old AI
> viewpoint that we spent the 1980s and 1990s demolishing?

As most researchers with a serious architecture seem to be
saying by now, 'that would take a book'. Well, maybe not a
book, but I'm not keen enough on convincing you that I
personally am on the right track to spend a great deal more
time discussing it on yet another forum.

> I don't think we can do better. The "inaccurate",
> highly flexible, fuzzy categorization is an advantage,
> not a design flaw.

'Fuzzy categorisation' need not imply brainlike forgetting,
category blending, variable accessibility and general
random lossage. It implies probability distributions over
sets of entities, such that you can prioritise categories
and possible interpretations and allocate inferential
resources accordingly.
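
As a hedged sketch of that reading (the categories and the
budget are invented for the example): a probability
distribution over candidate interpretations of an ambiguous
input, with a fixed inference budget divided in proportion
to probability:

    # Hedged illustration only; categories/budget invented.
    def allocate_budget(category_probs, total_steps):
        """Split an inference budget across interpretations
        in proportion to their probability."""
        ranked = sorted(category_probs.items(),
                        key=lambda kv: -kv[1])
        return {cat: round(p * total_steps)
                for cat, p in ranked}

    probs = {'bird': 0.6, 'plane': 0.3, 'superhero': 0.1}
    print(allocate_budget(probs, 100))
    # -> {'bird': 60, 'plane': 30, 'superhero': 10}

Nothing gets forgotten or blended at random; low-probability
interpretations simply receive proportionally less
attention, and the distribution is revised as evidence
arrives.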

> I don't mean fuzzy logic. Fuzzy logic is still logic.
> Real intelligent systems don't use logic

Either you mean 'real intelligent systems that exist right
now, i.e. humans', or you're falsely generalising from
'humans'. And in any case, since computers operate on
(Boolean) logic, you're saying that we need a logical
emulation of illogic to make an AGI.

> in which case the use of logic is probably implemented
> using the same mechanisms as other learned tasks, such
> as playing the piano.

True. When we use logic to learn things, we call it
science, a technique which has had some modest successes.

 * Michael Wilson