From: David Clark (email@example.com)
Date: Thu Oct 28 2004 - 18:23:11 MDT
I was quite annoyed by Ben's put-down and dismissal of my comments on his
paper. For the record, he didn't dispute even one of the arguments that I
posted. He could have simply said he disagreed and left it at that, or
ignored my post altogether. If his goal in posting the essay to this list
wasn't to get comments, then why post it?
My primary disagreement with his essay had to do with *defining* a pattern
as having sympathy. (My other major disagreement is his relating patterns
and evolution to compassion but I will leave that one for now.) Contrary to
what he wrote in reply, I think I *do* understand what he means by
"pattern-sympathy". If you look at historical data, you can see
correlations on a consistent basis between similar patterns.
I think I understand why Ben thinks of AI and (from the article) many other
topics in terms of patterns and why he gives these patterns human qualities.
When we look into the past, all we see are patterns. We might see a pattern
of a hungry lion just about to catch the gazelle and then see the pattern of
the gazelle being eaten. If we looked at a large number of these examples,
we would see what some might call one pattern predicting (if not creating)
the next pattern. After all, if we usually see a dead-gazelle pattern after
seeing a charging-lion pattern, you might be led to that conclusion. I have
seen this first hand at a large "Financial Wealth Company" that tried to
predict future bond trends by looking at the past. (The company had very
limited success, and did no better than many good analysts.)
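The pattern-follows-pattern reading of history can be made concrete with a
small sketch (the event names and history here are hypothetical, just for
illustration): all that historical data can give us is how often one pattern
follows another, a correlation, not the hidden algorithm that produced it.

```python
# Hypothetical sketch: counting how often one pattern follows another
# in a recorded history. This yields a correlation between patterns,
# not the algorithm that actually generated them.
history = ["charging lion", "dead gazelle", "grazing", "charging lion",
           "dead gazelle", "charging lion", "escape"]

# How often is a "charging lion" pattern immediately followed by a
# "dead gazelle" pattern?
follows = sum(1 for a, b in zip(history, history[1:])
              if a == "charging lion" and b == "dead gazelle")
total = history.count("charging lion")
print(f"'dead gazelle' followed 'charging lion' in {follows} of {total} cases")
```

Looking at enough such counts, an observer might conclude that the first
pattern "creates" the second, when in fact both are outputs of an unseen
process.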
The problem is that history is all about the patterns. These patterns are
static and easy to see, but they display the results of the algorithms from
the past that we can't see. The patterns don't generate each other, the
algorithms do. We can guess about these algorithms but we can only be sure
of algorithms that exist right now. (Even this is not so easy.) If the
pattern we see is a runny nose, that pattern by itself cannot tell us
whether the cause was a virus, pollen in the air, a little bug or some other
cause. All we know is that the nose is running. The problem is the classic
chicken and egg. Do patterns create algorithms or do algorithms create the
patterns? If we take a pattern and look at it, no matter how long we look,
it does nothing. No matter how fancy! Algorithms are the *do*, but patterns
are the results we are looking for. One is not useful without the other,
but there is an arrow of causality.
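The runny-nose point above can be sketched in a few lines (the two "causes"
here are hypothetical toy functions, not a real diagnostic model): two
different algorithms can emit the very same observable pattern, so the
pattern alone cannot tell us which algorithm produced it.

```python
# Hypothetical sketch: two different algorithms (causes) that emit an
# identical observable pattern. The pattern underdetermines its generator.
def virus(day):
    # Cause 1: an infection gives symptoms for the first week.
    return "runny nose" if day < 7 else "healthy"

def pollen(day):
    # Cause 2: an allergy gives symptoms on high-pollen (even) days.
    return "runny nose" if day % 2 == 0 else "healthy"

# On day 0 both algorithms produce the same pattern:
print(virus(0), pollen(0))  # prints: runny nose runny nose
```

All we can read off the pattern is that the nose is running; the algorithm
behind it is invisible unless we can watch it do its work.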
A program that just looks at patterns, with no algorithms to work on and
change those patterns, will bear out the old adage "garbage in, garbage
out". One way to get the machine to increase its intelligence is by making
use of existing (programmed) algorithms in novel contexts. The other way is
to have more and better algorithms put into the computer so that it has
something to work from. I am not talking about the method used to build
"Cyc". I am talking about strategies for arriving at objectives given a set
of constraints. (This short description defines what a model can do.) This
is why I believe that the foundation of AI will be in modeling. Modeling
can bring together any useful input, and depending on the model, you can
see the intermediate results. The same cannot be said for neural nets,
agent systems, formal logic, expert systems, etc.
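A minimal sketch of that "inspectable model" idea (the function and the
trend rule are hypothetical, invented just for illustration): a model whose
intermediate results can all be examined, unlike a black box that only
yields a final answer.

```python
# Hypothetical sketch: a toy trend model that exposes every intermediate
# step, in contrast to a black-box system that only emits a final answer.
def predict_trend(prices):
    """Predict a trend while recording each intermediate result."""
    steps = {}
    steps["returns"] = [b - a for a, b in zip(prices, prices[1:])]
    steps["avg_return"] = sum(steps["returns"]) / len(steps["returns"])
    steps["trend"] = "up" if steps["avg_return"] > 0 else "down"
    return steps["trend"], steps

trend, steps = predict_trend([100, 101, 103, 102, 105])
# Every intermediate result is open for inspection, not just the answer.
print(trend, steps["returns"], steps["avg_return"])
```

The point is not the trend rule itself but that each step of the model's
reasoning is visible and checkable along the way.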
David Clark --
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT