From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun Nov 28 2004 - 19:45:25 MST
> My primary disagreement with his essay had to do with *defining* a pattern
> as having sympathy.
This is just wrong. I did not define a pattern as having sympathy in an
emotional or ethical sense.
Rightly or wrongly, I defined the term "pattern-sympathy" to mean the
property of a dynamical system wherein, when a pattern occurs in the system
at time T, that pattern is biased to recur at times after T.
Some possible universes will have this property, others will not.
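To make the definition concrete, here is a minimal toy sketch (my illustration, not anything from Goertzel's formal papers): a Polya urn is a classic dynamical system with exactly this property, since each occurrence of a "pattern" (a colour being drawn) biases the system toward producing that pattern again later.

```python
import random

def polya_urn(steps, seed=0):
    """Toy 'pattern-sympathetic' dynamic: a Polya urn.

    Each draw of a colour (the pattern occurring at time T) adds an
    extra ball of that colour, biasing draws at all times after T
    toward the same colour -- the system 'takes habits'.
    """
    rng = random.Random(seed)
    urn = ["red", "blue"]          # one instance of each pattern
    history = []
    for _ in range(steps):
        ball = rng.choice(urn)     # pattern occurs at time T...
        urn.append(ball)           # ...and is now more probable after T
        history.append(ball)
    return history

draws = polya_urn(1000)
print(len(draws), draws.count("red") / len(draws))
```

A universe without pattern-sympathy would correspond to sampling with replacement only: past occurrences would leave the future distribution untouched.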
I then argued that, in dynamical systems possessing this property,
subsystems will tend to display "compassion" in the sense of frequently
acting so as to preserve similar subsystems. This property on the *system*
level I identified as a kind of abstract "compassion" -- derived from, but
not identical to, the "tendency to take habits" aka pattern-sympathy on the
level of the underlying dynamics.
> (My other major disagreement is his relating patterns
> and evolution to compassion but I will leave that one for now.)
The relation of evolution to compassion is not my idea, it's an old one; see
"The Origins of Virtue" for a review of the standard thinking on this.
> If we take a pattern and look at it, no matter how long you look at it,
> it does nothing. No matter how fancy! Algorithms are the *do* but
> patterns are the results we are looking for. One is not useful without
> the other but there is an arrow of causality.
If you look at the references I gave you before, you'll see that I formally
define a pattern as a kind of process -- i.e. a dynamical entity. A
pattern, in short, is a process that simplifies something.
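That idea -- a pattern is a process that simplifies something -- can be sketched with compression as a crude stand-in for algorithmic complexity (my illustration; the formal definitions are in the references mentioned above). A pattern exists in some data when a generating process has lower complexity than the data itself:

```python
import random
import zlib

def complexity(data: bytes) -> int:
    """Crude complexity proxy: compressed size in bytes."""
    return len(zlib.compress(data))

rng = random.Random(0)
structured = b"abc" * 200                              # output of a simple process
noisy = bytes(rng.getrandbits(8) for _ in range(600))  # no short description

# The repeating data admits a process far simpler than itself
# (a pattern); the random data does not.
print(complexity(structured), complexity(noisy), len(structured))
```

On this view the dichotomy between "pattern" and "algorithm" dissolves somewhat: the pattern just *is* the simplifying process, a dynamical entity rather than a static picture.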
> A program that just looks at patterns and has no algorithms to work and
> change the patterns will result in the old adage "garbage in is garbage
> out". The way to get the machine to increase its intelligence is by
> making use of existing (programmed) algorithms in novel contexts. The
> other way is to have more and better algorithms put into the computer so
> that it has something to work from. I am not talking about the method
> used to build "Cyc". I am talking about strategies for arriving at
> objectives given a set of constraints.
Constraints are represented in Novamente by PredicateNodes (which also
represent complex patterns), and strategies for arriving at objectives are
represented as SchemaNodes. Learning of SchemaNodes is done via
probabilistic inference and probabilistic evolutionary learning. It seems
to me that this aspect of Novamente matches your intuitive description.
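To illustrate the division of labour being described -- predicates encoding constraints and patterns, schemata encoding executable strategies -- here is a hypothetical sketch. The class shapes below are NOT Novamente's actual API, only the node names from the text with invented structure:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch only: illustrative, not Novamente's real code.

@dataclass
class PredicateNode:
    """A constraint or complex pattern, as a truth-valued test on states."""
    name: str
    test: Callable[[object], bool]

@dataclass
class SchemaNode:
    """A procedure (strategy) for acting toward an objective."""
    name: str
    procedure: Callable[[object], object]

def satisfies(state, constraints: List[PredicateNode]) -> bool:
    return all(p.test(state) for p in constraints)

# Toy objective: reach a value of at least 10 without exceeding 15.
constraints = [PredicateNode("under_limit", lambda s: s <= 15)]
goal = PredicateNode("reached", lambda s: s >= 10)
step = SchemaNode("increment", lambda s: s + 3)

state = 0
while not goal.test(state) and satisfies(state, constraints):
    state = step.procedure(state)
print(state)  # 12
```

In the real system, of course, the schema is not hand-coded as above but learned, via the probabilistic inference and evolutionary methods mentioned.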
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT