From: Ben Goertzel (firstname.lastname@example.org)
Date: Wed Sep 18 2002 - 08:17:07 MDT
> > I now see a lot more harmony between our differing definitions of these
> > things than I did a month ago.
> I don't. Intelligence is not rationality. Intelligence is not a force
> that processes can draw upon. Intelligence refers to goal-oriented
> processes (NOTE: intelligence != rational intelligence) that draw on
> rationality. The human mind has explicit goals, evolution does not.
Well, I absolutely do not agree that rationality is more "force-like" than
intelligence is. Both rationality and intelligence are descriptors that may
be applied to various systems in the universe.
I can't escape the feeling that your notion of rationality is not entirely
rational. It seems to have some of the dogmatic aspect of a religious
belief. But maybe I'm just not understanding it fully...
I'll make one more attempt to state my perspective on rationality.
What I'd call "explicitly rational thought" is one particular strategy that
intelligent minds can use to achieve their goals. Explicitly rational
thought involves the use of formal logic and probability theory. It doesn't
require that the mind have explicit expressions of the appropriate
mathematical rules inside itself. But it does require that the mind take
*incremental reasoning steps* that are close to the action of individual
mathematical rules from logic or probability theory. This is
"ratiocination"; it's perhaps close to what Eliezer calls "deliberation"
(though I don't fully understand his notion of deliberation).
[ I even think that my recent work on probabilistic term logic and
second-order probability has made a significant contribution to the
understanding of how rational thought works (we'll see what others think on
this when the Novamente book is published -- Eliezer, this is largely new
stuff that wasn't in the rough draft book you read).]
[Why you guys obsess over Bayes' Theorem, which is just one among many useful
results in logic and probability theory, I don't know. It seems to me a lot
better just to talk about "probabilistic logical inference" than to place
Bayes' Theorem at the fore as you habitually do.]
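As a toy illustration of that point (the framing and numbers here are mine,
not anything from the Novamente work): Bayes' Theorem is just one inference
rule sitting alongside others, such as the law of total probability, and an
explicitly rational reasoner takes incremental steps using whichever rule
applies.

```python
# Toy sketch: Bayes' Theorem as one probabilistic inference rule among
# many, not a uniquely central one. The numbers are arbitrary.

def total_probability(p_a, p_b_given_a, p_b_given_not_a):
    """P(B) = P(B|A)P(A) + P(B|~A)P(~A) -- a 'deduction-like' rule."""
    return p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)

def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A)P(A) / P(B) -- the 'inversion' rule."""
    return p_b_given_a * p_a / p_b

p_a = 0.3                                # prior on hypothesis A
p_b = total_probability(p_a, 0.9, 0.2)   # one incremental step: 0.27 + 0.14 = 0.41
p_a_given_b = bayes(0.9, p_a, p_b)       # another incremental step: 0.27 / 0.41
```

Each call is one "incremental reasoning step" in the sense above; Bayes' rule
is simply the second of them, with no special status over the first.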
Next, what I'd call "implicitly rational thought" is when a system has a
goal, and its actions approximate the actions that would be taken by an
explicitly rational agent in the same situation with the same goal -- even
though the system itself is not carrying out incremental operations that are
explicitly rational in nature.
It seems that the maximum implicit rationality, given finite resource
constraints, is often NOT achieved through explicit rationality. This is a
deep cognitive science idea (not original with me, though this phrasing is
my own), which some future mathematics of the mind may allow us to present
as a theorem, who knows.
In other words, it seems that minds can sometimes maximize their implicit
rationality by doing things that at the micro-level don't appear rational.
In fact, something stronger appears to be the case: Often explicit
rationality is an EXTREMELY BAD strategy for achieving maximal
rationality.... In many cases, if one wants to do the
probabilistically-logically best thing, obeying probabilistic logic in one's
incremental mental steps is a TERRIBLE way to do it....
Lack of understanding of this principle is largely responsible for the
excesses of symbolic AI and logic-based cognitive science.
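As a toy illustration of this principle (entirely my construction, nothing
from the email): give two optimizers the same fixed budget of function
evaluations over a large search space. Evaluating candidates exhaustively,
one by one, is a crude stand-in for fully explicit step-by-step evaluation;
an adaptive-step hill climber takes locally "unjustified" jumps, yet under
the same budget it lands far closer to the optimum.

```python
# Toy illustration (my construction): under a fixed evaluation budget,
# a crude heuristic beats exhaustive candidate-by-candidate evaluation.

def exhaustive(f, candidates, budget):
    """Evaluate candidates one by one until the budget runs out."""
    best_x, best_f = None, float("-inf")
    for x in candidates:
        if budget == 0:
            break
        fx = f(x)
        budget -= 1
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def hill_climb(f, x, budget, step=1):
    """Adaptive-step hill climbing: double the step after a success,
    halve it after failing in both directions."""
    fx = f(x)
    while budget >= 2 and step >= 1:
        for cand in (x + step, x - step):
            fc = f(cand)
            budget -= 1
            if fc > fx:
                x, fx = cand, fc
                step *= 2
                break
        else:
            step //= 2
    return x, fx

f = lambda x: -(x - 700_000) ** 2   # smooth goal with a peak far from the origin
_, ex_best = exhaustive(f, range(1_000_001), 2_000)
_, hc_best = hill_climb(f, 0, 2_000)
# hc_best ends up far closer to the true optimum than ex_best
```

The hill climber's individual moves aren't justified by any probabilistic
derivation, but judged by the goal it is the more implicitly rational of the
two procedures given these resource constraints.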
This principle means that it is VERY DIFFICULT to assess the (implicit)
rationality or otherwise of another system. Not impossible, just
difficult -- because what looks like irrationality may sometimes be "the
execution of heuristics that are implicitly rational, although in their
incremental performance very different from explicit rationality."
I define intelligence as the ability of a system to achieve complex goals in
complex environments. You may define intelligence differently; intelligence
is a natural-language concept which is intrinsically fuzzy, and my
definition may capture only part of it.
Given my definition of intelligence, it follows that a system's intelligence
should be closely tied to its degree of *implicit rationality*.
Specifically, it seems to follow from my definitions that, given fixed
resource constraints and a fixed set of goals, a system with more implicit
rationality will have more intelligence.
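One way to make that reading concrete (a sketch under my own assumptions;
the weighting scheme below is a placeholder of mine, not the formal
definition from the Novamente work):

```python
# Hedged sketch: intelligence as goal-achievement weighted by the
# complexity of each (goal, environment) pair. The complexity weights
# are placeholders, not a measure proposed in the email.

def intelligence(achievement, complexity):
    """achievement: {(goal, env): score in [0, 1]}
    complexity:  {(goal, env): positive weight}
    Returns the complexity-weighted average achievement, so success on
    complex goals in complex environments counts for more."""
    total = sum(complexity[k] for k in achievement)
    return sum(achievement[k] * complexity[k] for k in achievement) / total

scores  = {("navigate", "maze"): 0.9, ("prove-theorem", "math"): 0.3}
weights = {("navigate", "maze"): 1.0, ("prove-theorem", "math"): 3.0}
print(intelligence(scores, weights))   # approximately 0.45
```

Under a scheme like this, holding resources and goals fixed, raising the
achievement scores -- i.e., acting closer to how an explicitly rational agent
would act -- raises the measured intelligence, which is the claimed tie
between intelligence and implicit rationality.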
What I've just described seems to me a very pragmatic treatment of
rationality in the mind. I don't see why one needs to posit a cosmic force
of rationality, nor do I see why one wants to single out Bayes' Theorem as
being more important than all the other rules of logic and probability
theory.
It is also true that one can use logic and probability theory to model the
behavior of non-intelligent systems. They are universal modeling tools --
not always useful, but almost always applicable (though in cases of very
small sample spaces, probability theory is not reliable). I don't see why
this fact should be blown up into proclamations like "You must learn to see
BPT coursing through the veins, capillaries and arteries of the cosmos", or
however Eliezer phrased it. Arithmetic is also a universal and commonly
useful modeling tool; so are differential and integral calculus (if one
includes their discrete analogues). We are clever to have invented these
very general tools for understanding the universe! But one shouldn't
confuse the universe itself with our tools for understanding and analyzing
it.
I should clarify one thing, however: According to my definition of
intelligence as currently stated, *explicit* goals aren't required.... I
think that requiring explicitness leads one down a very difficult path. Try
to formally define "explicit"! [This comes up in the theory of AI quite a
lot -- it gets at the question of symbolic AI versus subsymbolic AI... when
does one say that a neural net contains an explicit rather than an implicit
representation of something?]
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT