ESSAY: AI and Effective Sagacity

From: Mitch Howe (mitch_howe@yahoo.com)
Date: Thu Aug 02 2001 - 01:48:34 MDT


[The following is an essay I've written on human thought as it relates to
AI. I am not a neuroscientist; I base my conclusions on what I hope is a
logical analysis of experiences I believe are common to us all. I frame
these conclusions in the context of AI to see if they shed any light on the
requirements for creating a seed AI and nurturing it into a
superintelligence. I believe they do, and would like to know if anyone
agrees. 'Join' post to follow.]

**AI and Effective Sagacity** - Mitchell Howe 8/2/01

In the field of AI, the supergoal is to create an information processing
system that does something truly significant. (Whether this something is
good, bad, of financial worth to a few, of world-ending importance to many,
etc., depends upon who is doing the programming and how successful they are
at it.) The seemingly essential subgoal that defines AI research is to
create a system that can both learn and improve itself in a self-reinforcing
manner, eventually meeting the end objective of significant action. Some
minimal yet critical combination of software elegance and hardware
capability is required to get to this point.

Discussion often lingers on the questions of how near to the capacity of the
human brain such a system would need to be in order to meet this goal, or
even what fraction of the human brain's capability might be required. I
believe such questions are largely meaningless because they lose sight of
the only supergoal - that such a system sustainably learn and improve,
leading to eventual significant action.

Consider this in light of the debate about whether a person with an IQ of 50
can ever hope to achieve the results of someone with an IQ of 100. Within
the wide range of IQ scores held by capable adults, there are many people
with high IQs who have failed to contribute anything insightful or even
useful, just as there are many with lower IQs who have come up with
world-changing ideas and become leaders in business. (While far from
scientific, an issue of TIME from early this year had fun with this idea.)
The ability to solve simple problems and draw logical conclusions from given
data, as measured by IQ scores, does not correlate directly with the AI
supergoal of doing something truly significant. Somebody may know how to
design a better mousetrap yet never do anything with this knowledge. We
would hope that an AI would not likewise 'fizzle' (unless its better
mousetrap design was a grey goo that would wipe out all mammalian life).

I believe that a large part of the surprisingly common discord between IQ
scores and societal significance can be explained by my simple theory of
'Effective Sagacity'. It begins with the idea that there are various levels
of thought experienced in the human mind, and that only the time spent at
the highest level contributes to genuinely productive intelligence. I
prefer to identify just two levels of thought with the disclaimer that there
is no hard line between them. I like to call them Fidget and Sage.

Fidget is the level of thought that involves making numerous small, trivial
decisions and enacting any routine physical actions these decisions require.
Many activities, once learned, become Fidgetized. Card shuffling and
dealing. Assembly line tasks. Simple arithmetic. Brushing your teeth.
You know that they are Fidgetized because you can think about something else
entirely while doing them. But you don't always think about something else,
because Fidget is often capable of dragging the Sage mind along behind it in
lock-step. (I'll talk more about the interplay between these two in a
moment.) Fidget cannot intentionally change your life, but it is very
useful and powerful nonetheless.

Sage is the level of thought that involves conscious consideration and
complex decision-making. It is the level you are at when you not only hear
what your professor is saying, but also think about it, relate it to your
model of the universe, and implement it accordingly - *learning*. Sage is
responsible for pondering the deeper questions of life, sustaining
meaningful conversation, and drawing conclusions about your identity. It
was hopefully the level you were at if/when you decided on a career, spouse,
etc. Sage is not all-powerful, though. For starters, it has very low
endurance when most actively engaged, like someone who can walk for miles
but can barely run a lap around the track. It is also easily distracted by
inconsequential tasks, like a dog happily entertained for hours by a simple
game of catch. In fact, given the choice between running a lap and
repeatedly grabbing a stick in its mouth, Sage will usually bring you a
drool-covered stick.

Because of the complementary talents of Fidget and Sage, they have a very
friendly relationship. People are often most satisfied when both are
simultaneously occupied at a low-to-middle stress level. Solitaire on the
computer is mostly a thoughtless exercise of mouse clicks under Fidget
control, with occasional input from Sage when an actual strategic decision
needs to be made. Neither mind is working terribly hard but both are
occupied and satisfied - a condition of well-being some researchers have
called "flow". Fidget is just as happy to spend hours throwing a stick as
Sage is to chase it and bring it back -- the seductive addiction of video
games and jigsaw puzzles is explained.

The poor endurance of Sage, and its desire to rest at an optimal
lower-stress activity level, also sheds light on many kinds of
procrastination, since the thing you put off doing is often some special
case that requires a higher Sage activity level. "I can't study anymore for
my final. I must go for a swim and work on my tan." "I can't finish
writing about levels of thought right now. I must play Diablo II for a
couple of hours."

(Five hours later)

There are times, though, when one level of thought operates almost
independently of the other. If you have ever been putting staples in
hundreds of documents when you realized that you had run out of staples a
dozen slams of the stapler ago, you know what I am talking about. The fully
Fidgetized task did not require the attention of Sage, who found something
else to do and failed to notice and report the absence of staples. It is
called either "daydreaming" or "spacing out", depending on whether Sage was
meandering through the park or asleep on the bench when it was discovered.
Driving is an activity that unfortunately lends itself to inappropriate
Fidgetization. While first learning to drive, few can really think about
much else besides driving, but over time the procedures become more routine.
Many, many traffic accidents have occurred because people allowed Sage to
leave driving completely up to Fidget, who does not react promptly when
something unexpected occurs. Perhaps Sage was talking to his stockbroker
on the cell phone, or perhaps just carrying on an imaginary conversation
with an ex-lover who would be oh-so jealous about seeing him with so-and-so
behind the truck that just stopped suddenly in front of -WHAM!-. (I mean,
honestly, there are few excusable reasons to rear-end someone.)

Sage can also be deliberately put out to pasture, and this is frequently
done when Fidget is busy and can't play. Many drivers and workers in
repetitive jobs either consciously or unconsciously silence Sage by
listening to music - an activity that for many gets Sage absently swaying to
the beat. (This is not always the case when listening to music, but a use to
which it is frequently put.)

Even if Fidget is not busy, Sage can be intentionally suppressed. For some,
like angst-ridden teenagers, conversations with Sage may be so disturbing
that loud music is the best way to drown them out. For others, chatting
with Sage may simply be dull and unsatisfying. Alcohol and marijuana are
known Sage-suppressants. TV offers many levels of basic thought occupation,
catering mostly to minds ranging from the "moronic" to the "typical
American" - which is why many noticeably intelligent people have just one or
two favorite shows and renounce the rest as a worthless morass of glandular
titillation.

So what do I mean by "Effective Sagacity"? Well, by now it should be
obvious that humans, on average, spend very little time with Sage hard at
work. Sage is usually engaged in trivial games with Fidget, deliberately
distracted while Fidget is busy, or intentionally suppressed because of
boring or uncomfortable mental dialogue. It may even be that Sage, when
allowed to slack off so much, becomes even more out of shape and incapable
of running laps. (I reluctantly draw this conclusion knowing that I give
ammunition to those who deride my generation and those that follow as
having no attention span thanks to today's ubiquitous entertainment
technology.) The problem is, high-level Sage-thought is the only kind that
fosters true learning, creativity, experimentation, etc. Therefore, even
the most high-IQ human may never produce anything new or useful to society
if she is unable or unwilling to regularly put her lanky-but-lazy Sage
through its paces. The low-IQ underdog may climb to the top of his field
because his awkward-but-fit Sage is continually running marathons. The
formula is as follows:

**The time previously invested and the time currently being spent in
highest-level thought combine to form one's "Effective Sagacity." In the
end, this is the *only* measurement of mental capacity an AI researcher
ought to be interested in.**

Note that I did not say that Effective Sagacity was the proportion of high
Sage thought to other thought, nor did I say that it was the average height
of one's thoughts. Only highest-level 'Sage' thoughts count. Only thoughts
already completed (which by definition have enriched the mind) or currently
undertaken count. This means that a mind too unsophisticated to think any
deep thoughts will automatically be disqualified from having a high
Effective Sagacity. It also means that a high IQ - the mere potential to
think really big thoughts - is meaningless on its own.
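
To make the scoring rule concrete, here is a toy sketch in Python - my own
illustration, not a serious metric, with hypothetical names and numbers:
only hours logged at the Sage level count, past or present, while
proportions and averages are ignored.

```python
def effective_sagacity(thought_log):
    """Toy scoring rule: sum only the hours spent at the highest
    ('Sage') level, whether already completed or currently underway.
    The proportion of Sage time and the 'average height' of one's
    thoughts are deliberately ignored."""
    return sum(hours for level, hours in thought_log if level == "sage")

# Hypothetical weekly logs: a high-IQ idler versus a low-IQ plodder.
idler = [("sage", 1.0), ("fidget", 40.0)]
plodder = [("sage", 25.0), ("fidget", 16.0)]

print(effective_sagacity(idler))    # 1.0
print(effective_sagacity(plodder))  # 25.0 - the underdog wins
```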

When we talk about AI, it must be said that a self-improving seed
intelligence has the potential to have an Effective Sagacity score
completely off the charts compared to humans. This is fine. If, due to
faster-than-neuron circuitry and clever software, the AI thinks through the
equivalent of 1,000 human years of high Sage thought in just two weeks (a
speedup factor on the order of 26,000), the scale is not broken - just
embarrassing to humans. It may also be that this same AI is thinking
thoughts of far higher Sage than humans are capable of. This is more of a
stretch for the Effective Sagacity scale, but if such is demonstrably the
case, then the machine is already a superintelligence that is probably
doing something very significant. Hope it's friendly.

An AI researcher, then, should also take heart in the knowledge that most of
the human mind's activity may not need to be replicated in order to create a
machine that thinks high Sage thoughts. Others have already stated well the
reality of the human mind's origins and its preoccupation with biological
drives. These same forces undoubtedly worked in some way that I do not
fully understand to create the range of generally low-endurance Sage most of
us rely upon to learn and create. An artificial intelligence would not only
be free of the bio-burdens of survival, but also of the human limitations on
sustained high-level thought. It may not be necessary to come even close to
matching human neural capacity in silicon, not only because so many of the
brain's body-tending tasks need not be wired in, but because the primary
thought tasks that are programmed will be consistently carried out. If a
software engineer spends just 30 minutes a day actually entering code, she
is probably not spending the other 7.5 hours thinking about that code, but
rather some 2.5 hours thinking about the code, 2 hours thinking about food,
sex, or social status, and 2 hours "spaced out" or otherwise incapacitated
by Sage lazily chasing down or soaking up trivial thoughts of some kind or
other. An AI should be able to tweak this balance strongly in favor of
on-target thought.
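
As a back-of-the-envelope illustration (the time budgets below are rough
guesses taken from the paragraph above, not measurements), the reallocation
might look like this:

```python
# Hypothetical daily time budgets in hours - illustrative guesses only.
engineer = {"entering code": 0.5, "thinking about code": 2.5,
            "food, sex, status": 2.0, "spaced out": 2.0}
ai = {"on-target thought": 23.0, "housekeeping": 1.0}

def on_target_fraction(budget, productive):
    """Fraction of the accounted-for time spent on the actual task."""
    spent = sum(budget[k] for k in productive if k in budget)
    return spent / sum(budget.values())

print(f"{on_target_fraction(engineer, ['entering code', 'thinking about code']):.0%}")  # ~43%
print(f"{on_target_fraction(ai, ['on-target thought']):.0%}")  # ~96%
```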

It is possible that this conclusion is wrong; it could be that there is some
fundamental limitation inherent in the brain's level of computational
capacity that makes it possible to learn effectively for short periods of
time but impossible to do so for weeks on end - but I doubt it. It could
also be that an AI would have its own crippling correlates to human Fidget
activities - exhaustive memory or data-stream management, perhaps. These
Fidget distractions could easily demand so much attention that little
capacity is left for Sage thought. (This metaphor may very crudely apply to
Ben Goertzel's early incarnation of Webmind.) More efficient coding and
more powerful hardware seem very likely to overcome this potential
bottleneck soon, however.

All these happy conclusions seem to support the view of a hard, fast AI
takeoff sooner rather than later. I'm all too happy to stand by that, but
the Effective Sagacity view suggests an additional hurdle for a growing
seed AI - the limits of human knowledge obtained thus far. A highly
Sagacious AI would be very adept at learning new material, at internalizing
input to create a more accurate model of the universe, and at using this
model to produce insightful output. The problem potentially arises after
the young AI has devoured all available texts and treatises on computer
science along with all examples of program code - and perhaps managed to
make only modest improvements to its own design. Further progress could be
very slow without additional instructional materials. Fortunately, the
truly Sagacious AI could also effectively find its way out of this
cul-de-sac of human thought. It could do so the same way outstanding
scientists do today: by identifying the limits of current understanding and
coming up with the right questions to ask in order to expand those limits.
The AI could either come up with great experiments to advance human
knowledge, or, more efficiently in the software field, create and perform
experiments on its own. Even if the AI is -merely- capable of directing
humans in bold new experiments, it has already done something truly
significant. This would also increase the likelihood that it would continue
to be capable of improvement and of further significant action.

The Effective Sagacity view suggests that the goal of AI is simpler than it
is often made out to be. Not only does AI not require replication of the
human brain, it should not prove as susceptible to the subtle weaknesses
that sap the capacity of even the most brilliant humans to sustain
high-level thought. It would be naive, however, to suggest that creating an
AI is a simple task. Coding and wiring for a truly significant new
intelligence demand both daring creativity and enviable perseverance. It
will require thinkers of the highest Sagacity.

****
