From: Ben Goertzel (firstname.lastname@example.org)
Date: Sat Mar 23 2002 - 10:28:17 MST
A couple weeks ago I posted here two versions of a brief informal paper on
"defining intelligence and self-modification."
I just now found time to revisit those draft papers.
One key point that came up: As two people separately pointed out to me, my
definition of "substructural self-modification" was essentially meaningless.
Well, you win some, you lose some.
In revisiting the more mathematical variant of the paper, I found myself
irresistibly drawn to using some special Novamente-style notation. For this
reason, the current version of that paper no longer stands alone; I've
instead made it an appendix in our someday-to-appear treatise on Novamente.
I took the old draft off the Web.
However, the nonmathematical variant of the paper has also been revised a
bit, and exists online at:
I hope it was clear to everyone that, by using math in formulating these
ideas in the previously posted draft, I was not trying to give any undue
impression of rigor or "definitiveness" of the ideas. These are definitely
highly speculative ideas, and my use of math in this context is purely
because I sometimes find math formalism useful for clarifying my own
thoughts to myself. For instance, it was the attempted mathematical
formalization that made it clear that my previous attempt to clarify the
intuitive idea of "substructural self-modification" was meaningless ;)
Two ways of measuring "degree of self-modification" that still appear to me
to be clearly meaningful are:
1) noetic intelligent self-modification. By this I mean changes in the
system's structure that are causally related with increases in the system's
intelligence.
2) dynamical intelligent self-modification. By this I mean changes in the
system's dynamical behavior that are causally related with increases in the
system's intelligence.
What I was trying to capture with the now-defunct definition of
"substructural self-modification" was something a little different: the
degree by which the system modifies its own “basic components.”
This intuitive concept is certainly relatively simple. For instance, if a
human learns new thought processes, this modifies his brain’s dynamics to a
certain extent. On the other hand, if a human augments his brain with a
neurochip that enables him to do arithmetic and algebra at high speeds, this
modifies his knowledge base and dynamics to a very large extent.
One would like to say that the neurochip expands the dynamical repertoire of
the system in some fundamental sense. Once the chip is there, the system
can do a lot of things it could not have done otherwise: for instance,
compute 1757775555*466464444 in a tenth of a second.
On the other hand, it is not clear to me at this point how this is really
qualitatively different from dynamical self-modification. After all,
learning a new cognitive skill allows one to carry out activities that could
not have been carried out otherwise.
As far as I can tell at the moment, the difference seems to be mainly one of
extent. A neurochip suddenly allows a huge amount of new “implicit
dynamics.” It thus allows the formation of all sorts of system states that
would have been impossible otherwise.
In the physical world, there is going to be a ceiling to the intelligence
that any finite system can attain without modifying its basic components.
“Substructural self-modification” in the intuitive sense is thus necessary
for dynamical and noetic self-modification to continue beyond a certain
point.
It seems to be possible to formalize the notion of substructural
self-modification, but only by taking a point of view that isn't entirely
satisfactory to me. For instance, one can consider a physical system in
terms of a hierarchy of abstract state machines M1, M2,…,Mk, where each
state of Mi is defined as a set of states of Mi-1, and the transitions
between states of Mi are consistent with the transitions permitted between
states of Mi-1. In the case of a computer program we may have, roughly:
M1 = computer hardware
M2 = operating system
M3 = programming language
M4 = AI program
M5 = Easily mutable parts of AI program
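The layered-machine idea can be sketched in code. The following is a minimal toy illustration of my own (the class names, the example states, and the particular "consistency" rule are all my assumptions, not anything from Novamente): each state of a higher machine is a set of lower-machine states, and here I read a high-level transition as consistent if at least one underlying low-level transition realizes it.

```python
# Toy sketch of a hierarchy of abstract state machines, where each state
# of machine M_i is a set of states of M_{i-1}, and M_i's transitions must
# be consistent with the transitions permitted in M_{i-1}.
# All names and the consistency rule here are illustrative assumptions.

class Machine:
    def __init__(self, states, transitions):
        self.states = set(states)            # hashable state labels
        self.transitions = set(transitions)  # set of (src, dst) pairs

    def permits(self, src, dst):
        return (src, dst) in self.transitions

def lift(lower, groups, transitions):
    """Build M_i from M_{i-1}: each M_i state is a named set of
    lower-level states; keep only those proposed high-level transitions
    that some permitted lower-level transition realizes."""
    hi_states = {name: frozenset(members) for name, members in groups.items()}
    consistent = set()
    for (a, b) in transitions:
        # Consistent if some member of group a can step to some member of b.
        if any(lower.permits(s, t)
               for s in hi_states[a] for t in hi_states[b]):
            consistent.add((a, b))
    return Machine(hi_states.keys(), consistent)

# M1: a toy "hardware" layer with four states cycling h0 -> h1 -> h2 -> h3 -> h0.
m1 = Machine({'h0', 'h1', 'h2', 'h3'},
             {('h0', 'h1'), ('h1', 'h2'), ('h2', 'h3'), ('h3', 'h0')})

# M2: a toy "operating system" layer whose states aggregate hardware states.
m2 = lift(m1,
          groups={'idle': {'h0', 'h1'}, 'busy': {'h2', 'h3'}},
          transitions={('idle', 'busy'), ('busy', 'idle')})

print(sorted(m2.transitions))
```

On this sketch, substructural self-modification in the neurochip sense would be an edit to a low machine such as m1, which then changes which high-level transitions are even possible above it.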
Generally speaking, intelligent modifications to lower levels of the
hierarchy will tend to cause higher degrees of noetic and dynamical
self-modification. “Substructural” self-modification as in the neurochip
example occurs at a lower level than modification of learning algorithms via
experiential adaptation. In Novamente, for instance, modifications to the
underlying source code occur at a lower level than modifications to schema
operating with a fixed-source-code system.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT