Re[2]: project COSA

From: Cliff Stabbert (cps46@earthlink.net)
Date: Sat Aug 10 2002 - 20:36:50 MDT


Saturday, August 10, 2002, 11:08:33 AM, Ben Goertzel wrote:

BG> I agree with that -- advanced AGI's will develop nonhuman ways of
BG> programming. Our programming paradigms are based on the linear-syntax
BG> nature of human language, whereas AGI's won't communicate using linear
BG> syntax in the human-language sense.

I actually wonder about that. Human brains, massively parallel as
they are, exhibit "awareness" and "identity" precisely in the sense
that there is some sort of top-level linear, sequential and strongly
language-bound process: the ego. To what degree that is an
"accidental" evolutionary result (i.e., one physical body = one
"awareness") versus an essential component is IMO unclear. (During
dream states, meditative states, drug- and ritual-induced states, etc.,
something called ego-loss can occur, and one can become aware of, or
feel the illusion of, a multitude of subprograms, or of the absence of
any program at all. But one would probably not pass a Turing test in
such a state.)

But by and large I suspect we won't recognize an intelligence _as_
intelligent unless it has some sort of top-level "main thread". This
main thread quality is precisely what language is so suited to (or
co-evolved with, or what have you).

I realize the above remarks are very vague, but I'm having a
difficult time putting this into words. Basically, if not only the
underlying processing but the whole thing is parallel, it goes way
beyond our capacity to recognize it, let alone understand it. I would
imagine that for the foreseeable future we will want to build AIs we
can communicate with in some form or another -- to give them problems
to solve, say, and to receive solutions. IMO, this requires some form
of linear, sequential language (whether represented as such, or
visually, or otherwise, is irrelevant).

(This ties back to some earlier questions I had about the concept of
an AI having an unconscious, which I'm more and more starting to
expect will be a feature of any AI we recognize as I.)

To give an example of a "superintelligence" or "intelligence" that we
cannot recognize as such, I could point to the earth as a whole
system, or the universe, or what have you: any complex system that
does not have a clearly (to us) definable /center/, some obvious
/locus/ of decision or motivation.

In an earlier post in this thread, you wrote:

BG> Sorta like the idea of recording music by humming the notes instead
BG> of using your fingers on an instrument. Sure, it seems easier at
BG> first, if you have the proper tech. But ultimately, humming isn't
BG> going to give you the ability to play Shostakovich or Yngwie Malmsteen
BG> within a reasonable amount of effort...

An excellent (if discouraging for lazy theoretical guitar soloists)
point. Nonetheless most music can be and is represented by linear
symbolic languages. (And given the right neural interfaces, it may be
possible at some point to have a good conductor record a full symphony
without an orchestra -- I recall reading somewhere about direct interfaces
to early-audio-processing neurons being able to pick up "imagined"
sound, but I 1) may be misremembering the details and 2) don't know
how far that work has progressed.)

The reliability factor is a complex one. We can make simple programs
and simple components as reliable as we want, up to and including
mathematical proof. The problem is with complex systems and, even
more so, evolving/learning systems. The ideal of software is that it
lets us abstract, but so far most efforts seem to fail or get stuck at
a certain level: there are so many languages promising reusability,
etc., and so few real-world projects that seem to actually achieve it.
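To make the first half of that claim concrete, here's a toy sketch in
Python -- my own illustration, nothing to do with COSA or anyone's
actual methodology. A trivial component can be checked exhaustively
over a test range, which is the poor man's version of a proof; it's
exactly this kind of assurance that stops scaling once components
compose and start adapting.

    # A trivial component: clamp a value into a range.
    def clamp(x: int, lo: int, hi: int) -> int:
        """Return x limited to the interval [lo, hi]."""
        return max(lo, min(x, hi))

    # For a small enough input space we can check it exhaustively --
    # the closest everyday approximation to "mathematically proven".
    for x in range(-50, 51):
        result = clamp(x, -10, 10)
        assert -10 <= result <= 10                 # output always in range
        assert result == x or x < -10 or x > 10    # untouched when already in range

    print("clamp verified over the tested range")

Nothing remotely like that check survives contact with a system whose
behavior emerges from thousands of interacting, learning parts.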

The great hope of infinite abstractability -- i.e., specify 0-level
(language-level) things, build 1-level things on 0-level things, ...,
n-level programs on (n-1)-level programs -- seems to me to be missing,
at least in a practical sense. (Note that I have yet to learn Lisp,
let alone create its monster hybrid with Forth.)
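To be clearer about what I mean by levels, here's a toy sketch in
Python (again my own illustration, not Lisp, Forth, or COSA): each
level is built strictly out of things defined at the level below it.

    # Level 0: "language level" primitives.
    def emit(note):
        # Pretend this plays a single note; here it just prints it.
        print(note, end=" ")

    # Level 1: things built only from level-0 things.
    def arpeggio(root):
        for interval in (0, 4, 7):      # a simple major triad
            emit(root + interval)

    # Level 2: a "program" built only from level-1 things.
    def cadence(key):
        for degree in (0, 5, 7, 0):     # roughly I - IV - V - I
            arpeggio(key + degree)

    cadence(60)                          # 60 = middle C in MIDI numbering
    print()

In practice the tower seems to topple somewhere around level two or
three: the abstractions leak, and you end up reasoning about the
primitives anyway.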

With visual languages that work on some sort of signal-processing or
neural-network level, I fail to see how we can build or inspect
anything even somewhat complex -- they don't appear to cater to levels
of abstraction at all.

Now, human language appears ideal in the abstraction sense: I can say
"make me a Space Invaders, but where the aliens are students throwing
chalks and I have three protective desks". But human languages are of
course ambiguous, metaphorical, etc. There is some sort of Ideal that
is being quested for here and it's hard to put a finger on it. COSA
is trying to put a finger on it, a million projects are trying to put
their finger on it. But abstracting without loss of precision seems
to me /inherently/ impossible and thus all such efforts are doomed to
fail. If it's complex enough to be interesting, it won't be
verifiable.

A further spanner I'd like to throw into the works here, even more
off-topic than the rest of this post: I suspect that although we tend
to *think* of certain types of logic and thought as a visual (or more
accurately, graph-like) phenomenon, there is IMO a strong kinesthetic/
proprioceptive component to human reasoning. This last point is utter
speculation on my part, based on introspection, and I have no evidence
for it at all.

Regardless, even if a program can be represented in some neat
visual/graph paradigm, one still has to sit there "imagining the flow"
to understand what it does. And that IMO is its essential flaw: it
might simplify the /notation/ but it does nothing for the amount of
effort required to /comprehend/ what's going on.
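To make that concrete, here's a toy dataflow sketch in Python -- my
own strawman, not COSA's actual model. Even with the program laid out
as a plain graph of nodes and wires, the only way to find out what it
computes is to "run the flow", in your head or on a machine, and watch
the values propagate.

    import operator

    # Nodes are named operations; edges say whose outputs feed whose inputs.
    nodes = {
        "a":   lambda: 3,        # constant source
        "b":   lambda: 4,        # constant source
        "mul": operator.mul,     # a * b
        "add": operator.add,     # (a * b) + b
    }
    edges = {"mul": ("a", "b"), "add": ("mul", "b")}

    def evaluate(name, cache=None):
        """'Imagine the flow': recursively pull values through the graph."""
        if cache is None:
            cache = {}
        if name not in cache:
            inputs = [evaluate(src, cache) for src in edges.get(name, ())]
            cache[name] = nodes[name](*inputs)
        return cache[name]

    print(evaluate("add"))   # you still have to trace the graph to see it prints 16

The diagram is tidy; the comprehension effort is the same trace you
would have done over the equivalent three lines of ordinary code.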

And that is the essence of the challenge: a representational system
rich enough for AI that can be understood by humans. Personally, I
don't think that's a winnable challenge, but I'll gladly listen to
those who disagree.

--
Cliff

