Intelligence without Awareness? (was Re: The dumb SAI and the Semiautomatic Singularity)

From: Cliff Stabbert (cps46@earthlink.net)
Date: Mon Jul 08 2002 - 17:10:23 MDT


A follow-up.

From GISAI (http://intelligence.org/GISAI.html#mind_thought_I):
ESY> But to say that "Aisa understands Aisa" is not the same as saying
ESY> "Aisa understands itself". Douglas Lenat once said of Cyc that it
ESY> knows that there is such a thing as Cyc, and it knows that Cyc is
ESY> a computer, but it doesn't know that it is Cyc. That is the key
ESY> distinction. A thought-level SPDM binding for the self-model is
ESY> more than enough to let Aisa legitimately say "Aisa wants ice
ESY> cream" - to make use of the term "Aisa" materially different from
ESY> use of the term "shmeerp" or "G0025". There's still one more step
ESY> required before Aisa can say: "I want ice cream." But what?
ESY>
ESY> Interestingly, assuming the problem is real is enough to solve the
ESY> problem. If another step is required before Aisa can say "I want
ESY> ice cream", then there must be a material difference between
ESY> saying "Aisa wants ice cream" and "I want ice cream". So that's
ESY> the answer: You can say "I" when the behavior generated by
ESY> modeling yourself is materially different - because of the self-
ESY> reference - from the behavior that would be generated by modeling
ESY> another AI that happened to look like yourself.
ESY>
ESY> This will never happen with any individual thought - not in humans,
ESY> not in AIs - but iterated versions of Aisa-referential thoughts may
ESY> begin to exhibit materially different behavior. Any individual
ESY> thought will always be a case of A modifying B, but if B then goes
ESY> on to modify A, the system-as-a-whole may exhibit behavior that is
ESY> fundamentally characteristic of self-awareness. And then Aisa can
ESY> legitimately say of verself: "I want an ice-cream cone."

I think the above could form the basis for a good argument that
reaching a certain level of intelligence _requires_ awareness.
Specifically: without the ability to create the kind of complicated
feedback loops that make the system-as-a-whole exhibit
self-awareness, intelligence cannot come about.
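
A toy sketch of what that "material difference" might look like (my
own illustration, in Python; the Agent class and its fields are made
up, not anything from GISAI): the same modeling routine is pointed
first at a lookalike copy, then at the agent itself. Only the second
run closes the A-modifies-B / B-modifies-A loop, so only it diverges.

import copy

class Agent:
    def __init__(self):
        self.state = {"wants_ice_cream": False, "introspections": 0}

    def model(self, target):
        # Form a belief about `target`, then act on that belief.
        # Acting is a side effect on self.state, so when target is
        # self, the thing being modeled changes because it was modeled.
        belief = dict(target.state)           # A models B
        self.state["introspections"] += 1     # modeling alters the modeler
        if belief["introspections"] >= 3:     # acting on the belief
            self.state["wants_ice_cream"] = True

aisa = Agent()
lookalike = copy.deepcopy(aisa)  # "another AI that happened to look like yourself"
for _ in range(5):
    aisa.model(lookalike)        # the belief about the lookalike never changes
print(aisa.state["wants_ice_cream"])   # False: the loop never closes

aisa = Agent()
for _ in range(5):
    aisa.model(aisa)             # each pass feeds the next one
print(aisa.state["wants_ice_cream"])   # True: iterated self-reference diverges

Nothing deep is going on in the toy, of course; the point is only
that feeding the self-model's output back into the self being
modeled is what makes the two runs behave differently.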

Whether by interacting with the real world or with internal models
of sufficient complexity, an AI *must* IMO be able to create feedback
systems in order to learn and figure things out, i.e. in order for us
to consider it intelligent at all. And once such a system reaches a
certain level of complexity, self-awareness is IMO as inevitable as
the Singularity.
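
A trivial illustration of what I mean by a feedback system for
figuring something out (again my own toy sketch in Python, not
anything from GISAI): the only way this loop gets anywhere is by
feeding the consequence of each guess back into the next one.

def learn(world, guess, rounds=20):
    # Refine `guess` by repeatedly acting on it and feeding the
    # observed consequence back into the next guess.
    for _ in range(rounds):
        error = world(guess) - guess
        guess = guess + 0.5 * error
    return guess

# `world` can stand in for the real environment or for an internal
# model of it; either way the loop, not any single step, is what
# does the learning.
print(learn(lambda x: 10.0, guess=0.0))   # converges toward 10.0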

--
Cliff

