Re[4]: project COSA

From: Cliff Stabbert (cps46@earthlink.net)
Date: Tue Aug 13 2002 - 13:55:59 MDT


Saturday, August 10, 2002, 11:11:34 PM, Ben Goertzel wrote:

<snipsnipsnip>

BG> "One Novamente can't simply "transfer a thought" to another Novamente. The
BG> problem is that the meaning of an atom consists largely of its relationships
BG> with other atoms, and so to pass a node to another Novamente, it also has to
BG> pass the atoms that it is related to, and so on, and so on. Atomspaces tend
BG> to be densely interconnected, and so to transmit one thought accurately, a
BG> Novamente system is going to end up having to transmit a copy of its entire
BG> Atomspace! Even if privacy were not an issue, this form of communication
BG> (each utterance coming packaged with a whole mind-copy) would place a
BG> rather severe processing load on the communicators involved.

BG> "The idea of Psynese is to work around this interconnectedness problem by
BG> defining a Psynese vocabulary: a collection of atoms, associated with a
BG> community of Novamentes, approximating the most important atoms inside that
BG> community. The combinatorial explosion of direct-Atomspace communication is
BG> then halted by an appeal to standardized Psynese atoms. Pragmatically, a
BG> PsyneseVocabulary is contained in a PsyneseVocabulary server, a special
BG> Novamente that exists to mediate communications between other Novamentes,
BG> and provide Novamentes with information."

This sounds not all that different from human language to me. In
order for the above to be useful, there needs (it seems to me) to be a
set of "common" atoms -- i.e., a more or less similar set of
structures in each Novamente brain. But it is in that "more or less"
qualifier that we find the similarity to the vagueness of human
language -- that is to say, I expect that when I say "language", some
"more or less" similar neural structures get activated in your brain
as they do in mine.
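
To make that concrete, here is a rough sketch in Python of how I read
the PsyneseVocabulary idea from the quoted passage. All of the class
and function names below are mine, invented purely for illustration --
I'm not claiming this is how Novamente actually represents atoms or
mediates communication.

class Atom:
    """A node in an Atomspace; its meaning lives largely in its links."""
    def __init__(self, name, links=None):
        self.name = name
        self.links = links or []  # related atoms

class PsyneseVocabularyServer:
    """Holds the community's standardized atoms, keyed by an agreed name."""
    def __init__(self, standard_atoms):
        self.by_name = {a.name: a for a in standard_atoms}

    def lookup(self, atom):
        # Return the shared name if this atom approximates a standard one.
        return atom.name if atom.name in self.by_name else None

def encode_thought(atom, vocab, depth_limit=2):
    """Encode an atom and its neighborhood for transmission.

    Atoms found in the shared vocabulary are sent as bare Psynese
    references, which halts the expansion; everything else is expanded
    (up to a depth limit) so the sender doesn't end up shipping its
    entire Atomspace along with a single thought.
    """
    shared = vocab.lookup(atom)
    if shared is not None:
        return {"psynese": shared}
    if depth_limit == 0:
        return {"private": atom.name}  # unresolved stub
    return {"private": atom.name,
            "links": [encode_thought(a, vocab, depth_limit - 1)
                      for a in atom.links]}

# Example: "cat" and "mammal" are standardized, but a personal
# association is not, so only the private part gets expanded:
mammal = Atom("mammal")
cat = Atom("cat", [mammal])
my_cat = Atom("my_cat_at_home", [cat])
vocab = PsyneseVocabularyServer([cat, mammal])
print(encode_thought(my_cat, vocab))
# -> {'private': 'my_cat_at_home', 'links': [{'psynese': 'cat'}]}

The "more or less" problem is still there, of course: the receiver's
"cat" atom is only an approximation of the sender's, which is exactly
the vagueness of human language again.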

I can see where it's arguable that some future AI will be able to
introspect /better/ or /more honestly/ than humans can, but not where
such introspection can be perfect (no consciousness can contain a
full representation of its own inner workings, simply because those
workings are by necessity more detailed than any representation it
can hold). Similarly, I cannot see how any sort of language can be
"fully" representational or accurate. I'll buy that an inter-AI
language could be "more accurate", at least to some extent. How
non-linear such a language needs to be, or can be, is IMO still an
open question.

--
Cliff

