RE: The problem of cognitive closure

From: Ben Goertzel (ben@intelligenesis.net)
Date: Fri Mar 16 2001 - 12:41:54 MST


OK, I was wrong on this one; I can see that now. I should have thought
more carefully.

First, a side point, to avoid confusion: the Webmind AI Engine does not
use predicate logic at all; the parts of it that are logical in nature
use a special kind of uncertain term logic. Furthermore, it has a special
representation for procedural knowledge -- "schema" -- which is different
from, but can be translated into, the declarative logic.

Second, my main point before should have been stated like this: if a
philosophy is defined as
        -- a set of beliefs about the world, and
        -- a habitual set of ways of acting towards the world,
then any system with a sufficiently flexible way of representing declarative
knowledge, and of representing and enacting procedural knowledge, can embody
any philosophy. My earlier statement about higher-order predicate logic was
restricted to belief, and incorrectly ignored procedure.
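To make that concrete, here is a minimal sketch of the definition. The
names and structure are my own invention for illustration, not anything
from the Webmind codebase; the strength-valued beliefs just loosely echo
the uncertain term logic mentioned above:

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Belief:
    claim: str
    strength: float  # an uncertain truth value in [0, 1], not crisp True/False

@dataclass
class Schema:
    name: str
    act: Callable[[dict], dict]  # procedural knowledge: world-state in, world-state out

@dataclass
class Philosophy:
    beliefs: List[Belief] = field(default_factory=list)
    habits: List[Schema] = field(default_factory=list)

# A system that can store Beliefs, and can both represent and *enact*
# Schemata, can in this thin sense embody any philosophy:
stoicism = Philosophy(
    beliefs=[Belief("externals are not under my control", 0.9)],
    habits=[Schema("negative visualization", lambda world: world)],
)

Of course, "can enact" is where all the computational cost hides, which is
exactly the caveat below.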

But anyway, I guess this statement is of limited value, because it's a
UTM-type argument, which doesn't take into account real-world computational
restrictions. So it really only applies to superpowerful supercomputer
programs in the future.

For instance, any program that can enact the procedure of emptying out its
short-term memory can in some sense meditate. But it may not be constructed
so as to get the same thing out of meditation as people do. So in order to
really embody Zen, it would have to simulate the human mind's type of STM
dynamics, which requires a superpowerful-supercomputer level of pliability.
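
As a toy illustration of that point (invented names again, and emphatically
not a model of human STM dynamics):

from collections import deque

class ToyAgent:
    def __init__(self, stm_capacity: int = 7):
        # A bounded buffer standing in for short-term memory.
        self.stm = deque(maxlen=stm_capacity)

    def perceive(self, item):
        self.stm.append(item)  # new percepts displace the oldest ones

    def meditate(self):
        # The agent can enact the procedure "empty out short-term memory"...
        self.stm.clear()
        # ...but nothing here captures what a human gets out of doing so;
        # for that you would have to simulate human STM dynamics.

Such an agent "meditates" in the letter-of-the-law sense only; the whole
question is whether that counts as embodying Zen.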

So, I guess my initial statement was wrong. This issue really is a bit like
Friendliness. After the Singularity, systems may be mutable enough to adapt
their psychologies to adopt any philosophy. But giving them the right
philosophy pre-Singularity may strongly influence their initial
post-Singularity philosophy.

But what is "the right philosophy"? :>

ben

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> Of Carl Feynman
> Sent: Friday, March 16, 2001 1:50 PM
> To: sl4@sysopmind.com
> Subject: Re: The problem of cognitive closure
>
>
>
>
> Ben Goertzel wrote:
>
> >
> > Of course, any representational system that has equivalent power
> > to higher-order predicate logic, has the capability to express any
> > known philosophy.
> >
>
> This is a very ambitious statement. Maybe any philosophy within
> 20th-century
> Anglo-American analytic philosophy could be represented by higher-order
> predicate calculus (HOPC). But consider the following examples:
>
> -- Zen Buddhism can only be learned by spending a very long time
> meditating.
>
> -- Kabbalism should only be learned by married men over 40, as
> less mentally
> sturdy people might damage their minds when they learn it.
>
> -- "Capitalism and Schizophrenia" by Deleuze and Guattari is designed to
> baffle the reader into a position where they realize nothing is
> knowable and
> everything is a matter of interpretation. (Or something like
> that-- it's open
> to interpretation.)
>
> I don't hold any of these philosophies; I consider them useless,
> nonsense, and
> useless nonsense, respectively. But they are all philosophies
> that can be
> held by the human mind, but not (I think) represented accurately
> by HOPC. The
> important thing is that to know them involves changes in parts of the mind
> other than the part that maps neatly into HOPC. They can't be learned by
> internalizing lists of claims, which is what HOPC can represent.
> It might be
> that they cannot be represented by any mind not fairly similar to human,
> including its physiological underpinnings.
>
> Of course, there is some wiggle room in the word "represent".
> For example, a
> sufficiently powerful mind could simulate ("imagine") a human
> being who held
> such a philosophy, and when called upon to act according to the
> philosophy,
> run the simulated entity through a situation, and act the same way. But
> that's representation without understanding.
>
> I think restricting your AI to representing things in HOPC is
> asking for the
> kind of cognitive closure Mr. Porter is worried about.
>
> --Carl Feynman
>
> PS. JOIN message coming soon!
>
> PPS. This just occurred to me: "the part of the mind that maps neatly into
> HOPC" may be equivalent to "the part of the mind open to conscious belief
> revision". Many philosophies involve changes by means other than
> conscious
> belief revision.
>


