RE: The problem of cognitive closure

From: Ben Goertzel (ben@intelligenesis.net)
Date: Fri Mar 16 2001 - 07:19:14 MST


> Well, when you get to the point of, say, discussing
> whether or not one should be a Platonist, you may
> have to do some retooling. I would be very surprised
> if whatever representational system you're using
> has the capacity to express even just all the *known*
> philosophies. But if you're adding fundamental new
> predicates to your AI's knowledge representation
> language, that seems a little drastic to be called
> 'education'.

Of course, any representational system with power equivalent to
higher-order predicate logic has the capability to express any known
philosophy.

It might be true that in some representational systems, some
philosophies are much more compactly represented or efficiently
manipulated than others.

But, this seems not to be true to me. It seems that the
representational systems used in AIs exist at a significantly lower
level than philosophies, so that most representational systems one
might use could represent basically any philosophy with roughly equal
ease.
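
To make the expressiveness point concrete, here is a toy sketch in
Python, with a made-up three-element universe and made-up predicates,
of how a Platonist-flavored claim looks once predicates are
first-class objects:

    # Toy sketch: a made-up universe and made-up predicates, purely
    # to illustrate higher-order expressiveness.
    from typing import Callable

    Entity = str
    Predicate = Callable[[Entity], bool]

    universe = ["form_of_beauty", "this_chair", "my_mind"]

    def is_abstract(x: Entity) -> bool:
        return x.startswith("form_")

    def is_mind_dependent(x: Entity) -> bool:
        return x == "my_mind"

    # A crude rendering of one Platonist thesis:
    #   Exists x . Abstract(x) and not MindDependent(x)
    # The predicates themselves are passed around as ordinary values,
    # which is the higher-order part.
    def platonism_holds(abstract: Predicate,
                        mind_dependent: Predicate) -> bool:
        return any(abstract(x) and not mind_dependent(x)
                   for x in universe)

    print(platonism_holds(is_abstract, is_mind_dependent))  # True here

Once predicates are ordinary values like this, encoding some other
philosophy is just a matter of defining a different vocabulary of
predicates and a different top-level claim; the representational
machinery itself doesn't change.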

> And the main point of making 'philosophical
> sophistication' a design principle up there with
> Friendliness and modularity is to avoid the
> brute-force imposition of incorrect 'solutions'
> determined by locked-in philosophical errors.

Well, I sort of see your point.

Friendliness is a combination of a design principle and
something that must be instilled through education.

Clearly, by design, one could create a system inclined to be
aggressive like a tiger, or one inclined to be friendly, but education
and proper experience are needed to make these inclinations functionally
integrated with emergent system intelligence.

But, I still think that philosophy is ~mostly~ going to be induced by
experience.

For instance, one could create a system that was inclined to be a
solipsist, by giving it sense organs for a while and using them to
teach it, and then removing the sense organs. Or one could create a
system that was inclined to believe reality is what you make it, by
raising it in a virtual world that it could morph using its thoughts.

Ben


