RE: Intelligence and wisdom

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jul 18 2002 - 10:15:18 MDT


> Ben Goertzel wrote:
>
>
> This is an old point, but a weak one, because it doesn't show why a system
> can't come *extremely close* to complete self-understanding.
>
> ### Now, this comes as a surprise to me - I always thought that it is
> extremely difficult if not impossible to build a system capable of fully
> monitoring itself in real time (as opposed to being able to analyze its
> stored memory states), and I am not referring to the trivial inability to
> form a full predictive representation of itself within itself (similar to
> the "no homunculus" claim of consciousness research). The AI
> might have full
> access to every single bit of its physical memory, and every layer of
> organization all the way to the level of concepts and thoughts
> but how many
> useful predictions (the hallmark of understanding) will it make from such
> self-analysis? Will these predictive capabilities be enough to
> warrant being
> called "complete self-understanding".
>
> Rafal
>

Rafal,

You have a good point; my reply was overly hasty, and I may not have fully
understood the argument being proffered (as it was provided only as a hint,
not a full argument).

I have often seen arguments of the form "No finite system can fully know
itself, because then it would have to fully know not only itself but also
its knowledge of itself, etc. ... leading to an infinite regress." In other
words, no finite system can embody "X, I know X, I know I know X, I know I
know I know X, ... " forever; it has to stop somewhere, hence there has to
be some Y such that it knows Y but doesn't know it knows Y.

I consider this a weak argument because it only argues against absolute
self-knowledge. It doesn't show that a system can't have 99.99%
self-knowledge.
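
To make the regress concrete, here is a toy sketch in Python (my own
illustration, not anything from your message; the memory_limit number and the
string encoding of "knowledge" are arbitrary): if each level of self-knowledge
must contain a representation of the level below it, the representation grows
without bound, so a finite memory forces the tower to stop at some finite
depth.

def nested_self_knowledge(base_fact="X", memory_limit=10_000):
    """Build 'X', 'I know X', 'I know I know X', ... until memory runs out.

    memory_limit is an arbitrary stand-in for the system's finite storage.
    """
    level, description = 0, base_fact
    while len(description) <= memory_limit:
        level += 1
        description = "I know (" + description + ")"
    # The regress is cut off at a finite depth: there is some level the
    # system represents but has no room to represent its knowledge of.
    return level

print("regress truncated at level", nested_self_knowledge())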

However, there is another possible argument for why an intelligent system
can't have even 99.99% self-knowledge, or anywhere near that much. This is
the argument you've posed: an intelligent system is going to be a complex,
hard-to-predict system, so even if the system knows almost its whole state
at a given time, the small bit it doesn't know can lead to a great amount of
ignorance going forward, due to chaos-theory-style "sensitive dependence on
initial conditions."
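
As a standard toy illustration of that sensitivity (again my own sketch, not
part of the argument above; the logistic map with r=4 is just a stand-in for
any chaotic dynamics, and the particular numbers are arbitrary): two states
that agree to one part in ten billion diverge to order-one differences within
a few dozen iterations, so near-complete knowledge of the current state buys
only a short prediction horizon.

# Sensitive dependence on initial conditions, using the chaotic logistic
# map x -> r * x * (1 - x) with r = 4 as a stand-in for the system's dynamics.

def logistic_step(x, r=4.0):
    return r * x * (1.0 - x)

def prediction_error(x0=0.3, epsilon=1e-10, steps=61):
    a, b = x0, x0 + epsilon   # the system's self-model vs. its true state
    for t in range(steps):
        if t % 10 == 0:
            print("step %3d: |error| = %.3e" % (t, abs(a - b)))
        a, b = logistic_step(a), logistic_step(b)

prediction_error()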

Adding this to the first argument, I guess the package does become more
convincing! Though it's still just a heuristic argument, of course.

In my own AI work, it is intuitively apparent that a Novamente AI system
will never have anywhere near complete self-knowledge (at least not until it
has revised itself into something hugely different from the current design).

-- Ben G


