RE: Intelligence and wisdom

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jul 17 2002 - 15:49:22 MDT


hi,

> I wonder if in these discussions of intelligence vs. wisdom, we're not
> leaving out an essential component of the distinction: namely, human
> psychology with its conscious vs. subconscious, id/ego/superego,
> shadows, complexes, anima and all the rest of it.

Of course, these aspects contribute to keeping some intelligent humans from
being wise, and help some relatively unintelligent humans to be surprisingly
wise.

But their relevance to AI psychology is not direct, though there are
connections.

> Perhaps when we say of someone that they're intelligent but not (yet)
> wise we really are referring to a lack of congruence between their
> explicit stated goals and their "actual" (subconscious) goals, as
> evidenced by their behaviour. I put "actual" between quotes because
> the situation is of course far more complex than that.

This gets at a subtle aspect of the definition of intelligence as "achieving
complex goals in complex environments."

If a system thinks it is achieving (or is trying to achieve) one complex
goal, but actually achieves another, this still counts!

> Question: would you predict that an AI at some point during its
> moral evolution will 'have' some similar substrate for internal
> struggle?
>
> I am presuming most expect a super AI to be perfectly
> congruent*, with no internal contradictions, but am asking a question
> about how it gets to that point.

As Eliezer has pointed out quite nicely in CFAI, most of the contradictions
we humans experience can be traced clearly to our evolutionary heritage.

Some degree of contradiction may be necessary in any pragmatically
constructible mind; I don't think perfect mathematical consistency is
achievable by any mind operating under realistic resource constraints.
However, the human mind clearly has WAY more inconsistency than its
resource limitations impose on it.

> There appear,
> to me, to be fundamental reasons why self-consciousness of necessity
> cannot include the whole system.

This is an old point, but a weak one, because it doesn't show why a system
can't come *extremely close* to complete self-understanding. A mind could
hold a compressed, approximate model of itself that captures nearly all of
its own dynamics, even if an exact and complete self-model is impossible.

-- Ben G


