Re: Intelligence and wisdom

From: Gordon Worley (redbird@rbisland.cx)
Date: Wed Jul 17 2002 - 10:20:19 MDT


On Wednesday, July 17, 2002, at 12:00 AM, Mitch Howe wrote:

> I'll admit that I am assuming, but I don't see any AI programmers
> valuing Kirk's human side over Spock's vulcan logic to the point
> they are expressly working irrational behavior into their designs.
> But, in the event that such programming did occur, whether
> intentional or otherwise, then such an AI would be acting at times
> in ways that do not correlate with its own goals/values -- even if
> these goals were good, even if it had adequate information
> available, and even if it had the intelligence/time to make a good
> decision. This is either a broken, buggy, or intentionally
> dangerous AI, and, by my definition, foolish. I cannot think of
> any other situation that would earn an AI this description.

Irrational thought is the result of obeying the wrong goals and using
those goals to rationalize why you shouldn't change your goal to the
more rational goal X. Whatever any mind does is in accordance with its
goals (that's how it decides what to do; it can't act against its own
goals without changing them, in which case it's not acting against its
current goals). It can, however, act against what it knows to be in
its best interests (i.e. what it thinks its goals ought to be), but
those are only `goals' in a loose sense.

> d) An SI possessing all of the knowledge you do currently, plus a
> lot more, after what amounts to 10 human years of reflection. (10
> actual seconds)
>
> e) An SI possessing all of the knowledge you do currently, plus a
> lot more, after what amounts to 5 million human years of
> reflection. (57.9 actual days)

This isn't quite right. For any given problem, there is some limit on
how much useful thought can be done towards solving it. For example, I
can find the answer to 2 + 2 pretty quickly (I have it memorized, so I
don't even do the math anymore, but suppose I didn't and had to work
it out from scratch given only a basic knowledge of counting) and then
spend some time proving it. But after a few hours at most, there
aren't really any more proofs or experiments I can run to show that
the answer is 4. Plus, there is likely some penalty for not answering
quickly, so I won't run a ton of experiments and write a lot of
proofs; I'll do some optimal amount of work that gives a satisfactory
answer within a margin of error too small to be statistically
significant.
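
Here's a minimal sketch of that trade-off in Python (everything in it
-- the shape of the confidence curve, the penalty rate -- is a made-up
assumption for illustration, not anything from the original posts):

    import math

    def confidence(t):
        # Toy diminishing-returns curve: more thought adds certainty,
        # but each extra unit of thought adds less than the last.
        return 1.0 - math.exp(-t)

    def net_value(t, penalty_rate=0.1):
        # Value of the answer minus a linear cost for answering late.
        return confidence(t) - penalty_rate * t

    # Brute-force search over possible thinking times for the optimum.
    best = max((t / 100.0 for t in range(1, 1001)), key=net_value)
    print(f"optimal thinking time: {best:.2f} "
          f"(net value {net_value(best):.3f})")

Past a certain point the extra certainty isn't worth the delay; that
crossover is the optimal amount of work mentioned above.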

Also, an SI's thought processes don't reduce neatly to X years of
human thought. An SI can think things that a human would never think
(just as a human can think things a dog would never think). Maybe 10
seconds of SI thought is overkill for your `deep' philosophical
question. Maybe it only takes 2, or 0.5. For any question we ask an
SI, there is a time penalty: the mistakes we'll make until we have the
answer. For some problems this isn't a big deal, but if we ask the SI
to solve the uploading problem then we don't want it to take too long,
because in the meantime people will die or we might blow ourselves up.

The SI will take however much time is required to find the answer. The
same goes for the human options: there's no need to spend any fixed
amount of time, just the right amount.
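
To make that concrete (reusing the toy model from the sketch above,
with the same made-up numbers): the net value 1 - exp(-t) - rate*t
peaks where exp(-t) = rate, i.e. at t = ln(1/rate), so the optimal
amount of thought shrinks as delay gets more costly.

    import math

    def optimal_time(penalty_rate):
        # From the toy model above: net value 1 - exp(-t) - rate*t is
        # maximized where exp(-t) = rate, i.e. at t = ln(1/rate).
        return math.log(1.0 / penalty_rate)

    # A leisurely question vs. an urgent one: the optimum shifts.
    for rate in (0.01, 0.1, 0.5):
        print(f"delay penalty {rate}: "
              f"think for {optimal_time(rate):.2f} units")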

To be fair, the right amount of time may not always be obvious. It
takes a deep understanding of the issues involved in solving the
problem to know whether you've really got the answer or not.

--
Gordon Worley                     `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/            said, `it means just what I choose
redbird@rbisland.cx                it to mean--neither more nor less.'
PGP:  0xBBD3B003                                  --Lewis Carroll

