From: Stathis Papaioannou (firstname.lastname@example.org)
Date: Wed Jun 04 2008 - 17:59:11 MDT
2008/6/5 John K Clark <email@example.com>:
>> There is also no necessary contradiction in the idea
>> of superintelligent AI which remains predictable to
>> us, since its goals may make it predictable.
> Goals be damned. The axioms of mathematics do not make mathematics
> predictable and axioms are one hell of a lot more fundamental and
> inviolate than goals which change at the drop of a hat. How many
> sincerely made new years resolutions are broken by January 2? Most.
The AI could only change its mind about the aim of life if its top
goal were probabilistic or allowed to vary randomly, and there is no
reason why it would have to be designed that way.
>> For example, if the AI is born with the aim of
>> adding two given numbers together and then turning
>> itself off, it will do just that, quickly and efficiently.
> That does not require intelligence, much less super intelligence, but
> let me suggest something only slightly more complicated. Suppose the
> machine was born with the aim of finding the smallest even number
> greater than 2 that was not the sum of two primes greater than 2, and
> then turning itself off. What will the machine do? Will it turn itself off?
> Nobody knows, all you can do is watch it and see. The machine has free
> will, at least it does if the term has any meaning at all.
Yes, an intelligent machine can be unpredictable to itself (free will)
or to another intelligent machine, especially to one less intelligent.
But this need not *necessarily* be the case. If the machine is fixed
in its resolve to complete a certain task, then that's what it will do.
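John's proposed machine is, in effect, a search for a counterexample to the Goldbach conjecture, and a bounded version of it can be sketched in a few lines. (One aside: as literally stated, with both primes required to be greater than 2, the machine would halt at 4 immediately, since the smallest sum of two odd primes is 6; the sketch below therefore checks the standard form of the conjecture. The function names are mine, not from the thread.)

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample(limit):
    """Return the smallest even number in (2, limit] that is not the
    sum of two primes, or None if none is found up to the limit."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n  # this is where the machine would halt
    return None  # no counterexample below the limit

print(goldbach_counterexample(10000))  # prints None
```

The unbounded version (loop without a limit) is the point of the example: nobody knows whether it ever returns, so nobody can predict whether the machine turns itself off.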
-- Stathis Papaioannou
This archive was generated by hypermail 2.1.5 : Thu Jun 20 2013 - 04:00:39 MDT