Re: More silly but friendly ideas

From: Byrne Hobart (bhobart@gmail.com)
Date: Fri Jun 06 2008 - 10:27:08 MDT


> > then as part of its strategy it will seek to
> > ensure that it can never change its mind about X,
> > since that might prevent it from achieving X.
>
> If something can never change its mind regardless of how much new
> information it receives then it is not intelligent. Such a thing would
> be no threat to humanity and no use to it either; nor would it be any
> use to itself, or to anything else.

Slightly off-topic, perhaps, but does this apply to logical operators? It's
important to state our attitude towards uncertainty very clearly, so we
avoid internal contradiction or nihilism: I would argue that logical rules,
mathematical premises, etc., should not be considered open to question. If
your AI is only 99.99999% certain that "+ 1" means "count to the next
natural number", this infects every mathematical operation it performs with
some uncertainty, and means that the higher it counts, the less sure it is:
any natural number can be written as 0 + 1 + ... + 1, each application of
"+ 1" carries its own doubt, and no datum can have two different levels of
uncertainty, so the compounded doubt could leave the AI less sure that
100 + 1 = 101 than that 1 + 1 = 2.
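
As a rough sketch of the arithmetic (my own illustration, assuming each
application of "+ 1" is trusted independently with probability p):

    # Hypothetical sketch, not anyone's actual proposal: if each use of
    # "+ 1" is trusted with probability p, and those doubts are treated
    # as independent, confidence that counting up from 0 by 1, n times,
    # really lands on n decays as p ** n.
    p = 0.9999999  # assumed per-step certainty about what "+ 1" means

    def confidence(n, p=p):
        """Confidence in a number reached by n applications of '+ 1'."""
        return p ** n

    print(confidence(2))    # ~0.9999998 -- "1 + 1 = 2"
    print(confidence(101))  # ~0.9999899 -- "100 + 1 = 101", strictly lower

The farther the count, the lower the confidence, which is exactly the
effect described above.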

Something that *can* change its mind about anything is effectively
nihilistic, since it can wrap any statement in an arbitrary number of "I'm
pretty sure that I'm pretty sure that... X is true" statements.


