Re: More silly but friendly ideas

From: Nick Tarleton (nickptar@gmail.com)
Date: Fri Jun 06 2008 - 12:19:11 MDT


On Fri, Jun 6, 2008 at 12:27 PM, Byrne Hobart <bhobart@gmail.com> wrote:
> Slightly off-topic, perhaps, but does this apply to logical operators? It's
> important to state our attitude towards uncertainty very clearly, so we
> avoid internal contradiction or nihilism: I would argue that logical rules,
> mathematical premises, etc., should not be considered open to question. If
> your AI is only 99.99999% certain that +1 is equivalent to "count to the
> next natural number", this infects all mathematical operations it performs
> with some uncertainty, and means that the higher it counts, the less sure
> it is (since any natural number can be stated as 0 + 1 + ... + 1, and no
> datum can have two different levels of uncertainty, this could cause the
> AI to be less sure that 100 + 1 = 101 than that 1 + 1 = 2).

If a mind were 99.99999% certain that +1 means "count to the next
natural number", this wouldn't mean it believed the rule failed
0.000001% of the time; it would mean it was 99.99999% confident that
the rule held all the time. The uncertainty attaches to the single
hypothesis that the rule is correct, not to each application of it,
so, like Peter said, it wouldn't multiply with successive operations:
the mind would be (almost exactly) as sure of 100 + 1 = 101 as of
1 + 1 = 2.
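
A minimal sketch in Python of the contrast (the failure model below is
my own illustration, not anything proposed in the thread):

    P_RULE = 0.9999999  # credence that "+1 counts to the next natural number"

    def p_per_operation(n, p=P_RULE):
        # The worried reading: each application of +1 independently
        # fails with probability 1 - p, so confidence decays with n.
        return p ** n

    def p_single_hypothesis(n, p=P_RULE):
        # The reading above: the uncertainty attaches to the one
        # hypothesis that the rule holds universally; given the rule,
        # every application succeeds, so confidence is flat in n.
        return p

    for n in (1, 100, 10**7):
        print(n, p_per_operation(n), p_single_hypothesis(n))

Under the per-operation reading, confidence in 10^7 successive
increments decays to roughly e^-1, about 0.37; under the
single-hypothesis reading it stays at 0.9999999 however high the count.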


