Re: One or more AIs??

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sun May 30 2004 - 15:39:46 MDT


From: "Mark Waser" <mwaser@cox.net>
>
> You're right, no contest. The Borg Hive wants to be
> friendly and knows that it can make mistakes unless
> it consults with Tiny Tim. It slows down the pace
> (not necessarily a bad thing) but it prevents horrible
> mistakes. What's the problem?
>

###

The problem seems to be that you haven't factored in what (significantly)
more intelligence buys you. It buys you the ability to deliberately NOT
make the kinds of errors that would be made by a being with (significantly)
less intelligence. Having the 'Borg Hive' consult with 'Tiny Tim' doesn't
reduce the probability of error, any more than my consulting my
four-year-old nephew reduces mine. 'Tiny Tim' can only contribute
materially to the reduction of error by the 'Borg Hive' if it has relevant
information that the 'Borg Hive' does not have. We already know that more
relevant information improves decision making, and you have already
acknowledged that this is not the point you are arguing. The fact that
'Tiny Tim' has an independent process to reason about what it knows is only
useful for avoiding errors insofar as that process reaches correct
conclusions AND that process is not evaluated by, or a subset of, a more
intelligent process.
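
To make the 'subset' point concrete, here is a toy Monte Carlo sketch (my
own construction for illustration; the majority-of-three-bits setup and all
the names in it are assumptions, not anything from your post). The truth is
the majority of three evidence bits; the 'Borg Hive' sees two of them, and
'Tiny Tim' sees either one of the Hive's own bits or the third bit the Hive
lacks:

import random

TRIALS = 200_000

def accuracy(tim_bit_is_new: bool) -> float:
    correct = 0
    for _ in range(TRIALS):
        e1, e2, e3 = (random.random() < 0.5 for _ in range(3))
        truth = (e1 + e2 + e3) >= 2      # majority of the evidence bits
        if e1 == e2:
            guess = e1                   # the Hive's own evidence settles it
        else:
            # The Hive's evidence is split, so it defers to Tim's bit.
            guess = e3 if tim_bit_is_new else e1
        correct += (guess == truth)
    return correct / TRIALS

print(f"Tim's evidence is a subset of the Hive's: {accuracy(False):.3f}")  # ~0.750
print(f"Tim holds one bit the Hive lacks:         {accuracy(True):.3f}")   # ~1.000

The Hive guessing alone also scores about 0.750, so consulting the
subset-Tim changes nothing; only a genuinely new bit of evidence helps.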

The almost magically wonderful aspect of a significantly greater
intelligence is that it gets the correct answer reliably more often than a
lesser intelligence. Not only that, but I would suggest that any fully
specified question that can be correctly answered by a lesser intelligence
can always be correctly answered by a greater intelligence, given the same
information. A greater intelligence is not going to discard all the tried
and true techniques that humans have developed to answer questions
correctly. However, it will discount the results from those techniques by
the appropriate amount, whilst applying appropriate discounts to the
results of all the other techniques it considers. For example, if a new
situation is so uncertain that one newer type of reasoning produces only a
10% likelihood of a correct answer, but another type - say, a heuristic
used by good old-fashioned (GOF) humans - produces an 18% likelihood, then
it is going to be worth paying attention to the old-style heuristic. I
suspect such occasions will occur, but infrequently.
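
In code, that discounting policy might look like the following sketch (the
choose function and the reliability numbers are my own illustrative
assumptions, not a specification of how an AI would do it): each
technique's answer is weighted by its estimated probability of being
correct in the situation at hand, and the best-supported answer wins.

from collections import defaultdict

def choose(proposals):
    """proposals: list of (answer, estimated_reliability) pairs."""
    support = defaultdict(float)
    for answer, reliability in proposals:
        support[answer] += reliability   # discount each technique's vote
    return max(support, key=support.get)

# The numbers from the example above: a newer reasoning method at 10%,
# an old human heuristic at 18%.
proposals = [
    ("answer_from_new_method", 0.10),
    ("answer_from_old_heuristic", 0.18),
]
print(choose(proposals))   # -> answer_from_old_heuristic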

Having multiple different FAIs is not necessarily a bad idea, but it would
be more work, and it would not reduce the risk of failure. You have not yet
provided a convincing argument to the contrary.

Michael Roy Ames


