Re: One or more AIs??

From: Mark Waser (mwaser@cox.net)
Date: Sun May 30 2004 - 17:13:01 MDT


> The problem seems to be that you haven't factored in what (significantly)
> more intelligence buys you.

No, the problem is that I'm UNWILLING to assume so much more intelligence on
the part of the Borg Hive that I'm willing to let you claim that Tiny Tim is
irrelevant. Your claim reduces to: given the premise that being one is
sufficiently intelligent that it will never make an error that being two
could have avoided, THEN being two is irrelevant. Well, duh!

> Having the 'Borg Hive' consult with 'Tiny Tim' doesn't
> reduce the probability of error, in the same way that me consulting my 4
> year old nephew doesn't reduce my probability of error.

Of course, I suspect that I could successfully argue that your 4-year-old
nephew could probably teach you (or most other adults) something about
happiness . . . .

> We already know
> that more relevant information improves decision making, and you have
> already acknowledged that this is not the point you are arguing.

It depends upon what you mean by relevant information. If you mean
immediately relevant information, then I agree with your statement. However,
there is a huge amount of foundational information behind any intelligence's
compiled "knowledge", and different intelligences acquire knowledge according
to their own tastes as to what is important. Two intelligences are quite
likely to diverge fairly quickly in what foundational knowledge they have and
pay attention to. Also, most "learning" algorithms are order-dependent:
different concept structures may well be formed depending on the order in
which data is presented or concepts are taught (see the sketch below). How do
you intend to eliminate "bias" in a single entity?
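
Here's a toy sketch of that order dependence (my own contrived example and
numbers, purely illustrative, not anything from this thread): the same online
clustering procedure, fed the same points in two different orders, settles on
different centroids, i.e. a different "concept structure".

    def online_two_means(points):
        """Sequential k-means with k=2: the first two points seed the
        centroids; each later point joins the nearest centroid, which
        is updated as a running mean."""
        centroids = [float(points[0]), float(points[1])]
        counts = [1, 1]
        for p in points[2:]:
            # index of the nearest centroid
            i = min((abs(p - c), idx) for idx, c in enumerate(centroids))[1]
            counts[i] += 1
            centroids[i] += (p - centroids[i]) / counts[i]
        return sorted(round(c, 2) for c in centroids)

    data = [0, 1, 9, 10, 5]
    print(online_two_means(data))        # [0.0, 6.25]
    print(online_two_means(data[::-1]))  # [2.0, 9.5] -- same data, different structure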

> The fact
> that 'Tiny Tim' has an independent process to reason about what it knows is
> only useful for avoiding errors in so far as that process reaches correct
> conclusions AND that process is not evaluated-by or a-subset-of a more
> intelligent process.

The fact that Tiny Tim is an independent process is relevant because that
process has compiled a different knowledge structure. A larger process that
attempts to subsume Tiny Tim will not compile the same knowledge structure
unless it completely suppresses all of its own knowledge when it is modelling
Tiny Tim (a case where, arguably, Tiny Tim is still a separate individual).
If it doesn't suppress all of its own knowledge, an incorrect bias caused by
previous incorrect information may cause it to miss something that Tiny Tim
would have gotten correct.

There is a definite bias problem here that I believe y'all are
overlooking . . . .

> The almost magically wonderful aspect of a significantly greater
> intelligence is that it gets the correct answer reliably more often than a
> lesser intelligence. Not only that, but I would suggest that any fully
> specified question that can be correctly answered by a lesser intelligence
> can always be correctly answered by a greater intelligence, given the same
> information.

On small, fully specified problems, sure. But with incomplete knowledge, and
with the different intelligences getting different information because of
what they are and what biases they've developed, we've wandered outside the
bounds of your hypothesis . . . .

> Having multiple different FAIs is necessarily a bad idea, but it would be
> more work, and would not reduce the risk of failure.

I assume that you mean that "Having multiple different FAIs is NOT
necessarily a bad idea". I would also argue that the amount of additional
work is more than offset by the dramatically increased safety.

> You have not yet
> provided a convincing argument to the contrary.

I'm seeing arguments that FAIs might be dangerous because they might make
mistakes that your 4-year-old nephew wouldn't make (paper-clipping the
universe). Then, I'm seeing arguments that FAIs are so smart that they won't
make any mistake that a lesser intelligence (i.e., humans) could avoid. If
your AI is smart enough that it won't make a mistake that a human can avoid,
and if you set your initial Friendliness goals correctly, I would argue that
you're not going to have a problem at all. The problem is that your AI is NOT
going to start off intelligent enough to avoid mistakes that humans can
avoid.

My convincing argument is that you're betting the human race on a single roll
of the dice, while I'm betting the human race on the belief that a team of
AIs will be able to recognize and avoid the mistakes that a single AI would
make. I haven't seen a convincing argument to the contrary from your side
that doesn't assume something that simply doesn't exist.
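
A back-of-the-envelope sketch of why I think the team bet is safer (toy
numbers and an independence assumption of my own, purely for illustration):
if each independently built AI overlooks a given class of mistake with
probability p, and their blind spots aren't perfectly correlated, the chance
that the whole team overlooks it falls off roughly like p**n.

    # Toy numbers and an independence assumption, purely illustrative.
    for p in (0.10, 0.01):          # per-AI probability of missing the mistake
        for n in (1, 2, 5):         # size of the AI team
            print(f"per-AI miss rate {p:.2f}, team of {n}: "
                  f"all miss with prob ~ {p**n:.6f}")

To the extent the AIs share training, designers, or foundational knowledge,
their failures correlate and the benefit shrinks, which is exactly why the
divergent knowledge structures I described above matter.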

    Mark


