Re: One or more AIs??

From: Mark Waser (mwaser@cox.net)
Date: Sun May 30 2004 - 13:26:30 MDT


> It does not follow that a catastrophe that will break one AI will be less
> likely to break multiple AIs. That would be anthropomorphic thinking.

Why is it anthropomorphic to think that a catastrophe that breaks one AI
will be less likely to break multiple AIs?

It is very reasonable to envision a situation where two different AIs reach
the same conclusion by two very different lines of reasoning, and where one
AI is led astray when a premise that it holds, but the other AI does not, is
catastrophically invalidated.

> For
> human beings it is demonstrably true that groups have greater abilities to
> address unexpected situations. It is true because:
> a) several humans have more data than one,
> b) several humans have more MIPS than one,
> c) humans are optimized by evolution to work in groups as well as
> independently
> d) several humans have greater physical work capacity than one.
> None of these points *has* to be true of AI. AI is not like humans in
> these ways.

And none of these points is what I'm relying on for my argument. If two
different AIs have two different points of failure, then no single failure
will take out both.
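
(To put rough, made-up numbers on the intuition: if each AI independently
has a 1-in-100 chance of being broken by a given catastrophe, the chance
that the same catastrophe breaks both is 1/100 x 1/100 = 1/10,000, a
hundredfold gain over either AI alone. That gain holds only to the extent
that the failure modes really are independent; any shared premise
reintroduces a common point of failure.)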

> The concepts 'redundancy' and 'diversity of opinion' are different
> concepts.
> It can be confusing to co-present them as two sides of the same coin.

Let me rephrase. If diversity of opinion leads to the same ultimate
conclusion, then you effectively have redundancy with no single point of
failure.

> Diversity can be contained/generated from within a single object named FAI
> just as from mutually independent objects - and probably with less work.
> There are often many ways of looking at a given problem, and different
> viewpoints reveal/hide different aspects of that problem. Your argument
> seems to imply that a single object named FAI would be unable to analyze
> problems in multiple ways - meaning: as if being looked at by multiple
> humans. I would say that is a false implication.

We can always play with semantics. I could get together ten different
people, call the group (a single object) FAI, and then FAI would be able to
analyze problems in multiple ways. The problem is that doing so obscures
useful distinctions.

My argument is that either you don't truly have a single object, and
therefore you are contorting yourself by treating it as one, OR you do have
a single object and it is insufficiently partitioned to separate its
processes well enough to produce safely independent conclusions. Look at it
as a definition/distinction: enough partitioning to produce independent
conclusions means separate mind objects.

        Mark
