Re: Metarationality (was: JOIN: Alden Streeter)

From: Mitch Howe (mitch@iconfound.com)
Date: Sun Aug 25 2002 - 01:21:44 MDT


Samantha Atkins wrote:

> I am not sure I agree that Friendliness "is referenced from
> human morality". Please say more.

Ok. The issue, as I see it, is this: since it seems unlikely that there is
any sort of cosmic Truth that human minds tune into when deciding what
constitutes a morally "good" goal, human minds must be making these
judgments as a consequence of their particular design.

Without some universally objective meaning of Friendliness etched into the
fabric of the universe, we cannot expect that an SI would ever find such a
thing. A Friendly SI would thus have to find Friendliness in the minds of
its beholders. I think this is why Eliezer has repeatedly emphasized that a
Friendly SI would be Friendly for the same reasons an altruistic human is
friendly. A Friendly AI is morally comparable to the uploaded version of a
*human* of unsurpassed Friendliness. We would not expect or even want a
Friendly AI to be merely a human brain manifested in silicon, but at the
very least it would have to have a thorough understanding of whatever it is
in human wiring that generates conclusions about what is and is not
Friendly. It would have to have some sort of "process X" module, as it
were.

But this "process X" remains a human modus operandi, and should not be
mistaken for an immutable law of the cosmos. So Friendliness of the kind we
are pursuing is really Human Friendliness. If the generalized template of a
Friendly AI were set loose on an alien world among species who had no
process X, but rather some totally unrelated process Z, it is a process Z
module that this alien Friendly AI would develop to compute Friendliness.
Therefore, the mature Friendly AI from human space might disagree violently
with the Friendly AI from Kzin space -- on account of the very different
modules they use to calculate what Friendliness is.
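
Just to make the shape of that claim concrete, here is a toy sketch in
Python (every name in it is invented for illustration; it is not a
description of any actual FAI design), showing the same generic template
bound to two different species-derived modules:

    # Illustrative only: a generic Friendliness template with no verdicts
    # of its own until it absorbs a species-specific evaluative module
    # learned from the minds of its beholders.
    class FriendlyAI:
        def __init__(self, species_module):
            self.process = species_module

        def is_friendly(self, action):
            return self.process(action)

    # Two hypothetical species-derived modules.
    def process_x(action):   # human-derived
        return "coercion" not in action

    def process_z(action):   # alien-derived
        return "hive_disruption" not in action

    human_fai = FriendlyAI(process_x)
    kzin_fai = FriendlyAI(process_z)

    action = {"coercion"}  # coercive, but leaves the hive intact
    print(human_fai.is_friendly(action))  # False
    print(kzin_fai.is_friendly(action))   # True: the mature FAIs disagree

The point is only structural: hand the same template a different module and
it computes a different Friendliness.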

I don't see that we have any choice but to reference an AI's Friendliness to
our own perceptions of it. There is no cosmic handbook of ethics or
extraterrestrial species handy to consult. And I doubt we could bring
ourselves to sign on with any of these other references anyway if they
suggested that humans were a smelly blight on the universe.

One could argue that any intelligent species would roughly share our process
X, making this issue irrelevant. I don't think this is completely wishful
thinking, since I suspect that any species intelligent enough to be worth
worrying about would at least have to share the most fundamental heuristics;
after all, what we recognize as logic seems to govern the natural universe,
and it should be difficult to evolve very far in that universe using
heuristics that totally ignore this logic.

But on the other hand, many concepts that we esteem as admirable human
virtues have obvious foundations in our ancestral environment -- foundations
that other intelligences might not share. For example, an intelligence that
evolved from the beginning as a singleton planetary hive-mind would have
little need to develop concepts of "personal freedom" or "altruism", since
these are social concepts that require multiple minds to have any meaning.

--Mitch Howe


