Benignity

From: doug.bailey@ey.com
Date: Fri Jan 18 2002 - 13:35:40 MST


<< Concluding Prediction: Any sufficiently advanced SI
will be equally or more benign than we are. >>

Michael Anissimov wrote:

> I wholeheartedly agree.  Therefore, I believe that any
> initial coding as an attempt to program benignity (Friendliness,
> Asimov's Three Laws, etc.) is an irrelevant waste of time (and
> unprogrammably arbitrary), only serving the purpose of attracting
> investor dollars.

Measuring levels of benignity in a Power is the wrong approach.
My understanding is that Friendliness and even Asimov's Three Laws
aim to achieve absolute benignity.  A sufficiently advanced AI that
is twice as benign as the average human, or even twice as benign as
the most benign human who ever lived, may still have a malevolent
streak.
Additionally, I am not certain that the attributes of benignity and
friendliness towards humanity are always mutually consistent.  What
if an SI concluded that the overall "benignity quotient" of the
universe would be maximized if humans or all sentient life other
than Powers were eliminated?

I think that there is enough uncertainty about the nature of SIs
and the post-Singularity universe in general that we should not
unilaterally discard genuine approaches to SI design and
development.

Doug
