Re: In defense of Friendliness

From: Michael Roy Ames (michaelroyames@hotmail.com)
Date: Fri Oct 18 2002 - 20:47:24 MDT


Christian (n95lundc@hotmail.com) wrote:

> For example: Ben's wife claims she wants to die a
> "natural" death when her time is up.
> How does the AI respond to this?

That is a question I will be very interested to observe the AI contemplating.

I currently lean strongly towards the notion of 'non-violation of volition' on
this question. But, considering the issues raised in this thread, determining
what exactly it means to violate volition does not appear to be clear-cut. The
intellectual capabilities of an AI sub-agent charged with communicating with PD
humans may have to be limited so that 'persuasion' does not become 'force' by
default, simply due to greater intelligence. I intuit that there *is* a
Friendly way to conduct that communication such that 'most but not all' PD
humans will be persuaded not to die. But if someone really does want to die,
and they understand (and believe) that options other than pain/death now exist,
then they should be allowed to die. A Friendly Singularity is about expanding
people's freedoms and options. I see no reason why this particular option,
self-termination, should be omitted from a future list of freedoms.

I don't want to give up *any* of the freedoms I currently have, although I am
willing to discuss trading a freedom for increased options in other areas. I
currently have the ability to self-terminate. That seems like a 'biggie'. Why
would I want to give it up?

Michael Roy Ames
Ottawa, Canada
