From: Samantha Atkins (firstname.lastname@example.org)
Date: Fri Apr 04 2003 - 10:32:15 MST
Joaquim Gândara wrote:
> From: "Samantha Atkins" <email@example.com>
>>im/VR for the amusement of more powerful
>>beings then that at least is immoral even though we might be
>>strictly their creations.
> If we are, then they have apparently decided that it isn't immoral.
> Maybe from their perspective, we are predictable automatons, and
> comparable to their insects. In their opinion, we probably lack some
> property that is the essence of experiencing pain and suffering.
> A higher order of qualia, maybe? "Qualia^2"?
This sort of cosmic moral relativity does not bode well for
predictions of a happy outcome when we meet or create superior
intelligences. Perhaps we should consider more deeply what
ethics can or should be applied universally to sentients, even
ones not as bright as we are or, eventually, as they will be.
> We can only subjectively determine the point where something becomes
> immoral; we can only decide when an AI is "sufficiently like us".
I don't believe that "sufficiently like us" is a reasonable
criterion. Something could be decidedly unlike us and still
experience great suffering, have some sense of self, and have
survival drives. We increasingly consider it immoral to torture
certain animals, although we still slaughter and eat some of
them and run medical experiments on others. I take the point
that we can't necessarily expect much better treatment from a
more advanced sentient than that which we dole out.
This archive was generated by hypermail 2.1.5 : Sun May 19 2013 - 04:00:55 MDT