Re: physical pain is bad (was Re: Dynamic ethics)

From: Jeff Medina (analyticphilosophy@gmail.com)
Date: Mon Jan 23 2006 - 15:51:21 MST


On 1/23/06, Philip Goetz <philgoetz@gmail.com> wrote:
> is that THE LION STANDS IN RELATIONSHIP TO US IN THE SAME
> WAY THAT WE STAND IN RELATION TO AN AI. You say we have
> the moral authority to put the lion in a fake simulation without asking
> or telling it. Hence the AI has the moral authority to put us in the
> Matrix, or dispose of us in any way it sees fit.

I'll probably draw a lot of fire for this, but here goes...

I've yet to see a decent argument for why this would be a bad thing.
I've seen a whole lot of responses along the lines of "I get to choose
for myself what I want to do!" and "We must respect the wishes of
autonomous, intelligent, rational adult humans!", and both of these
fail.

The first reply is effectively the same argument a puppy gives when
you take it to the vet, or a child gives when you make it eat its
vegetables. And we near-universally agree that it doesn't matter what
the puppy or the child or any other less-intelligent, less-rational
sentient being thinks it wants; we know better (most of the time),
and we impose our will for the good of the "lesser" being.

The second reply is a variation on the first, but requires more
comment. Specifically, it holds up the *current* level of autonomy,
intelligence, or rationality most humans exhibit as sacrosanct,
drawing an in-practice binary distinction between our level and that
of "lesser" beings. But one of the key realizations leading to
transhumanism is that there is nothing special or sacred about
humans-as-they-are-now in and of themselves. To claim that the
current level of rationality found in humans is the threshold above
which we or any other higher beings must respect another being's
choices/autonomy is to place yourself squarely in the Fukuyama/Kass
camp of error.

One of the main problems I personally have with being forced to live
this or that way, or do thus-and-such, or undergo certain medical
procedures, is that I can't be sure the higher being has my best
interests in mind. But neither can puppies, and neither can children,
and that fact doesn't stop us from forcing our decisions on them, so
why should our petulant protests stop posthumans from doing the same
to us? "Waaah, I wanna do what *I* want, dad!" is not an acceptable
response.

> This is not a recipe for a good singularity. UNLESS WE PROVIDE
> A NEW ETHICAL FRAMEWORK PRE-SINGULARITY, humans
> will assume that transhumans will operate in just the way Huggins
> is proposing, and they will (justifiably) either prevent anyone from
> developing transhuman technology (which is EXACTLY what Leon
> Kass is doing now, for roughly the same reason that I just gave!),
> or they will KILL TRANSHUMANS ON SIGHT.

They might try. Dogs sometimes try to bite the humans giving them
life-saving medicines, too. That some humans will object to their
'parents' telling them what to do is no more reason to assume a
problem (let alone the major one you suggest) than the fact that dogs
sometimes object to our help.

If you have a way out of this, please let me know, because I can't
see it. But I go where the rational justification leads, and merely
noting that the rational conclusion makes us (or even 85% of the
population) uneasy or unhappy doesn't change the fact that it's the
right answer. This "new ethics" you seek, one that would solve
everyone's problems simultaneously no matter what view of ethics or
the future or autonomy (etc.) they hold, sure would be nice. But that
doesn't mean it exists or is constructible.

--
Jeff Medina
http://www.painfullyclear.com/
Community Director
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/
Relationships & Community Fellow
Institute for Ethics & Emerging Technologies
http://www.ieet.org/
School of Philosophy, Birkbeck, University of London
http://www.bbk.ac.uk/phil/
