From: Chris Capel (email@example.com)
Date: Sat Dec 10 2005 - 11:20:22 MST
On 12/9/05, Phillip Huggan <firstname.lastname@example.org> wrote:
> Any happy conscious entity desires the permanence of its own existence.
Any happy human conscious entity. I don't see why either the capacity
for happiness or the desire for one's continued existence is a
necessary feature of consciousness in general.
> if ve tiled me to be "better", my "better" self would surely not desire to
> return to my previous identity. It is an arbitrary distinction valuing the
> permanence of my less than optimally happy existence over my tiled happy
> state of affairs to be. But it is precisely this distinction that
> separates "friendly" AGI from orgasmium or whatever.
I think the difficulty here is, at the root, the problem of the
subjectivity of morality. We think it would be wrong for an AI to kill
someone and put a different person in their place, even if the new
person was very similar. Why is it wrong, though? We know that we want
an AI that won't put an end to us, that won't break our continuity of
identity. But humans don't have any real, core identity that can
either be broken or not broken. That's more or less a convenient
fiction.
Objectively, humans have these moral intuitions, and they drive us,
psychologically, in certain directions. That's morality, in a
sentence. Without humans, and all of their idiosyncrasies, there would
be no morality. In the end, the only way to define the morality of
various actions is to introduce arbitrary distinctions, between human
and non-human, or sentient and non-sentient, or living and non-living.
Between "same" and "different". Between "icky" and "not-icky". Binary
classifications that are ultimately based on some object's measurement
on a continuous physical scale.
Might may not make right, but might--reality-optimization
ability--determines the future of the universe. And when humans are
gone, the universe returns to neutral amorality.
I don't think there's any way to escape the fact that, whatever kind
of AI we choose to try to make, the decision is a moral one, and
therefore an arbitrary one.
And if humans were to evolve for another twenty thousand years without
taking over their own biological processes, they might just evolve
away this deeply uneasy and demotivating feeling I'm having right now
about how arbitrary morality is. They'd probably be perfectly fine
with it. As it is, I have no idea what the significance of this
feeling is, or should be.
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennett)