Re: post-singularity motivation

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Sat Dec 10 2005 - 14:19:47 MST


My own core identity is the only thing I can actually be 100% sure exists. Morality is not arbitrary, save for one exception: utilitarianism works fine in all cases except when it is applied subjectively to one's own self or kindred group. This exception does not kill utilitarianism; it merely muddies the waters as to what multiplier we should put on our own existence over the existence of other conscious beings. In a well-designed AGI, the weighting we assign to this multiplier will determine whether or not we get steam-rolled. Utilitarianism is otherwise a flawless theory of morality.
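
A minimal sketch of that multiplier idea (illustrative only; the function and variable names here are hypothetical, not anything from the post itself), with k as the weight placed on our own existence relative to other conscious beings:

    # Hypothetical sketch: utilitarian aggregation with a self-weight k.
    # k = 1.0 is impartial utilitarianism; k >> 1 privileges the self or
    # kindred group; a k set too low is the "steam-rolled" regime.
    def aggregate_utility(own_welfare, others_welfare, k):
        # Total utility: kindred welfare scaled by k, plus everyone else's.
        return k * own_welfare + sum(others_welfare)

    # Example: k = 2.0 doubles the weight on our own welfare.
    print(aggregate_utility(10.0, [3.0, 4.0, 5.0], 2.0))  # -> 32.0
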
  Without human idiosyncrasies there would still be perfect utilitarianism, but most of the remaining higher animals wouldn't come close to realizing it. Regarding the free-will aspect required for morality, there appears to be a small window of opportunity (about 1.5 seconds in duration, beginning I think a second after some activity in our brains commences) for our conscious selves to veto borderline neuron action-potential firings. Not much room for free will, but compounded over the course of a day and a lifetime it is enough to shape a marginal sense of self and shift the evolution of the branching-off histories of the multiverse. The animals who effect this veto from their faculties of self-identity and intent, and not mere instinct, are conscious to some degree. Attacking this hypothesized neurological basis of free will is much more fruitful than equating the sensations of pain and pleasure and suggesting they are arbitrary distinctions.

Chris Capel <pdf23ds@gmail.com> wrote:
  I think the difficulty here is, at the root, the problem of the
subjectivity of morality. We think it would be wrong for an AI to kill
someone and put a different person in their place, even if the new
person was very similar. Why is it wrong, though? We know that we want
an AI that won't put an end to us, that won't break our continuity of
identity. But humans don't have any real, core identity that can
either be broken or not broken. That's more or less a convenient
illusion.

Objectively, humans have these moral intuitions, and they drive us,
psychologically, in certain directions. That's morality, in a
sentence. Without humans, and all of their idiosyncrasies, there would
be no morality. In the end, the only way to define the morality of
various actions is to introduce arbitrary distinctions, between human
and non-human, or sentient and non-sentient, or living and non-living.
Between "same" and "different". Between "icky" and "not-icky". Binary
classifications that are ultimately based on some object's measurement
on a continuous physical scale.

Might may not make right, but might--reality-optimization
ability--determines the future of the universe. And when humans are
gone, the universe returns to neutral amorality.

I don't think there's any way to escape the fact that, whatever kind
of AI we choose to try to make, the decision is a moral one, and
therefore an arbitrary one.
  

                        
