Re: post-singularity motivation

From: Jef Allbright (jef@jefallbright.net)
Date: Sat Dec 10 2005 - 15:27:51 MST


On 12/10/05, Chris Capel <pdf23ds@gmail.com> wrote:
> On 12/9/05, Phillip Huggan <cdnprodigy@yahoo.com> wrote:
> > Any happy conscious entity desires the permanence of its own existence.
>
> Any happy human conscious entity. I don't see why either the capacity
> for happiness or the desire for one's continued existence is a
> necessary feature of consciousness in general.

It's refreshing when others recognize "happiness" and the desire for
one's continued existence as specific characteristics of evolved
organisms, not required features of intelligent systems in general.
I enclose the word "happiness" in scare quotes because many people
seem not to have a coherent conceptualization of the term and, worse,
think of it in terms of some absolute scale rather than as an
indication of the current satisfaction of the goal system.

>
> > So
> > if ve tiled me to be "better", my "better" self would surely not desire to
> > return to my previous identity. It is an arbitrary distinction valuing the
> > permanence of my less than optimally happy existence over my tiled happy
> > state of affairs to be. But it is precisely this distinction that
> > separates "friendly" AGI from orgasmium or whatever.
>
> I think the difficulty here is, at the root, the problem of the
> subjectivity of morality. We think it would be wrong for an AI to kill
> someone and put a different person in their place, even if the new
> person was very similar. Why is it wrong, though? We know that we want
> an AI that won't put an end to us, that won't break our continuity of
> identity. But humans don't have any real, core identity that can
> either be broken or not broken. That's more or less a convenient
> illusion.

Yes, again refreshingly broad thinking.

>
> Objectively, humans have these moral intuitions, and they drive us,
> psychologically, in certain directions. That's morality, in a
> sentence. Without humans, and all of their idiosyncrasies, there would
> be no morality. In the end, the only way to define the morality of
> various actions is to introduce arbitrary distinctions, between human
> and non-human, or sentient and non-sentient, or living and non-living.
> Between "same" and "different". Between "icky" and "not-icky". Binary
> classifications that are ultimately based on some object's measurement
> on a continuous physical scale.

Here's where I would offer some additional depth to the stated concept
of morality. Humans do indeed have an instinctive, or as you say
intuitive, faculty of morality that operates below the level of
conscious awareness, and demonstrably below the level of rationality.
But it's not really arbitrary, since it is the result of a long
evolutionary process selecting for what worked in the environment of
evolutionary adaptation. There's a great degree of commonality of
moral values across the human species, even extending to other
species, for the same reason: what worked had a tendency to survive
and grow.

At a higher level of organization, moral values (those values which
work and thus survive and propagate) have been encoded into cultural
frameworks, most evident in the teachings and rules of the world's
religions.

The critical problem, which many are now beginning to see, is that the
environment has changed, rapidly diverging from the environment of
evolutionary adaptation. The values and decision-making frameworks of
the past are therefore increasingly unsuited to current challenges,
and so may appear increasingly arbitrary.

In a limited sense, morality is about following that deep instinctive
sense of what feels "right" and what feels "wrong" in light of shared
values. (Morality is meaningless in the context of an isolated
individual.)

In a broader sense, morality is about observing community standards
and codes of behavior, which may supervene upon the earlier morality
based on feelings, but which tend to promote shared values more
effectively.

In an even broader sense, little appreciated currently, morality is
about decision-making that promotes increasingly shared values over
increasing scope of interaction with increasing effectiveness.

My key point here is that morality is not arbitrary, or relative (in
the strong sense), and that, notwithstanding the Naturalistic Fallacy,
there is an arrow of morality: a ratcheting forward of objective
knowledge in the service of subjective values, values increasingly
shared because they work [and fueled by increasing diversity at a
lower level].

>
> Might not make right, but might--reality optimization
> ability--determines the future of the universe. And when humans are
> gone, the universe returns to neutral amorality.

Back to your earliest statement, "desire for one's continued
existence" is ultimately a false hope, but promotion of one's values
into the future is the inescapable essence of morality.

>
> I don't think there's any way to escape the fact that, whatever kind
> of AI we choose to try to make, the decision is a moral one, and
> therefore an arbitrary one.

Moral yes, arbitrary no, as explained (I hope) above.

>
> And if humans were to evolve for another twenty thousand years without
> taking over their own biological processes, they might just evolve
> away this deeply uneasy and demotivating feeling I'm having right now
> about how arbitrary morality is. They'd probably be perfectly fine
> with it. As it is, I have no idea what the significance of this
> feeling is, or should be.

To let go of the fragments of truth that no longer fit within the
older paradigm, to travel into the void with none of the previously
imagined means of support, and to come out the other side seeing that
all the pieces must and do fit in the bigger picture.

Paradox is always a case of insufficient context.

- Jef
http://www.jefallbright.net


