Re: Maximize the renormalized human utility function!

From: Jef Allbright (jef@jefallbright.net)
Date: Thu Aug 10 2006 - 09:41:18 MDT


On 8/10/06, Michael Anissimov <michaelanissimov@gmail.com> wrote:
> Also, we obviously have to keep in mind that there is a near-infinite
> number of possible renormalizations of the human utility function -
> some we might want a superintelligence to maximize, others to
> satisfice.

As you're poking around the edge of this problem, let me make a few
observations:

The concept of "the human utility function" becomes less valid as
individual human agents and their environment grow more complex.
While we can usefully refer to certain "universal human values",
such as the strong desire to protect and advance one's offspring,
even those are contingent. More fundamental principles of synergetic
cooperation and growth provide a more effective and persistent basis
for future "moral" decision-making.

Your statement "some we might want a superintelligence to maximize..."
obscures the problem of promoting human values by presuming that "a
superintelligence" is a necessary part of the solution. It would be
clearer, and more conducive to an accurate description of the
problem, to say "some values we may wish to maximize, others to
satisfice."

If we were to further abstract the problem statement, we might
arrive at something like this: every agent desires to promote its
(evolving) values over increasing scope. This subsumes the preceding
dichotomy between maximizing and satisficing, with the realization
that each mode is effective within its particular limited context
toward promoting growth in the larger context.
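
To make that dichotomy concrete, here is a minimal Python sketch of
my own (purely illustrative; the functions and numbers are invented,
not anything from Michael's post). A maximizer exhaustively searches
for the global optimum; a satisficer accepts the first option that
clears a "good enough" threshold, which is cheaper and often adequate
within its limited context.

    def maximize(options, utility):
        """Search every option; return the one with highest utility."""
        return max(options, key=utility)

    def satisfice(options, utility, threshold):
        """Return the first option whose utility clears the threshold;
        fall back to maximizing if none does."""
        for option in options:
            if utility(option) >= threshold:
                return option
        return maximize(options, utility)

    options = [0.2, 0.7, 0.9, 0.5]
    utility = lambda x: x  # identity utility, purely for illustration

    print(maximize(options, utility))        # 0.9 (global optimum)
    print(satisfice(options, utility, 0.6))  # 0.7 (first "good enough")

The toy shows only that satisficing is a bounded-effort strategy: it
trades optimality in the small for effort that can be spent
elsewhere, which is exactly why each mode fits a different context.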

Given the preceding problem statement, it becomes obvious that the
solution requires two fundamental components: (1) increasing
awareness of our values (those which are increasingly shared because
they work, passing the test of competition within a coevolutionary
environment), and (2) increasing awareness of principles of action
that effectively promote our values (this is our increasingly
objective scientific/instrumental knowledge).

Note that this approach is inherently evolutionary. There is no
static solution to the moral problem within a coevolutionary
scenario, but there are increasingly effective principles of what
works to maximize the growth of what we increasingly see as good.

Back to the presumption of "a superintelligence." This phrasing
implies an independent entity, and it reflects the common assumption
that we must turn to an intelligence greater than us, and separate
from us, to save us from our critical problems. Such a concept
resonates deeply within us and our culture, but it is flawed: we are
conditioned to expect that a greater entity (our parents? our god?)
will know what is best and act in our interests.

It's time for humanity to grow up and begin taking full
responsibility for ourselves and our way forward. We can and will do
that when we are ready to implement a framework for #1 and #2 above.
It will likely begin as a platform for social decision-making,
optimizing objective outcomes based on subjective values, in the
entertainment domain; then, as people begin to recognize its
effectiveness, it may extend to what we currently think of as
political issues.
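
As a purely hypothetical sketch of what such a platform might
compute (my own illustration; every name, criterion, and number
below is invented, not a design anyone has proposed): each
participant supplies subjective weights over shared, objectively
measurable criteria, and candidate actions are ranked by the
aggregate weighted outcome.

    CRITERIA = ["cost", "safety", "enjoyment"]

    # Subjective: each agent's value weights over the shared criteria.
    agents = {
        "alice": {"cost": 0.2, "safety": 0.5, "enjoyment": 0.3},
        "bob":   {"cost": 0.6, "safety": 0.1, "enjoyment": 0.3},
    }

    # Objective: measured scores (0..1, higher is better) per plan.
    plans = {
        "plan_a": {"cost": 0.9, "safety": 0.4, "enjoyment": 0.7},
        "plan_b": {"cost": 0.5, "safety": 0.8, "enjoyment": 0.6},
    }

    def group_score(outcome):
        # Average of each agent's weighted evaluation of the outcome.
        scores = [sum(w[c] * outcome[c] for c in CRITERIA)
                  for w in agents.values()]
        return sum(scores) / len(scores)

    best = max(plans, key=lambda name: group_score(plans[name]))
    print(best, round(group_score(plans[best]), 2))  # plan_a 0.69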

You were right to refer to a superintelligence, but that
superintelligence will not be one separate from humanity. It will be
a superintelligence made up of humanity.

- Jef
http://www.jefallbright.net
Increasing awareness for increasing morality


