Re: Maximize the renormalized human utility function!

From: Jef Allbright (jef@jefallbright.net)
Date: Wed Aug 16 2006 - 13:14:38 MDT


Given the basic disconnect, not so much in our thinking as in our
priors, I wanted to leave this thread to anyone who might consider
it productive. However, while the first few items presented here seem
to adequately represent our conflicting views, I take exception to the
last paragraph.

For the record, anyone who knows me is already aware that I fully
support and promote the use of AI to amplify human reasoning. Due to
the cognitive limitations of our biological substrate, to remain
competitive we will *become* our AIs over time. A higher level of
organization based on a multitude of diverse agents, both posthuman
and native machine intelligence, will perform at a level of
intelligence surpassing the understanding of any individual member,
yet will be trusted by those individuals.

This differs from the dominant viewpoints on this list as discussed previously.

- Jef

On 8/16/06, Michael Anissimov <michaelanissimov@gmail.com> wrote:
> On 8/15/06, Philip Goetz <philgoetz@gmail.com> wrote:
> > So you think goodness and evil are inherent, objective, context-free
> > properties of people?
>
> Not really... but this is beside the point for the purposes of what I
> was responding to. You're reading too far into my example. The
> point was simply that it will eventually be possible to 'manufacture'
> kindness without having to struggle through any sort of complex
> synergistic process, as Jef Allbright seemed to imply with the
> following:
>
> "Note that this approach is inherently evolutionary. There is no
> static solution to the moral problem within a coevolutionary scenario.
> But there are increasingly effective principles of what works to
> maximize the growth of what we increasingly see as increasingly good."
>
> The point is also that, eventually, everything reduces to engineering.
>
> As another example, imagine a social pact where one person's moral
> model is automatically updated based on silent requests from people
> around them, sent to their brain over a wireless network. This would
> be "augmented morality", and it would subtly contradict the implicit
> message Jef was putting across when he said, "It's time for humanity to
> grow up and begin taking full responsibility for ourselves and our way
> forward." It's not "taking full responsibility" per se when you are
> using machines to update your morality automatically.
>
> A society full of such individuals may be able to dispense with
> discussing things face to face, or holding votes, or engaging in all
> the other moral/political activity that humans engage in today.
>
> Jef also said, "We are conditioned to expect that a greater entity
> (our parents?, our god?) will know what is best and act in our
> interests." The fact of the matter is, a greater and more intelligent
> entity might indeed know what is best for us and act in our interests
> in ways far deeper, more elegant, and longer-lasting than we could
> act for ourselves. The fear of this outcome stems from the many
> powerful humans who have claimed to want the best for us and then
> abused their power. But to deny its possibility is blatant
> anthropocentrism.
>
> --
> Michael Anissimov
> Lifeboat Foundation http://lifeboat.com
> http://acceleratingfuture.com/michael/blog
>
