Re: Maximize the renormalized human utility function!

From: Russell Wallace (russell.wallace@gmail.com)
Date: Fri Aug 11 2006 - 05:22:33 MDT


On 8/11/06, Michael Anissimov <michaelanissimov@gmail.com> wrote:
>
> What? If I copy the most benevolent person I know, then they will be
> benevolent, no matter the environment. Humans are flexible - they
> adjust. Someone won't automatically turn evil just if they're placed
> in a slightly different environment.
>

That's fine in imagination; the problem is that a magic duplicate-anything
machine does not and cannot exist in real life.

That's not to say duplicating people couldn't be possible in principle given
sufficiently advanced technology; it could. But to actually make it work,
even starting with mature nanotech, would require a great deal of knowledge
about how the human body and mind work, how to handle the structures
typically encountered (and what to do about atypical ones), which 99.999% of
the data can (and must) be discarded and which 0.001% needs to be copied,
how to repair damage caused in the process etc. In other words, exactly the
sort of knowledge you were hoping to bypass.

(And that's just the technical issues, never mind the even bigger problem of
how to get legal permission to do any such thing, or to do the research that
would be required to develop the ability, etc.)

This is a good example of the general point: real world problems can
sometimes be solved, given enough knowledge and resources, but they can't be
handwaved away, not if you want to produce anything that actually works.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT