Re: Morality simulator

From: Norman Noman (overturnedchair@gmail.com)
Date: Fri Nov 23 2007 - 13:53:45 MST


On 23/11/2007, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:

> Norman Noman wrote:
> >
> > According to the poetry, "knew more" etc. is to be "interpreted as we
> > wish that interpreted, extrapolated as we wish that extrapolated." DOES
> > "knew more" happen first, interpreted in the explicit manner you
> > describe, or is it "interpreted as we wish it interpreted", and if so,
> > how is THAT interpreted?
> >
> > It would be really nice if you could break down the order of operations,
> > and define them more explicitly.
>
> You couldn't possibly begin to extrapolate volitions as people "would
> wish" them extrapolated, without first doing a huge amount of
> extrapolation to get the "would wish" part. If you can't write
> something the complexity of CEV today, we'll have to extrapolate you
> to get it. So you wouldn't even begin to start on the "extrapolated
> as we wish that extrapolated" until you'd evaluated a whole lot of
> "knew more" and "thought faster".
>

Ok, that makes sense.

> It may make sense to assume that CEV writes a single program to
> replace itself, using only the first-order extrapolation, and any
> second-order extrapolation of how to extrapolate would happen in this
> replacement program if it happened at all.
>

That's a start, but I can see I'm not communicating what information I'm
looking for. I'm going to try making it up myself and you can correct me
where I've got it wrong.

***

CEV recipe:

First we construct the initial dynamic. We do this by simulating the will of
earth's 7 billion human people, although abstractly enough that the
simulations aren't real people, to the extent that that means anything;
either way it probably doesn't affect the end result, so moving on.

We don't care about carp, anchovies, orangutans, breakfast cereals, people
who are long dead, people who are recently dead, people who aren't born yet
(conceived? probably doesn't matter), or sneaky aliens in flying saucers.

Then, these seven billion simulated people are fast-forwarded through 18
years of development so that they're all adults, and they're allowed to
interact while this is being done, so that they grow up "together", and
learn to care and share and collaborate. Given no information to the
contrary, I guess this happens on an abstractly simulated earth, with the
only difference being that everybody who was already alive is immortal, and
there's no singularity because that would just gum things up like crazy.

Then (all inside this abstract simulation), the people with what modern
medicine regards as a brain injury or mental illness sufficient to make them
incapable of taking responsibility for their own actions have that injury or
illness corrected.

Then, as you said, the AI's probability distribution is substituted for the
people's. Additionally, their model of the world is replaced with the
AI's model, to the extent that that model can be wired to our volition. At
this point they'll become aware (although not REALLY aware, just, you know,
functionally) that they are the CEV, if they haven't figured this out
already.

Then, they're made to think faster and smarter.

Then, they're allowed to self-modify their minds, to become the people they
want to be. Self-modification requests must be submitted verbally, shouted
at the sky. Everyone previously mute is given the ability to speak.

During this self-modification stage, every day at 2:00 PM GMT, everyone has
a collective dream that they're part of a focus group consisting of everyone
on earth, randomly seated at an extremely long rectangular table in a huge
meeting room. They're given three subjective hours to argue with their
fellows about what the initial dynamic should be. They wake up at 2:00 with
no time having passed.

Every day at 4:00 PM GMT, everyone has an individual dream that they are at
Santa's workshop at the North Pole, and the AI is Santa, and they tell him
what they want.

This abstract simulation continues to run until either it stabilizes, and
everyone wants roughly the same thing from day to day, or it goes chaotic
and roughly nobody wants the same thing, or 50 simulated years have passed.

The whole simulation, from beginning to end, is run not as a single thread
of possibility, but rather as an increasingly blurry, endlessly branching
mess. When it's all said and done, the AI looks at what every probable
extrapolation of every person asked Santa, and takes stock of what they
agreed they wanted and didn't want, and how strongly they felt that way,
although since everybody is allowed to self-modify, it's a good bet they'll
all feel "100%" strongly about everything, or however it's measured.

If there's not enough agreement to construct a coherent initial dynamic, the
AI prints "OUT OF CHEESE ERROR" and becomes completely inactive. If there is
enough agreement to construct a coherent initial dynamic, then it's
constructed, and then implemented.
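
To pin the order of operations down even further, here's the same recipe
squashed into a toy Python script. Every class name, constant, and threshold
below is something I made up purely to make the sequence concrete, so don't
read it as the actual CEV algorithm, just the rough shape of one possible
pipeline:

import random

COHERENCE_THRESHOLD = 0.5   # made-up cutoff for "enough agreement"
MAX_SIM_YEARS = 50          # the recipe's hard stop

class OutOfCheeseError(Exception):
    """Raised when the extrapolated wishes don't cohere; the AI goes inactive."""

class SimPerson:
    """Abstract stand-in for one simulated (not-quite-real) person."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.wish = self.rng.uniform(-1, 1)   # one-dimensional "volition"

    def know_more_think_faster(self):
        # Stand-in for swapping in the AI's beliefs and speeding up thought:
        # pull the wish toward what fuller knowledge would suggest.
        self.wish = 0.5 * self.wish + 0.6

    def self_modify(self, peer_average):
        # Stand-in for the daily focus-group / ask-Santa loop: drift a little
        # toward the current consensus each simulated day.
        self.wish += 0.2 * (peer_average - self.wish)

def construct_initial_dynamic(n_people=1000, seed=0):
    # Step 1: abstractly simulate everyone alive (grown up together,
    # incapacitating conditions cured) as simple preference holders.
    people = [SimPerson(seed + i) for i in range(n_people)]
    # Step 2: "knew more" and "thought faster".
    for p in people:
        p.know_more_think_faster()
    # Step 3: self-modification days until stable, chaotic, or out of time.
    for _day in range(MAX_SIM_YEARS * 365):
        avg = sum(p.wish for p in people) / n_people
        for p in people:
            p.self_modify(avg)
        spread = max(p.wish for p in people) - min(p.wish for p in people)
        if spread < 0.01 or spread > 10.0:   # stabilized, or gone chaotic
            break                            # (the latter can't happen in this toy)
    # Step 4: check whether the aggregated wishes cohere strongly enough.
    avg = sum(p.wish for p in people) / n_people
    if abs(avg) < COHERENCE_THRESHOLD:
        raise OutOfCheeseError("OUT OF CHEESE ERROR")
    return avg   # the coherent wish the initial dynamic would be built from

if __name__ == "__main__":
    try:
        print("initial dynamic target:", construct_initial_dynamic())
    except OutOfCheeseError as err:
        print(err, "- the AI becomes completely inactive")

Obviously the real thing would be unimaginably more complicated, but even a
toy like this forces you to decide which step happens when, which is exactly
the part I can't get out of the poetry.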

***

I doubt you had in mind the simulated earth, focus group, Santa Claus, etc.,
but my point is that there has to be some kind of concrete method for these
steps. What I want to know is the recipe as YOU envision it.


