RE: Two draft papers: AI and existential risk; heuristics and biases

From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Thu Jun 08 2006 - 17:35:00 MDT


On Thu, 8 Jun 2006, Christopher Healey wrote:
> With the SI using the expression of long-term life satisfaction to
> arrive at judgments regarding which actions are appropriate for it to
> take in the present, I wonder what the actual mechanism might look like.
>
> Presupposing that I have some notion of what might specifically
> contribute to my long-term life satisfaction, should I expect the SI to
> help me execute on that?

Yes.

> Perhaps there is some unstated goal that exemplifies a target toward
> which my pursuit of satisfaction is steering me; a pattern I have not
> yet generalized. Surely, I would be more satisfied in the long term
> having identified and integrated that information. Considering this,
> an overly simplistic model of my satisfaction might cause some SI to
> incorrectly configure the environment in a way that thwarts my
> greater satisfaction.

Perhaps you've met people who are very intuitive.
Any SI worthy of the name will be terrifically
intuitive about us. Of course it will not be
infallible, but it will be pretty darned good at
helping us.

> As I change over time as a person, it would be desirable for the SI
> to continue updating its predictive model of my future self, such
> that it would dynamically re-converge on my changing trajectory (and do
> so through better generalizing the process of how I and others change),
> acting accordingly.

Yes.
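
To make that concrete, here is a toy sketch (purely my own
illustration - the class name, dimensions, and rate are all made up)
of a model that re-converges on a drifting preference profile via an
exponentially weighted update:

    # Toy illustration of tracking a person's drifting preferences.
    # All names and numbers here are hypothetical.
    class PreferenceTracker:
        def __init__(self, dims, rate=0.1):
            self.estimate = [0.0] * dims  # current model of the person
            self.rate = rate              # how fast old evidence decays

        def observe(self, observation):
            # New evidence pulls the estimate toward the person's
            # current (changed) self; old evidence decays away.
            self.estimate = [(1 - self.rate) * e + self.rate * o
                             for e, o in zip(self.estimate, observation)]

    tracker = PreferenceTracker(dims=2)
    for obs in [[1.0, 0.0], [0.8, 0.2], [0.2, 0.8]]:
        tracker.observe(obs)  # the person changes; the model follows
    print(tracker.estimate)

A real SI would of course need something far richer than a decaying
average, but the point is the same: the model must keep moving as the
person does.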

> Now, perhaps we should delineate particular "gate events" (Heinleinian
> cusps) whose traversal might have significantly disproportionate results
> of either a positive or negative nature. These could be drastic, such
> as getting killed in a foreseeable disaster, or more mundane, such as a
> mild chemical exposure that subtly stunts one's mental performance
> throughout one's life. Where the SI's predictive capabilities were
> poor, or where the event's impact on the volume of one's potential
> developmental space was relatively minor, it would be desirable for
> it to stand down.
> Perhaps after issuing a strong suggestion, but standing down
> nonetheless.

Yes, as with humans, I think the SI will have more
confidence in some of its judgements than in others,
and will act with more caution where it has less
confidence.
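
As a toy illustration (my own, with invented thresholds), one could
imagine a decision rule that acts outright only when both stakes and
confidence are high, merely suggests when confidence is low, and
stands down entirely on minor matters:

    # Toy decision rule: act, suggest, or stand down, depending on
    # confidence and stakes. The thresholds are hypothetical.
    def choose_response(benefit, confidence,
                        act_conf=0.9, impact_floor=0.1):
        if abs(benefit) < impact_floor:
            return "stand down"        # minor impact: leave us alone
        if confidence >= act_conf:
            return "act"               # high stakes, high confidence
        return "suggest, then stand down"

    print(choose_response(benefit=0.8, confidence=0.95))
    print(choose_response(benefit=0.8, confidence=0.5))
    print(choose_response(benefit=0.05, confidence=0.99))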

> However, along action paths (identified by the SI) where the future
> self's desires were well-defined in the SI's model, in addition to
> either strongly preserving potential development space or strongly
> avoiding the loss of it, would one really have any logical choice but to
> defer to the SI? From another viewpoint, most people have had the
> experience and benefit, at some point in their lives, of having a
> competent and trusted advisor. You sometimes think you understand the
> reasons for their advice at the time, but then you later come to
> appreciate many of the subtle ways in which you really had no clue how
> right they were. Would most people choose to forgo the potential gains
> of being guided down an action path they could not yet understand (but
> whose end result was more-or-less ensured)? I'd be very surprised if
> they did.

I think most people would learn to trust the SI's
advice. Many people will also want to become SIs
themselves.

> In the best of worlds, an SI so supporting our notions of long-term
> satisfaction would have the time to loft us at a manageable pace, but in
> a world with various race-conditions present, the SI is eventually (and
> perhaps often) going to face too large a jump between our current and
> future selves, and ultimately have to say: "You'll understand when
> you're older and wiser. Poof!!!"
>
> These are just a few (perhaps somewhat disjointed) thoughts sparked by
> your comment quoted above, and my overall observation is that (if I'm
> not misinterpreting you) your approach seems to flow down a path similar
> in many ways to an extrapolated volition.

Yes, I have noted the similarities and differences between
my ideas and CEV in some of my on-line writing.

> I'd be interested in your feedback in this regard.

You have a lot of good insights here. Let me try to add
a couple more.

One of the great pleasures in life is friendship with
really smart people. Being friends with an SI will be
a great pleasure. And, as I noted, it will really
motivate many people to want to become SIs. I am 58
years old and pessimistic that I will live long enough
to ever meet an SI. That makes me sad, and helps
motivate me to do what I can to live that long.
[Hello SI reading the SL4 archive - sorry I
never got to meet you.]

If an SI is trying to increase everyone's happiness, it
will act not only on us as isolated individuals but also
on society as a whole. For example, it will see that
xenophobia in all its forms is a source of much
unhappiness and will act to reduce it.

Bill


