RE: Two draft papers: AI and existential risk; heuristics and biases

From: Christopher Healey (CHealey@unicom-inc.com)
Date: Thu Jun 08 2006 - 16:05:41 MDT


On Thu, Jun 08, 2006 at 1:58 PM, Bill Hibbard wrote:
>
> My paper discusses the difference between hedonic and
> eudaimonic ... and makes the point that the SI should use
> "expression of long-term life satisfaction rather than
> immediate pleasure."

If the SI is using expressions of long-term life satisfaction to arrive
at judgments about which actions are appropriate for it to take in the
present, I wonder what the actual mechanism might look like.

Presupposing that I have some notion of what might specifically
contribute to my long-term life satisfaction, should I expect the SI to
help me execute on that? Perhaps there is some unstated goal that
exemplifies a target toward which my pursuit of satisfaction is steering
me; a pattern I have not yet generalized. Surely, I would be more
satisfied in the long-term having
identified and integrated that information. Considering this, an overly
simplistic model of my satisfaction might cause some SI to incorrectly
configure the environment in a way that thwarts my greater satisfaction.
As I change over time as a person, it would be desirable for the SI to
continue updating its predictive model of my future self, so that it
dynamically re-converges on my changing trajectory (by better
generalizing the process of how I and others change) and acts
accordingly.
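
To make that a bit more concrete, here is a toy sketch of the kind of
mechanism I'm wondering about (every name and the estimation rule are
invented purely for illustration; a real SI's model would obviously be
nothing this crude):

# Toy model: the SI keeps a per-person predictive model of long-term
# satisfaction, re-fits it as the person changes, and chooses actions
# by predicted long-term satisfaction rather than immediate pleasure.
from dataclasses import dataclass, field

@dataclass
class SatisfactionModel:
    # History of (action, reported long-term satisfaction) observations.
    observations: list = field(default_factory=list)

    def update(self, action, reported_satisfaction):
        # Each new report shifts the model, so it keeps re-converging
        # on the person's changing trajectory instead of freezing an
        # early, overly simplistic picture of them.
        self.observations.append((action, reported_satisfaction))

    def predicted_long_term_satisfaction(self, action):
        # Placeholder estimate: average satisfaction reported for the
        # same action in the past. A serious model would generalize how
        # this person (and people in general) change over time.
        similar = [s for a, s in self.observations if a == action]
        return sum(similar) / len(similar) if similar else 0.0

def choose_action(model, candidate_actions):
    # Optimize for expressed long-term life satisfaction, not for
    # whatever is most immediately pleasurable.
    return max(candidate_actions,
               key=model.predicted_long_term_satisfaction)

The point of the toy is only that the model is continually refit from
the person's own expressed satisfaction, so an early, overly simplistic
snapshot can't lock in the wrong configuration of the environment.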

Now, perhaps we should delineate particular "gate events" (Heinleinian
cusps) whose traversal might have significantly disproportionate results
of either a positive or negative nature. These could be drastic, such
as getting killed in a foreseeable disaster, or more mundane, such as a
mild chemical exposure that subtly stunts one's mental performance
throughout one's life. Where the SI's predictive capabilities were
poor, or where the event's impact on the volume of one's potential
developmental space was relatively minor, it would be desirable for it
to stand down. Perhaps after issuing a strong suggestion, but standing
down nonetheless.
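
Again purely as an illustrative toy (the thresholds and names are made
up), the stand-down rule I have in mind would be something like:

def si_response(prediction_confidence, impact_on_development_space,
                confidence_threshold=0.95, impact_threshold=0.5):
    # prediction_confidence: how sure the SI is about the gate event's
    #   outcome, on a 0..1 scale.
    # impact_on_development_space: fraction of the person's potential
    #   developmental space that would be gained or lost, 0..1.
    if (prediction_confidence < confidence_threshold
            or impact_on_development_space < impact_threshold):
        # Poor prediction, or relatively minor stakes: advise and yield.
        return "strong suggestion, then stand down"
    # Well-defined prediction and disproportionate stakes: the SI acts.
    return "act at the gate event"

so that only the high-confidence, disproportionately high-stakes cusps
would ever be acted upon directly.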

However, along action paths (identified by the SI) where the future
self's desires were well-defined in the SI's model, and which either
strongly preserved potential development space or strongly avoided its
loss, would one really have any logical choice but to defer to the SI?
From another viewpoint, most people have had the
experience and benefit, at some point in their lives, of having a
competent and trusted advisor. You sometimes think you understand the
reasons for their advice at the time, but then you later come to
appreciate many of the subtle ways in which you really had no clue how
right they were. Would most people choose to forgo the potential gains
of being guided down an action path they could not yet understand (but
whose end result was more-or-less ensured)? I'd be very surprised if
they did.

In the best of worlds, an SI supporting our notions of long-term
satisfaction in this way would have the time to loft us at a manageable
pace, but in a world with various race conditions present, the SI is
eventually (and perhaps often) going to face too large a jump between
our current and future selves, and ultimately have to say: "You'll
understand when you're older and wiser. Poof!!!"

These are just a few (perhaps somewhat disjointed) thoughts sparked by
your comment quoted above, and my overall observation is that (if I'm
not misinterpreting you) your approach seems to flow down a path similar
in many ways to an extrapolated volition.

I'd be interested in your feedback in this regard.

-Chris Healey


