Re: ESSAY: Would a Strong AI reject the Simulation Argument?

From: rolf nelson
Date: Sun Aug 26 2007 - 21:30:45 MDT

> I don't think you need the human component at all. If this plan will work,
> both the FAI and RAI will figure it out on their own, the FAI will run the
> simulations, and the RAI will respond to them.

This sounded utterly wrong to me at first. But, on reflection, I
shouldn't assume that the AIs would have the same beliefs about
'causality' that humans typically do. Part of this is, indeed, in the
same genre as Newcomb's Paradox, but it's not identical; for starters,
the current problem is better defined than generic Newcomb.

Suppose you, Norman, play the Prisoner's Dilemma with Norman*, who
is your identical twin brother, or your clone who was raised in the
exact same environment, or a duplicate from a parallel
mirror-universe. (And, as always with the Prisoner's Dilemma, assume
you care only about your own payoff.) Do you Defect or Cooperate? A
typical human would say "Defect"; call this the Local Causality
position (unless there's already a name for it). But maybe there's
another defensible (i.e. self-consistent, and consistent with the rest
of decision theory) philosophical position: a Nonlocal Causality
position, which implies that if these three conditions hold:

 * if your decision correlates ~100% with your opponent's, *and*
 * the correlation is "causal" from a prior root cause (you both have
the same genes and environment, or you both are copies of the same
computer program), *and*
 * you don't have a better way of predicting your opponent's decision
than observing your own decision,

then you should choose "Cooperate" since, after observing that you
made the Cooperate decision, your expected payoff goes up.
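The expected-payoff argument above can be sketched numerically. This is my own toy illustration (in Python), using the standard Prisoner's Dilemma payoff matrix (3/0/5/1), which the post itself doesn't specify; the parameter `correlation` is the probability that your opponent's move matches yours, per the first condition above.

```python
# Standard Prisoner's Dilemma payoffs, indexed by (my_move, opponent_move);
# the value is *my* payoff. These particular numbers are assumed for
# illustration -- any payoffs satisfying T > R > P > S give the same result.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # I cooperate, opponent defects (S)
    ("D", "C"): 5,  # I defect, opponent cooperates (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def expected_payoff(my_move, correlation):
    """Expected payoff, conditioning on my own move, when the opponent's
    move matches mine with probability `correlation` (the Nonlocal
    Causality style of conditioning)."""
    other_move = "D" if my_move == "C" else "C"
    return (correlation * PAYOFF[(my_move, my_move)]
            + (1 - correlation) * PAYOFF[(my_move, other_move)])

# Under ~100% correlation, observing that you made the Cooperate
# decision raises your expected payoff:
print(expected_payoff("C", 1.0))  # 3 (mutual cooperation)
print(expected_payoff("D", 1.0))  # 1 (mutual defection)
```

So at correlation ~1.0, Cooperate dominates (3 vs. 1), whereas the Local Causality reasoner, treating the opponent's move as fixed independently of his own, would still Defect.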

(Are there counterexamples? I think even if there are edge-case
counterexamples, Nonlocal Causality could be tweaked to remain
defensible.)
To a human, Nonlocal Causality sounds like magical thinking. After
all, Norman's decision can't causally make Norman* choose Cooperate!
But an AI might not be bothered by this, depending on its
implementation details.

Anyway: if, like most humans, FAI believes in Local Causality, then
you need one or more pre-singularity humans to bootstrap the whole
plan.
But, if FAI believes in Nonlocal Causality, then I agree that *under
certain narrow conditions* UFAI's model of FAI can talk UFAI into
cooperating with FAI, even without the cooperation of any
pre-singularity humans.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT