From: Thomas McCabe (firstname.lastname@example.org)
Date: Tue Jan 29 2008 - 20:14:24 MST
On Jan 29, 2008 9:40 PM, Peter de Blanc <email@example.com> wrote:
> On Tue, 2008-01-29 at 19:10 -0500, Thomas McCabe wrote:
> > These are good, but they've already been added:
> > "* We might live in a computer simulation and it might be too
> > computationally expensive for our simulators to simulate our world
> > post-Singularity.
> > o Rebuttal synopsis: This scenario can be used to argue for,
> > or against, any idea whatsoever. For idea X, just say "What if the
> > simulators killed us if we did X?", or "What if the simulators killed
> > us if we didn't do X?". "
> This is not a rebuttal. Just because an idea can be misused to argue for
> all sorts of things does not make it false (consider evolution, quantum
An idea which can be used to argue for absolutely *anything* must have
zero information content; see the excerpt from Eli's Technical
Explanation below.
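As a minimal sketch of the point (my own illustration, not from this thread): under Bayes' rule, a hypothesis that assigns the same likelihood to every possible observation leaves the posterior equal to the prior, i.e. it carries zero information. The numbers below are made up for illustration.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

prior = 0.5

# "The simulators kill us if we do X" plus "the simulators kill us if
# we don't do X" together explain any outcome equally well, so every
# observation is exactly as likely under the hypothesis as without it.
for p_obs in (0.1, 0.5, 0.9):
    p = posterior(prior, p_obs, p_obs)
    assert abs(p - prior) < 1e-12  # posterior == prior: no update
```

Only a hypothesis that makes some observations more likely than others can move the posterior at all.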
> Hypothesis 1: We are in a computer simulation, and it will be shut down
> if it becomes much more computationally expensive.
> Hypothesis 2: We are in a computer simulation, and it will be shut down
> _unless_ it becomes much more computationally expensive.
> Is hypothesis 2 exactly as plausible as hypothesis 1? I would say it's
> much less plausible.
From Eli's Technical Explanation:
"The way human psychology seems to work is that first we see something
happen, and then we try to argue that it matches whatever hypothesis
we had in mind beforehand. Rather than conserved probability mass, to
distribute over advance predictions, we have a feeling of
compatibility - the degree to which the explanation and the event seem
to 'fit'. 'Fit' is not conserved. There is no equivalent of the rule
that probability mass must sum to 1. A psychoanalyst may explain any
possible behavior of a patient by constructing an appropriate
structure of 'rationalizations' and 'defenses'; it fits, therefore it
must be true.
Now consider the fable told at the start of this essay - the students
seeing a radiator, and a metal plate next to the radiator. The
students would never predict in advance that the side of the plate
near the radiator would be cooler. Yet, seeing the fact, they managed
to make their explanations 'fit'. They lost their precious chance at
bewilderment, to realize that their models did not predict the
phenomenon they observed. They sacrificed their ability to be more
confused by fiction than by truth. And they did not realize "heat
induction, blah blah, therefore the near side is cooler" is a vague
and verbal prediction, spread across an enormously wide range of
possible values for specific measured temperatures. Applying equations
of diffusion and equilibrium would give a sharp prediction for
possible joint values. It might not specify the first values you
measured, but when you knew a few values you could generate a sharp
prediction for the rest. The score for the entire experimental outcome
would be far better than any less precise alternative, especially a
vague and verbal prediction.
You now have a technical explanation of the difference between a
verbal explanation and a technical explanation. It is a technical
explanation because it enables you to calculate exactly how technical
an explanation is. Vague hypotheses may be so vague that only a
superhuman intelligence could calculate exactly how vague. Perhaps a
sufficiently huge intelligence could extrapolate every possible
experimental result, and extrapolate every possible verdict of the
vague guesser for how well the vague hypothesis "fit", and then
renormalize the "fit" distribution into a likelihood distribution that
summed to 1. But in principle one can still calculate exactly how
vague is a vague hypothesis. The calculation is just not
computationally tractable, the way that calculating airplane
trajectories via quantum mechanics is not computationally tractable."
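The renormalization step the quote imagines can be sketched in a few lines (my own toy example, with made-up "fit" scores): take the vague guesser's "fit" for each possible outcome and rescale so the scores sum to 1, turning them into a proper likelihood distribution. Vagueness then shows up as mass spread thinly across outcomes.

```python
# Toy 'fit' scores a vague hypothesis assigns to possible outcomes
# (illustrative values only; 'fit' need not sum to 1).
fit = {"near side cooler": 0.9, "near side hotter": 0.8, "no difference": 0.7}

# Renormalize 'fit' into a likelihood distribution that sums to 1.
total = sum(fit.values())
likelihood = {outcome: f / total for outcome, f in fit.items()}

assert abs(sum(likelihood.values()) - 1.0) < 1e-12
# Because the hypothesis 'fits' everything well, the renormalized
# distribution is nearly uniform: it concentrates little probability
# mass on any one outcome, so it scores poorly against a sharp
# prediction when the actual result comes in.
```

A precise hypothesis would instead put most of its mass on a narrow band of outcomes, which is exactly what lets it be scored well or badly by the data.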
> IMO, the simulation argument should not be dismissed.
> - Peter de Blanc
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT