RE: Bayesians & Pascal's wager

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Wed Aug 31 2005 - 12:47:24 MDT


Eliezer's answer explains a superintelligent Bayesian's response. Such a
Bayesian can map out the approximate causal relationships leading to its
hearing Christian arguments, evaluate the memetics involved, the complexity
of the statements, the size of the space of equivalent statements which map
onto the actual statement perceived, etc. Having done so, it can note that
for this particular statement, the probability of hearing it given its
falsehood is almost as high as the probability of hearing it given its
truth. The statement being heard would thus remain valid Bayesian evidence,
but of the weakest possible sort, far weaker than the evidential value of
most statements of equal complexity for which the speaker's statement is
the only available evidence. A Bayesian superintelligence could also
recognize that the expected cost of being vulnerable to arbitrary parasitic
memes is actually very high, not very low as Pascal would assert.
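To make the likelihood-ratio point concrete, here is a minimal sketch with
made-up numbers (all the probabilities below are illustrative assumptions,
not estimates of anything):

    # Bayes' rule: how much hearing a claim should move belief in it.
    def posterior(prior, p_hear_if_true, p_hear_if_false):
        joint_true = prior * p_hear_if_true
        joint_false = (1.0 - prior) * p_hear_if_false
        return joint_true / (joint_true + joint_false)

    # A meme that spreads about equally well whether true or false
    # (likelihood ratio ~1) barely moves the posterior at all:
    print(posterior(1e-9, 0.50, 0.499))  # ~1.002e-9: an almost-null update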

A Bayesian non-SI might be vulnerable to Pascal's wagers so long as the
structure of the particular wager was well tuned. Indifferent to
infinities, it would need to see specific numbers, and information as to
how they were derived (a toy version is sketched below). Since it has no
self, the threat would have to be directed at its utility function, making
the nature of the threatening meme quite different from those which infest
humans. It could not be coerced into believing religious doctrines
"faithfully", because Bayesians don't believe or disbelieve in the manner
of humans. It could become an SI in order to build an "electronic monk"
capable of human-like belief, but in becoming an SI and examining
human-like belief and its associated concepts it would surely see the
causal origins of the meme and reject it. Finally, a Bayesian's belief
must, in general, contain a representation of the statement which is
believed. Consistently representing the content of a religious doctrine is
typically not possible, and human requests for such beliefs would be less
compelling to it than the courtship behavior of a mis-imprinted duck is to
a human: they would not have the structure of arguments, and would not look
like arguments to it.
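Here is that toy version of a finite wager, with entirely assumed numbers
(the utilities and probabilities are placeholders, not derived values):

    # A finite-stakes wager as a Bayesian non-SI might score it. The
    # posterior carries over from the weak evidence above; the parasite
    # cost prices in the policy of yielding to arbitrary unverified threats.
    p_claim_true = 1e-9            # posterior credence in the claim
    payoff_if_true = 1e6           # utility the wager promises
    cost_of_compliance = 1.0       # utility spent acting on the claim
    expected_parasite_cost = 50.0  # expected cost of advertising exploitability

    ev_accept = (p_claim_true * payoff_if_true
                 - cost_of_compliance - expected_parasite_cost)
    ev_reject = 0.0
    print(ev_accept, ev_reject)    # ~-51 vs 0.0: reject the wager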

Human religious memes utilize a non-Bayesian support structure.
Low-probability, high-importance statements are made. These statements are
coupled with coercion aimed at compelling acceptance. The low probability
of the statement is used to misappropriate evidentiary strength for the
religious leaders once the statement is accepted. In proper reasoning,
people do become stronger sources of evidence by making low-probability
statements, but only if those statements later receive confirmation, not
simply from those statements influencing the Bayesian's behavior due to
their potential importance. Unfortunately, humans are not set up to hold
beliefs probabilistically.
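The confirmation requirement can itself be stated in Bayesian terms. A
hedged sketch, again with assumed numbers (the reliabilities are purely
illustrative):

    # P(source is reliable | its low-prior prediction came true).
    def updated_trust(prior_trust, p_hit_if_reliable, p_hit_if_unreliable):
        hit_reliable = prior_trust * p_hit_if_reliable
        hit_unreliable = (1.0 - prior_trust) * p_hit_if_unreliable
        return hit_reliable / (hit_reliable + hit_unreliable)

    # A confirmed 1-in-100 call: a reliable source predicts it at 0.5, a
    # guesser at 0.01, so trust rises sharply, but only on confirmation.
    print(updated_trust(0.5, 0.50, 0.01))  # ~0.98
    # Mere acceptance of the statement licenses no such update.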

By the way, I see this as a major problem with CV. Starting an AI from a
Bayesian definition of truth may be necessary. Starting a person's volition
extrapolation from a definition of truth that the person could never
non-coercively be made to accept is a legalistic ritual, NOT respect for
the person's actual preferences. I suspect "we can't get there from here"
without coercion. Bear in mind that current beliefs are the result of the
coercion of genetic and memetic natural selection. You would definitely not
get convergence with humanity by extrapolating the volitions of worms, and
probably not by extrapolating those of chimps. More specifically, if you
found that the extrapolated volitions of humans and of worms converged,
this would be overwhelming evidence that you had made an error. You might
get convergence among humans, or might not, as similarities accrue, but it
is important not to take adaptationism and (imperfectly) derived adaptive
homogeneity too seriously as a description of the human condition. After
all, we are not at an evolutionary equilibrium, but are rather caught in an
evolutionary feedback loop with our technology and memes. If individual
volition extrapolation risks the creation of autistic super-babies, then it
is no surprise that collective volition extrapolation risks the same on a
species level.

At any rate, a religious meme that could mess up a Bayesian couldn't evolve
among humans, as there would be no selection pressure for it. It could be
designed, but any prospective designer would be able to build a new and
more powerful GAI in the first place, and so would not need such memes to
gain control of an existing one. Now, a Bayesian might contain non-Bayesian
intelligent agents in a "society of mind". Some of these agents might have
utility functions, though probably most would not. Certain data might lead
these agents to transfer data to other agents in a manner which differed
greatly from that intended by their designer, but at this point I have
abstracted the Pascal's Wager concept enough that it has become
unrecognizable and it is difficult to find shared symbolic referents for
its discussion. Suffice it to say that the problem seems no more worthy of
concern than the possibility that someone might plot against you by
exploiting optical illusions, or by torturing the authors of scientific
papers into writing misleading publications in order to deceive you, two
other classes of risk which exemplify the same generalized category of
problem.


