From: Samantha Atkins (firstname.lastname@example.org)
Date: Fri Jun 28 2002 - 22:53:21 MDT
Mark Walker wrote:
> ----- Original Message -----
> From: "James Rogers" <email@example.com>
> To: <firstname.lastname@example.org>
>>Of course, Pascal's Wager is a well-known (but modestly clever)
>>fallacy. Your analogous construction doesn't appear to do much better,
>>and for many of the same reasons.
> I know that many understand Pascal's argument to be fallacious; I have never
> heard of it referred to as a fallacy. (I've taught critical thinking at the
> university level, part of which occasionally studies fallacies like 'ad
> hominem', 'post hoc ergo propter hoc', etc.) My argument has a similar form,
> but it is hardly an exact parallel.
Well, perhaps not a listed fallacy but let's run through it:
1) If I believe and act on X and X is true then I obviously gain;
2) If I believe X and X is false then I lose very little;
3) If I don't believe X and X is true then I lose;
4) If I don't believe X and X is not true then I gain little.
The logic itself is empty because the relative validity of any
of the statements depends utterly on the qualities of X and on
the consequences, immediate and future, of belief and non-belief
in that particular X. It doesn't depend on the steps at all.
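The point that the four-step form fixes nothing can be made concrete by treating the wager as an expected-utility calculation. A minimal sketch in Python, where the probability and all payoff numbers are hypothetical illustrations, not values the argument itself supplies:

```python
# Sketch of a Pascal's-Wager-style argument as expected utility.
# The probability p and all payoff numbers below are hypothetical;
# the four-step form of the argument determines none of them.

def expected_utility(p_true, payoff_if_true, payoff_if_false):
    """Expected utility of a stance, given P(X is true) and the payoffs."""
    return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

p = 0.5  # assumed probability that X is true

# Wager as usually told: believing costs little, the upside is huge.
believe    = expected_utility(p, payoff_if_true=1000, payoff_if_false=-1)
disbelieve = expected_utility(p, payoff_if_true=-1000, payoff_if_false=1)
print(believe > disbelieve)   # True under these payoffs

# Same four steps, but an X where false belief is itself costly:
believe2    = expected_utility(p, payoff_if_true=10, payoff_if_false=-100)
disbelieve2 = expected_utility(p, payoff_if_true=-10, payoff_if_false=100)
print(believe2 > disbelieve2)  # False: the conclusion flips
```

Same steps, different payoffs, opposite conclusion, which is exactly the sense in which the form of the argument does no work on its own.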
In the case of belief in God it is not at all clear that (2) is
low-cost. It can be quite high-cost if it has detracted from
living life and from seeking to extend and enrich life as much
as possible for oneself and others. Also, (1) is not necessarily
beneficial unless a lot of things are true about how God
operates and about what manner of belief and/or action "wins".
The same can be said of (3). The comments on (2) also apply to
(4). So overall, the form of this kind of argument adds nothing.
> For example, in Pascal's Wager I only
> risk my soul, here everyone else is at risk. Another disanalogy is this: in
When applied to Friendly AI, it is not clear that concentrating
first on the Friendliness is the actual way to achieve AI, much
less Friendly AI. If we fail to accomplish AI at all, there is
a downside risk that, as our society and problems become
increasingly complex, they grow beyond our non-AI ability to
deal with them. So (2) has a potentially large downside.
> the case of God, the prevailing scientific opinion is that God does not
> exist, in contrast, there is very little agreement among scientists about
> how or when to implement Friendliness. Ironically, your post comes pretty
> close to the fallacy of guilt by association.
I was unaware that God was purported to dwell or exist within
any realm in which science has competence. It is not up to
science to pronounce an authoritative opinion on the subject.
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:00:22 MDT