Re: Friendly Existential Wager

From: Mark Walker (mdwalker@quickclic.net)
Date: Sat Jun 29 2002 - 06:29:22 MDT


----- Original Message -----
From: "Samantha Atkins" <samantha@objectent.com>
To: <sl4@sysopmind.com>
>
> Well, perhaps not a listed fallacy but let's run through it:
> 1) If I believe and act on X and X is true then I obviously gain;
> 2) If I believe X and X is false then I lose very little;
> 3) If I don't believe X and X is true then I lose;
> 4) If I don't believe X and X is not true then I gain little.
>
> The logic itself is empty because the relative validity of any
> of the statements depends utterly on the qualities of X and the
> consequences, immediate and future, of belief and non-belief in
> that particular X. It doesn't depend on the steps at all.

Agreed. (This is true of most arguments: changing the content of the
variables can affect their validity.) I guess I should have made it clear
that I cited Pascal not because I think his argument is correct (which I
don't) but because he was the first to engage in (or at least popularize)
this sort of thinking.
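
To make that concrete, here is a minimal sketch (Python, with made-up
payoffs and a made-up probability, since nothing in the wager's structure
fixes them) of the four cases as an expected-utility comparison. Plug in
one set of numbers and believing "wins"; plug in another and the very same
four-step structure recommends the opposite choice:

def expected_utility(p_true, payoff_if_true, payoff_if_false):
    """Expected utility of a choice, given P(X is true) and the payoffs."""
    return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

p = 0.001  # assumed probability that X is true (illustrative only)

# Pascal-flavoured payoffs for cases (1)-(4) above: huge gain or loss if X
# is true, trivial gain or loss if X is false.
believe = expected_utility(p, payoff_if_true=1_000_000, payoff_if_false=-1)
disbelieve = expected_utility(p, payoff_if_true=-1_000_000, payoff_if_false=1)
print(believe, disbelieve)   # 999.001, -999.001 -> believing "wins"

# Change the qualities of X (small stakes if true, a real cost of belief
# if false) and the same four steps recommend disbelief instead.
believe = expected_utility(p, payoff_if_true=10, payoff_if_false=-100)
disbelieve = expected_utility(p, payoff_if_true=-10, payoff_if_false=100)
print(believe, disbelieve)   # about -99.89, 99.89 -> disbelieving "wins"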

>
> > For example, in Pascal's Wager I only
> > risk my soul; here everyone else is at risk. Another disanalogy is this: in
>
> When applied to Friendly AI, it is not clear that concentrating
> first on the Friendliness is the actual way to achieve AI, much
> less Friendly AI. If we fail to accomplish AI at all, there is
> a downside risk that as our society and problems become
> increasingly complex they grow beyond our non-AI ability to deal
> with them. So (2) has a potentially large downside.
>

Yes, this is true and a good point. There are many more considerations than
went into my little post. One thing we would want to know is exactly how
inefficient (assuming that it is inefficient at all) concentrating on the
Friendly component would be. B.G. says it has some value but also says that
it is like "building castles in the air" (if memory serves). Other
considerations are things like worrying about nefarious groups stealing your
research. Less of a worry, obviously, if you have only done your Friendly
research; more of a worry if you have a neonate AI without the Friendliness
"bolt-on".

> > the case of God, the prevailing scientific opinion is that God does not
> > exist; in contrast, there is very little agreement among scientists about
> > how or when to implement Friendliness. Ironically, your post comes pretty
> > close to the fallacy of guilt by association.
> >
>
> I was unaware that God was purported to dwell or exist within
> any realm that science has competency in. It is not up to
> science to pronounce an authoritative opinion on the subject.
>

You may be right. My point relies simply on the fact that scientists in
general think they are right on this point.

Let me add that, to be tempted by the sort of argument I outlined, I think
it is necessary (but not sufficient) that one appreciate the enormous risks
and benefits associated with this task (which I believe E.Y. and B.G. do)
and also believe that we really don't know what we are doing when it comes
to building an AI. These sorts of wager arguments presume a certain amount
of ignorance; e.g., in the Pascalian case one has to believe that there is
not sufficient evidence for or against the existence of God. This already
precludes most atheists and theists, who believe that they have compelling
evidence one way or the other. As far as I can tell, both E.Y. and B.G.
reject this point about ignorance, i.e., they think they know enough about
how to build an AI, or at least enough to know when it is appropriate to
focus on the Friendly component, so of course they won't be tempted by this
argument. Let's hope they are right.

Mark

Dr. Mark Walker
Research Associate (Philosophy), Trinity College, University of Toronto
Editor-in-Chief, Journal of Evolution and Technology,
(www.transhumanist.com)
Editor-in-Chief, Transhumanity, (www.transhumanism.com)
Home page: http://www.markalanwalker.com


