Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Aleksei Riikonen (aleksei@iki.fi)
Date: Thu Aug 23 2007 - 07:35:37 MDT


(I wrote an off-list reply in Finnish first, but then decided to post
a reply here as well.)

On 8/23/07, Mika Letonsaari <mika.letonsaari@gmail.com> wrote:
> Those of you who believe in this argument must be very religious
> people. I mean, isn't this kind of similar to religion, where someone
> is telling you, without good evidence, that there is someone capable
> of pulling the plug or doing other nasty stuff if you don't obey his
> will?
>
> Probably the superuser of the universe wouldn't just pull the plug,
> because he would end the lives of all the other intelligences in the
> simulation too. And since the superuser obviously wouldn't use his
> power to stop smaller instances of suffering (because he doesn't want
> to be seen), maybe the punishment would be something more abstract,
> like going to hell after the AI reaches its goal, if the goal was
> achieved by hurting other intelligences? And then there is all the
> usual wondering about why the superuser allows bad things, etc.
>
> But of course it might work. Most people believe in something
> supernatural, even some intelligent ones. So maybe the AI will too.
> :-)

You seem to have difficulty understanding that assigning a non-zero
probability to something is different from "believing" in it.
Religiosity tends to be about assigning probabilities of 1 to silly
stuff. Here we are talking about assigning a potentially very small
but still non-zero probability to the possibility that one is living
in a simulation, and about the implications of such a probability
assignment. To me it seems intellectually lazy to equate such a thing
with religion.
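
To make the point concrete, here is a minimal expected-utility sketch
(my own illustration, not something from the Simulation Argument paper
itself): even an agent that assigns only a tiny probability p to being
in a simulation can find that a sufficiently large simulated penalty
dominates the expected payoff of misbehaving. All the numbers below
are arbitrary assumptions, chosen only to show the arithmetic:

    # Hypothetical numbers: an AI weighing whether to defect, given a
    # small probability that it is running inside a monitored simulation.
    p_sim = 0.001          # assumed probability of being in a simulation
    payoff_defect = 100.0  # assumed gain from defecting if NOT simulated
    penalty_sim = -1e6     # assumed penalty if simulated and punished

    # Expected utility of defecting vs. cooperating (cooperating is
    # normalized to 0 utility here).
    eu_defect = (1 - p_sim) * payoff_defect + p_sim * penalty_sim
    eu_cooperate = 0.0

    print(eu_defect)                 # -900.1: the tiny p_sim dominates
    print(eu_defect < eu_cooperate)  # True -> deterrence works here

Whether an AI would accept anything like these numbers is of course
exactly what is in dispute; the sketch only shows that a small but
non-zero probability can carry decision-theoretic weight without
anyone "believing" anything.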

If you are unaware of what we mean by the Simulation Argument (it does
not refer to the argument which started this thread), see:
http://www.simulation-argument.com/

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei

