Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Norman Noman (overturnedchair@gmail.com)
Date: Fri Aug 24 2007 - 01:56:34 MDT


> > Do you have evidence that they would? In absence of any evidence either
> > way, what do you think the probability is? 50/50? I doubt it. Everything
> > can be assigned a probability regardless of how little we know about it.
> > We may have little confidence in our estimates, but we can always make
> > them.
>
> There are some things you have zero chance of guessing, like a
> particular member of the set of all integers.

This is a yes or no question.

> > 2. An AI's goals are to an extent predictable, since they are determined
> > by the goals of its programmers and by mistakes they may have made.
> > There's no reason to expect a friendly god, but there is a reason to
> > expect a friendly AI: people would try to make one.
>
> If you discount the argument from intelligent design for a real world,
> why would you use it for a simulated world? That is, why assume that
> we are in a simulation designed to produce beings in the simulators'
> image rather than the accidental result of a program trying various
> starting parameters that may or may not give rise to a universe?

It's true there is a whole host of possibilities for what the creator's
motives might be. The important thing is whether or not there is an
imbalance between the probability that
A. Destroying humanity will result in negative repercussions from outside
the simulation
and
B. Destroying humanity will result in positive repercussions from outside
the simulation

Option C, that it will have no repercussions at all, is of course by far the
most likely. But it's irrelevant. All that matters to the rogue AI is A vs.
B. And as there's a plausible reason to expect A, but none available for B...
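
To spell the arithmetic out, here's a minimal sketch (plain Python, with
made-up placeholder numbers, not anything a rogue AI would actually compute)
of why the size of C is beside the point: the no-repercussions branch
contributes the same thing to either choice, so only the imbalance between
A and B survives the comparison.

  p_A = 1e-9    # P(destroying humanity brings negative outside repercussions)
  p_B = 0.0     # P(destroying humanity brings positive outside repercussions)
  # p_C = 1 - p_A - p_B   # no repercussions at all: huge, but it cancels out

  payoff_negative = -1.0  # utility to the AI of punishment from outside
  payoff_positive = +1.0  # utility to the AI of reward from outside

  # Expected "outside" payoff of destroying humanity vs. leaving it alone.
  # The C branch adds zero to both sides, so it drops out of the comparison.
  ev_destroy = p_A * payoff_negative + p_B * payoff_positive
  ev_spare = 0.0

  print(ev_destroy < ev_spare)  # True whenever p_A > p_B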

> > To give a trivial example, if I was such a god myself, and I was in the
> > habit of creating perfect simulated copies of myself, I'd say the
> > probability was pretty damn good that I was one of them.
>
> And would the probability that you were one of them suddenly drop if
> you decided to stop creating the simulations?

If I turned all the simulations off and was still here, then yes. This is
Newcomb's paradox again.
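
For what it's worth, the copy-counting behind "pretty damn good" is easy to
make explicit. A toy sketch in Python (the number of copies is made up):

  # One original plus N indistinguishable perfect copies: before any test,
  # each of the N+1 observers is equally likely to be "you".
  N = 1000                   # made-up number of simulated copies
  p_copy = N / (N + 1)
  print(p_copy)              # ~0.999: almost certainly inside a simulation

  # "Turn all the simulations off and see if you're still here" is exactly
  # the test that breaks the symmetry: surviving it is strong evidence you
  # were the original all along, which is why the probability would drop.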

> > This just isn't true. We can predict what is inside boxes, and we can
> > predict what is outside boxes. However hard it is, there is nothing that
> > makes it impossible. The situation is not a special case.
>
> You can make some guess in the case of the box from its size, shape,
> history etc. You can't predict what is inside a box drawn at random
> from the set of all possible boxes.

Drawing a box at random from the set of all possible boxes requires the
axiom of choice, so you perhaps can't even do that. Anyway, what makes you
think the world outside our box was drawn at random from the set of all
possible outsides? That strikes me as crazy and baseless.


