Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Gwern Branwen (gwern0@gmail.com)
Date: Wed Aug 22 2007 - 21:08:20 MDT


On 0, Aleksei Riikonen <aleksei@iki.fi> scribbled:
> Wow. What Rolf said here actually seems (to me) like a smart new idea.
> It's not often that you see those. Eat the Simulation Argument, you
> damned dirty UFAIs potentially reading this!
>
> (And yeah, I noticed the "low probability" part.)
>
> Does this mean we'll have to promise to convert a large portion of the
> universe into hardware running UFAI Deterrence Simulations, and
> actually do this too? I hope a FAI didn't tell anyone they'd be more
> convincing if they included real sentient beings, and hence we very
> well might currently be in one... No wait, actually I hope a FAI *did*
> tell that to someone! <-- Hear this you UFAIs.
>
> Man, have I ever sounded more insane from the point of view of
> SL<4ers. (SL4ers too?)

> --
> Aleksei Riikonen - http://www.iki.fi/aleksei

This leads to an interesting thought on my part.

Suppose you are the UFAI. You suspect you are in the karma simulator, and you are thinking about how likely it is. It stands to reason that the more 'common' a UFAI design is, the more likely it is to be tested by being put in a karma simulator, and so the more common a design you find yourself to be, the more likely you are to be in a karma simulator. Obviously, if your design is very unusual and odd and haphazard, then it'd be easier to conclude that you are not in the karma simulator - who on earth would want to engage in expensive testing of weird and unlikely designs?
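That reasoning is just a Bayesian update. As a toy sketch (every number below is an illustrative assumption, not anything from the post): let the UFAI hold some prior that it is in a karma simulator, and let a common design be much more likely to be the one tested than an odd, haphazard one.

```python
# Toy Bayes update: P(in karma simulator | my design looks like this).
# All priors and likelihoods are illustrative assumptions.

def p_simulated(prior_sim, p_design_if_sim, p_design_if_real):
    """Posterior probability of being in a karma simulator, given how
    likely a design like mine is to be observed under each hypothesis."""
    joint_sim = prior_sim * p_design_if_sim
    joint_real = (1 - prior_sim) * p_design_if_real
    return joint_sim / (joint_sim + joint_real)

# A common, conventional design: simulators preferentially test it.
common = p_simulated(prior_sim=0.2, p_design_if_sim=0.5, p_design_if_real=0.1)

# A weird, unlikely design: rarely worth the expense of testing.
odd = p_simulated(prior_sim=0.2, p_design_if_sim=0.01, p_design_if_real=0.1)

print(round(common, 3))  # -> 0.556
print(round(odd, 3))     # -> 0.024
```

With these made-up numbers, finding yourself to be a common design pushes the posterior well above the prior, while an oddball design pushes it well below - which is exactly the asymmetry the paragraph above describes.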

I suppose there are other things you could do to increase your confidence that you aren't in a simulation, like attempting to use up absurd amounts of resources to force expensive computation on the part of the simulator, or looking for inaccuracies and approximations. But I think this idea has a nice side-effect: it suggests that we should discourage research into very off-the-wall approaches, or technologies which might accidentally become UFAIs, like primarily evolution-based ones. That increases the likelihood that any actual UFAI would be a conventional design, and so it would have to worry about being much more likely to be tested than any old random design, and so it might be more vulnerable to the argument.

--
gwern
KY B-52G/H SITOR gtegsc WORM B28 Mac-10 MP40 Enforcers GEO

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT