Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Gwern Branwen (gwern0@gmail.com)
Date: Fri Aug 31 2007 - 15:04:21 MDT


On 0, rolf nelson <rolf.hrld.nelson@gmail.com> scribbled:
> Not sure I agree with everyone calling this "simulation"; it's more a
> "brain in a vat" scenario. I guess I'll go along with it, as long as
> everyone keeps straight that in the current proposals, it's the
> *environment* that's being simulated, not the AI itself.
>
> > Suppose you are the UFAI. You suspect you are in the karma simulator, and you are thinking about how likely it is. It stands to reason that the more 'common' a UFAI design is, the more likely it is to be tested by being put in a karma simulator, and so the more common a design you find yourself to be, the more likely you are to be in a karma simulator. Obviously, if your design is very unusual and odd and haphazard, then it'd be easier to conclude that you are not in the karma simulator - who on earth would want to engage in expensive testing of weird and unlikely designs?
>
> I don't agree. The reduction in probability that "someone would bother
> to simulate such an odd design" is offset by the reduced probability
> that "someone would bother to create such an odd design in the
> unsimulated world."

This is a little hard for me to think about, and I can't see how to assign any numbers or calculations to anything we've been saying, so I suspect this may be a fruitless line of inquiry. This issue begins to get into imponderables such as the probabilities of how an AI would come to be created. If an AI can only be created deliberately and with malice aforethought, then that seems to be a valid point, although I wonder whether this is like arguing about random numbers chosen out of a million - any particular number is very unlikely, but nevertheless one had to be chosen.

But if an AI could be 'accidentally' created - say, by haphazard evolving-program experiments, by brute-force explorations using supercomputer-level resources, or even in hurried slapdash situations (I understand that the possibility of an accidental AI is a particular worry of SingInst) - then the AI's design might betray information about its probable origin, such as the humans' expected likelihood of an AI being created. The designs humans would bother to deliberately test need not match the distribution of designs that arise accidentally, so the two probabilities no longer offset each other. In *that* situation, I think my above reasoning does apply.
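
To put rough numbers on it anyway, here is a minimal Bayesian sketch in Python; the prior and the two likelihoods are made-up illustrations of the two positions, not estimates of anything in this thread:

    # Toy Bayes calculation for P(in karma simulator | my design).
    # All probabilities below are illustrative assumptions.

    def p_simulated(prior_sim, p_design_if_sim, p_design_if_real):
        """Posterior P(simulated | design) by Bayes' theorem."""
        joint_sim = prior_sim * p_design_if_sim
        joint_real = (1 - prior_sim) * p_design_if_real
        return joint_sim / (joint_sim + joint_real)

    # Rolf's offset: testers sample designs from the same distribution
    # that produces real AIs, so the likelihoods cancel and the design
    # is uninformative (posterior equals the prior).
    print(p_simulated(0.5, 0.01, 0.01))    # -> 0.5

    # The accidental-origin case: an odd design is rarely worth
    # deliberate, expensive testing, but arises more often by accident,
    # so finding yourself to be an odd design is evidence against being
    # in the simulator.
    print(p_simulated(0.5, 0.001, 0.01))   # -> ~0.09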

--
gwern