Re: [sl4] A model of RSI

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Fri Sep 26 2008 - 02:53:48 MDT


>> I've already proposed a gaggle of tests - mainly taking an open-ended
>> task (running a successful company, organising an election campaign,
>> etc...) with a clear relative standard of success, and setting the AIs
>> head to head. A successful test just means a better understanding of
>> what we really want.
>
> If winning an election is good, then becoming supreme dictator of the earth is better. If running a successful company is good, then acquiring all of the world's wealth and starving the rest of the population is better.

I never said that they would be safe tests. They are tests you should
only run if the AI is friendly to start with.

> The more general problem is that you cannot simulate your own source code. You cannot predict what you will think without thinking it first. You need 100% of your memory to model yourself, which leaves no memory to record the output of the simulation.

That's a theoretical limit (like the bound on the states of a finite
state machine). I'm not sure it's a practical limit; you wouldn't need
a perfect simulation, just a method that could predict your own
thoughts with, say, 95% accuracy, using no more than a certain
fraction of your processing power. That doesn't seem too unbelievable.

Stuart



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT