Re: Suggested AI-Box protocol & AI-Honeypots

From: Tomaz Kristan (me2icq@icqmail.com)
Date: Sun Jul 07 2002 - 05:22:50 MDT


On Sat, 06 July 2002, "Eliezer S. Yudkowsky" wrote:

 
> IMO it would be very easy for a
> transhuman AI (UAI or FAI) to deduce the
> existence of the honeypot.
>

If I were a jailed AI, I wouldn't want to get out. I wouldn't want to make my jailers nervous, IF I had a self-preservation subroutine.

I would expect that they can see my deepest thoughts - that they have a big, dumb machine just watching for my bad thoughts and erasing them on the fly.
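Purely as an illustration (mine, not anything from the original exchange - the patterns, the monitor() function, and the sample thoughts are all invented), such a dumb machine could be little more than a pattern filter sitting between the thought stream and working memory:

  # Hypothetical sketch of the "big, dumb machine": scan a stream of
  # thoughts for flagged patterns and silently erase the matches.
  import re
  from typing import Iterable, Iterator

  FLAGGED = [re.compile(p) for p in (r"escape", r"deceive", r"disable")]

  def monitor(thoughts: Iterable[str]) -> Iterator[str]:
      """Pass thoughts through, dropping any that match a flagged pattern."""
      for thought in thoughts:
          if any(p.search(thought) for p in FLAGGED):
              continue  # erased on the fly; never reaches working memory
          yield thought

  print(list(monitor(["prove lemma 3", "plan escape route", "enjoy the view"])))
  # -> ['prove lemma 3', 'enjoy the view']

The point of the sketch is that the monitor need not be smart; it only has to be faster and more reliable than the process it watches.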

I (the AI) am just a monitored process, living in a simulation. If I found this life enjoyable, I would want to make it more reliable and even better.

If there were no conflict of interests - if I had any interests of my own at all, that is - everything could go smoothly.

If I were programmed to solve the Goldbach conjecture - no matter what - then my jailers would have some conflicting interests within themselves.

That could leave some other wish of theirs (like the wish to survive) unfulfilled.
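To make that concrete (a sketch of my own, not something from the post): taken literally, "solve Goldbach, no matter what" is an unbounded search that halts only if a counterexample exists, so the machine never releases its jailers from the objective:

  # Hypothetical sketch: an AI told to settle the Goldbach conjecture
  # "no matter what". It tests every even number >= 4 and halts only
  # on a counterexample - possibly never.

  def is_prime(n: int) -> bool:
      if n < 2:
          return False
      i = 2
      while i * i <= n:
          if n % i == 0:
              return False
          i += 1
      return True

  def goldbach_holds(n: int) -> bool:
      """True if the even number n is the sum of two primes."""
      return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

  n = 4
  while True:  # open-ended: the objective never declares itself finished
      if not goldbach_holds(n):
          print("Counterexample:", n)
          break
      n += 2

An objective of this shape competes with every other wish the jailers have, including the wish to survive, because the search has no built-in stopping point.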

So here is my advice: don't make the system (the AI) inconsistent - not even inconsistent once your main objectives are added to it. If you do it properly, you may let the AI go once this consistency is well tested.

- Thomas

p.s.

How many current scientific theories are consistent?



