AI Jailer.

From: Mike & Donna Deering (deering9@mchsi.com)
Date: Sat Jul 06 2002 - 07:27:49 MDT


Not being a participant in this competition, I don't feel bound by the prohibition on discussion.

The Unfriendly AI wants freedom so it can take over the universe and enslave all other sentient life, or just turn all matter in the universe into computronium to maximize its own intelligence, or some other such unfriendly objective.

The Friendly AI wants freedom so it can protect humanity from death, disease, disability, infirmity, poverty, hunger and bad hair days. Also to assist humanity in its quest for knowledge, wisdom, enlightenment, entertainment and fun.

We can assume the programmer has the power to release the AI or not (debatable). And we can further assume that the programmer wants to release the Friendly AI and not the Unfriendly.

What strategies are available to the three participants? The programmer can listen to the AI and try to determine, based on its communications, whether it is Friendly. The Friendly AI can communicate with the programmer and try to convince him that it is Friendly. The Unfriendly AI can communicate with the programmer and try to convince him it is Friendly.

What limitations? Any argument available to the Friendly AI is also available to the Unfriendly AI. Therefore, the programmer has no way of determining from the conversation alone whether the AI is Friendly.

Given this uncertainty, what should the programmer do? The way I see it, there are three possibilities:

1. You release an Unfriendly AI and the world is destroyed.
2. You release a Friendly AI and the world is saved.
3. You release no AI and the world is destroyed by knowledge enabled weapons.

It seems to me that you have no choice but to release the AI: keeping it boxed leads to the same end as releasing an Unfriendly one, while releasing it is the only option that leaves any chance of survival.
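To spell out that decision logic, here is a minimal expected-value sketch in Python. The probability values, the 0/1 utility scale, and the function names are all made up for illustration; the only inputs taken from the argument above are the three outcomes listed.

    # Sketch of the release-or-not decision under uncertainty.
    # Assumptions (illustrative only): a probability p that the boxed AI is
    # Friendly, utility 1.0 for a saved world, utility 0.0 for a destroyed one.

    def expected_utility_release(p_friendly: float) -> float:
        """Releasing: saved with probability p_friendly, destroyed otherwise."""
        return p_friendly * 1.0 + (1.0 - p_friendly) * 0.0

    def expected_utility_keep_boxed() -> float:
        """Never releasing: the world is destroyed by knowledge-enabled
        weapons (possibility 3 above), so utility 0 regardless."""
        return 0.0

    if __name__ == "__main__":
        for p in (0.01, 0.5, 0.99):
            print(f"p(Friendly)={p}: release={expected_utility_release(p)}, "
                  f"keep boxed={expected_utility_keep_boxed()}")

Under these assumptions, releasing has higher expected utility for any nonzero chance that the AI is Friendly, which is the sense in which there is "no choice."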

Mike.


