Re: Effective(?) AI Jail

From: Gordon Worley (redbird@rbisland.cx)
Date: Fri Jun 15 2001 - 12:31:27 MDT


At 12:55 PM -0500 6/15/01, Jimmy Wales wrote:
>Aaron McBride wrote:
>> Speaking of incrementally... would we really be trying to communicate with
>> an SI over a VT100?
>
>I think that the idea of the VT100 is that we may want to communicate via a
>medium that has the minimal bandwidth still permitting useful communication.
>
>To be even safer, maybe we should only let the SI talk to us in a very simple
>and slow binary code. 1 means yes, 0 means no. I go into the box, I ask a
>question, and light A or light B turns on as an answer.
>
>It'd be pretty hard to tell me a story that makes me cry with something like
>that. It'd be pretty hard to teach me a pro-SI religious fanaticism through
>an interface like that.

At the same time, such an interface is pretty limiting when it comes to
learning about the AI and fixing it if something goes wrong. When we
boot up a Friendly AI it might be mostly Friendly, but there will
probably be a few bugs to fix, and those will probably be missed if all
it can say is yes and no, true and false.

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP Fingerprint:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT