Re: Effective(?) AI Jail

From: Durant Schoon (durant@ilm.com)
Date: Tue Jun 19 2001 - 19:16:27 MDT


> From: James Higgins <jameshiggins@earthlink.net>
>
> The most likely scenario is that the SI will play nice for a period of
> time, unless it is so completely hostile that it is unable to hide its true
> motives. So you walk in with your .45 and it says all the nice happy
> things you want to hear, so you give Eliezer the thumbs up. So more people
> talk to this thing, over and over again. Then, possibly years later, it
> has much more freedom, is at least somewhat trusted and has much greater
> access to converse with people. Now it looks for the one person that it
> has a greater than 99% chance of convincing to let it out (or at least make
> it possible for it to escape).
>
> For this reason I don't believe it would ever be possible to prove that any
> given SI was friendly.

Let's say Eli is making two claims:

1) Friendly AI can be created.

2) Friendly AI can be created which cannot (to a very high degree
   of certainty) deviate from Friendliness.

Just to clarify your position: which of these (or both) do you consider
faulty (or even just suspect, in case you aren't thinking of any
particular problems)?

Or maybe your claim is different, and you're saying that neither (1)
nor (2) can be verified satisfactorily.

--
Durant Schoon


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT