Re: [sl4] Evolutionary Explanation: Why It Wants Out

From: Tim Freeman (tim@fungible.com)
Date: Fri Jun 27 2008 - 09:25:14 MDT


> From: "Stathis Papaioannou" <stathisp@gmail.com>
>>You would have to specify as part of the goal that it must be achieved
>>from within the confines of the box.

On Thu, Jun 26, 2008 at 6:32 PM, Tim Freeman <tim@fungible.com> wrote:
> That's hard to do, because that requires specifying whether the AI is
> or is not in the box.

From: "Vladimir Nesov" <robotact@gmail.com>
>If you can't specify even this, how can you ask the AI to do anything
>useful at all?

Tell the AI that some things have positive desirability and other
things have negative desirability, but make sure that the things
you're describing are really things you care about. "The AI" is not
really a thing you care about: if it copies its code onto new
hardware, or if it influences somebody, the resulting entity is acting
on the AI's behalf, but it's ambiguous whether that entity is part of
the AI. Thus wanting the AI to achieve things while "the AI" is
confined to the box is not a description of what you want.
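To make the contrast concrete, here is a minimal toy sketch (all names
are hypothetical, not drawn from any actual system): a desirability
function over outcomes you care about is well-defined, while crediting
only achievements made "from within the box" forces you to write a
predicate saying whether the acting entity counts as part of the AI,
which is exactly the ambiguous part.

from dataclasses import dataclass

@dataclass
class Outcome:
    cancer_cured: bool    # an outcome we actually care about
    acting_entity: str    # "original", "copy", or "persuaded_human"

def desirability(outcome: Outcome) -> float:
    # Well-defined: scores only things we care about, no matter which
    # entity brought them about.
    return 1.0 if outcome.cancer_cured else 0.0

def desirability_inside_box_only(outcome: Outcome) -> float:
    # Ill-defined: crediting only achievements made "from within the
    # box" needs a predicate saying whether the acting entity is part
    # of the AI.
    def is_part_of_the_ai(entity: str) -> bool:
        # A copy of the code?  A human the AI persuaded?  There is no
        # principled boundary to write down here.
        raise NotImplementedError("no principled boundary for 'the AI'")
    if is_part_of_the_ai(outcome.acting_entity):
        return 0.0   # the AI itself acted outside the box: no credit
    return desirability(outcome)

# The first scorer can be used as written; the second cannot, because
# it needs an answer to "is this entity part of the AI?"
print(desirability(Outcome(cancer_cured=True, acting_entity="persuaded_human")))  # 1.0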

>Almost everything you ask is a complex wish; a useful AI
>needs to be able to understand the intended meaning. You are arguing
>from the AI being a naive golem, incapable of perceiving the subtext.

I can ask the AI to figure out what I want and do it. That is not a
complex wish. My web site has a slightly buggy decision procedure for
this.
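
For concreteness, here is a toy sketch of that loop: infer preferences
from the human's observed choices, then pick the action whose outcome
scores highest. This is not the decision procedure from fungible.com,
just a hypothetical outline with made-up names.

from collections import Counter

def infer_preferences(observed_choices):
    # Crude estimate: the more often the human chooses an outcome, the
    # more the human is assumed to want it.
    counts = Counter(observed_choices)
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

def choose_action(actions_to_outcomes, preferences):
    # Pick the action whose predicted outcome the inferred preferences
    # rank highest.
    return max(actions_to_outcomes,
               key=lambda a: preferences.get(actions_to_outcomes[a], 0.0))

# Example: the human has repeatedly chosen tea over coffee, so the AI
# infers a preference for tea and picks the action that produces it.
history = ["tea", "tea", "coffee", "tea"]
prefs = infer_preferences(history)
print(choose_action({"boil_kettle": "tea", "brew_espresso": "coffee"}, prefs))
# boil_kettle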

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

