Re: CEV specifies who the AI cares about (was Re: Can't afford to rescue cows)

From: Tim Freeman (tim@fungible.com)
Date: Sat Apr 26 2008 - 12:52:33 MDT


On Thu, Apr 17, 2008 at 11:59 PM, Nick Tarleton <nickptar@gmail.com> wrote:
> http://www.sl4.org/wiki/CoherentExtrapolatedVolition

On Fri, Apr 25, 2008 at 10:31 AM, Tim Freeman <tim@fungible.com> wrote:
> CEV fixes who the AI cares about. Quoting directly from the cited article:
>
> >As of May 2004, my take on Friendliness is that the initial dynamic
> >should implement the coherent extrapolated volition of humankind.
>
> The AI cares about the extrapolated volition of "humankind", not the
> extrapolated volition of mammals or some other group.

From: "Nick Tarleton" <nickptar@gmail.com>
>The extrapolated volition of humankind could choose to extend the
>group. The selection of humankind is part of the *initial dynamic*,
>it's right there. If you fix humanity (or present humanity, or
>whatever) as part of the goal system/utility function, it will never
>change, because a rational agent resists changes to its
>supergoals/utility function.

I agree that rational agents resist changes to their utility function.

I'm not clear about the important difference between CEV and the AI's
utility function.

If the AI is going to act on CEV, then CEV is essentially its utility
function, and change to it will be resisted.
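
To make that concrete, here's a rough Python sketch of the two readings
I can come up with. All of the names in it (HUMANKIND,
extrapolate_volition, utility_over) are mine, invented for
illustration; none of them come from the CEV page.

    HUMANKIND = {"humans"}

    def extrapolate_volition(group):
        # Stand-in for whatever process extrapolates the group's
        # volition; suppose it decides to widen the circle of concern.
        return group | {"other mammals"}

    def utility_over(group, state):
        # Toy utility: count how many beneficiaries in the state are
        # in the group the AI cares about.
        return sum(1 for beneficiary in state if beneficiary in group)

    # Reading 1: "humankind" is baked directly into the utility
    # function.  A rational agent optimizing this resists changes to
    # it, so the set of beings it cares about never extends.
    def fixed_utility(state):
        return utility_over(HUMANKIND, state)

    # Reading 2: the initial dynamic extrapolates humankind's volition
    # once, and whatever group comes out becomes the utility function
    # the AI then optimizes -- which is then just as locked in as in
    # reading 1.
    def utility_from_initial_dynamic():
        cared_about = extrapolate_volition(HUMANKIND)
        return lambda state: utility_over(cared_about, state)

Either way, whatever function the AI ends up acting on is the thing it
will resist changing.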

Or maybe the intent is that the AI will do something else entirely. I
couldn't find much talk of behavior on the CEV page, which is
disturbing given that the purpose of the whole exercise is to describe
what we want the AI to do. If that other reading is the right one, what
does CEV claim the AI will be doing?

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

