Re: CEV specifies who the AI cares about (was Re: Can't afford to rescue cows)

From: Stefan Pernar
Date: Fri Apr 25 2008 - 18:20:12 MDT

On Sat, Apr 26, 2008 at 5:50 AM, Nick Tarleton <> wrote:

> On Fri, Apr 25, 2008 at 10:31 AM, Tim Freeman <> wrote:
> > On Thu, Apr 17, 2008 at 11:59 PM, Nick Tarleton <>
> wrote:
> > > Fixing who the AI cares about is over-specification. That's what the
> > > AI (in the CFAI model) or extrapolated volition (in the newer model)
> > > is supposed to figure out.
> >
> > >
> >
> > CEV fixes who the AI cares about. Quoting directly from the cited
> article:
> >
> > >As of May 2004, my take on Friendliness is that the initial dynamic
> > >should implement the coherent extrapolated volition of humankind.
> >
> > The AI cares about the extrapolated volition of "humankind", not the
> > extrapolated volition of mammals or some other group.
> The extrapolated volition of humankind could choose to extend the
> group. The selection of humankind is part of the *initial dynamic*,
> it's right there. If you fix humanity (or present humanity, or
> whatever) as part of the goal system/utility function, it will never
> change, because a rational agent resists changes to its
> supergoals/utility function.

I have been following the discussion from the sidelines, but I think it is time
to point you to an alternative to Eliezer's CEV.

You can find my paper on Friendliness, called 'Practical Benevolence - a
Rational Philosophy of Morality', at:

In it I combine Kantian moral philosophy with Darwinian evolution to form a
moral theory grounded in rational choice.

You might also be interested in a book I wrote on the subject, Jame5 -
a Tale of Good and Evil (ISBN 3000227091).

I put it up as a free download on

Looking forward to hearing from all of you.

Kind regards,


Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT