Re: [sl4] Potential CEV Problem

From: Vladimir Nesov
Date: Thu Oct 23 2008 - 05:35:41 MDT

On Thu, Oct 23, 2008 at 2:32 PM, Edward Miller wrote:
> I am assuming that to successfully determine the extrapolated volition of
> the human race, an AGI would need an enormous amount of additional
> computational power. Before the CEV is determined, I am assuming the AGI
> would be agnostic on the matter. Thus, its first task would be to acquire
> as much computing power as possible, and then CEV might turn out to be one
> of those not-so-great ideas.
> Even if it definitively ruled out killing everyone as the actual
> extrapolated volition, it is still possible it would choose to use all of
> our atoms to build its cognitive horsepower, simply because we are the
> closest matter available, and every Planck unit of time in a
> post-singularity world might have to be treated with care, as it has vastly
> more expected utility than our current meat-world (let's say 3^^^^3 utils).
> After it is done computing the CEV, even if it then decides to create
> simulations of humans, would that be the scenario we want? I can't figure
> out how this could be avoided, at least given the CEV description given on
> ... which could be my own short-sightedness.

CEV describes Friendliness: "what we want [to do with AI]". How to
achieve that is outside its scope; it is a separate question.

From the introduction:

"Friendliness is the end; FAI theory is the means. Friendliness is the
easiest part of the problem to explain - the part that says what we
want. Like explaining why you want to fly to London, versus explaining
a Boeing 747; explaining toast, versus explaining a toaster oven.
Friendliness isn't the hardest part of the problem, or the one we need
to solve right now, but all attention tends to focus on that which is
easiest to argue about."

Vladimir Nesov

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT