Re: 6 points about Coherent Extrapolated Volition

From: Michael Anissimov (anissimov@intelligence.org)
Date: Sun Jul 24 2005 - 21:18:59 MDT


Hi Eliezer,

A few quick questions on the CEV post - I notice that you've turned
"Collective Extrapolated Volition" into "Coherent Extrapolated
Volition" here. Is this a permanent jargon change, or are you just
using the term "coherent" to make some sort of point in this context?
Please explain.

Eliezer S. Yudkowsky wrote:

> 3. THE CEV WRITES AN AI. THIS AI MAY OR MAY NOT WORK IN ANY WAY
> REMOTELY RESEMBLING A VOLITION-EXTRAPOLATOR.

...though it's extremely likely that it would, right? In the broadest
sense, doesn't "volition extrapolation" basically mean "guessing what
people want"?

> 4. THE CEV RETURNS ONE COHERENT ANSWER. THE AI IT RETURNS MAY OR MAY
> NOT DISPLAY ANY GIVEN SORT OF COHERENCE IN HOW IT TREATS DIFFERENT
> PEOPLE, OR CREATE ANY GIVEN SORT OF COHERENT WORLD.

Of course, if it doesn't display any sort of coherence in how it
treats different people, or doesn't create any sort of coherent
world, that would be a failure, right? Is this statement being put
forth to help people distinguish the CEV from the AI it creates?

> 5. THE CEV RUNS FOR FIVE MINUTES BEFORE PRODUCING AN OUTPUT. IT IS
> NOT MEANT TO GOVERN FOR CENTURIES.

Though of course, there could be substantial mutual information
between the CEV and the AI it creates - correct? Though neither such
an AI nor the CEV that created it would "govern" in the
anthropomorphic sense, it would surely exert optimization pressure
upon the world. There are probably some people out there who feel
infinitely uncomfortable with the idea of a superintelligent AI, with
initial conditions set by a human programming team, creating changes
in the world, and who will hence object to any such proposal - but of
course this event seems basically unavoidable... I think it's
important to distinguish between people who object to *any* FAI
theory on the grounds that they haven't yet come to terms with the
reality of recursive self-improvement, and people who have already
accepted that superintelligent AI will eventually come into existence
whether we like it or not, and that it's merely our duty to set the
initial conditions as best we can. It's sometimes difficult to tell
the two groups apart, because people in group #1 may occasionally
pretend to be in group #2 for the sake of argument (which ends up
going nowhere).

> 6. THE CEV BY ITSELF DOES NOT MESS AROUND WITH YOUR LIFE. THE CEV
> JUST DECIDES WHICH AI TO REPLACE ITSELF WITH.

...but the CEV isn't explicitly being programmed to produce an AI as
its output - aye? The AI output rests on the assumption that our
wish, if we knew more, thought faster, were more the people we wished
we were, and had grown up farther together - where the extrapolation
converges rather than diverges, where our wishes cohere rather than
interfere; extrapolated as we wish that extrapolated, interpreted as
we wish that interpreted - would be to construct an AI that exerts a
sort of optimizing pressure on the world, making it a better place to
live? I would agree with this assumption - I just think it's
worthwhile to point it out explicitly for the sake of clarity.
Theoretically, the (extremely improbable) output of the CEV could
merely be a single object, like a banana, or something along those
lines, yes?
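
For concreteness, here's a toy sketch of the control flow I
understand points 3-6 to be describing - every name and all of the
Python scaffolding below are my own illustration, not anything from
the CEV paper: the CEV process runs once, returns a successor AI (or
nothing), and halts; only the successor goes on to act on the world,
and it need not itself be a volition-extrapolator.

from typing import Callable, Optional

def extrapolate(volitions: list[str]) -> Optional[str]:
    """Stub for 'guessing what people want, if we knew more, thought
    faster...' - returns a wish only where the volitions cohere."""
    return volitions[0] if len(set(volitions)) == 1 else None

def run_cev(volitions: list[str]) -> Optional[Callable[[], str]]:
    """One-shot: decides which AI (if any) to replace itself with."""
    wish = extrapolate(volitions)
    if wish is None:
        return None  # no coherent answer -> no successor AI
    # The successor need not extrapolate volitions at all; here it is
    # just a closure acting on the single extrapolated wish - which
    # could, in principle, be something as mundane as a banana.
    return lambda: f"optimizing the world toward: {wish}"

successor = run_cev(["a better place to live"] * 3)
if successor is not None:
    print(successor())  # the CEV itself has already halted by now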

-- 
Michael Anissimov                               http://intelligence.org/
Advocacy Director, Singularity Institute for Artificial Intelligence 

