Re: 6 points about Coherent Extrapolated Volition

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jul 24 2005 - 19:01:50 MDT


Russell Wallace wrote:
> On 7/25/05, H C <lphege@hotmail.com> wrote:
>
>>Eliezer's a pretty smart person. We can all agree on that.
>
>
> Definitely. Unfortunately this has ended up translating into a series
> of subjunctive existential threats, each more subtle and slippery than
> the last. On the bright side, it's certainly been an instructive set
> of exercises in thinking about existential risk.

Hold on a second. CEV is not a subjunctive planetkill until I say, "I think
CEV is solid enough that we could go ahead with it if I just had the AI theory
and the funding." I never said that.

With that stipulated, I don't see how you could possibly do better at getting
from point A to point Z on FAI theory than by going through, and then
rejecting, a series of theories you knew to be inadequate.

Though we disagree considerably on what the problems with CEV are. For
example, I think the problems with CEV include that it requires the
programmers to work blinded in cases where that may not be possible, perhaps
not even in theory; that it relies too heavily on a Last Judge for
error-checking; and that an implementation of CEV may still be too far from
the underlying motivations of FAI to be fixed in that way. This business
about K-selection or dictatorships still strikes me as a fairly basic
misunderstanding of how CEV works.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

