Re: [sl4] Potential CEV Problem

From: Toby Weston (lordlobster@yahoo.com)
Date: Thu Oct 23 2008 - 16:08:16 MDT


Just in case we do, deep down, want to kill all humans: perhaps we should add a hardcoded caveat to the friendliness function that puts all baseline, pre-posthuman Homo sapiens off limits to the AGI god's meddling. Let the Amish live, whatever happens.
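
A minimal sketch of what such a caveat might look like, treating it as a hard filter on the action space rather than as one more term to be traded off inside the utility function (every name here is hypothetical, not any actual CEV design):

    # Illustrative sketch only: baseline humans are screened out of the
    # action space before utility maximization runs, so no utility bonus,
    # however large, can buy a violation.
    def choose_action(actions, utility, affects, baseline_humans):
        permitted = [a for a in actions
                     if not any(affects(a, h) for h in baseline_humans)]
        if not permitted:
            return None  # refuse to act rather than violate the caveat
        return max(permitted, key=utility)

The design point is that a hard constraint sits outside the optimization entirely, whereas a mere penalty term could always be outweighed by a big enough payoff.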

Toby

On 23 Oct 2008, at 19:17, "Eliezer Yudkowsky" <sentience@pobox.com> wrote:

On Thu, Oct 23, 2008 at 3:32 AM, Edward Miller
<progressive_1987@yahoo.com> wrote:

> Even if it did for sure completely rule out killing everyone as the actual
> extrapolated volition, it is still possible it would choose to use all of
> our atoms to build its cognitive horsepower simply because we are the
> closest matter available,

It's not like the AI has a utility function of "compute CEV" and then
suddenly swaps to a utility function of "implement CEV", which is what
you're describing. "compute CEV" is a subgoal of "implement CEV" from
the beginning. An *extremely* fuzzy picture of what people want, the
sort you could get from browsing the Internet, would tell you that you
shouldn't kill them to compute what they want.
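
A toy sketch of that subgoal structure, with every name invented for illustration: information-gathering actions are scored by the same terminal utility as everything else, so there is never a phase in which "compute CEV" is pursued without regard for the people being modeled.

    # Toy sketch: one utility function throughout. "Gather data about
    # what people want" is just another candidate action, scored by its
    # expected effect on the terminal goal, side effects included.
    def expected_value(outcomes, terminal_utility):
        # outcomes: list of (probability, resulting_world) pairs
        # predicted for a candidate action
        return sum(p * terminal_utility(world) for p, world in outcomes)

Even an extremely fuzzy terminal utility assigns low value to worlds where the modeled humans have been disassembled, so gentler information-gathering actions win.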

> and every Planck unit of time in a
> post-singularity world might have to be treated with care as it has vastly
> more expected utility than our current meat-world (let's say 3^^^^3 utils).

3^^^3 can't happen given known physics.
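
For scale: in Knuth's up-arrow notation, 3^^3 is 3^(3^3) = 3^27, about 7.6 trillion, and 3^^^3 is a tower of 3s that many levels tall, dwarfing bounds like the roughly 10^120 elementary operations the observable universe could have performed. A small sketch of the recursion (don't actually call it with three arrows):

    # Knuth's up-arrow recursion; the function name is mine.
    def up_arrow(a, n, b):
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
    # 3^^^3 = 3^^7625597484987: unreachable given known physics.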

I think it's extraordinarily more likely that a superintelligence will
discover a road to infinite time and computation than that it will
discover a finite computation in which Planck-time increments make
exponential differences to the final total.

You're presuming that even in this case, we probably wouldn't-want to
be killed off. I might even agree. Hence CEV. Anyway, the way I
proposed structuring it, nothing that killed off humanity could happen
until the AI was essentially sure, not just suspicious subject to
later resolution, that this is what we wanted - that was part of the
point of computing "spread" and not just summing up expected
utilities. There are other issues having to do with Pascal's Wager that
I'm not going into here, but google "Pascal's Mugging".
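
A toy illustration of the spread point, with the threshold and all names invented: an irreversible act is gated on the extrapolated volitions nearly agreeing, not on the summed expected utility alone.

    from statistics import mean, pstdev

    # Toy sketch: permit an irreversible act only when the extrapolated
    # volitions nearly agree (low spread) AND agree the act is good; a
    # huge mean with wide disagreement does not pass.
    def permitted_irreversible(utilities, max_spread=0.01):
        return pstdev(utilities) <= max_spread and mean(utilities) > 0

    print(permitted_irreversible([5.0, 5.0, 5.0]))   # True: near-agreement
    print(permitted_irreversible([900.0, -800.0]))   # False: high spread

Summing expected utilities would approve the second case on its positive mean; gating on spread does not.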

-- 
Eliezer Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

