Re: Collective Volition, next take

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sat Jul 23 2005 - 15:03:29 MDT


On 7/23/05, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> Read some real evolutionary psychology. You might want to delete the words
> "good" and "evil" from your vocabulary while you do that, then restore them
> afterward, once your empirical picture of the world is correct.

*rolls eyes heavenward* Eliezer, if I wanted to write a textbook on
evolutionary psychology, I would do so. That's not what I'm trying to
do here. If you don't get the line of argument, then you don't.

> The problem here is the ratio of cubic expansion through galaxies to
> exponential reproduction. CEV doesn't solve this problem of itself, though it
> might search out a solution. Neither does CEV create the problem or make it
> any worse. Why do you suppose that we want to lock ourselves into a little
> box rather than expanding cubically?

By "locked into a box" I don't mean a bounded volume of physical space
- that's not the issue. I mean that everyone will be forced at all
times to follow the will of the Collective in every deed, word and
thought, with no escape, no refuge and no privacy; it doesn't matter
whether their material substrate is a meter or a gigameter away from
the center.

> Oh, and how exactly do you determine resource division between your domains?

The way I would suggest doing it is that each domain is allocated
(number of people in the domain) / (6 billion) of the total
resources - a straight per-capita share.
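
For concreteness, here is a minimal sketch of that allocation rule
(my own illustration, not anything from the CEV document; it assumes
fixed head counts per domain, a single divisible resource pool, and
made-up domain names):

    # Hypothetical per-capita split of a resource pool across domains.
    TOTAL_POPULATION = 6000000000

    def allocate(total_resources, domain_populations):
        """Give each domain a share proportional to its head count."""
        return dict((name, total_resources * pop / TOTAL_POPULATION)
                    for name, pop in domain_populations.items())

    # allocate(1.0, {"A": 3000000000, "B": 3000000000})
    # -> {"A": 0.5, "B": 0.5}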

> Okay. I think I may write you off as simply having failed entirely to
> understand the concept of _extrapolated_ volition writing an AI, as opposed to
> putting an AI in the hands of an immediate majority vote, a fairly common
> misunderstanding. I honestly don't know how I can make the distinction any
> clearer short of grabbing people by the shirt and screaming at them.

Grabbing by the shirt and screaming is exactly what I feel like doing
right now! Let me try once more:

*It doesn't matter whether it's extrapolated or not!!!* [1] Someone
who wants to trample a human face when his IQ is 100 is still going to
want it when his IQ is 200; he'll just be more effective at finding
ways to get what he wants.

*This is the bit you keep refusing to understand.* I know you're smart
enough to understand it if you tried, but you keep half-believing in
the ghost of the old "intelligence => morality" idea. I'd feel a damn
sight safer if you went back to completely believing in it and just
working on unfriendly AI. At least that way the worst-case scenario
is merely that we're all dead.

Let me put it another way:

Your hyperlibertarian Sysop scenario was effectively domain protection
with a domain size of one person, which has the obvious problems
previously discussed.

With CEV, all of humanity is forced into the same domain.

What objection do you have to allowing the intermediate position of
multiple domains? I know you don't believe it's necessary. How certain
are you that you're right? If I'm right and you're wrong, you're
trying to bring about something worse than the extinction of all
sentient life. How do you get to be so sure of being right that you
insist on putting all our eggs in the one basket?

[1] That's not quite true - unextrapolated would actually be better,
since there'd be a better chance of a mistake resulting in an outright
planet kill, which would be an improvement on the CEV future.

- Russell


