Re: Collective Volition, next take

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sat Jul 23 2005 - 13:14:10 MDT


On 7/23/05, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> I'm not sure what you mean by "moral axioms". Human goal systems don't
> decompose cleanly and orthogonally into moral axioms + everything else. If
> they did, my life would be a lot simpler.

Okay, replace "axioms" with "premises" or "core values" or whatever
you think doesn't imply such an orthogonal decomposition.

> In CV - which, by the way, I really should have called "Collective
> Extrapolated Volition" - I called for defining a family of enhancements
> applicable to abstractions of human minds and human society, such that the
> extrapolation of abstract interacting enhanced humans could get far enough to
> return a legitimate answer to the question, "What sort of AI would we want if
> we were smarter?"

I know, that's where the problems started.

> Look, from the outside - to anyone who's not on the SIAI programming team -
> what the programmers are doing (forget about how they do it) is supposed to be
> intuitively simple.

Indeed. Remember those lectures you used to give people about the
whirling razor blades and Nature not being obliged to warn them before
it kills them? I can't do it quite as eloquently as you did, but the
fact remains that what you have your shoulder against is the gates of
Hell; they were there last week even though I hadn't seen them yet,
and they are there today even though you haven't seen them yet.

> I frankly do not
> understand exactly where you think an error inevitably occurs in this
> framework.

The error occurs at the point where you think smartness compensates
for trapping everyone in the same sealed box with no moral protection.

> Are you afraid of getting what you want? Are you afraid that most
> other people want something different (if so, why should SIAI listen to you,
> not them?) Or are you worried that building a Collective Extrapolated
> Volition as the fleshed-out, real-world implementation of the question mark
> inherently defines 'wanting' in some sense other than the intuitive, the sense
> in which you don't 'want' the future to be a giant ball of worms or whatever?
> You've got to mean one of those three and it's not clear which.

What people want depends on the circumstances.

Two people free to walk away may want to chat amicably or engage in
voluntary trade; trapped in a sealed box for long enough, they may
want to kill each other. Solution: don't trap them in a sealed box.

The German people under democracy didn't want to commit genocide;
under dictatorship they did. The exact same people in both cases, mind
you. The solution isn't to say "why should the SIAI listen to you, not
the German people?"; it's not to have a dictatorship.

> > In reality, a glut of intelligence/power
> > combined with confinement - a high ratio of force to space - triggers the
> > K-strategist elements of said axiom system, applying selective pressure in
> > favor of memes corresponding to the moral concept of "evil". (Consider the
> > trend in ratio of lawyers to engineers in the population over the last
> > century for an infinitesimal foreshadowing.)
>
> Dude, what the *heck* are you talking about?

Okay, in plainer language... are you familiar with the K-strategist
versus r-strategist distinction in biology? Roughly: r-strategists
flourish where space and resources are abundant, while K-strategists
are what selection rewards once a population is pressed up against
the limits of its environment.
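
For concreteness, here's a toy density-dependent model of the effect
I'm pointing at. It's purely my own back-of-the-envelope sketch - the
lineage names, growth rates and crowding sensitivities are invented
for illustration, nothing more:

# Toy density-dependent selection model. Two lineages share one habitat
# of carrying capacity K: the "r-type" breeds fast but suffers badly
# from crowding, the "K-type" breeds slowly but tolerates crowding.
# All numbers are made up purely for illustration.

def simulate(K, horizon=5.0, dt=0.01):
    r_pop = k_pop = 10.0                 # starting populations
    r_rate, k_rate = 1.0, 0.3            # intrinsic growth rates
    r_crowd, k_crowd = 2.0, 0.2          # sensitivity to crowding
    for _ in range(int(horizon / dt)):
        density = (r_pop + k_pop) / K    # how full the habitat is
        r_pop += dt * r_pop * r_rate * (1.0 - r_crowd * density)
        k_pop += dt * k_pop * k_rate * (1.0 - k_crowd * density)
        r_pop, k_pop = max(r_pop, 0.0), max(k_pop, 0.0)
    return r_pop, k_pop

for K in (10000.0, 50.0):                # spacious habitat vs. sealed box
    r_pop, k_pop = simulate(K)
    winner = "r-type" if r_pop > k_pop else "K-type"
    print("carrying capacity %7.0f: r-type %8.1f, K-type %6.1f -> %s comes out ahead"
          % (K, r_pop, k_pop, winner))

Shrink the habitat - raise the ratio of force to space - and selection
flips from the fast-breeding r-type to the crowding-tolerant K-type.
That, applied to memes rather than genes, is the pressure I'm saying a
sealed box exerts.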

> Is this what you think would inevitably happen if, starting with present human
> society, the average IQ began climbing by three points per year? At what
> point - which decade, say - do you think humans would be so intelligent, know
> so much and think so quickly, that their society would turn utterly evil?

Starting with present human society, create a world government with
absolute knowledge and absolute power, capable not only of seeing into
people's homes a la 1984, but into their very thoughts; with no
Constitution (you don't want any hardwired protections, after all) and
no escape, ever (nobody gets to opt out of CV). Don't you find it at
all reasonable to suggest that society would turn utterly evil very
quickly?

- Russell


