No it's not (was: The collective 'volition' project is arbitrary and idiosyncratic)

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Jun 15 2004 - 22:15:32 MDT


Philip Sutton wrote:
> Hi Eliezer,
>
> You are fully aware that if you create a coercive collective 'volition'
> machine, you have to get it right the first time or we're all screwed for
> eternity. I agree with you on this.
>
> But is it possible to get it right first time in practice or in theory?
>
> I believe that it is not, for many reasons. But there is one foundational
> reason that, as far as I can see, cannot be got around.

Mm hm. So if I easily loop around it, you'll concede that you should never
declare anything impossible until you've spent at least a month, or, heck,
five minutes, trying to solve it?

> Your whole concept is based on the notion of extrapolating the
> collective will of *all humans*. But what if it's meaningless to frame the
> issue in terms of a single human collectivity?
>
> Humans certainly exist in collectives - family, organisation, city, nation,
> etc. etc. But these collectives are fluid. People come and go from the
> collective. Over time the collectives that people organise themselves
> into change - get bigger or get smaller, change membership.

A fallacy of verbal thinking again. The "collective" in "collective
volition" doesn't refer to a specific human social grouping. It refers to
the fact that the extrapolation includes supra-individual dynamics such as
people talking to one another.

> The collective 'human' is in fact not a real thing - it is a very useful
> scientific abstraction that groups people together based on common
> evolutionary history and the current fact that they can interbreed. It is a
> taxonomic concept, not a physical thing. In the past there were several
> species of humans - Homo sapiens, Neanderthals, Cro-Magnons,
> etc. If you were doing the job of creating a coercive collective 'volition'
> machine when the others were around would you have included all of
> these human species or just homo sapiens?

I don't have to answer this because at present, I can get away with using
just the six billion genetically human individuals for the *initial
dynamic*. The successor dynamic may or may not include chimpanzees, I
don't know.

> And if you were doing the
> job 1000 years into the future when humans have spread into the solar
> system and perhaps the galaxy and they had morphed through
> technological change into a vast variety of types - some uploaded,
> some physically manifest, some hybrid, some....(I don't know - fill in
> your favourite amazing ways that we could be). Are all these entities
> humans?? Should they all be governed by the coercive collective
> 'volition' machine?

I'd hope not. I view a collective volition as a temporary patch (or, as a
matter of fact, a pointer to a temporary patch), not a long-term solution.
Incidentally, the frantic alarmist terms are not helpful.

Not to mention, you haven't explained what you mean by "coercive".

Does the initial dynamic contain the potential to write a secondary dynamic
in which human infants grow up to be humans even if their directly
extrapolated individual volitions would contain no reference to this which
we regard as their destiny? Yes. Does the initial dynamic contain the
potential to write a secondary dynamic in which heroin addicts would wake
up one day with their addiction gone even if that wasn't in their initial
volition? Yes. Does the initial dynamic contain the potential to write a
secondary dynamic in which the entire human species is transported into an
alternate dimension based on a hentai anime? Only if that's a really good
idea for some nonobvious reason (or, perhaps, the obvious reason), or if I
screw up the initial dynamic. Present possibilities for catching this
include a Last Judge and some other things I'm thinking through.

I'm sorry that you're alarmed over this, but someone or other was bound to
be alarmed the moment "Friendliness" became specific enough and detailed
enough to alarm people. I do think you've misunderstood some things. But
if you want to persuade me, the rarest qualities I know of, the ones that
would cause me to sit up and pay attention, are moral caution and concrete
alternatives. At the same time, mind you. You have to propose a concrete
alternative that is morally cautious - that doesn't force me to make
irrevocable decisions for humankind with no opportunity for a humane
superintelligent veto.

> And what about other AGIs? And what if we find
> sentient advanced life somewhere else in the universe? And what
> about dolphins (if we gave them voice control over robots we might find
> that they could quickly evolve into advanced sentients [as we understand
> it] too.)

I don't have to answer these questions. Though I'd certainly like to know
some of the subproblems before I need to make certain choices.

> But you might say that I'm being fanciful and not dealing with the
> present need.
>
> However, the very act of trying to create a coercive collective 'volition'
> machine might cause the taxonomic fiction of humanity to choose to
> break into two groups (that would then evolve down entirely different
> paths):
>
> - those willing to subject themselves to the coercive collective
> 'volition' machine
>
> - and those that do not agree to subject themselves to the coercive
> collective 'volition' machine
>
> Why does this matter? Because the output from a coercive collective
> 'volition' machine will vary (even if it can actually do what is claimed for
> it in terms of objectively reading and extrapolating the collective
> 'volition' of a certain group of humans) according to the specific sample
> of humanity that it extrapolates.

Possibly. Though, I suspect, far less than many seem to believe; the only
variance that seems likely to me is between male and female poles in the
collective volition, since those are the only subclasses of "human" that
differ psychologically by entire complex adaptations.

Again, I plan to target the initial dynamic on the six billion existing
genetically human individuals. This is an obvious solution and I can
presently imagine absolutely no acceptable reason to modify it.

> And since the choice of which humans to extrapolate is entirely
> arbitrary (being idiosyncratically chosen by one person, i.e. you) the
> output is entirely arbitrary too.

Hah. I suppose that saying "everyone" is an idiosyncratic personal
decision, and yet somehow it just doesn't feel that way.

> Just to make it clear, I personally don't want to be controlled by your
> coercive collective 'volition' machine.

Well, gee, neither do a whole lot of people. *I* don't want to be
controlled by a collective volition. I suspect that virtually *no one*
wants to be controlled by a collective volition. And yet apparently "chaos
theory", or some such, prevents me from predicting that our collective
volition will not be to be controlled by our collective volition.

I also don't want to be turned into paperclips. This requires a humanely
directed SI of some kind in our solar system. This SI is not going to be
directed by a human-level intelligence. It's too dangerous. Not Russian
Roulette dangerous. More like Reverse Russian Roulette dangerous.

> If you insist on including all
> 'humans' in the extrapolation and the regime of coercion then I hereby
> declare that (using your example of changing the use of words) I no
> longer wish to be identified as a 'human'. I wish hereafter to be known
> as a 'person'. :)

If you seriously don't want your volition included in the collective
volition, at all, and it's the sort of decision you'd stick to after a few
years' thinking (which is really hard for me to imagine), then I suppose your
volition would not be included in the collective volition. Actually,
that's an interesting question. Whether it's likely to work that way, or
guaranteed to work that way, depends on the order of evaluation. I think
it would *probably* end up being guaranteed to work that way. Not sure,
though.

The SI that is the decision function that is the collective volition would
still have the capability of deciding what to do with you. That's the way
the optimization process is written. Don't confuse capability with
probable intent. There would have to be (I keep on saying this) a
*reason*. Otherwise we're back to, "What if super-Gandhi goes around
letting the air out of people's tires?"

Did you read "Collective Volition", at all? If people are commenting on
this without having read it, *I'm* going to come round and let the air out
of your car's tires. This is discussed, extensively, in PAQ 4 and other
sections, and you haven't addressed any of the reasons I gave for why I am
*prohibited*, both technically and morally, from writing a Bill of Rights.
You are asking me to do something every bit as morally dangerous as, say,
starting a communist revolution. It would help if you said something like,
"I realize this is incredibly, terrifyingly dangerous, like you extensively
describe in PAQ 4, but I want you to write in a Right that says..."

Write in a Bill of 10 Rights, and there'll be at least 3 Wrongs.

> I and any other 'people' I band together with, I'm sure, will be very
> happy to cooperate with you and your 'humans' on projects to prevent
> the world being turned into grey goo or any other such nasties, but any
> relations that I and other 'people' have with your 'humans' will have
> to be based on negotiation and collaboration and not on coercion.
> Should you try to exercise coercion, I and the other 'people' will resist.
>
> Can you see what I'm getting at?

I'm not coercing anything. I am refusing to rule out, on my own authority,
the possibility of infants growing up into humans, which is, like it or
not, a case of coercion.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

