From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Tue Jun 15 2004 - 19:53:37 MDT
> A) Think that humans are too awful for their volitions to be
> extrapolated. (What am I supposed to extrapolate instead?)
> B) Think that people's present-day decisions are just fine, and this
> whole volition-extrapolating thing is unnecessary. (Are you absolutely
> sure about that? You would go ahead and do it even if you knew that with
> another few years to think about the subject, you would change your mind
> and be horrified at your previous decision?)
I would say this polarity completely misconstrues much of what people
are saying. I doubt that many people on the list think that humans have
no redeeming features (pole A) or that humans are perfectly fine as
they are (pole B).
For example, in my own case I have devoted the last 30 years or so to
trying to change the ways that humans act. I *don't* think that a lot of
what humans do is great. I want to see humans change quite a lot of
the things they do, and I want to see those changes happen.
But I think it's very important *how* you go about changing things.
You seem to have a very strong dislike of 'politics' - seeing it as
irrational, nasty, etc. Quite a bit of 'politics' is like that, of course. But
to choose not to understand political processes is to turn your mind
away from critical *knowledge*, and is therefore a good way to make
your ideas unworkable and vulnerable.
If you do not have an effective powerbase amongst humans, then you
will find that David Duke's prognostication could well come true:
"There's a good chance this concept of yours could get some very
bad press - and worse - governmental/police/military/whatever
intervention." That is, if terrorist and crime groups don't get to your
system first so that they can use it for domination.
Also, 6 billion people (quite powerful general intelligences, for all their
faults) feeling tetchy about your proposed coercive collective volition
machine can form a very powerful swarm that could halt the project
and the machine even if it has become very powerful (but not yet
omnipotent). (e.g. a very clever human can still be killed by a swarm of
very stupid viruses.)
But getting back to the theory.
Somebody's will belongs to themselves - it only has meaning if it is
manifested by *that* person. A collective will can only exist if a
collective of actual people form a group decision.
Anything else is simply a guess.
I can't know what my future will will be till I become my future person.
I might try to guess my future will and then act on it. But by that action
I convert my guess at my future will into my actual current will.
If anyone else tries to guess my future will, or the future will of a
collective of humans, all they are doing is guessing. If they impose this
guess they are dictating to real people who have a real will of their
own.

Most people don't like being dictated to. And many people resist this
imposition. There are, however, a few cases where people agree to the
dictation of the group interest over their personal will, and that usually
relates to issues of security and safety. But most people like to be able
to directly influence or control the process by which they cede control,
and they like to have the ability to withdraw the delegation if it is being
exercised in a way that they don't approve of.

If an FAI is to be acceptably involved in coercion, it should be as part
of a process whereby real people exercise their actual current will to
empower that FAI to act in certain circumscribed ways and
circumstances. And this granting of power should be reversible or
modifiable by due process.