Sysops, volition, and opting out

From: John Stick (johnstick@worldnet.att.net)
Date: Mon Aug 06 2001 - 16:52:44 MDT


For the sysop to perform the tasks set out by Eli and Brian, it will have to
be more ambitious than some of the suggestions here. If it is going to
protect against dangerous uses of nanotech (not just grey goo scenarios but
some manipulations of genetics and brains), sysop space is not just
computronium, it is the entire known universe. And if it is going to
protect the inviolability of your self, whether code or wetware, as well as
protect your property and keep you from being infiltrated by tainted programs
or data that would deprive you of your volition (see the "you can't keep a
bad AI down in a black box" thread), there is no doubt that the sysop is
simply the new government of the known universe. Like all governments, it
will outlaw some forms of destructive activity, define and strongly protect
rights in your person, define and protect property rights in computing
resources and perhaps real world resources, and set up rules for voluntary
transactions such as prohibiting fraud and mandating disclosure of dangerous
effects of programs and data. In sorting out what a sysop would likely do,
it might help to separately consider its various functions: What activities
will it prohibit and use force to prevent, what protections will it give
your person, what scheme of property rights and markets will it establish,
what activities will be permitted only if regulated, what disclosure rules
will be enforced, where will the sysop independently provide information and
warnings even if it doesn't mandate disclosure from one of the transacting
parties...

Some people won't like the sysop for just this reason. They were hoping the
singularity would drive governments to extinction. But using an innocuous
term like "Unix scenario" will not fool many people, and it will keep you from
squarely addressing the need for the sysop (if need there be), the
protections it can give, and the ways it can be made friendly.

As for opting out, there actually is a little hope for James Higgins.
Partial opt-out provisions are inevitable. Protections against nanotech
disaster will be mandatory, but some of the schemes for disclosure, property
and markets need not be. If citizens after the singularity differ much more
than contemporary humans in design and mental ability, there will likely be
different regimes of protection and disclosure for different classes of
citizens. As the sysop could in some respects tailor protections and
disclosures for each individual, there is the possibility of individuals
disclaiming some protections. And if someone wants to get in a spaceship
and head out for the frontier (and thus disconnect from the higher bandwidth
aspects of the web), it is very likely that some of the sysop's functions
would no longer apply.

Using protection of volition as the moral underpinning of the sysop's
activity is fair enough, so long as it is understood as a gesture in a
general direction, rather than anything more specific. Both Kant and Mill
can be understood as using protection of volition as the foundation of their
moral theories, but they differ on many issues. (Even Ayn Rand might be put
in that group, if one thought her writings were rational, moral, or
philosophical. I don't, but some here do, and agreements on practical
issues among these three will not be all that plentiful.) Gordon Worley's
attempt to defuse situations where volitions conflict by using an
active/passive distinction (doesn't that capture the "doing/done to"
language?) does as well as most attempts: it solves some but not nearly all
situations (at the cost of changing a protection of volition theory to a
protection of justifiable volition theory where the "justifiable" does much
of the work). But hey, if thinking through morality were that easy, we
would have one less reason for developing more than human intelligence.

John Stick



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT