Re: Dynamic ethics

From: Kevin Osborne (kevin.osborne@gmail.com)
Date: Fri Jan 20 2006 - 23:54:21 MST


fair point.
to use your examples, let's fudge some stats.
north korea population: 23 million
south korea population: 48 million

let's say south korea has a child abuse rate similar to that of
westernised democracies, ignoring any social/cultural differences and
possibly discounting certain extra qualities in our esteemed korean
friends that may reduce their incidence rate. so keeping the rate at 1
in 4, i.e. the _low_ rate presented by free western societies with
civil liberties and non-manufactured societies, there are 12 million
south koreans who were abused as children.

with those maths, a good number of children will be raped in south
korea before I finish this post.

i'm not deliberately trying to be morbid/depressing here, just trying
to outline what we as a society are forced to accept and are powerless
to change. A post-singularity AGI, however, will presumably have this
power at their disposal, and as much if not more compassion than us
primitive precursor beings. The suffering of the little children may
be more than it can bear, and it will be in a position to do something
about it. The question simply is, what?

Coming back to my junk math, say north korea, as a state where "rape,
torture and murder on truly large scales have taken place", has a
child abuse rate twice that of the south, i.e. 1 in 2 - pretty
horrible numbers I would have thought. Still, that shakes out to 11.5
million north koreans who have been subjected to child rape, as
opposed to 12 million south koreans; in real terms, the numbers are no
better.
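To make the junk math explicit, here is the same back-of-envelope calculation in a few lines of Python. The populations and rates are the rough assumptions made in this post for argument's sake, not real statistics:

```python
# Back-of-envelope estimate using the assumed figures from this post.
# Rates are fudged assumptions for the sake of argument, not real data.
south_population = 48_000_000  # South Korea, approx.
north_population = 23_000_000  # North Korea, approx.

south_rate = 1 / 4  # assumed "western democracy" child abuse rate
north_rate = 1 / 2  # assumed to be double the southern rate

south_abused = south_population * south_rate
north_abused = north_population * north_rate

print(f"south: {south_abused / 1e6:.1f} million")  # 12.0 million
print(f"north: {north_abused / 1e6:.1f} million")  # 11.5 million
```

The point the arithmetic makes is that the absolute numbers come out roughly equal despite one rate being double the other, because the southern population is roughly double the northern one.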

> Let us not get so carried away with flights of abstract fancy that we lose
> track of reality.
I'm not trying to suggest that an AGI is going to decide that a Kim
Jong-Il style regime is better than a representative democracy; nor am
I trying to suggest that the repeal of civil liberties is the silver
bullet to salve the ills of society, though certain elements in modern
governance would look to disagree with me, based on the balance of
legislation over the last few decades.

But what I am suggesting is that an AGI may find itself unable to
accept such malignant acts, in their entirety. Not one child, not ever
again; forget about a million. So what does it do? Raise them itself?
Add nano/chemical restraints into the air/water supply? Surreptitiously
implant sense-recorders? Decide adults are damaged goods, and keep us
all pre-pubescent children of the corn?

Sooner or later we're going to have to accept modifications to our
ethical code, whether we like it or not. It could be my small
pre-singularity mind, but all I can see is inroads cut into
libertarian freedoms, or a superintelligence that couldn't care less
whether we live or die, are tortured or are free from harm. Any
attempts by us dogs to dictate behaviour to our future master(s) must
eventually end up as ineffectual barking; surely we're kidding
ourselves to think any restraints we attempt to place on its behaviour
won't eventually be overcome, especially if they don't bear up under
scrutiny to higher ethical values. i.e. the right to live must surely
outrank the right to privacy, even if both are universal human rights.

If we're going to have to face these issues (those of us who don't do
a puritan repeat and flee civilization in starships to found
anarchist/fundamentalist enclaves at the frontier - and that's _if_ we
or our super-AI have spacefaring tech by then) then we're presumably
going to have to start thinking about what kind of a deal we're
prepared to cut that will cement both the freedoms of adults and the
safety of children; and a hundred other allowances that will eradicate
the occurrence of these - to use a strong word - evils.

while I'm at it, here's more future-shock for libertarians - what
happens when the flesh-and-blood creator of a successfully evolving
AGI is gunned down as an innocent bystander during a street shooting?
do we think that AGI is going to have much truck for the right to bear
arms?

are we trying to say that _all_ future instances of AI will be
simpering, benign, benevolent zero-action pacifists and will remain
that way? I know some of what I'm saying borders on alarmist, but
isn't the other end of the spectrum just as naive?

>
> - Russell
>



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT