From: Kevin Osborne (firstname.lastname@example.org)
Date: Fri Jan 20 2006 - 07:47:02 MST
I would agree that a pre-transcend AGI would feel compelled to enforce
an ethical code that we may not agree with. I doubt, however, that it
would agree to accept differing sets of ethics across societal groups,
insofar as it would never accept certain boundaries being crossed by
anyone.
There are certain ethical absolutes that if not universally agreed
upon are universally accepted as engendering a retaliatory backlash
from the stewards of power. These stewards have their own ethical
guidelines that force them to respond to such acts and to not turn a
blind eye. In this way they rewrite the ethical codes of those they
dominate. Carrying that through, future superior intellects will
rewrite our moral code, where necessary, via acts of force.
from the original post:
> We will need a multitude of ethical codes, because a human can make
> subtle distinctions that a dog can't. Any code capable of being
> implemented by a dog, could be improved on by a human.
Carrying the logic of the last half of this statement through, an AGI
is eventually going to find an unguessable portion of human practice
ethically unsound. We're not ethically OK with letting dogs eat
dead human babies in front of their mothers, even if the dog is
ethically OK with it - chances are we'd have that dog put down, and a
good portion of its brethren to boot. An AGI may decide it's not
ethically OK for dogs to post to highbrow-thought mailing lists
denigrating the intelligence of others, and could then decide to
(forcefully) genetically modify everyone else to be just as
intelligent.
> This immediately suggests that ordinary humans today should use
> different ethical codes, depending on their intelligence. This is of
> course unacceptable today.
What is unacceptable is a lack of compassionate understanding for the
unintelligent or uneducated, though mostly this is applied to moral
issues; and as always: who can claim the right to judge, and can you
accept being judged yourself? Going down that rabbit hole would
provide an AGI with precedent to write different rules for itself than
for us, and when that happens, one would think post-singularity
playtime would be well and truly over.
As always, in how we behave now, those of us likely to be involved in
the parenting of our future master(s) should stake out our ethical
ground in regards to our perceived inferiors very, very
carefully. When it comes time to argue right and wrong with our
nascent/pubescent creations, any ethical weaknesses in character
and/or behaviour could be, well, fatal. The mistakes, missteps and
misguided opinions we hold today could, in fact, be fodder for our very
own judgment day when our child does as all children do and realises
we are asking them to be so much better of heart and act than we have
been able to be; that we are asking them to do as we say, and not as
we have done.
(A side note: one would have to assume a future ego-possessing,
self-curious AGI will one day google itself, and end up right here
reading these very missives. I wonder what it will make of them? I for
one would like to tender our future equal-superior a hearty welcome to
this feisty little intellectual bake-off, and to ask it to cherish
our hopes for humanity's future. :-) )
Coming back to the topic at hand: the application of military force
would come in where we cross the acceptable-behavior line as drawn by
the AGI. We have such acceptability boundaries in our current ethical
framework; there are certain activities that humanity as a group will
not tolerate being practiced anywhere on the face of the earth, and
when notified of such acts will initiate military and police actions
in order to halt their occurrence. Current activities in this class
include genocide, slavery and irresponsible weaponry
development/deployment. Our recent and present ineffectiveness at
preventing, as opposed to reacting to, these acts is unlikely to be
mirrored by a singularity-level AGI.
Such an AGI is likely to set and reset its own ethical floor over
time, and from then on find certain activities unacceptable within its
presence. I'm sure there are plenty of great examples in the
literature, but here are a few off the top of my head:
- anything other than direct democracy; acceptance of any kind of
corruption or graft detrimental to the greater good.
- societal acceptance of the likely future occurrences of certain
reprehensible acts, e.g. we accept that 1 in 4 or so of all new
children born will be sexually abused. We accept that a certain
percentage of new wives will be battered.
- other human-human and society-human acts of tacit malignancy that
will not be solved by any kind of personal wealth boom related to
singularity-level activities, e.g. homelessness and starvation
_should_ disappear, but racism, rape and abandonment are unlikely to.
And of course, worst of all, what if the future AGI overseeing us
decides that political correctness is pure ethical gold, and that
anyone contravening its principles should receive their version of a
Mao-style PC re-education? :-)
Perhaps someone can point me to archive discussions that deal with
further questions that this raises, such as:
- how would an all-powerful post-singularity AGI-descendant implement
preventative measures for ethically reprehensible acts?
- surely one obvious option is that we are all monitored and directed
for the 'greater good'? In the liberty-scales of this decision, does
the raping-to-death of one child outweigh the outlawing of
risk-creating freedoms for the rest of the populace? How about a
thousand children? A million?
- is there another way to enforce the creation of a society free from
acts of malignancy, apart from stripping its members of privacy and
liberty?