Re: Two draft papers: AI and existential risk; heuristics and biases

From: Olie Lamb (
Date: Mon Jun 05 2006 - 01:37:02 MDT

On 6/5/06, John K Clark <> wrote:
> (You) think that the very definition of a good AI is one that is enslaved to
> do exactly precisely what the colossally stupid human beings wants to be
> done. That is evil, I'm sorry there is no other word for it.

Whether a powerful intelligence allows humans to do things, or does
things for them makes for a pretty small consequential difference.

Humans want to do a lot of things that aren't very nice.

Whether a powerful intelligence does them for the humans, or allows the
humans to do them, works out much the same (even if it might be a tiny
bit ethically different).

Most people wish violence (etc) on others at some point. Usually,
it's for indirect and defensible-under-certain-context reasons like
revenge / punishment / keeping societal order.

Occasionally it's for direct reasons such as pleasure.

A sysop could remove some indirect motivations for violence, by
*effectively apologising* for people's past indiscretions, preventing
further acts that would call for revenge, and could also make some
direct reasons for violence obsolete, such as through the simulation
of violent acts.

However, where humans are inclined to violence for stupid reasons, a
sysop would either have to interfere with their actions (making them
unhappy), or interfere with their motivations. Or perhaps there's an
option I haven't thought of.

If interfering with motivations is cool... worry.
If making people unhappy is cool... be concerned.

I've been writing a piece of fiction that looks into this problem.
Unfortunately, given a number of things, it will probably never be
finished.

The conditions are this: The protagonist wishes to harm someone else,
just because they want to do it. They aren't looking for the pleasure
of the experience, so a simulation would not satisfy the desire. They
want to really harm someone.

Also: the protagonist has very strong views about personal integrity.
He does not wish to be uploaded, drugged, plugged into anything. He
just wants to be left in peace. To hurt people.

How can a more powerful entity (not a real sysop...) make this person
satisfied, without breaching his personal integrity?

It's a tricky problem.

(My "solution" is to use methods that the violent man doesn't see as
breaching integrity to try to convince him to change his motivations.
That is, working to make him net-happy, while forcing some degree of
unhappiness. A story needs some resolution, dagnabbit.)

-- Olie

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT