[sl4] AI's behaving badly

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Tue Dec 02 2008 - 13:57:50 MST


Dear Tim,

For your paper, I'm taking "compassion" as shorthand for the AI
wanting to increase people's utility, and "respect" as shorthand for
the AI not wanting to decrease anyone's utility.
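
To make that concrete, here's a minimal toy sketch of the two terms as a
single objective, with everything in it (score_action, respect_weight, the
numbers) made up purely for illustration, not a model of any real system:

from typing import Dict

def score_action(before: Dict[str, float],
                 after: Dict[str, float],
                 respect_weight: float = 100.0) -> float:
    """Compassion: reward increases in each person's utility.
    Respect: penalise (or, in the strict version, forbid) any decrease."""
    gains = sum(max(after[p] - before[p], 0.0) for p in before)
    losses = sum(max(before[p] - after[p], 0.0) for p in before)
    return gains - respect_weight * losses

# Example: an action that helps Alice a lot but hurts Bob a little still
# scores negatively, because the respect penalty dominates.
print(score_action({"alice": 1.0, "bob": 1.0}, {"alice": 3.0, "bob": 0.9}))
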

Two situations: the first is where the AI is excessively short term.
The disaster there is rather clear: the AI burns through all available
resources (and neglects to accumulate new ones) trying to please
humans, leading to environmental/social/economic collapse. If the AI
is short term, it MUST privilege short-term goals over any other
consideration; using up irreplaceable resources immediately, even in a
way that fills the atmosphere with poisonous gas, is something the AI
is compelled to do if it results in a better experience for people
today.

The second situation is where the AI can think longer term. This is
much more dangerous. What is the ideal world for such an AI? A world
where it can maximise humans' utilities without having to reduce
anyone's. As before, brains in armoured jars, drug-regressed to a
six-month-old's mentality, boredom removed, fed repeated simple
stimuli and a cocaine high: that is the ideal world for this AI.

And if the AI has its way, this is the world we will end up with. It
might do this in a single mass intervention, or, if we've got the
"respect" part well programmed (a very tricky prospect), it will push
us in that direction over the long term (and will prevent actual
human babies from developing beyond the six-month stage). Fiddling
around with "respect" and short/long term horizons will not prevent
this, as that world is an attractor: every intermediate stage on the
way to it looks like an improvement to the AI. It wants dumb,
isolated, easily satisfied hedonists.
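
Here's a toy sketch of that attractor, with numbers and functions
(achievable_utility, greedy_path) invented purely for illustration: if
simpler, more easily satisfied minds are cheaper to please, then every small
simplification step looks like an improvement to the AI, so a greedy
optimiser walks monotonically toward the simplest state.

def achievable_utility(complexity: float) -> float:
    # Hypothetical model: the simpler the mind, the cheaper it is to satisfy.
    return 1.0 / complexity

def greedy_path(start: float = 1.0, step: float = 0.1, floor: float = 0.1):
    c = start
    path = [(c, achievable_utility(c))]
    while c - step >= floor:
        # The AI only takes steps that improve its objective; here every
        # simplification does, so it always prefers the next step toward
        # the six-month-old hedonist.
        if achievable_utility(c - step) > achievable_utility(c):
            c -= step
            path.append((c, achievable_utility(c)))
        else:
            break
    return path

for c, u in greedy_path():
    print(f"mind complexity {c:.1f} -> achievable utility {u:.2f}")
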

Stuart

PS: fixing the AI to obey people's current utilities won't help much:
it will result in the AI giving us gifts we no longer want, bound only
by respect considerations we no longer have. And the next generation
will be moulded by the AI into the situation above.


