From: Stuart Armstrong (email@example.com)
Date: Tue Dec 02 2008 - 13:57:50 MST
For your paper, I'm taking "compassion" as a short-hand for the AI
wanting to increase someone's utility function, and "respect" as a
short-hand for the AI not wanting to decrease anyone's utility function.
Two situations: the first one is where the AI is excessively
short-term. Then the disaster is rather clear, as the AI runs through
all available resources (and neglects trying to accumulate extra
resources) trying to please humans, leading to an
environmental/social/economic collapse (if the AI is short-term, then
it MUST privilege short-term goals over any other considerations;
using up irreplaceable resources immediately, even in a way that will
fill the atmosphere with poisonous gas, is something the AI is
compelled to do, if it results in a better experience for people
today).
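The short-term failure above can be sketched as a toy discounting exercise (the numbers and the two "plans" are my own illustrative assumptions, not anything from the paper): with a heavy enough discount on the future, a plan that burns irreplaceable resources for a one-off payoff today dominates a sustainable plan, even though the sustainable plan is worth more over the full horizon.

```python
# Toy sketch with assumed payoffs: a short-termist agent discounts the
# future so steeply that "burn everything now" beats "conserve".

def discounted(returns, gamma):
    """Sum of per-period payoffs, discounted by factor gamma per step."""
    return sum(r * gamma**t for t, r in enumerate(returns))

burn = [10, 0, 0, 0]      # big payoff today, collapse afterwards
conserve = [3, 3, 3, 3]   # steady modest payoff every period

gamma_short = 0.1   # excessively short-term AI
gamma_long = 0.99   # AI that can value the future

# The short-term AI is compelled to pick the collapse plan...
assert discounted(burn, gamma_short) > discounted(conserve, gamma_short)
# ...while a longer-horizon valuation ranks the plans the other way.
assert discounted(burn, gamma_long) < discounted(conserve, gamma_long)
```

Nothing here depends on the specific payoffs; any sufficiently steep discount produces the same reversal.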
The second situation is where the AI can think longer term. This is
much more dangerous. What is the ideal world for an AI? A world where
it can maximise humans' utilities without having to reduce anyone's.
As before, brains-in-an-armoured-jar, drug-regressed to six months,
boredom removed and with repeated simple stimuli and a cocaine high,
is the ideal world for this.
And if the AI has its way, this is the world we will end up with. It
might do this in a single mass intervention, or, if we've got the
"respect" part well programmed (a very tricky prospect), it will
push us in that direction over the long term (and will prevent actual
human babies from developing beyond a six-month stage). Fiddling around
with "respect" and short/long term will not prevent this, as it is an
attractor - every intermediate stage on the way to that world results
in a better situation for the AI. It wants dumb, isolated, easily
satisfied humans.
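The attractor claim can be made concrete with a minimal model (entirely my own illustrative construction, under the compassion/respect definitions at the top of the post): an agent that greedily raises utilities while never lowering anyone's finds that every single step strictly improves its objective, so nothing in the constraint itself ever pushes back.

```python
# Toy model: an agent with "compassion" (raise someone's utility) and
# "respect" (never lower anyone's). Each greedy step improves the agent's
# objective, so the degenerate end-state is an attractor.

def step(utilities, delta=1.0):
    """Raise the lowest utility by delta; check nobody's utility drops."""
    i = min(range(len(utilities)), key=lambda j: utilities[j])
    new = list(utilities)
    new[i] += delta  # compassion: increase someone's utility
    # respect constraint: no one's utility ever decreases
    assert all(n >= o for n, o in zip(new, utilities))
    return new

utilities = [3.0, 1.0, 2.0]
for _ in range(10):
    nxt = step(utilities)
    # every intermediate stage is strictly better for the agent
    assert sum(nxt) > sum(utilities)
    utilities = nxt
```

The point of the sketch is that the respect constraint is satisfied at every step of the descent; only the *kind* of world being climbed towards is objectionable, and nothing in the objective encodes that.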
PS: fixing the AI to obey people's current utilities won't help
much - it will result in the AI giving us gifts we no longer want,
bound only by respect considerations we no longer have. And the
next generation will be moulded by the AI into the situation above.
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:01:10 MDT