From: Wei Dai (email@example.com)
Date: Tue Nov 13 2007 - 22:35:47 MST
SIAI has publicized the importance of the morality question. Certainly
disaster will ensue if a powerful AI gets that question wrong, or has the
wrong answer programmed into it. But it seems to me that getting any of the
other questions I listed wrong can equally lead to catastrophe. (See
examples below.) Assuming that it is unlikely we will obtain fully
satisfactory answers to all of the questions before the Singularity occurs,
does it really make sense to pursue an AI-based approach?
To create an AI with abilities and intuitions comparable to human beings on
these subjects, we would need to either reverse engineer where our
intuitions come from and how we are able to contemplate these questions, or
use evolutionary trial-and-error methods. Neither of these approaches seems
to have an advantage over improving human intelligence. The former is likely
slower and more difficult, and the latter is probably more dangerous.
Ok, I don't expect SIAI to change its entire mission (and name!) but it
wouldn't hurt to keep this problem in mind.
Below I will give some examples of how things could go badly if an AI gets
the answers wrong.
> How does math really work? Why do we believe that P!=NP even though we
> don't have a proof one way or the other?
Due to faulty mathematical intuitions, the AI starts believing it's likely
that P=NP, and devotes almost all available resources to searching for a
polynomial time algorithm for NP-complete problems, in the expectation that
everyone will be much better off once a solution is found.
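As a toy illustration of what that search is up against (my sketch, using subset sum as a standard NP-complete example; nothing here is from the original post), the best known general approach is still brute force over an exponential space:

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force search for a subset of nums summing to target.

    Subset sum is NP-complete; this tries all 2^n subsets, so the
    running time is exponential in len(nums). A polynomial-time
    algorithm for any NP-complete problem would imply P=NP.
    """
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
```

An AI that bets everything on P=NP is betting that this exponential blowup can always be avoided, against the prevailing mathematical intuition.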
> How does induction really work? Why do we intuitively know that, contra
> Solomonoff Induction, we shouldn't believe in the non-existence of
> halting-problem oracles no matter what evidence we may see?
I've written about this already at
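For readers unfamiliar with the formalism being questioned here, a toy sketch of a Solomonoff-style prior (mine, with made-up code lengths; real Solomonoff induction sums over all programs on a universal Turing machine and is uncomputable):

```python
from fractions import Fraction

# Weight each hypothesis by 2^(-description length), so shorter
# programs get more prior mass. The code lengths below are invented
# for illustration only.
hypotheses = {"always-0": 5, "alternating": 8, "random-looking": 20}

weights = {h: Fraction(1, 2**length) for h, length in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}
print(prior)
```

The point of the quoted question is that such a prior assigns zero probability to anything uncomputable (e.g. a halting-problem oracle), no matter what evidence arrives.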
> Is there such a thing as absolute complexity (as opposed to complexity
> relative to a Turing machine or some other construct)?
Complexity is clearly related to induction, and morality may also be related
to complexity (Peter de Blanc and I have both suggested this; see
http://www.overcomingbias.com/2007/10/pascals-mugging.html). So getting this
question wrong probably implies getting induction and morality wrong as well.
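To illustrate why "absolute" complexity is problematic (my sketch, not from the original post): Kolmogorov complexity is uncomputable, but a compressor gives a crude upper bound on description length, and different compressors play the role of different reference machines:

```python
import bz2
import zlib

# Two compressors act as two "reference machines": each assigns the
# same string a different description length. Kolmogorov complexity
# is likewise only defined up to an additive constant relative to a
# chosen universal machine.
s = b"ab" * 1000  # a highly regular 2000-byte string

via_zlib = len(zlib.compress(s))
via_bz2 = len(bz2.compress(s))
print(via_zlib, via_bz2)  # both far below 2000, and typically unequal
```

Any machine-independent notion of complexity would have to dissolve this relativity, which is exactly what the quoted question asks about.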
> How do qualia work? Why do certain patterns of neuron firings translate
> into sensations of pain, and other patterns into pleasure?
The AI wants to prevent people from running torture sims, but unfortunately
it can't tell how realistic a sim needs to be to generate pain qualia. To be
safe, no one is allowed to play Dungeons and Dragons anymore, even the
pencil-and-paper kind.
> How does morality work? If I take a deterministic program that simulates a
> neuron firing pattern that represents pleasure, and run it twice, is that
> twice as good as running it once? Or good at all?
This one doesn't need further explanation, I think (hope).
> Why am I me, and not one of the billions of other people on Earth, or one
> of the many people in other parts of the multiverse?
The AI decides that the simplest explanation for this is that it is the only
conscious entity in the universe, and everyone else (especially human
beings) must be philosophical zombies.
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:01:11 MDT