SIAI's direction

From: Wei Dai (weidai@weidai.com)
Date: Sat Oct 23 2004 - 20:19:59 MDT


I think SIAI's greatest accomplishment so far is to illustrate how hard it
would be to build a safe AI, and how dangerous an unsafe AI would be. To
me, its critical work has been a lot more persuasive than its constructive
work. So much so that I no longer think that building a safe AI is the
best route to the Singularity. It's too hard, and the cost of failure is
too great. Further work in the same direction may be counterproductive. It
will increase the probability of a safe AI slightly, but could increase
the probability of an unsafe AI more, especially if SIAI were to publish
results that could be used by others less concerned with safety, as some
are suggesting.

Furthermore, it seems there is a conflict between safety and other
desirable qualities, such as open-mindedness and philosophical curiosity.
Do we really want to live under the control of an AI with a rigid set of
goals, even if those goals somehow represent the average of all humanity?
An AI that may be incapable of considering the relative merits of
intuitionist vs. classical mathematics, because we don't know how to program
such capabilities into the AI, or that considers this activity a waste of time,
because we don't know how to embed such pursuits into its goal structure?

Many of us are interested in the Singularity partly in the hope of one day
being able to solve, or at least explore with greater intelligence,
long-standing moral and philosophical problems. How will we be able to do that
if we're all under the control of an AI with built-in moral and
philosophical certainties (i.e., axioms or the equivalent of axioms)?
But how can we make it probable that the AI is safe without such axioms? Not to
mention that we have little idea what these axioms ought to be.

Since AI is only a means, and not an end, even to the SIAI (despite "AI"
in its name), I wonder if it's time to reevaluate its basic direction.
Perhaps it can do more good by putting more resources into highlighting
the dangers of unsafe AI, and into exploring other approaches to the
Singularity, for example studying human cognition and planning how to do
IA (intelligence amplification) once the requisite technologies become
available. Of course IA has its own dangers, but we would be starting with
more of a known quantity. Even if things go badly, we end up with
something that is at least partly human and unlikely to want to fill the
universe with paper clips.


