Re: large search spaces don't mean magic

From: Carl Shulman (cshulman@fas.harvard.edu)
Date: Fri Aug 05 2005 - 13:38:27 MDT


> So I will argue that there are no plausible assumptions in which the
> strategy of "create a superintelligent AI and keep it in a box" is
> safe _and_ useful _and_ feasible. Therefore, however confident you are
> of your ability to keep a box sealed, it doesn't make sense to set out
> to create a superintelligent AI unless you have a plan for making sure
> it will be Friendly.

The box is an additional line of defense in case one turns out to have been
overconfident about that 'surefire Friendliness plan.' Here's my scenario:

1. Create that "plan for making sure it will be Friendly," and examine it until
you are very sure that it will work.
2. Construct an implementation in a box.
3. Ask the AI for techniques to verify that the plan for creating Friendliness
is guaranteed to work: advances in computer science, designs for nootropic
drugs to enhance programmer intelligence, humanly comprehensible analyses of
the design, etc.
4. If the AI delivers and the techniques can be used to independently verify
that the original design was indeed correct, build a new AI to that
specification, monitoring its development using the enhanced intelligence and
better techniques. The original AI is not released until after it can no longer
take over the world (due to the presence of other AIs). Otherwise, destroy the
boxed AI and try to use fMRI lie detectors or other advanced social control
technologies to prevent individuals from wiping out humanity with
nanotechnology and the like.
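
A minimal sketch of the branching logic in steps 1-4, in Python. Every name
and outcome string below is a hypothetical placeholder for a human or
institutional judgment described in the prose, not for any automated system:

# Hypothetical sketch of steps 1-4; the booleans stand in for the
# human judgments the scenario describes, not for anything automated.
def boxed_ai_protocol(plan_very_sure: bool,
                      ai_delivers: bool,
                      independently_verified: bool) -> str:
    if not plan_very_sure:
        # Step 1 failed: never build the implementation at all.
        return "halt: Friendliness plan not trusted"
    # Steps 2-3: an implementation exists in a box and has been asked
    # for verification techniques (CS advances, nootropics, humanly
    # comprehensible analyses of the design).
    if ai_delivers and independently_verified:
        # Step 4, success branch: build a new AI to the verified spec;
        # release the boxed original only once other AIs make a
        # takeover impossible.
        return "build verified AI; release original afterwards"
    # Step 4, failure branch: destroy the boxed AI and rely on social
    # controls (fMRI lie detectors, etc.) instead.
    return "destroy boxed AI; fall back to social controls"

print(boxed_ai_protocol(True, True, True))
print(boxed_ai_protocol(True, True, False))

The point of the sketch is that the boxed AI's output is never trusted
directly: it only matters if the techniques it supplies let humans verify
the original design independently.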

Carl


