Re: Passive AI

From: Nick Bostrom (nick.bostrom@philosophy.oxford.ac.uk)
Date: Tue Dec 13 2005 - 17:00:01 MST


Michael Vassar wrote:

>Nick Bostrom said
>"There are different options for who should decide what questions could be
>posed to the Oracle. It might be difficult to ensure that the best such
>option is instantiated. But this problem is not unique to the
>Oracle-approach. It is also difficult to ensure that the first AGI is built
>by the best people to do it. The question here is, for whichever group has
>control over the first AGI - whether it's SIAI, the Pentagon, the UN, or
>whatever - what is the best way to build the AGI? "
>
>Of course, we don't need to worry about what the best way to build an AI
>is for the pentagon, UN, or whatever, since they will absolutely not
>listen to us.

"Whatever" also includes private AI groups. Maybe the probability that
"they" will listen to "us" is small, but I think the probability that "we"
will create the first AGI is also small.

> How many of the world's most respected minds, far more respected than
> anyone here can realistically hope to become, protested nuclear build-up?

The cases are different. Opting to go for the Oracle first need not mean
giving up a powerful technology that adversaries would then acquire
instead, nor does it require global coordination.

Anyway, protesting nuclear build-up might have been a good thing to do even
though in retrospect we know it didn't succeed.

>"Find the most accurate answer to
>the question you can within 5 seconds by shuffling electrons in these
>circuits and accessing these sources of information, and output the answer
>in the form of 10 pages print-out. "
>
>Two difficulties with this include the difficulty of bringing an AI to
>useful oracle status without utilizing rapid take-off or bootstrapping
>procedures

Yes, that would surely be a difficulty.

> and the difficulty of defining allowable methods.

The point is that it might be much easier to reliably define allowable
methods than to define something like Friendliness or Collective
Extrapolated Volition in the right way.

>Without an understanding of the programmer's minds, the best output might
>be a compressed version of the input and the utilized data. To do much
>better, the AI will probably need roughly human-level mental modeling,
>which implies non-trivial volition extraction anyway.

Yes, but less ambitious than in CEV. More importantly, if only this part
fails, we might get a few useless pages of print rather than an existential
disaster.

Nick Bostrom
Director, Future of Humanity Institute
Faculty of Philosophy, Oxford University
10 Merton Str., OX1 4JJ, Oxford +44 (0)7789 74 42 42
Homepage: http://www.nickbostrom.com FHI: http://www.fhi.ox.ac.uk

For administrative matters, please contact my PA, Miriam Wood
+44(0)1865 27 69 34 miriam.wood@philosophy.ox.ac.uk
