Re: AI, just do what I tell you to

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Oct 30 2007 - 17:39:07 MDT


On 31/10/2007, Nick Hay <nickjhay@gmail.com> wrote:

> On 10/30/07, Stathis Papaioannou <stathisp@gmail.com> wrote:
> > The AI should always remain in character as the ideal expert.
>
> Why? This is aiming pretty low if we are considering a smarter than
> human AI. If we're not, then you probably can't implement CEV
> anywhere (probably = maybe, there are tricks I don't know). It's not
> even clear this is useful for less intelligent AIs.
>
> Are you saying an ideal expert would have competing motives? That's
> far from my idea of ideal.

No, an ideal expert is one who (a) has perfect knowledge of their
field, and (b) has no competing interests of their own.

> > We've had to deal with problems like this throughout history anyway.
> > Consider nuclear weapons, and the people who might have urged their
> > use in the belief that a pre-emptive strike against the other side was
> > a good idea. What we rely on is that humanity collectively will do the
> right thing. This will apply to AIs as well, when there are many of
> > them all monitoring and policing each other, as they have been
> > programmed to do. If one rogue human with an AI can easily do terrible
> > things despite this, then we are doomed, and no attempt to ensure that
> > each AI comes out of the factory friendly will work.
>
> In that scenario we may well be doomed. But perhaps we can build an
> AI which uses the first mover advantage to protect against later rogue
> AIs. If one AI can do terrible things maybe another AI can do
> terrific things.

With every other powerful human invention, it has been the eventual
spread of the technology to competing interests, not the benevolence
of the original inventor, that has prevented absolute world domination
by one party. I would be far more comfortable if multiple AIs with a
variety of competing interests arose at about the same time than if
the first AI quickly gained primacy, no matter how carefully the
friendliness of that first AI was guaranteed. Of course, wishing it
does not mean it will be so.

-- 
Stathis Papaioannou
