Re: AI, just do what I tell you to

From: Nick Hay (nickjhay@gmail.com)
Date: Tue Oct 30 2007 - 15:23:20 MDT


On 10/30/07, Stathis Papaioannou <stathisp@gmail.com> wrote:
> On 30/10/2007, Nick Hay <nickjhay@gmail.com> wrote:
>
> > > Are hired human experts intelligent? The idea is that they provide
> > > advice and other services without letting any competing motives of
> > > their own interfere.
> >
> > If you've built this AI, why did you build in competing motives?
>
> The AI should always remain in character as the ideal expert.

Why? This is aiming pretty low if we are considering a
smarter-than-human AI. If we're not, then you probably can't implement
CEV anyway (probably = maybe; there are tricks I don't know). It's
not even clear this is useful for less intelligent AIs.

Are you saying an ideal expert would have competing motives? That's
far from my idea of ideal.

> > A more extreme example, the wisher commands the AI to make the sun go
> > nova, because they've always wondered what it looks like. For some
> > reason they do not understand that this would destroy humanity (which
> > they do not want), even if this is explained carefully to them. In
> > some sense it will be "their fault" that the human species dies, and
> > yet making the sun go nova doesn't seem like a good idea.
>
> We've had to deal with problems like this throughout history anyway.
> Consider nuclear weapons, and the people who might have urged their
> use in the belief that a pre-emptive strike against the other side was
> a good idea. What we rely on is that humanity collectively will do the
> right thing. This will apply to AI's as well, when there are many of
> them all monitoring and policing each other, as they have been
> programmed to do. If one rogue human with an AI can easily do terrible
> things despite this, then we are doomed, and no attempt to ensure that
> each AI comes out of the factory friendly will work.

In that scenario we may well be doomed. But perhaps we can build an
AI which uses the first-mover advantage to protect against later rogue
AIs. If one AI can do terrible things, maybe another AI can do
terrific things.

-- Nick



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT