Re: AI, just do what I tell you to

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Oct 30 2007 - 07:56:54 MDT


On 30/10/2007, Nick Hay <nickjhay@gmail.com> wrote:

> > Are hired human experts intelligent? The idea is that they provide
> > advice and other services without letting any competing motives of
> > their own interfere.
>
> If you've built this AI, why did you build in competing motives?

The AI should always remain in character as the ideal expert. If you
want it to fix your plumbing, it should not concern itself with
charging you as much as possible, or with finishing in time to watch
a football game. It should listen to what you want it to do, advise
you of what it thinks it ought to do (given that it knows more about
plumbing than you do), then go ahead and follow your instructions. If
you tell it to do something stupid...

> I think future predictable horror should be at least allowed as a
> veto. Suppose someone really really wants to destroy their brain,
> just to see what happens. They think they're implemented by an
> immortal soul, so this seems harmless enough. If the AI didn't grant
> this wish they'd be indignant: who are you to refuse my order?
> However, if they found out souls don't exist they would predictably
> be horrified, and wish that the AI had ignored their previous order.
>
> In this scenario, it is not helpful for the AI to shut up and do what they say.

But we have the same problem with any tool, like a kitchen knife. It
might be a good idea, if this were possible, to make smart knives
that can never be used in suicide or murder attempts, and to ban
old-fashioned knives. But with an AI it would be rather difficult to
build in rules like this. Fixing the plumbing might result in
increased water use, which a decade down the track may somehow cause
one excess human death. It would render the AI almost useless if it
reviewed every command on the basis of such considerations.

> A more extreme example: the wisher commands the AI to make the sun go
> nova, because they've always wondered what it looks like. For some
> reason they do not understand that this would destroy humanity (which
> they do not want), even if this is explained carefully to them. In
> some sense it will be "their fault" that the human species dies, and
> yet making the sun go nova doesn't seem like a good idea.

We've had to deal with problems like this throughout history anyway.
Consider nuclear weapons, and the people who might have urged their
use in the belief that a pre-emptive strike against the other side
was a good idea. What we rely on is that humanity collectively will
do the right thing. This will apply to AIs as well, once there are
many of them all monitoring and policing each other, as they have
been programmed to do. If one rogue human with an AI can easily do
terrible things despite this, then we are doomed, and no attempt to
ensure that each AI comes out of the factory friendly will work.

-- 
Stathis Papaioannou
