Re: Building a friendly AI from a "just do what I tell you" AI

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Mon Nov 19 2007 - 04:06:31 MST


On 19/11/2007, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:

> Other people have already said this to you, but I'll say it
> differently: you're projecting human thought patterns onto the AI.
> Just because the AI is smart doesn't mean that it thinks to check
> before dropping a piano on a busy street. That's from an example I
> saw somewhere; if you're moving a piano out of a 6th story
> apartment, think about every bit of cognition that has to go right
> for you to look down and check for the presence of people rather
> than just dropping the thing. This has nothing to do with
> intelligence, it has to do with a complicated set of mental
> structures that are used to check the validity of sub-goals. An AI
> need not have any of these, so you say "Get the piano out of the
> apartment" and it kills 3 people. That doesn't mean it's not smart,
> it means it doesn't think like you do.
>
> All your posts on this topic seem to assume the AI thinks like you
> do. You need to revisit that.

An AI need not think in any particular way nor have any particular
goal. But if it is superintelligent, figuring out the subtleties of
human language and what we call common sense should number amongst its
capabilities. If not, then it wouldn't be able to manipulate people
and would pose much less of a threat.

-- 
Stathis Papaioannou

