Understanding the problem of friendliness

From: Vladimir Nesov (robotact@gmail.com)
Date: Thu Mar 06 2008 - 11:38:42 MST


On Tue, Feb 12, 2008 at 1:52 AM, Eliezer S. Yudkowsky
<sentience@pobox.com> wrote:
> http://en.wikipedia.org/wiki/AI-complete
>
> Why go to all the trouble of building an AI? Why not just build a
> natural-language-understander that compiles English requests to
> programs, and then type into the prompt, "Please make an AI"?
>
> The English-to-program-compiler is hence AI-complete, meaning that if
> you can build it, you can build an AI - hence you shouldn't expect it
> to be any easier than AI.
>
> Similarly, building an AI that knows what you "really mean" by
> "Friendly" when you type "Please make a Friendly AI" at the prompt, is
> FAI-complete, and not any easier than building a Friendly AI.
>
> (I find that conversations of this sort have more the shade of someone
> trying to figure out how to game the Dungeons and Dragons rules for
> the wish spell, than AI science... remember, nothing ever runs on
> English rules; even your brain doesn't run on English rules.)
>

OK, I think I get that now. I couldn't see how there could be AGIs that
can't grasp what you mean by e.g. 'friendly AI' but are still
dangerous, so I jumped to the conclusion that the problem with making
an AGI friendly must lie in its having a rigid (Plato-style),
unreliable, or diverging goal system (the thing everybody is talking
about), which my proposal seems to solve; but in retrospect that
procedure should be obvious, so it's beside the point.

Unfriendly AGI is the problem of an idiot savant. It has enough
ability to interface with the real world, it can do certain things
much better (or faster, or cheaper) than humans, and in many areas it
can become a serious, potentially runaway power. It won't be able to
understand sufficiently subtle issues, such as what you mean by
'friendly', but it will be able to solve real-world puzzles that are
more straightforward, or that roughly follow from the few first
principles it embodies.

The problem of friendly AI is the problem of making an AGI that
listens to the world as opposed to blindly rewriting it.

-- 
Vladimir Nesov
robotact@gmail.com
