From: Vladimir Nesov (email@example.com)
Date: Mon Feb 11 2008 - 17:46:23 MST
On Feb 12, 2008 1:52 AM, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
> Why go to all the trouble of building an AI? Why not just build a
> natural-language-understander that compiles English requests to
> programs, and then type into the prompt, "Please make an AI"?
> The English-to-program-compiler is hence AI-complete, meaning that if
> you can build it, you can build an AI - hence you shouldn't expect it
> to be any easier than AI.
> Similarly, building an AI that knows what you "really mean" by
> "Friendly" when you type "Please make a Friendly AI" at the prompt, is
> FAI-complete, and not any easier than building a Friendly AI.
Sure. I was trying to make the point that, as a fundamental problem, any
kind of AGI seems to be FAI-complete. If we have AGI, all we need is a
good technological process to convert it into FAI, with no show-stopper
research goals that would go unmet for 50 years. That path is risky, but
it is probably the more likely way things will turn out.
It's not a matter of typing 'please make me a Friendly AI'; it's a
trial-and-error process, with independent verification of the results it
produces. To be useful this way, the system doesn't need to reliably do
what is asked; it only needs to be potentially coercible into
constructing useful models of certain difficult problems, including
common-sense communication, which is what I understand by AGI. It's just
a heuristic optimization process that can be put to good use.
-- Vladimir Nesov mailto:email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT