Re: What are "AGI-first'ers" expecting AGI will teach us about FAI?

From: Daniel Burfoot (daniel.burfoot@gmail.com)
Date: Sat Apr 12 2008 - 20:53:26 MDT


On Sun, Apr 13, 2008 at 12:11 AM, Rolf Nelson <rolf.h.d.nelson@gmail.com>
wrote:

> Large numbers of people have made various AI advances in the past.

It's not clear to me that this statement is true, in the following sense: I
don't necessarily believe that any particular piece of current AI theory
(whatever that is) will ultimately be useful for building an AGI. On the
other hand, many advances were useful in the sense of explicating problems
and exploring why certain methods aren't as powerful as we might think.

Of course, this depends on the granularity with which you define "advance".
I think reinforcement learning is an advance, but only if defined in the
broadest possible terms (an agent pursuing reward in an uncertain world). I
don't necessarily believe any current RL algorithm will help with AGI - the
formalism is just too limiting. I would make a similar statement for neural
networks and statistical learning theory.
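To make concrete what "the broadest possible terms" means here, below is a minimal sketch (my own illustration, not anything from the original post) of the bare RL setting it describes: an agent pursuing reward in an uncertain world. It uses a toy bandit-style environment and a simple epsilon-greedy agent; every name in it is illustrative.

    import random

    class Environment:
        """An uncertain world: each action yields a noisy reward."""
        def __init__(self, n_actions=3):
            self.true_values = [random.random() for _ in range(n_actions)]

        def step(self, action):
            # Reward is the action's underlying value plus Gaussian noise.
            return self.true_values[action] + random.gauss(0, 0.1)

    class Agent:
        """An agent pursuing reward: estimates action values, mostly exploits."""
        def __init__(self, n_actions=3, epsilon=0.1):
            self.estimates = [0.0] * n_actions
            self.counts = [0] * n_actions
            self.epsilon = epsilon

        def act(self):
            if random.random() < self.epsilon:
                return random.randrange(len(self.estimates))   # explore
            return max(range(len(self.estimates)),
                       key=lambda a: self.estimates[a])        # exploit

        def learn(self, action, reward):
            # Incremental average of rewards observed for this action.
            self.counts[action] += 1
            self.estimates[action] += (reward - self.estimates[action]) / self.counts[action]

    env, agent = Environment(), Agent()
    for _ in range(1000):
        a = agent.act()
        agent.learn(a, env.step(a))
    print("learned values:", [round(v, 2) for v in agent.estimates])
    print("true values:   ", [round(v, 2) for v in env.true_values])

The point of the sketch is only the interface: agent acts, world responds with reward, agent updates. Nothing in it suggests how to get from that formalism to general intelligence, which is exactly the limitation the paragraph above is pointing at.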

> At what point will you know that AGI has advanced enough that FAI can
> proceed?

This is an interesting question. I would say AGI is nearly ready if one
could define a general purpose algorithm that provides the solution, or a
core element of the solution, to a wide variety of tasks like face
recognition, speech recognition, computer vision, and motion control, all
without being specifically designed for those purposes.
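As a hypothetical illustration of that test (mine, not a proposal from the post), the criterion can be read as an interface requirement: one learner, written once, applied unchanged across tasks. Here a trivial nearest-neighbour rule stands in for the unknown general-purpose algorithm, and random labelled vectors stand in for face, speech, and motor data; the real criterion is about capability, not this API.

    import random

    class GeneralLearner:
        """Task-agnostic learner: memorise labelled vectors, predict by nearest example."""
        def fit(self, examples):
            self.examples = list(examples)      # pairs of (feature_vector, label)
            return self

        def predict(self, x):
            def dist(ex):
                v, _ = ex
                return sum((a - b) ** 2 for a, b in zip(v, x))
            return min(self.examples, key=dist)[1]

    def toy_task(n_features, n_classes, n_examples=60):
        """Stand-in for face / speech / motor data: noisy labelled vectors."""
        data = []
        for _ in range(n_examples):
            label = random.randrange(n_classes)
            data.append(([label + random.gauss(0, 0.3) for _ in range(n_features)], label))
        return data

    tasks = {
        "face_recognition": toy_task(8, 4),
        "speech_recognition": toy_task(12, 5),
        "motion_control": toy_task(6, 3),
    }
    for name, data in tasks.items():
        model = GeneralLearner().fit(data[:40])   # same learner, no task-specific code
        accuracy = sum(model.predict(x) == y for x, y in data[40:]) / len(data[40:])
        print(f"{name}: holdout accuracy {accuracy:.2f}")

Of course a nearest-neighbour rule is nowhere near the "core element of the solution" the criterion asks for; the sketch only shows what "without being specifically designed for those purposes" would look like at the level of code structure.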

Regarding the question of how AGI will help with FAI, I consider it
reasonable to believe that if an AGI can learn abstractions, as it must in
order to become intelligent, then it can also learn the abstraction "good",
if seeded with an appropriately large amount of knowledge about human
culture. This relates to Plato's notion of "forms", and in particular the "Form
of the Good".

Dan
