Re: What are "AGI-first'ers" expecting AGI will teach us about FAI?

From: Daniel Burfoot (daniel.burfoot@gmail.com)
Date: Mon Apr 14 2008 - 06:26:31 MDT


On Sun, Apr 13, 2008 at 10:41 PM, Rolf Nelson <rolf.h.d.nelson@gmail.com>
wrote:

> On Sat, Apr 12, 2008 at 10:53 PM, Daniel Burfoot
> <daniel.burfoot@gmail.com> wrote:
> >
> > This is an interesting question. I would say AGI is nearly ready if one
> > could define a general purpose algorithm that provides the solution, or
> a
> > core element of the solution, to a wide variety of tasks like face
> > recognition, speech recognition, computer vision, and motion control; all
> > without being specifically designed for those purposes.
>
> Call this the Burfoot Date for now.
>
> 1. How confident are you that AGI wouldn't have taken over by the
> Burfoot Date?

I would say that taking over the world is strictly more difficult than
face recognition and the other tasks listed above. I don't consider this
an obvious statement, however (I can imagine, though deem unlikely, an
AI that could take over the world without being able to recognize
faces). I expect the AI would have to perform significantly more
learning to attain intelligence sufficient to "take over".

> 2. Assuming AGI hasn't taken over by the Burfoot Date, how much time
> would remain between the Burfoot Date and when the AGI takes over?

I imagine the appropriate time scale would be on the order of years.

I also don't consider it inevitable that the AGI would take over, given
the above-mentioned abilities. Humans were at about our current level of
intelligence for a long time before modern civilization came about.
Thus, an agent can have intelligence but, for whatever reason, do
nothing with it.

> 3. How will you proceed when the Burfoot Date comes up? How do you
> believe others will proceed after the Burfoot Date?

It's far enough away that I haven't yet worried too much about it. However,
I would consider various safeguards appropriate:

1) limiting the amount of computing power available to the AI (see the
sketch after this list)
2) limiting the amount of energy available to the AI
3) advocating government oversight of further research
4) limiting the AI to passive observation of the world
5) limiting the types of goal functions that are given to the AI
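
To make safeguard (1) concrete, here is a minimal, Unix-only Python
sketch of how one might cap the compute available to a single process.
The program name "ai_main.py" and the particular limits are
illustrative assumptions, not a reference to any real system.

import resource
import subprocess

def limit_resources():
    # Toy version of safeguard (1): cap total CPU time at one hour.
    # Hitting the soft limit delivers SIGXCPU, which terminates the
    # process by default.
    resource.setrlimit(resource.RLIMIT_CPU, (3600, 3600))
    # Also cap the address space at 4 GiB, so allocations beyond that
    # point simply fail.
    resource.setrlimit(resource.RLIMIT_AS, (4 * 2**30, 4 * 2**30))

# preexec_fn runs limit_resources in the child just before exec, so
# the limits bind the launched AI process, not this supervisor.
subprocess.run(["python", "ai_main.py"], preexec_fn=limit_resources)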

Of course, I don't believe that these safeguards are perfect. As for
what others will do, I'm not sure, and that costs me some sleep, though
not too much, since the Burfoot Date is, I think, still quite far away.

As an amusing aside, Avogadro did not know the value of his number even to
within an order of magnitude.

Dan
