Re: Seed AI (was: How hard a Singularity?)

From: Eugen Leitl (eugen@leitl.org)
Date: Tue Jun 25 2002 - 16:44:03 MDT


On Tue, 25 Jun 2002, James Higgins wrote:

> Why does it matter? How many bits does it take to fully describe a
> toaster? We are NOT trying to recreate a human baby, we are trying to

Not many. A toaster's world is simple, and hence the toaster itself is an
exceedingly simply structured object.

Would you like your reality light or medium brown?

> create an Artificial Intelligence. One that, in fact, may end up thinking

No. You're trying to create an AI that can learn from its environment. An
AI is Not a Toaster.

> and working in much different ways than human minds do. Thus the

It still lives and dies in the same environment as human minds do. As such,
it had better be competitive.

> complexity of a baby (which is pointless anyway since the heart, liver,
> toes, etc. are irrelevant) or any other living thing is irrelevant.

The baby doesn't have to worry about the monetary system or network
protocols either. Further, I gave you some slack by taking the baby as the
upper bound. You might find that it isn't all that much slack.
 
> >What I'm saying is that an AI has to have a large fraction of the world to
> >be represented in it before it even can start to learn. Because the world
> >is dirty, and complex, the resulting architecture is that, too. The vessel
> >may be elegant, but not the contents.
>
> Also very much not true. If the AI only understood math but could learn
> and actually invent new knowledge in that domain would it not be
> Intelligent? Why would it have to know about cars, hamsters, pizza or the

No. It would not be naturally intelligent.

> like to succeed?

It would not succeed. It would be a useful idiot (savant). Bring on that
good stuff, we have sore need of it.

> Also, an engineering solution NEVER has to be dirty and complex in order to
> solve a dirty & complex problem. An inelegant solution will produce less

Correction: the engineering solution to the problem class you're familiar
with. If you think you understand the AI class of problems, you most
definitely don't understand the AI class of problems.

> than reasonable performance and/or won't be flexible. Thus a successful AI
> implementation will almost certainly be elegant, otherwise it would require
> substantially more resources (computing power, memory, etc) or get stuck in
> a rut shortly after leaving the gate. And thus would be beat to the punch

You assume you can make it leave the gate in an orderly and elegant shape.
I fear this is pure wishful thinking.

> by the first elegant implementation to surface.

Where's the URL of your demo?


