Re: AI Goals [WAS Re: The Singularity vs. the Wall]

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Apr 25 2006 - 11:43:34 MDT


Jeff Medina wrote:
> On 4/25/06, Ricardo Barreira <rbarreira@gmail.com> wrote:
>> But I bet that there are already tons of
>> detailed studies about this.
>
> Ricardo is spot on. There has been extensive work done analyzing what
> a goal is, what types of goals humans and nonhumans and
> systems-in-general have or might have or should have, intentionality,
> motivation, and self-deception about goals. The usual suspects --
> psychology, computer science, economics, et al. -- are all involved.
>
> For one to attempt to discuss goals seriously without having read up
> on this background material is an instance of a general sort of error
> most commonly made by those who have never studied any particular
> field in depth, and hence don't grok that for almost every interesting
> topic, and actually many uninteresting topics, much has already been
> written that they *must* become familiar with before anyone who *is*
knowledgeable in the area will listen to anything they have to say
> (and, perhaps more importantly, they must become familiar with it to
> avoid reinventing wheels, re-committing known mistakes, and otherwise
> churning away without contributing to any real progress in the field).
>
> It's good that some people are recognizing that people have a tendency
> toward armchair theorizing without getting specific, technical, or
> otherwise utile via a more formal, precise approach. But to suggest
> the way out of this is to reinvent definitions and ontologies related
> to 'goals', void of prior work, is ... no better? ... no, it's better,
> it's progress... but it's not better enough to have shrugged off the
> "complete waste of time" crown.
>
> If methodologies, in addition to subject matter, were to have 'shock
> levels', this one would be decidedly 0.
>
> That said, let me reiterate that I am entirely supportive of your
> intention ("1" in your list); it's the "2" in play that needs an
> upgrade.

I have read widely in the different categories that you mention, and I
have an ongoing task of pulling them together into a form that can be
used to design real systems ... but much of the literature comes with
heavy ontological baggage that makes it very difficult to *use* that
previous work to build a working motivational system.

"Difficult" is a polite way of putting it. The less polite way is that
the literature is a joke. I have read it, and I am not reinventing the
wheel, I am inventing a wheel that actually makes sense and does not
involve assumptions that are ridiculous.

(Want an example? Go to the literature and find a proposed
goal/motivational system that does *not* presuppose that the cognitive
system already has a sophisticated, semi-adult semantic representation
of the world, grounded in some way. You need to find a system that
avoids this assumption if you are going to hope to build a neonate
cognitive system that is able to learn its own concepts, because the
neonate will need a motivational system if it is to function, but it
doesn't have the adult semantic representations, so it has nothing
sensible to put on its goal stack. There are various ways one might
circumvent this problem, but most of them involve methods for defining
motivations independently of the cognitive system, and setting up a way
to let the one link into the other during development. How many papers
in the literature discuss this explicit and crucial issue? Zero.
How many that allude to it in a particular context, where someone is
already making many assumptions about how the cognitive system works,
most of which are no longer accepted...? Well, there are some such
papers because I occasionally come across them, but their solutions are
virtually useless in the general case.)
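To make the shape of that circumvention concrete, here is a minimal toy sketch of my own (every name and structure in it is hypothetical, not drawn from any paper in the literature): drives are defined over raw, pre-semantic signals such as prediction error, so a neonate system can act on them from the start, and learned concepts get linked to those drives later, during development.

```python
# Toy sketch (hypothetical, illustrative only): motivations defined
# independently of the cognitive system's learned concepts, with a
# developmental hook for linking concepts in later.

class Drive:
    """A motivation defined over raw sensor statistics, not concepts."""
    def __init__(self, name, signal_fn):
        self.name = name
        self.signal_fn = signal_fn      # maps raw observations -> urgency
        self.linked_concepts = []       # populated during development

    def urgency(self, raw_obs):
        return self.signal_fn(raw_obs)

    def link(self, concept):
        # The developmental hook: once the cognitive system has learned
        # a concept, it becomes associated with a pre-existing drive.
        self.linked_concepts.append(concept)


# Drives exist from "birth", before any semantic representation:
novelty = Drive("novelty", lambda obs: obs.get("prediction_error", 0.0))
comfort = Drive("comfort", lambda obs: -obs.get("homeostatic_error", 0.0))

# A neonate system can already rank its drives using raw signals alone,
# with nothing semantic on any goal stack:
obs = {"prediction_error": 0.7, "homeostatic_error": 0.2}
drives = sorted([novelty, comfort],
                key=lambda d: d.urgency(obs), reverse=True)

# Later, a concept the system has learned for itself links to a drive:
novelty.link("peekaboo-game")
```

The point of the sketch is only the separation: the `signal_fn` side needs no concepts at all, while the `link` side is where development attaches the learned semantics.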

(A P.S. on that example: I tried to confine it to one paragraph, but I
hope we can take it as read that I know the issue has more depth than that.)

Someone once accused me of being negligent because I refused to learn
all the gory details of what was going on in modern behaviorist research
before dismissing it. I disagreed then, and I say the same thing now
(and I have Eliezer on my side here, since he once used exactly the same
argument ;-)): I can know enough about what is going on in a field to
know that the majority of it is such a waste of time that I would be
better off reinventing [sic] it from scratch.

My previous post was just a way of getting the discussion going and
provoking some thoughts from other people. I might have given the
impression that I was making this stuff up on the spur of the moment,
and although I can understand how someone might see it that way, it is
not true. I guess my choice of words did not help: "So let's begin to
construct a better definition of what's happening here......" was not
meant to be a serious beginning, more a rhetorical device.

Richard Loosemore



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT