Re: Review of Novamente

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri May 10 2002 - 19:20:53 MDT


Ben Goertzel wrote:
>
> I.e., perhaps the wiser attitude is: "Keep your mouth shut except among your
> own little group, because talking about your ideas with others is just going
> to lead you to spend all your time in arguments and none of your time
> getting any work done. You'll never convince anyone you're on the right
> track, because nearly everyone in the world believes AGI is impossible, and
> nearly everyone who believes AGI is possible believes *they* have the secret
> ingredient to AGI and therefore you cannot."
>
> The amount of work required to explain our intuitions about the system, and
> our anecdotal experiences with earlier versions, and why we think the system
> can give rise to the emergent structures and dynamics we think it can, is,
> it's becoming clear to me, a LOT. Is it worth doing this work instead of
> working on the system itself, which may produce evidence that will be more
> convincing than any words? Maybe not.

I'm certainly going to have to put in the work to explain the system I want
to build to the people who will be helping build it, and furthermore, to get
to that point, I need to show suggestive evidence that the theory of AI is
worth testing. Of course, for me too, there comes a point where I feel that
a discussion has run out of steam and decide to let the disagreement stand -
usually at that point where the other person has stopped saying anything new
and is merely repeating arguments that I've already answered. At other
times I just don't have the time to respond. But I do think that, in
general, explaining AI theories, even very complex parts of AI theories, is
worthwhile. Setting the goal of creating an academic consensus is an
impossible barrier, but that doesn't rule out all discussion.

> My terrifyingly advanced age was not the point of that statement. My point
> was entirely different: that I had been thinking a lot about *emergent mind*
> before launching into detailed AI design, so that for me and my team, our
> detailed work was implicitly understood in the context of all this prior
> stuff on emergent mind. A context that was not adequately drawn into the
> book, as I've said a lot of times already.

And "Levels of Organization" is a small fraction of my thoughts about AI,
but having given someone "Levels of Organization", if they ask a question
whose answer isn't in _Levels_, I would still try to answer it. I think
that being able to precisely articulate your own intuitions is part of what
goes into building an AI; when my intuition says something, I usually know
why my intuition is saying it, and I answer with the reason behind the
intuition rather than asking people to take it on faith. Of course, other
people may feel differently about how a discussion should be conducted, but
if I'm consciously choosing not to ask people to take my intuitions on
faith, then it needs to be recognized that we're arguing by different rules
- observers need to take into account that I have just as large a base of
hidden complexity as anyone, even if I don't point it out. I guess I don't
really find "I've thought about this for years" to be an impressive argument
- I've thought about it for years too, and a lot of people have thought
about it for years and still failed, and I could conceivably be shown up by
someone who's only thought about it for a few months but is far smarter than
I am. At most it's an enabling condition, but not really one that is
impressive in itself. If you've spent years thinking about AI, then you
should have a strong theory and should be able to argue from the theory
itself rather than arguing from the extrinsic credibility of being an
expert. This is especially important in AI, where the senior experts are,
when you think about it, people who've spent a lot of time *not* slaying the
AI dragon.

> In bio & finance we got really awesome results in terms of being able to
> recognize fancier patterns than anyone else.

What results? Why are they awesome? According to my already-given estimate
of the system, Novamente does have a genuine AI capability in the domain of
pattern recognition and may be able to achieve a genuine AI capability in
solving some goal-oriented problems in the patterns it can recognize, so
recognizing fancier patterns than anyone else is exactly what I *think*
Novamente ought to be able to do, but it would still help to have a specific
example of a novel pattern that Novamente recognized. I could, after all,
be too optimistic.

> My estimate of the amount of manpower required to make an AI has gone DOWN
> over the last 3 years, not up. It went up from 1997-2000, and has gone down
> from late 2000 till now. This is because of the design simplifications that
> went into the move from Webmind to Novamente...

My estimate of the manpower required for AI has also gone down as I learned
more powerful solutions, but my estimate *started out* as a massive
planetwide project around 2025.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
