From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun May 05 2002 - 23:15:37 MDT
> Ben, sometimes writing code is taking the easy way out.
Sure, that's true. Personally I spent about 7 years theorizing about this
stuff before writing any code, and although I started prototype coding in
1994, I still spent more time theorizing than dealing with implementation
issues until 1997.
Frankly, I code *very little* at the moment; I reached my peak of coding
activity in 1997, when I started working on Webmind and hadn't recruited any
programmer-collaborators yet. I am really a far better theorist than a
programmer.
Anyway, I definitely can't be accurately accused of having "rushed into
coding."
> I understand that
> you believe resources should be put into Novamente, rather than,
> say, SIAI,
I think resources should be put into a whole host of AGI projects, not just
ours.
If I were very wealthy, I'd fund a Novamente team, but I'd also give some
cash to you, Peter Voss, Pei Wang, and others with interesting AGI projects.
Unfortunately for us all, however, this is not the case!!!
> But with all due respect,
> Novamente seems to be constructed out of ideas that I had at one point or
> another, but which I looked at and said: "No, it's not that easy. This
> problem is harder than that - this method will work for small
> problems, but
> not for big problems; it's not good enough for real AI."
I can believe that various parts of Novamente are *somewhat similar* to
things that you (and many others) studied and dismissed in the past.
However, I do not believe that you have previously proposed the detailed
ideas inside Novamente and then rejected them.
I know from your comments on ANNs on this list a few months ago that you
don't have the depth of knowledge of ANNs to have proposed the
ANN/nonlinear-dynamics aspects of Novamente.
Similarly, your confusion about the relation between probabilistic logic &
term logic, in a recent e-mail, indicates to me that you don't have the
depth of knowledge of that area to have proposed those aspects of Novamente
in the past.
Did you, at some point in the past, propose to use combinatory logic to
represent complex procedures in an inference-, association- and evolution-
friendly way? I really doubt it.
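For concreteness: the appeal of combinatory logic here is that it eliminates
variables entirely, so any lambda-expressible procedure becomes a plain tree
of primitive combinators -- and variable-free trees are easy for inference,
association, and evolutionary operators to cut up and recombine. A minimal
illustrative sketch of the textbook S/K basis in Python (just the standard
combinators, not Novamente's actual representation):

```python
# Illustrative only: the classical S/K combinator basis, not
# Novamente's actual procedure representation.  S and K alone
# suffice to encode any lambda-expressible procedure as a
# variable-free applicative tree.

def S(x):
    # S x y z  ->  (x z) (y z)
    return lambda y: lambda z: x(z)(y(z))

def K(x):
    # K x y  ->  x
    return lambda y: x

# The identity combinator need not be primitive: I = S K K,
# since S K K z -> (K z)(K z) -> z.
I = S(K)(K)

print(I(42))        # prints 42
print(K("a")("b"))  # prints a
```

The point is only that procedures reduce to pure applicative trees; whether
that buys you tractable procedure learning is, of course, exactly the kind
of thing reasonable people can disagree about.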
Frankly, most of the components of Novamente are *somewhat similar* to
things that *I* studied and dismissed in the past.
And then I came to believe that by improving these various components
appropriately, and integrating the improved version appropriately, I could
make a system that could become an AGI.
I did not start with these component technologies. I began with a
high-level philosophical vision of an AGI, and then basically filled in the
various "slots" required by this vision, with appropriately improved
versions of existing technologies. My first attempt at a system like this
(Webmind) was a bit too much of a hodge-podge, but Novamente has a lot more
simplicity and coherence, which I like.
You are very right that none of the component technologies of Novamente
will, on their own, work for big problems. Our belief is that by *piecing
together unscalable technologies in the right global architecture*, one can
arrive at a system that *will* work for big problems.
I understand that you don't share this belief. But you should understand
that the Novamente design is in no way refuted by the observation that one
or another of the component technologies, on its own, is not scalable or is
not suitable as an AGI.
> To me it looks
> like Novamente is going to try for real AI and go splat.
> just not that
What baffles me is not the fact that you hold this opinion, but the
incredible air of certainty with which you make such a statement. You *may*
possibly be right, but there's just no way you can *know* this!!
> are welcome to believe that the problem of creating true intelligence is
> enormously smaller than I think it is, and that enormously less complexity
> is needed to handle it, in which case I'm sure it makes sense for you to
> criticize me on the grounds of not having flung myself at the
> problem yet.
I think that a true intelligence needs to be an *incredibly* complex system,
but I think that much of this complexity can be made to *emerge* from an
appropriately structured AGI framework. I don't think that a very high
percentage of the complexity of an AGI has to be *explicitly* there in the
initial design.
> From my perspective, it is very easy and tempting to start
> implementing an
> inadequate design, but futile.
Well, yes, if you don't have a design that you believe is adequate, it may
not make sense for you to implement anything.
On the other hand, as you surely realize, it's also possible you would learn
something by experimenting with implementing a design that was not adequate
for the grand goal.
> You have been known, from time to time, to remark on my youth and my not
> having running AI code, which I consider to be "cheap shots" (i.e., taking
> the easy way out),
Sorry if I seemed to be taking cheap shots at you; of course that was never
my intention.
Of course, your age and your inexperience are not important points. The
important thing is the quality of your ideas.
So far, I find the quality of your ideas in the *philosophy of AI* to be
quite high.
I'm eager to see you make the transition from philosophy to AI system
design.
> so let me take what I fully acknowledge to be a cheap
> shot, and ask whether either Novamente or Webmind have done
> anything really
> impressive in the realm of AI? If you have so much more
> experience than I,
> then can you share the experiences that lead you to believe Novamente is a
> design for a general intelligence, rather than (as it seems to me) a
> pattern-recognition system that may be capable of achieving
> limited goals in
> a very small class of patterns that are tractable for it?
I am not going to turn this into a 100 page e-mail. I'd rather spend the
time working on Novamente or improving our systematic exposition of it.
But I don't want to "cop out" so I will say something.
As you know, we have not yet created an AGI. We have created software
systems (Webmind & Novamente) that have decent achievements in various
"narrow AI" domains such as text processing, financial prediction, and
biological data analysis. I think it's very cool that we have one design
that can excel in all these areas, but this kind of "multi-area narrow AI"
is different from AGI.
With Webmind, we ran lots of experiments showing how putting different AI
components together caused dramatic improvements in efficiency and
scalability. I really don't want to run through all the details tonight,
it's time for bed...
We ran some simple "experiential interactive learning" experiments, but they
were more at the level of a "digital roach" than a "digital doggie"... and
then we ran into nasty performance problems with the Webmind software
architecture (since remedied in the Novamente architecture; these were
implementation problems, not scalability problems in the underlying AI
algorithms).
So, nope, we have no more experience with "real AGI" than you or anyone else
on the planet. No one has built one yet.
Whether the various experiences we've had experimenting with our AI systems
have been of any value for AGI or not, I suppose time will tell. So far our
feeling is that they HAVE been valuable.
It is certainly not the case that we fully implemented our AI design, tried
it out as an experiential learning system, and then found that it didn't
work except for simple pattern recognition tasks. Sadly enough, we never
finished implementing Webmind. We spent a lot of time applying the partial
versions to various business-related data analysis tasks, and we found that
we'd been *really fucking dumb* to implement the system as a Java
distributed agents system, because the performance with a large number of
nodes and links in the system was just terrible. On the other hand, we
learned a hell of a lot that seems valuable to *us*, even if it doesn't seem
valuable to you. We learned a lot about how to make the various component
technologies we're interested in work well together, so as to drastically
accelerate each other's scalability and intelligence. And we observed
plenty of interesting Novamente-relevant dynamical phenomena. The list of
all these detailed lessons would run to hundreds of pages, and although the
book draft you read didn't have enough of such "practical lessons" in it, a
later draft eventually will.
> It looks
> to me like
> another AI-go-splat debacle in the making.
Yeah, I think you've probably repeated that often enough, Eliezer.
I know that is your opinion, and everyone else on this list (if they have
bothered to read all these messages) does as well.
I think you're as wrong as wrong can be on certain points ...
You think I'm as wrong as wrong can be on certain points...
And neither of us can prove ourselves right.
So there's no real use to continue repeating it over and over, is there?
As I said, I think this level of disagreement is pretty natural in a field
where there is so little hard knowledge to go on, so that projects need to
be guided largely by intuition.
And let us not lose sight of the fact that, compared to most of the AI
community, and most of the people on the planet, we agree almost totally on
almost everything!!! ;->
> Why do it? Why make all these lofty predictions? When SIAI
> starts its own
> AI project, we aren't going to be telling people we'll have a
> [whatever] in
You know, I think I've actually gotten over that mistake, which I'll admit I
was making 4-5 years ago.
We are not making any promises about when we will have a human-level AGI.
The time to complete *engineering* the current Novamente design can be
predicted with some degree of accuracy.
But the time to tune all the parameters to get the thing to work right, and
to teach the thing anything meaningful, etc., is very hard to predict --
*even if the design is right*. Nothing like this has ever been done before.
> Right now it looks to me like, in another few years, I'm going to
> be dealing
> with people asking: "Yeah, well, what happened to the Novamente project
> that promised us transhuman seed AI, and (didn't pan out) / (turned out to
> be just a data-mining system)?" And I'm going to wearily say, "I predicted
> in advance that would happen, and that in fact I would end up
> answering this
> very question; here, let me show you the message in the SL4 archives."
Eliezer, you could be right. Time will tell.
I'll tell you one thing though. It's sure pretty easy to talk to people
trying to do ambitious things and tell them "That won't work! You've
underestimated the difficulty of the problem!"
If you say that to 10 people trying to actually build AGI right now, you're
bound to be right in at least, say, 8 or 9 of the cases. In which case
you'll come out looking like you're pretty clever -- 80% or 90% prediction
accuracy!
> You keep saying that I ought to just throw myself into design, as
> if it were
> an ordinary problem of above-average difficulty, rather than a
> critical step
> along the pathway of one of the ultimate challenges.
The fact that something is a critical step along a very important pathway
tells you ABSOLUTELY NOTHING about how difficult it actually is.
> In the first chapter
> of your manuscript you casually toss around the terms "seed AI" and
> "transhuman intelligence" as if they were marketing buzzwords. You don't
> present it as a climax of a long, careful argument; you just toss
> it in with
> no advance justification. It's like you first claimed that
> Novamente could
> do general intelligence because that was the most impressive thing you'd
> heard of, and once you heard about the Singularity you decided to add that
> as a claim too.
Eliezer, I gave a lot more extensive discussions of the future of technology
in Creating Internet Intelligence, and there will be still more in my
forthcoming book The Path to Posthumanity.
The focus of the manuscript you read was on the Novamente AI design, not on
transhumanist philosophy or even philosophy of mind. An extended discussion
of such matters would have been out of place there. The book manuscript I
gave you has many flaws, but I don't think the lack of an extended
discussion of transhumanist and Singularitarian philosophy is one of them.
In fact, in that chapter, I reference my own previous writings on related
topics, and yours as well, which I think is the appropriate thing to do in a
book with a different focus.
In any case, the good or bad qualities of my writing style are not all that
relevant to the quality of my AI design. You may try to draw parallels, but
I think you're reeeeeeallly stretching.
Also, your speculations as to my personal motivations for introducing these
concepts are pretty far off. My interest in real AI as a world-changing
technology far predated my work on Webmind or Novamente.
In fact, I chose to work on AI, in the late 1980's, because, out of all the
advanced technologies I could think of, it seemed like the one most likely
to create a huge impact. I also considered time travel research, but I
figured that real results in that area were too far off. I considered life
extension, genetic engineering, and brain science. But I figured that I'd
be better off creating an AI that could become generally intelligent -- and
then figure out all these other areas of science way better than my measly
human brain ever could.
So my desire, for many years before I started working on Webmind, let alone
Novamente, was to create an AI that would be much smarter than me,
especially in the domains of science and engineering. The idea of
intelligence-increasing self-modification was very familiar to me from
various speculative futurist stuff I'd read.
Was I thinking about the Singularity back then, in exactly the terms in
which Kurzweil is discussing it now? No, and I'm still not a 100%
Singularity true believer like you are; I still consider it reasonably
possible that technological advance will flatten out for a while for some
reason we can't see yet.
But I was thinking about superhuman intelligence, and AI as a way to vastly
accelerate general scientific and engineering progress and enable life
extension -- WAAAAAY before I had any specific ideas about AI design. These
general desires were why I started designing Webmind & Novamente in the
first place.
It's been amusing to me to see the sci-tech community sloooowly catch up
with me -- so that now, newbies such as yourself can pop up and accuse *me*
of stealing these ideas which I've been nurturing for so long!!! ;->
> Lenat can claim that Cyc is a design for a real
> AI. Newell
> and Simon can claim that GPS is a design for a real AI. It doesn't mean
> that you've gotten started coding a seed AI and I haven't. It means that
> you have a much lower threshold for accepting what looks to you like a
> probable solution.
I don't think I have a *lower threshold*, I think I just have a *different
intuition* as to what a seed AI should look like.
Maybe when (if ever ;) you present your design to me, I'll think that it
doesn't look like a viable design! Then I'll be able to tell you that YOU
have a "lower threshold" ;->
> And I'll admit I'm annoyed, and I'm even
> more annoyed
> that you're using the term "seed AI"
"Seed AI" is a convenient term; but if I knew my use of it was going to
annoy you, I would have coined a different term for the same thing. I had
no desire to annoy you via my choice of wording, of course.
Although I didn't know the phrase back then, I was thinking about how to
achieve "seed AI" back when you were in diapers, dude!! :>
Anyway, I wish you could find a better way to expend your emotional energy,
than being annoyed at someone who agrees with you more than 99.99995% of the
people on Earth.
I wish you could accept that reasonable, intelligent, knowledgeable people
can have different intuitions about which AGI designs are viable or not.
Because there just ain't enough evidence for any of us to prove that we're
right and the other guys are wrong!
But hey, in the end, we're all just pathetic little meat machines, right?
Nobody's perfect -- yet ;>
One more thing. I have spent waaaaaay too much time e-mailing to this list
over the last few days. I type fast, but not *that* fast. I need to get
more work done!! (And I need to get my fucking car fixed; it was broken
into and the radio ripped out of the dashboard on Friday ;-[ ) So if your
e-mails in the near future get briefer replies than this one, this is why.
It's not boredom with the dialogue, just a reflection of the amount of
other things besides e-mailing that I have to do right now.
-- ben g
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT