From: Ben Goertzel (firstname.lastname@example.org)
Date: Tue Mar 12 2002 - 07:51:18 MST
> Intelligenesis broke out of
> the one-AI one-theory straightjacket, which had previously held for *general
> intelligence* projects (e.g. Cyc) even if it was occasionally violated by
> more pragmatic robotics architectures and so on. Correspondingly, it
> broke out of the AI-as-single-algorithm straightjacket, not so much because
> any individual researcher had a picture of AI as a supersystem, but because
> all the different researchers thought that AI was composed of different
> systems. In combination, all the ideas added up to a much bigger idea than
> any previous single AI researcher had ever had for general intelligence.
Yes, I agree with this, pretty much.
Actually, most of the researchers involved DID believe in building an AI as
a supersystem.
But there were many different ideas of how this supersystem should be. For
instance, Pei saw it as having inference at the center and other things at
the periphery. Shane and Youlian (roughly like Peter Voss) thought it had
to be founded on a neural-net-like dynamic. Etc. etc.
> Of course I expect my visit to Webmind played a
> larger role in my week than it did yours, and hence looms larger in my
> memory.
yeah -- for most of the staff it played the role of light entertainment ;>
I was quite happy to get a chance to meet with you & talk in person.
> The two most important questions, from my perspective, are: (1): Now that
> you're working with the Novamente approach, did you learn from
> Intelligenesis *how* to build supersystems, or did you just learn about a
> supersystem that will become a new cul-de-sac for you?
That's an interesting question. We certainly learned a lot *intuitively*
about how to build AI supersystems. We do not have a scientific,
systematized understanding of how to build AI supersystems.
However, last month I spent some time working out an alternate,
non-Novamente approach to AI based on neural networks, called Hebbian Logic.
I found that having worked out the basics of this, it was pretty easy for me
to envision how to build a whole AI supersystem founded on Hebbian logic.
Of course, it would take me at least 6 months to work out and write up the
details of this envisioned AI supersystem in a comprehensible way. But, I
guess that this is some evidence that we did gain *some* generic knowledge
about how to build AI supersystems. Cassio and Pei and I did anyway, I
don't know about everyone else ;)
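For readers unfamiliar with the Hebbian dynamic that an approach like the one mentioned above would build on, here is a minimal sketch of the textbook Hebbian learning rule ("neurons that fire together wire together"). This is purely illustrative background; it is not a sketch of Hebbian Logic or of any Webmind/Novamente component, and the function names are my own invention:

```python
import numpy as np

def hebbian_step(w, x, y, lr=0.01):
    """One step of the plain Hebbian rule: each weight grows in
    proportion to the correlation between pre-synaptic activity x[j]
    and post-synaptic activity y[i]:  delta_w[i][j] = lr * y[i] * x[j]."""
    return w + lr * np.outer(y, x)

# Toy usage: two input units driving one output unit.
w = np.zeros((1, 2))
x = np.array([1.0, 0.0])   # only the first input fires
y = np.array([1.0])        # the output fires at the same time
for _ in range(5):
    w = hebbian_step(w, x, y)

# Only the weight from the co-active input has grown; w[0] is now
# approximately [0.05, 0.0].
print(w)
```

The rule is unsupervised and purely local, which is why it is attractive as a foundation for a neural-net-like AI dynamic; a real design would also need some form of normalization or decay, since the bare rule lets weights grow without bound.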
> (2): How much
> intelligence does it take for a seed AI takeoff anyway? The latter one in
> particular has too many internal variables for me to guess it. It could be
> anywhere from human-level intelligence to just above Eurisko.
It is very clear to me intuitively that "just above Eurisko" is not right.
I feel very strongly that the answer is: Human-level or above. Of course I
realize that "human-level" is a pretty vague term. But I think that for the
hard takeoff to happen, one way or another, the seed AI in question has got
to learn or reinvent a lot of computer science theory....
> Your current estimation of me appears to be as someone who'd make a nice
> researcher for Intelligenesis, at least if he could learn to just build his
> own Friendliness system and see what it contributes to intelligence as a
> whole, instead of insisting that everyone do things his way. This is very
> kind of you, and I do appreciate it. But the thing is, I'm not supposed
> to be a typical Intelligenesis researcher. I'm supposed to be the guy who
> takes the project over the "hump" that's defeated all AI projects up to now.
My estimation of you is a lot more complicated than that ;)
I think your achievements as an *AI philosopher* are quite considerable.
As a thinker about *AI design*, I think you have a lot of deep and
interesting ideas; but, as far as I can tell, nearly all of your ideas are
still at a relatively preliminary and theoretical stage. As an AI
designer, you seem to me to be at roughly the same stage as Shane Legg,
Youlian Troyanov, Anton Kolonin and a number of other non-famous
genius-level thinkers I know -- all of whom have deep intuitions about how
to build an AI, and all of whom are engaged in the process of transforming
their intuitions into designs. This is where I was from roughly 1988-1996,
before I coded the first crude (and useless!) Webmind system.
> Now, of course I realize that you haven't seen me in action enough to know
> that I'm any smarter than a run-of-the-mill AI researcher.
Eliezer, I'm sure you are very smart. So am I.
Not surprisingly, there are a LOT of terrifyingly clever people working in
AI. Here are 10 names: Eliezer, Ben, Pei Wang, Jeff Pressing, Anton Kolonin,
Shane Legg, Cassio, Thiago, Senna, Guilherme. ALL of us have genius-level
IQs. ALL of us were the smartest kids on our block, in our whole school, in
all or nearly all of our university classes, etc. etc. None of us are "run
of the mill" computer science researchers in any sense.
If you're asking me to believe that you possess a level of supergenius above
and beyond all us other highly clever individuals -- well, no, I don't
believe it. I'm not totally closed to the idea but I haven't seen you
demonstrate this level of supergenius so far!!
If you're asking me to believe that you have some special insight into
aspects of the AI problem that no one else has -- well, I can believe that a
lot more easily. Einstein for instance had a special insight into physics
that his other -- equally clever -- colleagues did not have. Having a
special insight into some domain is an interesting combination of things:
general intelligence, specialized intelligence in the domain, and a
philosophical/personal bias that in some way matches a given domain at a
particular moment.
> [...] trust AI researchers' arguments until you see them implemented in
> code.
I would also trust rigorous mathematical proofs, to an extent.
> I can solve problem 1 by giving you detailed information about [...]
> (privately, off list), though it will take you many many days of reading
> and asking questions to really get it (it's just a lot of material).
I'll take it. Please send.
We're working on a new draft of our overall design doc; it should be ready
in early April, so I'll talk to you about it then...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT