From: Ben Goertzel (firstname.lastname@example.org)
Date: Mon Apr 01 2002 - 22:20:44 MST
> Well, you've told me a bunch of times that you don't think I understand
> how difficult AI is. Hopefully you will not be offended if I say the same
> in return; I have looked upon the functional decomposition of intelligence
> and I don't think you realize how huge it is. I know that you think it is
> a lot larger now than you used to think when Webmind was getting started,
> and that therefore you *think* you have learned how huge intelligence is,
> but I don't think you've seen all of the challenge or even most of it.
Actually, our estimate of the size of the problem of creating an AI peaked
some time ago. Throughout 2001 we kept discovering how to reduce various
mind functions to special cases of a moderately small set of mind functions,
in what I think is a pragmatic way.
I think one's estimate of the size of the problem of implementing AI depends
largely on how much functional specialization one is willing to build in, as
opposed to "being encouraged to emerge."
I spent about 15 years carefully studying the nature of human and general
intelligence, before trying to create a detailed software design for a Real
AI. I think this study period gave me a decent sense of the magnitude of
the problem. But the complexity of intelligence doesn't directly translate
into implementation complexity, due to the emergent aspect of intelligent
systems.
> This is humanity's last software project. It is one that has defeated
> some very smart people for fifty years, and I haven't yet seen anything
> that indicates Novamente has overcome it. This software project *is*
> *the* *Singularity* and it would be foolhardy to expect it to be easy.
Yes, I agree, creating Real AI is not easy. But this year it feels easier
than last to me, based on my concrete work simplifying and systematizing the
Novamente design.
> It would be foolhardy to expect to crack the barrier if we hit it with
> anything less than full force. It is *disrespect for the job* to start
> out by saying, "And let's do this on a part-time volunteer basis with F2F
> contact once a month," instead of "If the programmers were all MARRIED it
> might still not be enough; the least we can do is all work out of the
> same office."
I think this is a rather silly statement.
I do not agree at all that it is "disrespectful of the task of creating Real
AI and bringing about the Singularity" for me to have different ideas about
software project planning than you do.
I understand that your opinions on this topic are strongly felt and have
reasons underlying them. But the fact is, you have never led either an
all-in-one-place development team *or* a substantial broadly-distributed
development team. This doesn't mean your intuitions are wrong, of course.
But it does make me less inclined to accept them.
Perhaps your preferences say more about how YOU are comfortable working,
than they do about how such projects must be managed in general.
> understanding has improved to the point where I now think I can see how to
> create seed AI with one midsized nonprofit, instead of a Manhattan Project
> or industrywide effort as I once thought, but to carry out the Singularity
> as a one-man job or with a part-time volunteer crew is still unreasonable.
My view is: It might take 100 part-time volunteers to do the job of 10
full-timers, but if the 100 volunteers are available and the 10 full-timers
are not, then I'd take them.... Of course the management overhead is much
worse in the "100 volunteers" case, but is it really *unreasonable*? I
don't see why.
Personally I am aiming for a "20 full-timers plus 50 part-time volunteers"
setup, with a couple of the full-timers devoted to coordinating the efforts
of the volunteers. Needless to say I am far from having 20 full-timers or
50 part-time volunteers on the project at this point... but we'll get there.