Re: AGI project planning

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon Dec 05 2005 - 11:17:08 MST


Ben Goertzel wrote:
> I estimate that with a 3-5 year development effort by a team
> of 5-8 expert AI programmer/scientists (like my current
> Novamente development team), we could have a human-level
> "artificial child", running on a ~200-500 PC cluster.
> Another 5-10 years could viably yield an adult-level
> "artificial scientist" potentially able to launch a
> Singularity.

Ben, I've done a fair amount of this kind of prediction myself,
both for challenging but conventional software engineering tasks
and for AGI. I'm fairly sure that the tools and knowledge I have
to help me make these predictions are roughly comparable to the
ones you have, and there is just no way you can predict anything
but a near-best case. You can plan out how you think things will
go if your theory is all correct and you hit no major problems,
and you can guess at how long it might take people to implement
things if they turn out to be comparable to other problems you've
solved in the past, but you simply can't make strong predictions
that specific things will work in specific time periods given X
resources. I'm guessing that your above resource requirements
meet a minimum level of comfort for you, where the problem starts
to /feel/ achievable (e.g. there is a staff member assigned to
each tricky area, the computational requirements meet a simple
extrapolation from test problems times a fudge factor, whatever).
But this is not a well-justified statement that 'given resources
X, we can do Y as long as there are no major disasters'. Back
when I was doing normal software engineering, I prided myself on
being able to make and uphold those kinds of predictions when
others couldn't, but AGI is just /different/. Even given sound
theory (which is hard enough to start with), all projects face a
flood of difficult, subtle problems each of which can block
progress for months, years, or indefinitely. I don't
think even Eliezer would claim that you can foresee them all,
and you just can't make strong predictions about the results of
trying to traverse that kind of minefield.

Based on your past statements I'm pretty sure you're well aware
of all that, Ben, and would acknowledge your statement as hopeful
optimism; e.g. 'given the above, I feel we would have a
nontrivial chance of success...', which is fine. I'm just
reminding everyone else, if they haven't already concluded as
much from the sheer number of people who've said something like
'given enough funding to make my project feel serious and well
equipped, we can probably build an AGI in X years' and failed
miserably, not to take these kinds of claims as serious
predictions. It is unfortunate that the funding environment
strongly rewards those who claim 'Yes, I can definitely build
an AGI in X years given Y million dollars' and sound convincing
about it; it encourages both scams and honest self-delusion. I
am glad that so far I have been able to raise funding without
having to make such claims.

 * Michael Wilson




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT