Re: Singularity awareness (no news here)

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 02 2006 - 14:14:00 MDT


> "In my estimate, the implementation in neural hardware is not necessary,
> and I believe that we have the hardware to do it now. In fact, I think
> we have had that hardware for about a decade. That is a rough estimate
> based on my understanding of human cognitive architecture, and my take
> on what the design of the first successful AGI will be like."
>
> I'm curious, what qualifies you guys to make such a statement?
> Don't get me wrong, if it's just the product of your own
> "uneducated" homegrown think tank, that's quite fine, but one could also
> get the impression that you're working in/educated in the field of CS/AI
> and/or an aspect of the human psyche.
> Just curious :o)

Hi,

That statement was not made by me, but I almost agree with it, except
for the "we have had that hardware for about a decade" part.

However, even though I have a math PhD and plenty of AI and CS
experience in industry and academia, and some cognitive science
experience in academia as well -- I must admit that my optimism about
the hardware requirements for AI is largely based on intuition.... We
don't really know.

-- Ben
>
>
> On 6/2/06, Richard Loosemore <rpwl@lightlink.com> wrote:
> >
> > If I may: this seems to be an example of what has come to be a standard
> > calculation of <When Gate-Count Will Equal Neuron-Count>. People come
> > up with different numbers, of course, but lots of people do the same
> > calculation.
> >
> > Now, you point out that this is only some kind of upper bound, and that
> > it may not be as important as (e.g.) architecture ... but to my mind
> > this kind of calculation is a *complete* distraction, telling us almost
> > nothing but making us think that it means something.
> >
> > In my estimate, the implementation in neural hardware is not necessary,
> > and I believe that we have the hardware to do it now. In fact, I think
> > we have had that hardware for about a decade. That is a rough estimate
> > based on my understanding of human cognitive architecture, and my take
> > on what the design of the first successful AGI will be like.
> >
> >
> > Richard Loosemore
> >
> >
> >
> > Keith Henson wrote:
> > >
> > > [Reposted from another list with permission. Nothing new, but an
> > > indication that the local topics are being discussed elsewhere -- Keith
> > > Henson]
> > >
> > >> Date: 31 May 2006 10:35:31 -0800
> > >> From: dan miller <danbmil99@yahoo.com>
> > >> Subject: Re: Moore's Law and AI (Real or Artificial Intelligence): was
> > >>
> > >> (the following is offered as a stimulus for discussion and debate; I'm
> > >> not claiming it's scientifically rigorous)
> > >>
> > >> I think it's possible to put forward a somewhat reasonable estimate of
> > >> computing power necessary to roughly equal human-level intelligence.
> > >> If we look at a typical insect, which has on the order of 20,000 -
> > >> 200,000 neurons (I know, not all neurons are created equal, but this is
> > >> back-of-the-envelope) -- we can ask ourselves, how does this setup
> > >> compare, in terms of "intelligence", to a silicon-based machine that
> > >> has similar capabilities?
> > >>
> > >> [caution: arm-waving begins]
> > >>
> > >> I conjecture that a typical DARPA GC vehicle represents a similar level
> > >> of complexity in terms of its ability to sense, react, and (to a
> > >> degree) plan its behavior within its environment.  Clearly there are
> > >> many differences, but I'm pretty sure it's within an order of magnitude
> > >> one way or the other.
> > >>
> > >> The GC vehicles were designed as one-off prototypes, so the technology
> > >> used was not highly optimized for low cost, power consumption, etc.
> > >> CMU and Stanford both used about half a dozen powerful PCs each; but
> > >> it's obvious to me that optimizations, including special-purpose chips
> > >> or FPGAs, could reduce that requirement by at least an order of
> > >> magnitude.
> > >>
> > >> So conservatively, a present-day-class, 2+ GHz Pentium-based computer
> > >> is capable of emulating the functional capabilities of something like
> > >> an ant.
> > >>
> > >> So ~2 x 10^9 instructions/sec ~== 20K neurons; one neuron ~= 10^5
> > >> instructions/sec.
> > >>
> > >> Humans have on the order of 10^11 neurons; 10^11 * 10^5 = 10^16
> > >> instructions/sec.
> > >>
> > >> After sketching this out, I looked up Hans Moravec's estimate, which
> > >> is 10^14 instructions/sec.  I guess he's planning to write his neuron
> > >> simulators in assembly code.
> > >>
> > >> My engineer's gut tells me this estimate is an upper limit, and that
> > >> appropriate special-purpose hardware would make the right sort of
> > >> computational horsepower attainable at reasonable cost within 10 to 15
> > >> years.
> > >>
> > >> It's interesting to note that if the typical guesses are correct,
> > >> Google is just about at this level of computational ability.
> > >>
> > >> None of this is meant to suggest that the architecture isn't more
> > >> important than the gate count; but it's nice to have some likely upper
> > >> bounds on what kind of performance you might need to get to serious AI.
> > >>
> > >> - -dbm
> > >
> > >
> >
> >
>
>
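
For concreteness, dan miller's arithmetic above can be checked with a few
lines of Python -- a minimal sketch, where the neuron counts and instruction
rates are just his stated assumptions, not measurements:

    # Sanity check of the back-of-the-envelope estimate quoted above.
    # All inputs are dan miller's assumptions, not measured values.

    insect_neurons = 2e4   # low end of the 20,000 - 200,000 insect range
    pc_ips = 2e9           # ~2 GHz Pentium, roughly 2e9 instructions/sec

    # Per-neuron throughput implied by the "ant ~= one Pentium" equivalence
    ips_per_neuron = pc_ips / insect_neurons    # ~1e5 instructions/sec

    human_neurons = 1e11   # order-of-magnitude human neuron count
    human_ips = human_neurons * ips_per_neuron  # ~1e16 instructions/sec

    moravec_ips = 1e14     # Moravec's published estimate for the human brain

    print(f"per neuron:  {ips_per_neuron:.0e} instructions/sec")
    print(f"human brain: {human_ips:.0e} instructions/sec")
    print(f"vs. Moravec: {human_ips / moravec_ips:.0f}x higher")

Running it reproduces the 10^16 figure, two orders of magnitude above
Moravec's number -- exactly the gap dan miller jokes about above.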


