From: James Rogers (firstname.lastname@example.org)
Date: Mon Apr 09 2001 - 17:09:40 MDT
At 02:37 PM 4/9/2001 -0700, Dani Eder wrote:
>My rough estimate of human-level cpu power is
>10^11 neurons x 10^4 synapses x 100 Hz = 10^17 bits/s
>= 3000 Tflops.
This doesn't seem quite right. You are basically describing something that
can do 10^17 ops on 10^15 data structures per second. The big problem with
this is that it would require an obscenely fast memory bus (say 8 megabits
wide at 8 terahertz) to feed the processor core even in the best-case
scenario. If we had the technology for that kind of memory bus, the memory
required (which would fit snugly in a 64-bit address space) and the
processor core would already be solved problems. Massively distributed
computing won't work either, because the effective memory bandwidth is so
low that the actual throughput will be orders of magnitude less than
suggested by simply aggregating the abilities of individual processors.
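The arithmetic behind that bandwidth objection can be sketched as follows. The bits-per-update figure is my own illustrative assumption (not from either post), chosen only to show the scale at which the numbers land:

```python
# Back-of-envelope check of the figures above (assumptions labeled).
neurons = 1e11           # Eder's neuron count
synapses_per_neuron = 1e4
rate_hz = 100            # assumed update rate per synapse

synapses = neurons * synapses_per_neuron   # 1e15 data structures
ops_per_s = synapses * rate_hz             # 1e17 synapse updates/s

# Assume each update touches ~640 bits of state (weight, activation,
# connectivity) -- an illustrative guess, not a measured value.
bits_per_update = 640
bus_bits_per_s = ops_per_s * bits_per_update   # bits/s of memory traffic

# For comparison: an 8-megabit-wide bus clocked at 8 THz.
bus_8mbit_8thz = 8e6 * 8e12                    # bits/s

print(f"updates/s:       {ops_per_s:.0e}")
print(f"traffic needed:  {bus_bits_per_s:.1e} bits/s")
print(f"8 Mbit @ 8 THz:  {bus_8mbit_8thz:.1e} bits/s")
```

Even with a far leaner 64 bits per update, the required traffic is ~6×10^18 bits/s, several orders of magnitude beyond any realistic memory bus of the era, which is the crux of the objection.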
That said, I think a well-engineered AI solution optimized for silicon
would require substantially less hardware than the estimate above suggests.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT