From: Richard Loosemore (email@example.com)
Date: Sat Jun 03 2006 - 13:35:48 MDT
Keith Henson wrote:
> I consider the full scale simulation of every nerve cell to be a worst
> case "proof of principle." It may take a lot less, but if you accept
> that human brains have intelligence, then a sufficiently detailed
> simulation of one should also exhibit the property.
> Last time I did the calculation, considered at the cortical column level
> and requiring one square cm of silicon to duplicate the function of a
> column, it came out to be about 150 m of silicon on a side and ate the
> output of a substantial power plant.
> Keith Henson
Understood, but as I said before, I think it is spurious and misleading
to talk about upper bounds of this sort when we have absolutely no idea
whether it is necessary to duplicate the architecture down to the neuron
level. And especially when some people in the field have reasons to
believe that the real requirements are several orders of magnitude more
modest. [See below]
I really don't mean to be so critical of your particular calculation,
but what usually happens after someone mentions these brain calculations
is that a huge amount of effort and discussion time is invested in them,
and before you know it everyone is talking in terms of putting dates on
the arrival of AGI, on the basis of what computer power is needed to do
it, or (even worse), potential investors are *citing* these calculations
as evidence that they should not invest because we are not there yet.
P.S. My Calculation
Somebody is bound to ask, so here is a sketch of the basis for my own calculation.
Approximate number of cortical columns: 1,000,000. If each of these is
hosting a single concept, while also providing a facility for moving
the concept from one column to the next in real time (to allow concepts
to make transient connections to near neighbors), then most of them may
be there just for liquidity purposes: imagine a Chinese sliding-block
puzzle on a large scale, where more empty spaces mean more potential
for the blocks to move around, which means greater liquidity. So, the
number of simultaneously active processes may be as few as 10,000 (a
conservative estimate based on considerations that originate in the
richness of the sensorium), not the full 1,000,000.
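The liquidity analogy can be played with in a toy sliding-block simulation (my own hypothetical illustration, not part of the argument above): on a randomly filled grid, the fraction of blocks that can move at all grows quickly with the fraction of empty cells.

```python
import random

def movable_fraction(side, empty_frac, trials=20, seed=0):
    """Estimate the fraction of blocks that have at least one empty
    neighbour (i.e. can move) in a side x side sliding-block grid,
    averaged over several random placements of the empty cells."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(side) for c in range(side)]
    n_empty = int(empty_frac * len(cells))
    total = 0.0
    for _ in range(trials):
        empties = set(rng.sample(cells, n_empty))
        blocks = [p for p in cells if p not in empties]
        movable = sum(
            1 for (r, c) in blocks
            if any((r + dr, c + dc) in empties
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        )
        total += movable / len(blocks)
    return total / trials

# More empty cells -> a larger share of blocks is free to move at all:
low = movable_fraction(30, 0.01)    # 1% of cells empty
high = movable_fraction(30, 0.10)   # 10% of cells empty
print(f"1% empty: {low:.3f}   10% empty: {high:.3f}")
```

The point of the analogy is only this: even if a small fraction of columns is "active," the rest can earn their keep by making movement easy.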
Further suppose that the function of concepts, when active, is to engage
in relatively simple interactions with neighbors in order to carry out
multiple simultaneous relaxation along several dimensions. When the
concepts are not active they have to go through different sorts of
calculations (debriefing after an episode of being used), and when they
are being activated they have to (effectively) travel from their home
column to where they are needed. Considering these "other" computations
together, we notice that the same hardware may implement multiple
functions that do not need to be simultaneously active.
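As a toy illustration of what "relatively simple interactions with neighbors" carrying out relaxation might look like (a hypothetical sketch with arbitrary numbers, not the actual architecture): a ring of concept nodes, each repeatedly averaging its value with those of its two neighbors, settles into a mutually consistent state.

```python
# Hypothetical sketch: N concept nodes on a ring, each holding a value
# along one dimension. Each step, every node moves halfway toward the
# average of its two neighbours -- a very simple local interaction.
N = 16
values = [float(i % 4) for i in range(N)]   # arbitrary starting state

for step in range(500):
    nxt = []
    for i in range(N):
        neighbour_avg = (values[i - 1] + values[(i + 1) % N]) / 2.0
        nxt.append(0.5 * values[i] + 0.5 * neighbour_avg)
    values = nxt

spread = max(values) - min(values)
print(f"spread after relaxation: {spread:.6f}")   # nodes agree closely
```

The per-node computation is trivial; whatever apparent complexity there is lives in how many nodes exist and how they are wired together.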
Now, all of the above functions are consistent with the complexity and
layout of the columns. Notice that what is actually being computed is
relatively simple, but because of the nature of the column wiring it
takes a good deal of neural machinery to implement those functions ... so
the columns look computationally demanding, but when implemented in
silicon the functionality is not nearly as difficult.
Finally, when implementing the 10,000 processes in silicon, take account
of the relative clock speeds and you can probably simulate 100 to 1000
processes simultaneously if you use FPGA hardware (say, one of the
Celoxica boards that Hugo de Garis is making such good use of). This
depends on the complexity of the function, and on the bandwidth
requirements.
That gives you a computational requirement of between 10 and 100 desktop
machines with one $6,000 FPGA card in each one. Obviously that is just
the cognitive core: you'd need peripherals as well. Assuming 100
machines rather than 10, and another fifty equivalent for peripherals,
that would be an approximate AGI cost of $1 million.
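The arithmetic behind that figure can be laid out as a quick sketch (the per-host machine cost is my own hypothetical assumption; the other numbers come from the paragraphs above):

```python
# Back-of-envelope version of the figures above. Process counts and
# card cost come from the text; machine_cost is a hypothetical guess.
active_processes = 10_000
processes_per_card = 100       # conservative end of the 100-1000 range
cognitive_machines = active_processes // processes_per_card   # -> 100
peripheral_machines = 50       # the "another fifty equivalent"
card_cost = 6_000              # dollars per FPGA card (from the text)
machine_cost = 1_000           # hypothetical cost of each host desktop

total = (cognitive_machines + peripheral_machines) * (card_cost + machine_cost)
print(f"{cognitive_machines + peripheral_machines} machines, total ${total:,}")
```

which lands at roughly the $1 million figure quoted above.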
Ten years ago, IBM had enough power to do that comfortably, hence my
assessment that we were already there ten years ago.
I am sure someone will query the odd architecture implied in the above:
take it from me that I am not pulling this out of a hat, but just
using one possible architecture that I have been working on. No way I
can explain the background to that, but I hope I have given enough to
show that the numbers have some basis.
Strangely enough, Ben and I come to similar conclusions about hardware
requirements (I believe that is right: correct me if I am wrong, Ben),
even though we are coming from very different directions. That might
only mean we are optimists of similar stripe.
And finally: that is only the *hardware*, folks!  Don't even begin to
get me started on the software that we would need in order to make use
of the hardware.
Oh, yeah, of course, I already got myself started :-) on the software
issue, because that is what I talked about (or would have, but for all
the interrupting questions ;-() at the AGIRI workshop a couple of weeks ago.
[I have crossposted this to the AGI list because I think there might be
interest there also].