Human-level software crossover date

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 11 2001 - 18:24:05 MDT


My own opinion? AI and human architectures, not to mention developmental
rates, are so radically different that analogies between hardware are not
of much use. As hinted at in the posted dialogue, I think that all the
vast parallelized caching of the human brain is there because everything
needs to happen within a handful of serial steps.

Since I've said that software will be the limiting factor, a more
interesting question, to me, is how much software is contained in the
human brain. My current guess on the decomposition of the human brain
goes something like this:

From the MIT Encyclopedia of the Cognitive Sciences, "Columns and Modules"
(written, of course, by William Calvin):

"The cerebral cortex sits atop the white matter of the brain, its 2mm
thickness subdivided into about six layers. Neurons with similar
interests tend to cluster. Columns are usually subdivisions at the
submillimeter scale, and modules are thought to occupy the intermediate
millimeter scale, between maps and columns. Empirically, a column is simply
a submillimeter region where many (but not all) neurons seem to have
functional properties in common. They come in two sizes, with separate
organizational principles. Minicolumns are about 23-65 um across, and
there are hundreds of them inside any given 0.4-1.0 mm macrocolumn."

"Each cerebral hemisphere has about 52 "areas" distinguished on the basis
of differences between the thickness of their layers; on average, a human
cortical area is about half the size of a business card. Though area 17
seems to be a consistent functional unit, other areas prove to contain a
half-dozen distinct physiological subdivisions ("maps") on the
half-centimeter scale."

[Eliezer's note: It's called a "map" because connections between maps
usually preserve topological properties - i.e., if map A is interconnected
to map B, then neurons that are near each other in A project to neurons
that are near each other in B, and so on. So the default assumption is
that maps, wherever they appear, implement some kind of sequential
processing stage.]

"It now appears that a column is like a stalk of celery, a vertical bundle
containing axons and apical dendrites from about 100 neurons (Peters and
Yilmaz 1993) and their internal microcircuitry."

MITECS, "Cerebral Cortex":

"Imagine the crumpled sheet expanded to form a pair of balloons with walls
2.5 mm thick, each balloon with a diameter of 18 cm and a surface area
close to 1000 cm^2. The pair weighs about 500 grams, contains about 2 x
10^10 cells connecting with each other through some 10^14 synapses, and
through a total length of about 2 x 10^6 km of nerve fiber..."

Therefore:

             1 neuron
           100 minicolumn
        40,000 macrocolumn
     (unknown) module
    32,000,000 map
   200,000,000 area
20,000,000,000 cerebral cortex

             1 brain
           104 area
           624 map
     (unknown) module
       500,000 macrocolumn
   200,000,000 minicolumn
20,000,000,000 neuron

   [ All areas in mm^2 ]

  0.00001 neuron
    0.001 minicolumn
      0.5 macrocolumn
(unknown) module
      400 map
    2,000 area
  200,000 brain

Some minor discrepancies here between the first/second sets of numbers and
the third, but it works out close enough for Fermi numbers. Also note
that this is just the cerebral cortex, not the cerebellum and the limbic
system and so on. I'd factor in, e.g., cerebellar chips, but I don't know
how many of them there are. So I'll just hope the cerebral cortex is a
large enough fraction of the brain that the Fermi numbers remain basically
accurate.
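
For anyone who wants to check the multiplication, here's a quick Python
sketch that rederives the first table from the quoted figures. The
per-level multipliers - 100 neurons per minicolumn, 400 minicolumns per
macrocolumn, 800 macrocolumns per map, 6 maps per area, 104 areas - are my
readings of the MITECS excerpts, not exact anatomical constants.

  # Fermi rederivation of the first table; multipliers are approximate
  # readings of the MITECS excerpts, not exact anatomical constants.
  multipliers = [
      ("minicolumn", 100),       # ~100 neurons per minicolumn (Peters and Yilmaz 1993)
      ("macrocolumn", 400),      # "hundreds" of minicolumns per macrocolumn
      ("map", 800),              # ~800 macrocolumns per map (interpolated)
      ("area", 6),               # "a half-dozen" maps per area
      ("cerebral cortex", 104),  # 52 areas per hemisphere, times two
  ]
  count = 1
  print(f"{count:>14,} neuron")
  for name, factor in multipliers:
      count *= factor
      print(f"{count:>14,} {name}")
  # Ends near 2 x 10^10, the MITECS figure for the whole cerebral cortex.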

The "map" and "module" levels seem the most likely targets for
identification with complex functional adaptation. Since the module level
is a hypothesized intermediate level of organization, I don't know how
many modules there are to a map, how many macrocolumns to a module, and so
on. In general, there would appear to be about 800 macrocolumns to a
map. Thus, 50 macros/module implies 16 modules/map, and vice versa.
Frankly, I'm not sure I believe in the whole "module" theory of
organization, but it's what I'm working with here.
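
The one thing that is pinned down is the product: however you slice it,
macrocolumns-per-module times modules-per-map has to come out around 800.
A trivial illustration (the candidate module sizes below are arbitrary):

  MACROCOLUMNS_PER_MAP = 800  # from the Fermi numbers above
  for macros_per_module in (20, 50, 100, 200):  # arbitrary candidate sizes
      modules_per_map = MACROCOLUMNS_PER_MAP // macros_per_module
      print(f"{macros_per_module:>3} macrocolumns/module -> "
            f"{modules_per_map:>2} modules/map")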

If 2% of the genome is useful and 30% of that specifies brain
organization, the brain would be specified by 20M base pairs, or 5
megabytes of data. (Which may seem ridiculous, but remember that the
whole genome is just 750 megabytes.) It doesn't seem likely, given
current theory, that it's 50 megabytes or 500K, so the Fermi numbers look
about right. If modules are complex functional adaptations and there are
40 modules to a map, then this leaves 800 base pairs = 266 amino acids =
200 bytes per complex functional adaptation. I'm not much on the
low-level detail of genetics, so let me know if this doesn't sound right.
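
For the record, here is the arithmetic in Python form. The 3-billion-
base-pair genome size is an assumption (it's what gives the 750-megabyte
figure), and the divisor of 25,000 adaptations is my reading of where the
800-base-pair figure comes from (624 maps times 40 modules, rounded).

  GENOME_BP = 3e9              # ~3 billion base pairs, assumed
  BYTES_PER_BP = 2 / 8         # four possible bases = 2 bits per base pair
  print(f"whole genome:   ~{GENOME_BP * BYTES_PER_BP / 1e6:.0f} MB")  # ~750 MB
  brain_bp = GENOME_BP * 0.02 * 0.30   # 2% useful, 30% of that for the brain
  print(f"brain wiring:   ~{brain_bp / 1e6:.0f}M bp, "
        f"~{brain_bp * BYTES_PER_BP / 1e6:.1f} MB")
  bp_per_adaptation = 20e6 / 25_000    # rounded figures from the text
  print(f"per adaptation: {bp_per_adaptation:.0f} bp = "
        f"{int(bp_per_adaptation // 3)} amino acids = "
        f"{bp_per_adaptation * BYTES_PER_BP:.0f} bytes")
  # -> 800 bp = 266 amino acids = 200 bytes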

Which leaves us with:

1 mind =
100 systems; 100 architectural components; 100 problem domains.

1 system =
5 or 6 major subsystems.

1 subsystem =
40 modules; 40 complex functional adaptations; 40 design components.

** At this point, repetition begins. Complex functional adaptations may
specify an overall architecture that uses some kind of fractal algorithm
for patterning of the lower levels, followed by self-wiring. Code is
written once, then multiply instantiated, possibly self-modified or
self-adapted, and so on. **

1 module/component =
100 macrocolumns; 100 major computational clusters; 100 subtasks.

1 cluster =
400 minicolumns; 400 behavioral elements in a cognitive process.

1 element =
100 neurons; 20 instructions; 1 low-level procedure invocation.

So if the software in the human brain is vaguely analogous to the software
needed for AI, 600 major subprojects and 25,000 function points is the
requirement for coding equivalence (*if* you know *exactly* what you're
doing). 200,000,000 procedure invocations per hundredth of a subjective
second is the requirement for human-equivalent intelligence. Both
requirements look positively conservative by comparison with the usual
run of estimates.
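
Spelling that out (the identification of one minicolumn with one low-level
procedure invocation, and of one module with one function point, is the
assumption doing all the work here):

  systems = 100                  # architectural components / problem domains
  subsystems_per_system = 6      # "5 or 6 major subsystems"
  modules_per_subsystem = 40     # complex functional adaptations per subsystem
  major_subprojects = systems * subsystems_per_system
  function_points = major_subprojects * modules_per_subsystem
  print(f"major subprojects: {major_subprojects}")   # 600
  print(f"function points:   {function_points:,}")   # 24,000; call it 25,000
  minicolumns = 200_000_000      # one low-level procedure invocation apiece
  print(f"invocations per 0.01 subjective sec: {minicolumns:,}")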

Unfortunately, the software needed for AI is *not* analogous to the
software needed for the human brain. Not even vaguely. For example, I
don't think it will actually take 100 different orthogonal systems to
implement general intelligence; more like 15-40. I think that any
analogies, if they exist, are likely to stop at the level of 15-40
projects and 50-200 subprojects, with the decomposition of the human brain
providing little guidance on how many function points per subproject or
how many cognitive elements per cognitive subtask per cognitive
subprocess.

Nonetheless, that's my contribution to the Moravecian quest for crossover
dates.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


