Re: Human-level software crossover date

From: xgl (xli03@emory.edu)
Date: Thu Apr 12 2001 - 21:19:57 MDT


        actually this is a reply to mitchell porter's email, but i had to
start at the source.

On Wed, 11 Apr 2001, Eliezer S. Yudkowsky wrote:

>
> 1 brain
> 104 area
> 624 map
> (unknown) module
> 500,000 macrocolumn
> 200,000,000 minicolumn
> 20,000,000,000 neuron
>
>
> The "map" and "module" levels seem the most likely targets for
> identification with complex functional adaptation. Since the module level
> is a hypothesized intermediate level of organization, I don't know how
> many modules there are to a map, how many macrocolumns to a module, and so
> on. In general, there would appear to be about 800 macrocolumns to a
> map. Thus, 50 macros/module implies 16 modules/map, and vice versa.
> Frankly, I'm not sure I believe in the whole "module" theory of
> organization, but it's what I'm working with here.
>
> If 2% of the genome is useful and 30% of that specifies brain
> organization, the brain would be specified by 20M base pairs, or 5
> megabytes of data. (Which may seem ridiculous, but remember that the
> whole genome is just 750 megabytes.) It doesn't seem likely, given
> current theory, that it's 50 megabytes or 500K, so the Fermi numbers look
> about right. If modules are complex functional adaptations and there are
> 40 modules to a map, then this leaves 800 base pairs = 266 amino acids =
> 200 bytes per complex functional adaptation. I'm not much on the
> low-level detail of genetics, so let me know if this doesn't sound right.
>

hmm ...

        the human genome is about 3.0e9 base pairs;
        four possible bases, so each base = 2 bits;
        6.0e9 bits / 8 bits/byte = 7.5e8 bytes;

... so far so good ... carrying the estimate through:

        7.5e8 * 2.0e-2 * 3.0e-1 = 4.5e6 bytes = 1.8e7 base pairs;
        40 modules/map * 624 maps/brain = 24960 modules;
        1.8e7 bases / 24960 modules = ~720 bases/module = ~180 bytes/module;
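
        (the whole fermi chain in a few lines of python, for the
curious -- the constants are just the round numbers from the thread,
so treat the output as order-of-magnitude only:)

        # back-of-envelope: genomic bytes available per brain module
        GENOME_BP       = 3.0e9   # human genome, base pairs (round figure)
        BITS_PER_BASE   = 2       # four possible bases
        USEFUL_FRACTION = 0.02    # assume 2% of the genome is functional
        BRAIN_FRACTION  = 0.30    # assume 30% of that specifies the brain
        MAPS_PER_BRAIN  = 624
        MODULES_PER_MAP = 40      # hypothesized figure from above
        MACROCOLUMNS    = 5.0e5

        genome_bytes = GENOME_BP * BITS_PER_BASE / 8                    # ~7.5e8
        brain_bytes  = genome_bytes * USEFUL_FRACTION * BRAIN_FRACTION  # ~4.5e6
        modules      = MAPS_PER_BRAIN * MODULES_PER_MAP                 # 24960

        print(MACROCOLUMNS / MAPS_PER_BRAIN)  # ~801 macrocolumns/map
        print(brain_bytes / modules)          # ~180 bytes/module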

On Thu, 12 Apr 2001, Mitchell Porter wrote:

>
> Well, let's say that the difference between
> one module and another consists in certain genes
> being either on or off (expressed or never expressed).
>
> You've estimated 20000 modules, and 2^15 > 20000,
> so 15 genes can in theory provide the requisite
> variety. So, again in theory, each module needs
> just 15 bits - 2 bytes! - to specify the on/off
> states of those genes, and by pooling their
> "byte budgets" the modules can easily afford
> those 15 genes themselves.
>
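
        (the counting itself checks out -- a quick python sketch of
the bit budget, using mitchell's round figure of 20000 modules:)

        import math

        MODULES = 20000
        bits_needed = math.ceil(math.log2(MODULES))  # 2^15 = 32768 > 20000
        print(bits_needed)  # 15 bits, i.e. ~2 bytes, to label one module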

        actually, i doubt you can get closure on a subset of the genome so
cleanly. genes act in stochastic contexts, and those contexts are
themselves generated by the interaction of more genes -- strange loops
abound. also, genes and their products are subject to complex feedback
control at every level, from transcription to translation to
degradation, and are very, very far from being on/off entities. in
addition, there is the complication of splice variants, which can take
different forms in the brain than in, say, the heart.
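
        (a toy illustration of the graded-response point -- hill-type
activation, where the parameters K and n are made up and nothing here
is real biology:)

        # toy model: graded transcriptional response to an activator
        # K (half-maximal input) and n (cooperativity) are invented numbers
        def expression_level(activator, K=1.0, n=2.0):
            return activator**n / (K**n + activator**n)

        for a in (0.1, 0.5, 1.0, 2.0, 10.0):
            print(a, round(expression_level(a), 3))
        # rises smoothly from ~0.01 through 0.5 to ~0.99 -- graded, not boolean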
 
> Now in fact the gene activation pattern of a
> cell type has to be determined by other genes,
> so you're going to need considerably more than 15
> to code up the whole architecture. But basically,
> by the 200-byte estimate above, each module can
> afford one gene, and *20000 genes* should be more
> than adequate to code for a 20000-module
> architecture if it recycles system and subsystem
> design patterns, as the brain surely does.
>

        first of all, if my math hasn't failed me, it comes out to
roughly 180 bytes per module rather than 200 -- the same ballpark, so
the fermi numbers hold up. in any case, i don't really see the merit
of dividing up genomic information content this finely ... what
problem does it make easier? it just doesn't seem like a particularly
productive perspective to take. to me, at least, genes seem more like
triggers than blueprints ... a whole lot of what we consider
interesting structure has to come from interaction effects.

-x


