Re: answers I'd like, part 2

From: Adam Safron (asafron@gmail.com)
Date: Thu Nov 15 2007 - 08:56:24 MST


On Nov 15, 2007, at 5:53 AM, Stathis Papaioannou wrote:

> On 15/11/2007, Adam Safron <asafron@gmail.com> wrote:
>
>> This seems like a fallacy of composition. Simple brain function?
>> All of these phenomena are dependent upon functional relationships
>> between neurons. But this does not mean that we will be able to
>> understand more complex configurations (by "complex", I'm referring
>> to difficulty of understanding, not necessarily structural or
>> functional complexity) just because we understand simpler
>> configurations. Neuroscientists have detailed mechanistic
>> explanations of basic perceptual processes. They have had nowhere
>> near this kind of success with things like "executive functions".
>> This may be because information processing in the frontal lobe is
>> more idiosyncratic (self-organizing in a complex way). Bottom-up
>> perceptual processes are topographic and map the external world in
>> a fairly tractable manner. Consequently, we have fairly detailed
>> models going down to the neuronal level. We don't have this for
>> higher-order cognition.
>>
>> We could emulate the human brain by modeling the activity of
>> different neural regions, but this would be an extremely limited
>> form of reverse engineering. Emulation isn't understanding.
>> Ideally, we would like a detailed understanding of the engineering
>> principles underlying cognition. Without this, we will be limited
>> in our ability to anticipate the emergent properties of the
>> emulated brains. If you achieve a super-intelligence using this
>> sort of method (the ethics of which are questionable), I don't see
>> how we will be able to ensure benevolence (which is important if
>> you're not a super-intelligence yourself).
>
> To emulate the behaviour of neurons in a brain would involve
> calculating all the outputs from a volume of neural tissue given all
> the inputs from the surrounding tissue. If we could do that, we would
> be able to calculate what signals the brain would send to the vocal
> apparatus after receiving any given input from the auditory nerve. The
> only obstacles to doing this, given an adequate neural model, would be
> having sufficiently fine-grained information about brain state and
> sufficient computational resources to run the model. It would require
> resolution and simulation down to at least the molecular level, and it
> is hard to imagine that we would be able to pull off such a feat
> without understanding and copying higher level cognitive functions
> first; rather like building a flying machine by emulating a bird's
> wings, muscles, cardiovascular and nervous system before being able to
> build a rubber band-powered ornithopter.

If you're modeling based on functional connections between neurons,
you don't need to understand how the neuronal level gives rise to
higher-level cognitive functions. And a properly functioning
neuron-by-neuron model of the brain would probably not require
molecular-level resolution. If you were to use molecular-level
detail in your model, the required computational resources would be
enormous: far larger than the figures Kurzweil suggests, which, by
his estimates of future computational resources, would put emulation
capability somewhere around 2030. The magnitude of the required
resources might make it impossible for the model to run as fast as a
meat brain. That depends on the maximum computational rate we can
reasonably expect from advanced computing technologies. It is
unclear to me when we will have access to such technologies (e.g.
carbon nanotube computers) or what their limitations will actually
be. What did people think of the speculations on the limits of
computation in Kurzweil's book?
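
For a sense of scale, here is a back-of-envelope comparison in
Python. Every figure is an order-of-magnitude assumption of my own,
not a number taken from Kurzweil's book:

# Back-of-envelope comparison of emulation costs. Every figure is a
# rough order-of-magnitude assumption, not a measured value.

NEURONS = 1e11                  # neurons in a human brain
SYNAPSES_PER_NEURON = 1e4       # average synapses per neuron
SPIKE_RATE_HZ = 1e2             # average firing rate, spikes/s
OPS_PER_EVENT = 1e1             # assumed cost of one synaptic update

# Neuron-by-neuron (functional) model:
neuron_level = (NEURONS * SYNAPSES_PER_NEURON
                * SPIKE_RATE_HZ * OPS_PER_EVENT)

# Molecular-level model: suppose each neuron carries ~1e7 relevant
# molecular state variables, each updated at ~1e6 Hz (pure guesses).
STATE_VARS_PER_NEURON = 1e7
UPDATE_RATE_HZ = 1e6
molecular_level = (NEURONS * STATE_VARS_PER_NEURON
                   * UPDATE_RATE_HZ * OPS_PER_EVENT)

print(f"neuron-level:    {neuron_level:.0e} ops/s")     # ~1e18
print(f"molecular-level: {molecular_level:.0e} ops/s")  # ~1e25
print(f"gap:             {molecular_level / neuron_level:.0e}x")

Even granting generous uncertainty in each term, the gap between the
two levels of description spans several orders of magnitude, which is
the point at issue.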

However, your method of interacting with the brain-in-a-box via
sensory inputs and outputs (don't give it access to the web) could
potentially overcome much of the benevolence problem, though not all
of it.

Still, it is unclear that this constitutes a plausible means of
creating a super-intelligence. A brain-in-a-box would not know how
to improve its own functioning. Even if you model our best minds
down to the molecular level, no one alive today would know how to
improve their intelligence via neural modifications (besides making
everything uniformly faster). The brain is a non-linear,
self-organizing system. Emulation is not understanding.

If you use a non-molecular modeling approach, you will not start
with a fully functioning sentience. You will have an AI-child that
you may be able to nurture into some modicum of sentient
functioning. There is a good possibility that it would be insane,
as human neural organization co-arises with embodied experience in
the world. Also, children do not start out with fully developed
brains: you would need a detailed model of how neurodevelopment and
neuroplasticity work over time. We'll reach that state of
understanding one day, but even then, it's not clear that this is a
good method for achieving super-intelligence.

But this is all assuming that we're talking about emulation.
Assuming advanced nanotechnology and advanced knowledge of
molecular/cellular neurobiology (not molecule-by-molecule modeling
capabilities), here's an idea I have been toying with lately:

1) Flood a person's brain with nanobots.
2) Have the nanobots determine the functional properties of every
   neuron and glial cell in the brain (including endocrine
   functions).
3) Have the nanobots replace each of the cells with a
   functionally-identical synthetic equivalent.
4) Connect the synthetic brain to an artificial or virtual body.

If you do these things, we will have been freed from many of the
constraints of our biology. In addition to overcoming the fact that
our brains age, we could probably speed up cognition by several
orders of magnitude, if your embodiment is in a virtual world that
could keep up (a rough sketch of the arithmetic follows below). But
the time-scales for developing these technologies may make the idea
irrelevant: by the time we have these capabilities, we may already
have developed an AGI that figured out a better way.
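
On the "several orders of magnitude" claim, here is a minimal sketch
of the arithmetic, again in Python. The biological and electronic
figures are my own ballpark assumptions, and the comparison assumes
switching speed and signal propagation are the binding constraints:

# Rough speed-up estimate for synthetic neurons. All values are
# ballpark assumptions about biological vs. electronic hardware.

BIO_SPIKE_RATE_HZ = 1e3      # upper end of neuronal firing rates
BIO_CONDUCTION_M_S = 1e2     # fast myelinated axon, ~100 m/s
ELEC_SWITCH_RATE_HZ = 1e9    # a modest electronic switching rate
ELEC_SIGNAL_M_S = 2e8        # signal propagation in a conductor

print(f"switching speed-up:   "
      f"{ELEC_SWITCH_RATE_HZ / BIO_SPIKE_RATE_HZ:.0e}x")
print(f"propagation speed-up: "
      f"{ELEC_SIGNAL_M_S / BIO_CONDUCTION_M_S:.0e}x")
# Either ratio alone lands near a million-fold, i.e. "several orders
# of magnitude", provided the virtual environment can keep up.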

-adam

> --
> Stathis Papaioannou


