Re: physics of uploading minds

From: Martin Striz (mstriz@gmail.com)
Date: Fri Oct 28 2005 - 11:00:46 MDT


On 10/27/05, Phillip Huggan <cdnprodigy@yahoo.com> wrote:
> Is there any literature suggesting that the process of uploading,
> formatting one's individual mind upon a substrate other than one's brain,
> will actually work? I'm starting to compile a taxonomy of singularity
> pathways, and I think uploading should be removed from this list entirely.
> I don't deny that a sentient agent behaviourally indistinguishable from an
> "original" can be engineered with uploading technology. But this is a far
> cry from ensuring one's immortality by saving static copies of one's mind.
> The fundamental flaw appears to be the belief that uploading (for
> immortality) only requires copying at the resolution of the smallest
> physical brain structure necessary to form one's mind. That would merely
> create another individual. The laws of physics that give rise to the
> fields under which neurons and microtubules operate would have to be
> reproduced as well for uploading to copy personal identity. It has been
> mentioned that developing uploading is too hard to contemplate
> pre-singularity. I would take this further: uploading is too hard even
> post-singularity; achieving it would require enacting new laws of physics.
> Uploading = being god, and discussion of it doesn't belong alongside other
> mind/intelligence singularity hybrids.

Moravec's solution (gradual, neuron-at-a-time replacement of the brain
in place) seems to be the only plausible way to upload, unless I'm
missing something. But in my past ruminations over how the process
would work, I've come up with some questions of my own.

Clearly we recognize that by removing the entire brain and simply
attaching a fully functioning nanotech simulation to the spinal cord,
we would not have uploaded ourselves; we would merely have replaced
our mind with an identical copy (identical in function, but not the
same identity). We would still be "running" (for the sake of smooth
narrative, I'm using folk terminology to discuss the mind rather than
how I view consciousness technically) on the brain that had been
removed and perhaps stored in a vat somewhere.

At the other extreme, it seems intuitively obvious that sequential
single-neuron replacement of endogenous wetware with functionally
equivalent silicon or nanotech hardware would in the end yield the
same mind (the same identity) running on a new substrate. So at what
point between these two extremes do we reach a threshold at which
identity is conserved or lost? Could we, for example, remove one
entire cerebral hemisphere and replace it, then remove the other
hemisphere and replace that? My intuition is that this would also
work (maybe I'm wrong). What if we removed 75% of the brain in the
first procedure, then the remaining 25%? Now I'm no longer so sure.

What if we sped up sequential single-neuron replacement so that we
could replace 10 billion neurons per second (not all at once, but each
individually, so quickly that at the timescale of our perception the
replacement would seem to occur simultaneously)? What if we sped that
up further, so that we could replace all 100 billion neurons within
0.1 seconds? How would that differ from a whole-brain transplant?
And if they are essentially identical procedures, why would Moravec's
method work and not a whole-brain transplant?
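For concreteness, here is a rough back-of-envelope sketch (Python) of
how long whole-brain replacement takes at these rates, and whether the
procedure would fall below the timescale of conscious perception. The
~100 billion neuron count is the figure used above; the ~50 ms
perceptual window is my own assumption, purely for illustration.

    # Back-of-envelope: total replacement time at a given rate, compared
    # with a rough window of conscious perception.

    NEURONS = 100e9              # ~100 billion neurons (figure used above)
    PERCEPTUAL_WINDOW_S = 0.05   # ~50 ms; an assumed perceptual threshold

    for rate_per_s in (1.0, 10e9, 1e12):  # 1/s, 10 billion/s, 1 trillion/s
        total_s = NEURONS / rate_per_s
        instant = total_s < PERCEPTUAL_WINDOW_S
        print(f"{rate_per_s:>16,.0f} neurons/s -> {total_s:.4g} s total; "
              f"subjectively instantaneous: {instant}")

At one neuron per second the procedure takes on the order of three
thousand years; at a trillion per second it completes inside a single
perceptual moment, which is exactly what makes it hard to distinguish
from a whole-brain transplant.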

Conserving the identity of the mind seems to be predicated on the
notion that the mind is a distributed set of computational processes
requiring some threshold level of integration to maintain its
integrity. But then the mind as a seamless global entity is an
abstraction -- an illusion.

My thinking now is that there are two solutions.

1) There can't be a threshold. If you can replace 50% of the brain
and conserve identity, then you must in principle be able to replace
51%, and so on all the way up to 100%.

2) The threshold is exactly 50%. The substrate that contributes more
cognitive processes to the hybrid mind becomes the dominant center of
narrative/perceptual gravity -- the new self. Sequential
single-neuron replacement still works up to some maximum rate, set
perhaps by the minimum firing rate of neurons, which is the
rate-limiting step in integrating the distributed processes into a
seamless global experience. In other words, if you replace neurons
faster than their ability to synchronize, you lose the integrity of
the global experience (mind) and replace it with a new one (a rough
numerical sketch follows below).
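To put rough numbers on hypothesis (2): a minimal sketch, assuming the
slowest relevant firing rate is about 1 Hz (an illustrative figure of
my own, not a measured one), so the distributed processes get roughly
a one-second window in which to re-synchronize.

    # Sketch: fraction of the brain swapped out within one synchronization
    # window. Under hypothesis (2), if more than 50% turns over before the
    # network can re-synchronize, the global experience is replaced rather
    # than conserved.

    NEURONS = 100e9        # ~100 billion neurons, as above
    MIN_FIRING_HZ = 1.0    # assumed slowest firing rate (illustrative only)
    SYNC_WINDOW_S = 1.0 / MIN_FIRING_HZ

    def fraction_per_window(rate_per_s: float) -> float:
        """Fraction of all neurons replaced within one sync window."""
        return min(rate_per_s * SYNC_WINDOW_S / NEURONS, 1.0)

    for rate_per_s in (1e9, 10e9, 100e9, 1e12):
        f = fraction_per_window(rate_per_s)
        verdict = "identity conserved" if f <= 0.5 else "identity lost"
        print(f"{rate_per_s:>16,.0f} neurons/s -> {f:6.1%} per window: {verdict}")

On these assumed numbers, even the 10-billion-per-second scenario
stays under the 50% threshold, while the 0.1-second whole-brain
replacement exceeds it -- which would be one way to cash out the
intuition that the latter is really a transplant.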

I don't know which one is right. It obviously depends on your
philosophy of mind, but I lean towards the second one.

I welcome your comments.

Martin Striz


