Re: Simulation argument in the NY Times

From: Benjamin Goertzel (ben@goertzel.org)
Date: Sun Aug 26 2007 - 10:38:24 MDT


On 8/21/07, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
>
> --- Stathis Papaioannou <stathisp@gmail.com> wrote:
> > I'd be certain that an exact biological copy of me has the same
> > consciousness. I'd be almost certain that a neuron by neuron computer
> > emulation of my brain would have the same consciousness as me (David
> > Chalmers's fading qualia argument). However, I couldn't be certain that
> > some machine designed to copy my behaviour well enough to pass for me
> > would have the same consciousness as me; it might be a p-zombie, or
> > more likely it might just have a completely different consciousness. I
> > would agree to be destructively uploaded in the first two cases, but
> > not the last.
>
> So you argue that consciousness (defined as that which distinguishes you
> from a p-zombie) depends on the implementation of your brain? Does it
> matter if the neural emulation is optimized by simulating average firing
> rate as opposed to individual pulses? How about simulating the collective
> behavior of similarly weighted neurons with single neurons? How about
> simulating the visual cortex with a scanning window filter? What aspect
> of the computation results in consciousness?
>

I tend to agree with Stathis... I would comfortably "destructively upload"
to create a physical or digital copy of my brain that would act like me and
claim to be me... but I would be much less comfortable doing so in order to
create a behavioral copy of me.

I don't think there is anything mystical or weird about this attitude.

Let me explain my perspective, in terms of my pattern-theoretic approach to
self and consciousness.

I strongly suspect that my individual consciousness and individual self are
tied to the patterns emergent within my brain, and the patterns emergent
between my brain-states and their coupled environment-states.

Furthermore, I strongly suspect that the patterns at hand are mostly
relatively high-level patterns, rather than low-level patterns such as
bindings between particular protein molecules, etc. ...
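
(For those not familiar with the lingo: in The Hidden Pattern I define a
pattern, roughly, as "a representation as something simpler." A minimal
formalization, where the symbols R, c, p, and x are just illustrative
notation for this email:

\[
p \text{ is a pattern in } x \;\iff\; R(p) = x \ \text{ and } \ c(p) < c(x)
\]

i.e., the process p produces x and is simpler than x under some complexity
measure c; the intensity of the pattern can then be quantified as
1 - c(p)/c(x).)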

So, in my view, a physical or digital copy of my brain that maintained
behavioral faithfulness to my original brain is very, very likely to
preserve the patterns that constitute my individual consciousness/self.

On the other hand, I **just don't know** whether a behavioral imitation of
myself would necessarily manifest these patterns-that-constitute-me.

Resolving the latter point seems to require exploring some deep and almost
entirely uncharted mathematical territory.

The question is: If you fix (hold constant) the **external** behavioral
patterns of a simulated human organism (where the constancy is relative to
the perceptual/cognitive acuity of human observers, I presume), and also
bound the amount of computational resources available for the simulated
organism (to some fairly low level, similar to the computational resources
needed for a direct brain simulation), then how much can the corresponding
**internal** and **emergent-internal/external** patterns vary?
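
To put it semi-formally (again, the symbols here are just notation I'm
making up for this email, not anything standard):

\[
V(x, r) \;=\; \sup \,\{\, d(P(y), P(x)) \;:\; B(y) = B(x),\ \mathrm{cost}(y) \le r \,\}
\]

where B(y) is the set of external behavioral patterns of system y (up to
human perceptual/cognitive acuity), P(y) is its set of internal and
emergent-internal/external patterns, d is some metric on pattern-sets, and
cost(y) is the computational resources y consumes. The question is then
whether V(x, r) stays small when r is on the order of the cost of a direct
brain simulation.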

The resolution of this question would tell you whether having
behavioral-patterns-like-Ben actually implies having
individual-self-and-consciousness-defining-patterns-like-Ben.

But until this math problem is resolved, we really don't know whether our
behavioral imitators are necessarily "us" in terms of the patterns that
characterize our individual consciousnesses/selves.

Anyway, that's my analysis of the situation according to the theory of self
and consciousness I outlined in "The Hidden Pattern" (BrownWalker, 2006),
and it happens to agree with Stathis's intuition.

-- Ben Goertzel


