Re: The GLUT and functionalism

From: Lee Corbin (lcorbin@rawbw.com)
Date: Tue Mar 25 2008 - 11:20:54 MDT


Stathis writes

> Lee wrote:
>
>> No, because sufficiently low-level table lookups are just fine.
>> Not as any kind of estimate to take to the bank, but suppose me
>> to be claiming that when you start looking up patches of 10^6
>> bits or so (or, in the inimitable example of a Life Board, a
>> 1000x1000 region), then a very small diminution of consciousness
>> occurs.
>
> If consciousness is Turing emulable, then it is GOL emulable.

Surely.
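
To make the "looked up rather than calculated" move concrete, here
is a rough sketch (Python; every name in it is mine, purely for
illustration) of one Life step computed the ordinary way, plus the
bit pattern that a lookup table for a patch would have to be keyed on:

    from collections import Counter

    def life_step(live):
        """One Game of Life generation over a sparse set of live (row, col) cells."""
        neighbour_counts = Counter(
            (r + dr, c + dc)
            for (r, c) in live
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    def patch_key(live, top, left, size):
        """The size x size patch plus its one-cell border, as a tuple of bits.
        That border is all the extra information the Life rule needs in
        order to advance the patch's interior by one step."""
        return tuple(
            (r, c) in live
            for r in range(top - 1, top + size + 1)
            for c in range(left - 1, left + size + 1)
        )

    # A "lookup" step would replace recomputation of the patch interior
    # with table[patch_key(live, top, left, size)].  For a 1000x1000 patch
    # the key is 1002*1002 bits, so the full table has 2^(1002*1002)
    # entries: a GLUT in the strict sense, not a practical optimisation.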

> Suppose we have a large Life Board which is emulating a human mind,
> interfacing with a camera, microphone and loudspeaker so that it has
> vision, hearing and speech. The emulation is shown a picture of a dog
> and asked to describe it, which it does, just as well as you or I
> might.

I take this to be a direct implantation into the Life Board's
version of our V1 visual processing center.

> Next, a change is made to a patch of the Board so that those
> squares are looked up rather than calculated. This patch is large
> enough that it causes the theorised diminution in conscious
> experience.

Then this is probably not the example you want. If I'm seeing a
dog and the experimenters who have computer access to my V1
make a perfect substitution, then naturally I can't even see the
difference. The hardest part to deal with here is that, starting
with V1 and going all the way up, there isn't any clear dividing
line between the person's "mind" and the outside. But I still
think that you'll have to aim higher than V1 here :-)

> The emulation is still looking at the dog and describing
> what it is seeing as the change is made. What happens?
>
> If there is a change in consciousness then the emulation notices that
> the picture has suddenly gone blurry, or one of the dog's legs has
> disappeared, or whatever (we can imagine that the looked-up patch
> increases in size until the change in visual perception becomes
> noticeable). So, as per your instructions, the emulation tries to
> report this change.

Well, I don't know what point you're trying to get to here. A
brutal change is of course going to affect the future states of
the "subject". That is, in the sequence Sa -> Sb -> ..., a sudden
substitution of Sm' for Sm may not be alarming to the subject
(he doesn't know that Sm' was not supposed to occur), but as he
reports his experiences, presumably, the description becomes
different from what it would have been.
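
To pin that down with a toy example (nothing to do with Life in
particular; step() below is just an arbitrary deterministic rule I
made up), substituting a different state Sm' leaves everything up
to the substitution intact and everything after it changed:

    def step(s):
        """Some deterministic update rule; this particular one is arbitrary."""
        return (s * 31 + 7) % 1000

    def run(s0, n, substitute_at=None, substitute_with=None):
        """Run n steps from s0, optionally overwriting one computed state."""
        states = [s0]
        for i in range(n):
            s = step(states[-1])
            if i == substitute_at:
                s = substitute_with   # the sudden substitution of Sm' for Sm
            states.append(s)
        return states

    normal  = run(1, 10)
    altered = run(1, 10, substitute_at=5, substitute_with=999)

    # Identical up to the substituted state, different from it onward,
    # so the subject's later reports differ from what they would have been.
    assert altered[:6] == normal[:6]
    assert altered[6:] != normal[6:]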

> However, there is a problem: the squares on the Board which
> interface with the loudspeaker are *exactly the same* as
> they would have been if the looked-up patch had actually been
> calculated.

Ah, there we go. This is closer to the crux.

> So the emulation would be saying, "It's the same picture
> of a dog, try looking up a larger patch of squares", while thinking,
> "Oh no, I'm going blind, and my mouth is saying stuff all on its
> own!".

Oh, wait.

> But how is this possible unless you posit a disembodied soul,
> which becomes decoupled from the emulation and goes on to
> have its own separate thoughts?

Of course. There are no souls, and if you perfectly substitute,
over some region of the board, pixel values arrived at in an
entirely different way (looked up rather than calculated), the
calculation nonetheless proceeds exactly as before.
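
And the flip side, reusing the toy run() from the sketch further up
(again, pure illustration, nothing specific to the Board): if the
value you substitute at step m is exactly the one that would have
been calculated anyway, nothing downstream can tell the difference.

    normal    = run(1, 10)
    looked_up = run(1, 10, substitute_at=5, substitute_with=normal[6])

    # The "looked-up" value is bit-for-bit what the calculation would
    # have produced, so every later state (including whatever drives
    # the loudspeaker squares) is unchanged.
    assert looked_up == normal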

> The other possibility is that there is a change to visual perception
> which is not actually noticed. When all the squares relating to visual
> perception are looked up the emulation becomes blind, but it doesn't
> realise it's blind and continues to accurately describe what is shown
> to it, using zombie vision. This is almost as implausible, and begs
> the question of what it means to perceive something.

I totally agree. Such a distinction is beneath you and me :-)

As I say, *all* my visual inputs could be looked up rather than
faithfully passed in by my retina along all those nerve fibers.
Naturally I'd never know (unless what I was seeing started to
clash with my other senses).

> The above is a variation on Chalmers' "Fading Qualia" argument:

I think that Chalmers almost by definition can never find what he
is looking for, because any explanation would fail to satisfy him
for one reason or, if that doesn't work, for another. I'm afraid
that an explanation would have to *make* Chalmers feel conscious,
or feel an experience.

Extremely hypothetical guess: if you took the set of all 26^500
possible explanations 500 characters long, not one of them would
satisfy those who insist that there is an insoluble mystery to
the matter.

Lee

> http://consc.net/papers/qualia.html
>
> (I might add that many cognitive scientists don't like Chalmers due to
> his insistence that there is a "hard problem" of consciousness, but in
> actual fact, he is mostly an orthodox computationalist, and the above
> paper probably presents the strongest case for consciousness surviving
> neural replacement scenarios.)


