Re: The GLUT and functionalism

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Mar 25 2008 - 19:16:27 MDT


On 26/03/2008, Lee Corbin <lcorbin@rawbw.com> wrote:

> > Suppose we have a large Life Board which is emulating a human mind,
> > interfacing with a camera, microphone and loudspeaker so that it has
> > vision, hearing and speech. The emulation is shown a picture of a dog
> > and asked to describe it, which it does, just as well as you or I
> > might.
>
>
> I do take this as a direct implantation into the Life Board's version
> of our V1 visual processing center.
>
>
> > Next, a change is made to a patch of the Board so that those
> > squares are looked up rather than calculated. This patch is large
> > enough that it causes the theorised diminution in conscious
> > experience.
>
>
> Then this is probably not the example you want. If I'm seeing a
> dog and the experimenters who have computer access to my V1
> make a perfect substitute, then naturally I can't even see the
> difference. The hardest part to deal with here is that, starting
> with V1 and going all the way "up", there isn't any clear dividing
> line between the person's "mind" and the outside. But I still think
> that you'll have to aim higher :-) than V1 here.

You can go up as high as you feel is necessary. You suggested that a
1000x1000 patch of the Board that was looked up rather than calculated
might cause a small deficit in consciousness. If that's not enough,
then imagine that half the Board is looked up and the other half
calculated: surely that should be noticeable?
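
None of this is in the original setup, but the distinction can be made
concrete with a toy sketch (Python; step_cell, TABLE, step and glider
are all names invented here purely for illustration). Conway's rule
can be applied either by calculating each cell's next state from its
neighbours or by looking the neighbourhood up in a precomputed table,
and any mixture of the two yields a bit-identical board:

    from itertools import product

    def step_cell(nbhd):
        """Conway's rule for a 3x3 neighbourhood (tuple of 9 bits)."""
        live = sum(nbhd) - nbhd[4]   # live neighbours, centre excluded
        return 1 if live == 3 or (nbhd[4] and live == 2) else 0

    # The "looked up" physics: the rule precomputed for all 512
    # possible neighbourhoods.
    TABLE = {n: step_cell(n) for n in product((0, 1), repeat=9)}

    def step(board, lookup_region=frozenset()):
        """Advance the (toroidal) board one tick. Cells in
        lookup_region are looked up in TABLE; the rest are calculated.
        The result cannot depend on the split, since TABLE[n] ==
        step_cell(n) by construction."""
        h, w = len(board), len(board[0])
        def nbhd(r, c):
            return tuple(board[(r + dr) % h][(c + dc) % w]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1))
        return [[TABLE[nbhd(r, c)] if (r, c) in lookup_region
                 else step_cell(nbhd(r, c))
                 for c in range(w)]
                for r in range(h)]

    # Half looked up, half calculated: the successor board is
    # identical to the fully calculated one.
    glider = [[0] * 5 for _ in range(5)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        glider[r][c] = 1
    left_half = frozenset((r, c) for r in range(5) for c in range(3))
    assert step(glider, lookup_region=left_half) == step(glider)

Admittedly this kind of lookup still consults the current
neighbourhood, so it is the weakest version of the idea; the stronger
version, where the values are replayed from a recording and the
neighbours are never consulted at all, is sketched further down.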

> > The emulation is still looking at the dog and describing
> > what it is seeing as the change is made. What happens?
> >
> > If there is a change in consciousness then the emulation notices that
> > the picture has suddenly gone blurry, or one of the dog's legs has
> > disappeared, or whatever (we can imagine that the looked-up patch
> > increases in size until the change in visual perception becomes
> > noticeable). So, as per your instructions, the emulation tries to
> > report this change.
>
>
> Well, I don't know what point you're trying to get to here.
> A brutal change is of course going to affect the future
> states of the "subject". That is, in the sequence Sa -> Sb -> ...,
> a sudden substitution of Sm' for Sm may not be alarming to
> the subject (he doesn't know that Sm' was not supposed
> to occur), but as he reports his experiences, presumably,
> the description becomes different from what it would have been.

The states - the patterns of squares - are not affected; what is
affected is the way the patterns are generated. You have argued that
if a sufficiently large patch of the Board is looked up rather than
calculated, this will affect the quality of the experience. For the
reasons given below, I don't see how this is possible. I think we are
forced to conclude that looking up an arbitrarily large proportion of
the Board, up to 100%, will make no difference at all to the emulated
consciousness.
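
To see why, here is the stronger version of the sketch above (again a
toy illustration only, reusing the hypothetical step and glider
defined earlier): record the board's trajectory once, then re-run it
with any fraction of the cells, up to 100%, spliced in from the
recording instead of calculated. The sequence of board states, and
with it whatever cells drive the loudspeaker, comes out the same:

    def run(board, ticks):
        """Calculate and record the board's full trajectory."""
        trace = [board]
        for _ in range(ticks):
            board = step(board)
            trace.append(board)
        return trace

    def rerun(board, ticks, trace, frac=1.0):
        """Re-run the board, taking the leftmost frac of each row from
        the recorded trace and calculating the rest. (The step call is
        wasted work when frac == 1.0, but keeps the splice uniform.)"""
        cut = int(len(board[0]) * frac)
        out = [board]
        for t in range(ticks):
            calc = step(board)
            board = [rec[:cut] + cal[cut:]
                     for rec, cal in zip(trace[t + 1], calc)]
            out.append(board)
        return out

    trace = run(glider, 8)
    assert rerun(glider, 8, trace, frac=0.5) == trace  # half looked up
    assert rerun(glider, 8, trace, frac=1.0) == trace  # all looked up

The recorded and calculated values agree at every tick, so the
trajectory is fixed no matter how it is produced; the question is only
whether anything about the experience could depend on which mechanism
produced it.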

> > However, there is a problem: the squares on the Board which
> > interface with the loudspeaker are *exactly the same* as
> > they would have been if the looked-up patch had actually been
> > calculated.
>
>
> Ah, there we go. This is closer to the crux.
>
>
> > So the emulation would be saying, "It's the same picture
> > of a dog, try looking up a larger patch of squares", while thinking,
> > "Oh no, I'm going blind, and my mouth is saying stuff all on its
> > own!".
>
>
> Oh, wait.
>
>
> > But how is this possible unless you posit a disembodied soul,
> > which becomes decoupled from the emulation and goes on to
> > have its own separate thoughts?
>
>
> Of course. There are no souls, and if you perfectly substitute
> pixel values from an entirely different source over some region of
> the board, the calculation nonetheless proceeds exactly as
> before.
>
>
> > The other possibility is that there is a change to visual perception
> > which is not actually noticed. When all the squares relating to visual
> > perception are looked up, the emulation becomes blind, but it doesn't
> > realise it's blind and continues to accurately describe what is shown
> > to it, using zombie vision. This is almost as implausible, and raises
> > the question of what it means to perceive something.
>
>
> I totally agree. Such a distinction is beneath you and me :-)
>
> As I say, *all* my visual inputs could be looked up rather than
> faithfully passed in by my retina along all those nerve fibers.
> Naturally I'd never know (unless what I was seeing was starting
> to clash with my other senses).
>
>
> > The above is a variation on Chalmers' "Fading Qualia" argument:
>
>
> I think that Chalmers almost by definition can never find what he
> is looking for, because any explanation would fail to satisfy him
> for one reason or, if that doesn't work, for a new one.
> I'm afraid that an explanation would have to *make* Chalmers
> feel conscious, or feel an experience.
>
> Extremely hypothetical guess: if you took the set of all 26^500
> explanations of 500 characters in length, not one of them would
> satisfy those who insist that there is an insoluble mystery to the matter.
>
>
> Lee
>
>
> > http://consc.net/papers/qualia.html

The quoted paper has nothing whatsoever to do with the "hard problem"
(if that's what you were referring to). It is an argument that,
whatever consciousness may be, it should be possible to generate it in
a suitably configured non-biological substrate. The only
(naturalistic) way to avoid this conclusion is if the brain contains
fundamentally non-computable physics, and there is no evidence that it
does.

-- 
Stathis Papaioannou

