Re: The GLUT and functionalism

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Thu Mar 13 2008 - 04:12:21 MDT


On 13/03/2008, Lee Corbin <lcorbin@rawbw.com> wrote:

> Well, er, yes. Not much of a reason on the face of it :-)
> But isn't it true that if you follow Putnam in his "Representation
> and Reality" (extremely un-recommended by yours truly),
> then you must suppose that any given rock performs the
> calculations making up Stathis just as well as your organic
> body does? In other words, if I have a choice of using
> the Tsar Bomba (50 megatons) on the rock or on your
> own person, if you come to visit me, then why do you care
> whether I totally destroy the rock or totally destroy
> Stathis's everyday human person?

Yes, it's an obvious point. But the idea that any computation can be
implemented by any system is just the starting point. If implementation
comes that cheaply, the physical substrate is doing no real work, and
every computation is implemented necessarily, by virtue of its status
as a Platonic object (consciousness is a property of the abstract
computation just as squareness is a property of the abstract square).
The problem then becomes one of defining a measure to explain why, out
of all these computations, we experience the orderly world that we do.
Suggestions have been made as to how the measure of a computation might
be related to the length of the program producing it - e.g. see this
post from the Everything list in which Hal Finney and others (including
you) discuss this question:
http://groups.google.com/group/everything-list/browse_thread/thread/c56e49173ab9070c/210afd125ae346dc?lnk=gst&q=tegmark
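
To put a concrete shape on the sort of measure being discussed (this is
my gloss, not something spelled out in the post above): if U is a fixed
universal machine and l(p) the length of program p, the usual proposal
weights a computation C by the programs that generate it, so that
shorter programs dominate:

    m(C) \propto \sum_{p : U(p) = C} 2^{-l(p)}

On that sort of weighting, orderly, compressible histories soak up most
of the measure, which would be the kind of answer offered to why we find
ourselves experiencing an orderly world rather than a random one.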

> > and (b) that it doesn't result in information flow between
> > the states. But I don't think it's obviously absurd, and I
> > see the lack of information flow (or inability to handle
> > counterfactuals) as just making it impossible for us as
> > external observers to use the system for computation.
>
>
> Could you explain a bit more to me about this? Between
> perhaps not using "counter-factual" correctly, or whether
> it makes a fig of difference about "external observers"
> (it doesn't), I'm not sure I'm following you. Perhaps an
> example distinct from the Monday/Tuesday one would
> help me.

I agree with you to an extent about the significance of causality in
computation. Suppose there are steps in a computation which don't
follow causally from the preceding steps, but just happen to occur
correctly *as if* they did.

For example, imagine a machine M1 into which you input "6*7", gears
and levers and so forth go clickety-clack, and after 100 steps it
outputs "42". Next, consider another identical machine, M2, into which
you input "6*7", but at the 73rd step you destroy it. The next day on
the other side of the world, by fantastic coincidence, someone else
builds a machine, M3, which just happens to be in identical
configuration to M1 (and hence M2, had it not been destroyed) at the
73rd step. M3 then goes clickety-clack through steps 74 to 100 and
outputs "42".

I would agree with you that even though the activity of M2/M3 seen in
combination might look the same as the activity of M1, they are not
equivalent computational systems. This is because M1 would
appropriately handle a counterfactual, but M2/M3 would not: if the
input to M1 had been "4*5" the output would have been "20", whereas if
the input to M2 had been "4*5" the output from M3 would have still
been "42", as the lack of a causal link between M2 and M3 means there
is no way for the input of M2 to influence the output of M3. The
obvious significance of this is that M2/M3 is useless as a
computational device. It could be made useful by introducing reliable
information transfer between the two machines, say by an operator
passing M2's final state to be used as M3's initial state. The new
M2/M3 system is then equivalent to the intact M1, albeit a bit slower
and more cumbersome.
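
Continuing the toy sketch above (same initial_state, step and run), the
counterfactual test and the operator fix look like this:

import copy

# Counterfactual: the input is "4*5" instead of "6*7".
m1_cf = run(initial_state(4, 5), 100)
assert m1_cf["acc"] == 20                 # M1's output tracks its input

m2_cf = run(initial_state(4, 5), 73)      # M2 receives the new input...
m3_start = run(initial_state(6, 7), 73)   # ...but M3's coincidental
                                          # configuration is fixed anyway
out = run(copy.deepcopy(m3_start), 100 - 73)
assert out["acc"] == 42                   # M2/M3 still outputs "42"

# The operator fix: reliably hand M2's final state to M3.
out = run(copy.deepcopy(m2_cf), 100 - 73)
assert out["acc"] == 20                   # the combined system matches M1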

Now, let's suppose that implementation of the computation 6*7 = 42 is
associated with a primitive moment of consciousness, and for
simplicity that this is the case only if the computation is
implemented in full. We would then both agree that M1 and M2/M3 with
reliable information transfer would give rise to consciousness. You
would argue that M2/M3 without reliable information transfer would not
give rise to consciousness. But what if the information transfer
doesn't fall into the all-or-none category? For example, what if the
operator transfers the right information some of the time, on a whim,
but never reveals to anyone what he decides? The M2/M3 system (plus
operator) would again be useless as a computational device to an
external observer, but on some runs, known only to the operator, there
will definitely be a causal link. Does consciousness occur on those
runs or not? Does it make a difference if the operator lies 99.999% of
the time or 0.001% of the time? Does the computation know when he's
lying, or does it know the proportion of time he intends to lie so
that it can experience fractional consciousness at the appropriate
level?
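
Using the same toy machines, the whimsical operator might be sketched
like this (the honesty rate p_honest is a free parameter - nothing
hangs on the particular numbers):

import copy
import random

def run_with_operator(a, b, p_honest):
    # M2 runs to step 73 on the given input, then is destroyed.
    m2 = run(initial_state(a, b), 73)
    if random.random() < p_honest:
        start = copy.deepcopy(m2)             # operator passes the state:
                                              # genuine causal link
    else:
        start = run(initial_state(6, 7), 73)  # M3 starts from its coincidental
                                              # "6*7" configuration: no link
    return run(start, 100 - 73)["acc"]

# To an external observer the device is unreliable at any p_honest short
# of 1; the question is whether anything differs, on the inside, between
# a run where the link happened to be there and one where it wasn't.
outputs = [run_with_operator(4, 5, p_honest=0.5) for _ in range(10)]
print(outputs)    # typically a mix of 20s (link present) and 42s (no link)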

You will have a hard time defining criteria (let alone a mechanism)
whereby a computation "knows" that there is a causal link. It is
simpler to assume that consciousness occurs purely as a result of the
right physical states being implemented, while the presence of a
recognisable causal link only determines whether the system can be
used by an external observer for useful computation.

-- 
Stathis Papaioannou

