From: Lee Corbin (email@example.com)
Date: Sat Mar 15 2008 - 12:05:27 MDT
I have taken the liberty of appending Stathis's entire original description
of his Thought Experiment (TE), his points, and his conclusion from his
"Sent: Thursday, March 13, 2008 3:12 AM Subject: Re: The GLUT and
functionalism" to the very end of this email, mostly for my own reference.
> Lee wrote:
>> > We would then both agree that M1 and M2/M3 with reliable information
>> > transfer would give rise to consciousness. You would argue that M2/M3
>> > without reliable information transfer would not give rise to consciousness.
>> Yes, I would so argue.
>> > But what if the information transfer doesn't fall into the all or none category?
>> > For example, what if the operator transfers the right information some of the
>> > time based on whim, but never reveals to anyone what he decides? The
>> > M2/M3 system (plus operator) would again be useless as a computation
>> > device to an external observer, but on some runs, known only to the
>> > operator, [***] there will definitely be a causal link [***].
>> Very clear.
> Thank-you for following the thought experiment so closely so far.
> However, I think I have made an error by writing "there will
> definitely be a causal link" above. In the extreme case, the operator
> might transfer every possible state in sequence, knowing but not
> saying which of these is the right one to implement the computation.
> Does that count as a causal link on the run in which this occurs? As
> far as you can tell by observing him, the operator is no more
> knowledgeable than an ignorant person trying out every possible state.
> Could the computation possibly divine his mental state in order to
> decide whether there is a causal link and thereby become conscious?
Naturally, I don't see it as the computation getting access to his
mental state, or anything like that. It's perhaps a bit like the operator
supposedly transferring quantities of Argon by gas canister into
a target receptacle but sometimes transfers Krypton either by
accident or design. The delicate mass of the target will be affected
without any access to his intentions, etc. (Sorry for the crude analogy, I
hope it doesn't have problems, and I hope I am not belaboring the obvious.)
Maybe I've got the wrong picture of what you are describing?
Does the following implement your TE in slightly more
concrete terms? The 6*7 = 42 computation is carried out in
Australia by someone with a pocket calculator. The M2/M3
is carried out by the calculator reaching all but the last step
of the calculation, when the machine is destroyed but a nimble
operator manages to record the semifinal state on a diskette and
sends it to Vienna. A child in Vienna happens to receive this
diskette, transfers the state to his own calculator, and finishes
it, getting the answer 42. But in some cases the operator sends
a faulty semifinal diskette, and then either by luck the answer 42
is obtained, or else, say, 58 is obtained. You direct our attention
to the case where by luck 42 is obtained anyway, despite the
faulty diskette.
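The calculator-and-diskette story can be sketched as a toy program.
(The repeated-addition steps and the function names are my own
stand-ins for the gears and levers; nothing here is from Stathis's
original wording.)

```python
# Toy version of the calculator-and-diskette story: the intact machine
# computes 6*7 step by step, while the Vienna machine merely resumes
# from a recorded diskette state, so the original input can no longer
# influence the output.

def intact_machine(a, b):
    """M1: computes a*b by repeated addition, one causal step at a time."""
    total = 0
    for _ in range(b):
        total += a              # each pass is one clickety-clack step
    return total

# The Australian calculator is destroyed near the end; the nimble
# operator records its semifinal state onto a diskette.
diskette = intact_machine(6, 7)  # stands in for the recorded state

def vienna_machine(a, b):
    """M2/M3: the child's calculator just finishes from the diskette,
    so the inputs a and b are ignored -- no causal link back to them."""
    return diskette

print(intact_machine(6, 7))   # 42
print(intact_machine(4, 5))   # 20 -- handles the counterfactual
print(vienna_machine(4, 5))   # 42 -- fails it: output can't track input
```

This makes the counterfactual point mechanical: the intact machine's
output depends on its input, while the diskette path severs that
dependence entirely.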
Either actual information flows, or it doesn't, i.e., the channel is
noisy or it's not. The ignorant person trying out "every possible
state" means what? I apologize for not being able to correctly
visualize what you mean here---it's probably quite clear but I
can't see it. Maybe the child in Vienna tries out a huge ensemble
of diskettes one by one, and every so often one of them happens
by sheer chance to be identical to the proper diskette produced
by the operator in Australia.
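The whimsical-operator case, as I read it, could be modeled like this.
(The two-state encoding and the probability p are my own illustrative
assumptions, not part of your description.)

```python
import random

# The operator forwards the correct semifinal state (42 here) only some
# of the time, on whim; an ignorant sender picks states blindly.

def via_operator(rng, p):
    """Operator forwards the right state with probability p, on whim."""
    return 42 if rng.random() < p else 58

def via_ignorance(rng):
    """Ignorant sender tries states at random; sometimes one is right."""
    return rng.choice([42, 58])

# With p = 1.0 the transfer is reliable, so there is a causal link:
reliable = via_operator(random.Random(0), 1.0)

# Keep drawing blindly until a lucky hit occurs (no causal link):
rng = random.Random(0)
lucky = via_ignorance(rng)
while lucky != 42:
    lucky = via_ignorance(rng)

# The two outputs are identical, which is the rub: the output itself
# carries no record of whether a causal link produced it.
print(reliable == lucky)  # True
```

Whatever is supposed to distinguish the linked runs from the lucky
ones, it cannot be read off the outputs.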
>> It may (or may not) be simpler, as you suggest, to suppose that
>> ALL [my emphasis] that is necessary is that the right physical
>> states occur [e.g. by random diskette] or are implemented somehow.
>> I doubt very much that there is a logical flaw in your suggestion.
Because, so far as I can see, it's this sort of thing that lies behind the
entire Theory of Dust = Schmidhuber = timeless computation family
of ideas, and I am entirely confident that your side is making no
*logical* flaw. It's just---as is so often the case---which side has
to undergo the greater awkwardness or embarrassment in trying
to maintain difficult or unwieldy positions.
>> On the other hand, I doubt that there is
>> any insoluble problem with mine---just a bit of awkwardness,
>> e.g., why is a 3+1 dimensional creature conscious, a 2+1 dimensional
>> creature conscious (as in Flatland or the Life Board), but a 3 dimensional
>> frozen block that is *completely* isomorphic to the 2+1 structure
>> not conscious?
> How can you be so sure about that last point?
About assuming the complete isomorphism? What do you mean?
You could easily have an ordinary 3D sculpture totally isomorphic
to a 2D run through time. I used to suggest to people that they
visualize a stack of very thin gels, each recording the state of a
Life Board. Piled on top of each other, they depict with 100%
fidelity a Life Board computation.
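The gel-stack picture is easy to make literal. (The board size and the
blinker pattern are my own choices for illustration.)

```python
# Run a tiny Life Board for a few generations, then stack the boards
# into a frozen 3-D block: the block is bit-for-bit isomorphic to the
# 2+1-dimensional run through time.

def life_step(board):
    """One generation of Conway's Life on a fixed (non-wrapping) grid."""
    rows, cols = len(board), len(board[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(board[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc)
                    and 0 <= r + dr < rows and 0 <= c + dc < cols)
            nxt[r][c] = 1 if n == 3 or (board[r][c] and n == 2) else 0
    return nxt

# A blinker: oscillates with period 2.
board = [[0, 0, 0],
         [1, 1, 1],
         [0, 0, 0]]

stack = [board]                 # the pile of "gels"
for _ in range(3):
    board = life_step(board)
    stack.append(board)

# Layers 0 and 2 are identical, as are 1 and 3: the frozen stack
# records the oscillation with 100% fidelity.
print(stack[0] == stack[2] and stack[1] == stack[3])  # True
```

Nothing in the stack moves or computes anything, yet every fact about
the 2+1-dimensional run is present in it.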
>> Your "awkwardness", on the other hand, is that you cannot really
>> give (so far as I know) any reason why I should choose to detonate
>> the Tsar Bomba next to the Stathis guy in Australia, or a rock I
>> pick up at random. They both emulate my friend Stathis, right?
> If a rock emulates anything then blowing it up isn't going to make any
> difference, since the point is that it doesn't matter what the rock's
> atoms are doing.
Touche. All right, then suppose I have a choice between (a) somehow
magically removing from the universe---and causing to entirely cease to
exist---a 400 kilogram rock that emulates Stathis, or blowing your present biological
incarnation to smithereens.
> On the other hand, if you blow up the physical Stathis, that would
> mean that at least some branches of the computations in Platonia
> simulating me come to an abrupt end.
Well, I'm sure you don't weigh 400kg, so let's say that you weigh
100kg. In comparison to the biological 100kg Stathis, how much
"computation of Stathis", if I may ask, does a 100kg marble
statue of you emulate? Or, in other words, right now your 100kg
body, because it's ordinary matter at about 295 kelvin, already
emulates you to some degree. What degree?
> So, even though whatever will be will be, I prefer that you blow
> up the rock.
Oh good. You never know where our thought experiments might
lead.
Stathis's original formulation follows. Said he:
"I agree with you to an extent about the significance of causality in
computation. Suppose there are steps in a computation which don't
follow from the preceding step, but just happen to occur correctly *as
if* they followed from the preceding step.
"For example, imagine a machine M1 into which you input "6*7", gears
and levers and so forth go clickety-clack, and after 100 steps it
outputs "42". Next, consider another identical machine, M2, into which
you input "6*7", but at the 73rd step you destroy it. The next day on
the other side of the world, by fantastic coincidence, someone else
builds a machine, M3, which just happens to be in identical
configuration to M1 (and hence M2, had it not been destroyed) at the
73rd step. M3 then goes clickety-clack through steps 74 to 100 and
outputs "42".
"I would agree with you that even though the activity of M2/M3 seen in
combination might look the same as the activity of M1, they are not
equivalent computational systems. This is because M1 would
appropriately handle a counterfactual, but M2/M3 would not: if the
input to M1 had been "4*5" the output would have been "20", whereas if
the input to M2 had been "4*5" the output from M3 would have still
been "42", as the lack of a causal link between M2 and M3 means there
is no way for the input of M2 to influence the output of M3. The
obvious significance of this is that M2/M3 is useless as a
computational device. It could be made useful by introducing reliable
information transfer between the two machines, say by an operator
passing M2's final state to be used as M3's initial state. The new
M2/M3 system is then equivalent to the intact M1, albeit a bit slower
and more cumbersome.
"Now, let's suppose that implementation of the computation 6*7 = 42 is
associated with a primitive moment of consciousness, and for
simplicity that this is the case only if the computation is
implemented in full. We would then both agree that M1 and M2/M3 with
reliable information transfer would give rise to consciousness. You
would argue that M2/M3 without reliable information transfer would not
give rise to consciousness. But what if the information transfer
doesn't fall into the all or none category? For example, what if the
operator transfers the right information some of the time based on
whim, but never reveals to anyone what he decides? The M2/M3 system
(plus operator) would again be useless as a computation device to an
external observer, but on some runs, known only to the operator, there
will definitely be a causal link. Does consciousness occur on those
runs or not? Does it make a difference if the operator lies 99.999% of
the time or 0.001% of the time? Does the computation know when he's
lying, or does it know the proportion of time he intends to lie so
that it can experience fractional consciousness at the appropriate
level?
"You will have a hard time defining criteria (let alone a mechanism)
whereby a computation "knows" that there is a causal link. It is
simpler to assume that consciousness occurs purely as a result of the
right physical states being implemented, while the presence of a
recognisable causal link only determines whether the system can be
used by an external observer for useful computation."
This archive was generated by hypermail 2.1.5 : Mon Jun 17 2013 - 04:01:05 MDT