Re: Objective Meaning Must Exhibit Isomorphism

From: Lee Corbin (lcorbin@rawbw.com)
Date: Wed Mar 05 2008 - 22:10:41 MST


Stathis writes

> Yes, I was exploring the idea that a random system might still have
> "objective meaning". It will of course be meaningless to any external
> observer, but it might be meaningful to its own *internal* observers.

That's a remarkable thesis, and I'm a little skeptical. If by "random"
you mean that the system's Kolmogorov complexity (KC) really does equal
its size in bits (which is what I mean by "random"), then it defies my
understanding that it could even have internal observers.
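
To pin the term down: a string is random in this sense when no program
much shorter than the string itself can reproduce it. True KC isn't
computable, but here is a rough sketch in Python using an off-the-shelf
compressor as a stand-in upper bound (the compressor and the test
strings are purely illustrative assumptions, not anything from this
thread):

    import os, zlib

    def looks_random(x: bytes) -> bool:
        # If even a good compressor cannot shrink x appreciably, its
        # shortest known description is about as long as x itself.
        return len(zlib.compress(x, 9)) >= len(x)

    print(looks_random(os.urandom(10_000)))  # typically True
    print(looks_random(b"ab" * 5_000))       # False: highly compressible

A box whose contents pass that sort of test everywhere is the box I
have in mind below.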

To be concrete, there is this completely sealed box in front of you
that happens to have Kolmogorov complexity on the order of its number
of atoms (roughly its mass in grams times Avogadro's number, divided by
the molar mass), something like a bit per atom, or some large multiple
thereof depending on how much information per cubic meter the box can
possibly hold. But any observer, even a photographic plate, has
non-random structure. So how can this be?

>> That was sort of Hilary Putnam's point when he said that any rock
>> emulates anything. But to me, it's pointless unless an emulation is
>> driven causally in real time in order to effect a computation.
>
> I don't see why you say that. A moment of consciousness implemented in
> a dust cloud should be just as good as a moment of consciousness
> implemented in your head.

Don't want to be picky, but how long is a "moment"? For me, if
there is no information flow, there cannot be consciousness. <sigh>
I admit that by "information flow" I'm subscribing to a belief in
the importance of time as a basic physical reality---I've never
bought into Barbour's views or any of that. It's all too theoretical
and too uncertain.

For example, if you were right, then it could be that small, local,
coherent systems like Lee Corbin account for only 0.00000000001 of
where I really get my runtime. In that case, it wouldn't matter if this
particular organization in this little corner of California in this little
corner of our galaxy continued to execute.

Important: morally, it wouldn't be such a great crime to wipe out
people here on this little planet, since .99999999999 of their
experience would go on just as before (on the reading, say, that
our level-one universe is incredibly vast, but not infinite, and there
is just one planet Earth).

Lee

> You might argue that the advantage of a head
> is that it will continue producing subsequent moments of consciousness
> while the dust cloud will not. However, that isn't so if anything can
> be interpreted as a computation: the dust cloud will produce all of
> your moments of consciousness in parallel; or if you object to that
> sort of economy some other dust cloud vastly distant in time and space
> will implement your next moment, and another one the moment after
> that, and so on. This seems to me to be an inevitable consequence of
> the multiple realisability criterion of functionalism. We can avoid it
> if we say the brain contains special non-computable processes, but I
> see no reason to do that.


