Re: Parallelism (Re: Introduction)

From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Fri Sep 09 2005 - 06:37:17 MDT


J. Andrew Rogers wrote:

>On 9/8/05 11:30 PM, "Tennessee Leeuwenburg" <tennessee@tennessee.id.au>
>wrote:
>
>
>>Quantum theory allows us the possibility of using action at a distance
>>to ensure consistency over a large area.
>>
>>
>
>
>Last I checked, quantum theory does not allow information to move faster
>than the speed of light. Being able to absolutely guarantee synchronicity
>between any two points in space at any specific moment in time is the
>computer scientist's wet dream. That we pretend we can do such things does
>not mean that we actually can. All we can really do is reduce the
>probability of an incorrect state with clever tricks, like error
>detection/correction that can handle a subset of possible failures.
>
>IIRC, the inability to guarantee correctness of state was discussed in GEB
>albeit obliquely.
>
>
I addressed this in my own follow-up, which you may or may not have
noticed. I wasn't suggesting you could use quantum mechanics to
communicate instantly, but rather that you could operate on data over
here which is entangled with data over there being operated on by
another process, and know that the two locations were proceeding from
a consistent starting point.

I might be wrong, but I thought this would bring an advantage, because
you could "know" something about the other system even though it is far
away. For example, you could measure your local qubits and from that
know the basis on which the distant processing is proceeding. I don't
know whether that's really useful, but it might help.

But I agree it's a weak point; it's certainly not a silver bullet for
the problem, just something I was speculating might be useful. Perhaps
best left unsaid.
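
Still, to pin down what I meant, here's a toy sketch in Python --
entirely my own illustration, with a shared classical seed standing in
for the entangled state, since real entanglement can't be reproduced
this simply. It shows only the one property I was leaning on: two
separated processes obtaining correlated "measurements" with no message
passing after they separate.

    import random

    def make_entangled_pair(seed):
        # A shared random seed stands in for the entangled state. Real
        # entanglement is not reproducible this way, but the property
        # used here -- same-basis measurements that always agree, with
        # no communication -- comes out the same.
        rng = random.Random(seed)
        outcome = rng.randint(0, 1)   # fixed when the pair is created
        return (lambda: outcome, lambda: outcome)

    here, there = make_entangled_pair(seed=42)

    basis_a = here()    # process A measures its half locally ...
    basis_b = there()   # ... process B, far away, measures its half

    # No message was sent, yet both processes now share a known,
    # consistent starting point for their computations.
    assert basis_a == basis_b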

>>If a brain was so large that one region could only produce output that
>>would affect another region "tomorrow", it would be difficult to
>>imagine a resulting consciousness which experienced a consistent
>>history, but was able to experience "moments" lasting less than the
>>time taken to achieve consistency between brain areas.
>>
>>
>
>
>One of the basic mechanisms of distributed computing is having every locale
>have a consistent view of history, even if it is on a different point of the
>timeline than other locales in some absolute context. The mark of a
>generally effective distributed processing system is that if you turned off
>all external input there is a very high probability that it would eventually
>settle into a globally consistent state. It is worth pointing out that
>databases like Oracle or PostgreSQL do exactly this as well to guarantee
>correctness on single processors with high concurrency (virtual parallelism)
>and an extremely high probability of correctness. The modern Multi-Version
>Concurrency Control (MVCC) family of algorithms is a distributed processing
>algorithm designed to approximately guarantee eventual global consistency
>even in single address spaces, never mind networked systems. And in fact,
>databases do give every user a consistent tidy view of the database, but
>often two different users will have different inconsistent views that are
>nonetheless internally consistent for each user. As is in evidence, this
>usually works out very well in practice despite the fact that different
>views in the system are operating on different points in the database's
>history.
>
>For many purposes, significant synchronization latencies will not impact the
>correct functioning or consistency of the overall system as viewed from any
>given point in the system. However, synchronization latencies will impact
>the overall throughput and intelligence of the system.
>
>
Interesting ... What about the system's awareness of itself? Just from
introspection, it doesn't seem like my consciousness is seated in
anything more locationally precise than "my brain". Bits of my brain
variously inform my consciousness to a greater or lesser extent, but I'm
not sure that's what you are talking about.

If a brain were so large that a globally consistent self-awareness were
not possible, what might happen? Either the self-awareness would
"ignore" inconsistencies -- i.e., it wouldn't notice things happening on
time-scales shorter than the time it takes to settle into a reasonably
consistent state -- or the self-awareness wouldn't be global
self-awareness, or there would be multiple consciousnesses emerging from
the same brain.
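
To put a rough number on "the time it takes to settle", here's a toy
model in Python (my own sketch, assuming nothing about real
neuroanatomy): regions in a line, each able to exchange state only with
its immediate neighbours on each step. A fact injected at one end needs
about one step per region to become global, so the shortest globally
consistent "moment" grows with the brain's diameter.

    def steps_to_settle(n_regions):
        # state[i] == 1 once region i has learned the new fact
        state = [0] * n_regions
        state[0] = 1                  # the fact appears in region 0
        steps = 0
        while not all(state):
            # each step, a region learns whatever its neighbours know
            state = [
                max(state[max(i - 1, 0)],
                    state[i],
                    state[min(i + 1, n_regions - 1)])
                for i in range(n_regions)
            ]
            steps += 1
        return steps

    for n in (4, 16, 64):
        print(n, steps_to_settle(n))  # prints n - 1: settling time
                                      # scales with the system's diameter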

I really like your discussion about settling into a globally consistent
state, and the example of a very complex database. In your example, you
talk about locally consistent views of the database. To continue the
analogy with a brain, what do these locally consistent views represent?
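
To make the question concrete, here's a minimal sketch of the
MVCC-style snapshot reads you describe -- a toy of my own, not how
Oracle or PostgreSQL actually implement it. Every write gets a commit
timestamp, and each reader sees the database as of its own start time,
so two readers can hold different but individually consistent views of
history at the same moment.

    import itertools

    class MVCCStore:
        def __init__(self):
            self.clock = itertools.count(1)  # global logical timestamp
            self.versions = {}               # key -> [(ts, value), ...]

        def write(self, key, value):
            # a write never overwrites; it appends a new version
            ts = next(self.clock)
            self.versions.setdefault(key, []).append((ts, value))

        def snapshot(self):
            # a read-only view pinned to the current point in history
            as_of = next(self.clock)
            def read(key):
                old = [v for v in self.versions.get(key, ())
                       if v[0] <= as_of]
                return max(old)[1] if old else None
            return read

    db = MVCCStore()
    db.write("x", "old")
    reader_1 = db.snapshot()    # pinned before the update
    db.write("x", "new")
    reader_2 = db.snapshot()    # pinned after the update

    assert reader_1("x") == "old"   # consistent, at an earlier point
    assert reader_2("x") == "new"   # consistent, at a later point

The point of the sketch: neither reader is wrong; each is just pinned
to a different point in the database's history.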

Cheers,
-T


