Re: Perpetual motion via entropy disposal (was Re: effective perfection)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Nov 26 2000 - 13:31:13 MST


Mitchell Porter wrote:
>
> First, the mistakes:
>
> >The first law of thermodynamics states that "You can't win"; you cannot
> >decrease the amount of entropy in the Universe.
>
> This is the second law. The first law is conservation of energy.

D'oh! I knew that. Actually, it would be more accurate to say that I've
*heard* that, but the version I keep on remembering is "You can't win, you
can't break even, and you can't get out of the game." Sorry.

> And: Hamiltonian != phase space. The Hamiltonian is a function
> on phase space, whose value is the total energy, and in terms
> of which you can write Hamilton's version of the laws of motion.

Double d'oh! I remember that too - or think I do; can't trust LTM.
Wasn't there a name for phase space, though? M-something? I can't find
it on the 'Net, and "phase space" is something for which it's really
convenient to have a single word.

Incidentally, one formulation I've run across online is that the
"Hamiltonian" refers to the (6N-1)D surface of constant energy in (6N)D
phase space, with the (as it turns out) first law of thermodynamics
acting as a constraint on the trajectory through phase space; i.e., any
individual point must move along the hypersurface of constant energy.
The memory I seem to remember having is that the "Hamiltonian" is the
energy as a function of phase space - whose graph is a (6N)D hypersurface
in (6N+1)D space - and is used locally to describe potential energy
surfaces. Sigh... I either need to actually read my set of the Feynman
lectures, find a good physics encyclopedia, or give up physical science.
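
For concreteness, here's a toy sketch - my own example, in Python, not
anything from Mitchell's post - of the distinction being drawn: the
Hamiltonian is a *function* on phase space, and the constant-energy
surface is just one of its level sets, which the trajectory never leaves.

    # The Hamiltonian as a function on phase space, for a 1D harmonic
    # oscillator: H(q, p) = p^2/(2m) + k*q^2/2. Phase space is the
    # (q, p) plane; a "constant-energy surface" is the level set
    # H(q, p) = E - here, an ellipse.
    m, k = 1.0, 1.0

    def H(q, p):
        """Total energy at the phase-space point (q, p)."""
        return p * p / (2 * m) + 0.5 * k * q * q

    def step(q, p, dt=1e-3):
        """One leapfrog step of Hamilton's equations:
        dq/dt = dH/dp = p/m, dp/dt = -dH/dq = -k*q."""
        p -= 0.5 * dt * k * q
        q += dt * p / m
        p -= 0.5 * dt * k * q
        return q, p

    q, p = 1.0, 0.0
    E0 = H(q, p)
    for _ in range(10000):
        q, p = step(q, p)
    print(E0, H(q, p))  # equal to integrator accuracy: the point stays
                        # on the level set H = E0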

> Now, some more substantive comments.
>
> - The negative-energy PMM
>
> Paul Davies talks about negative-energy perpetual motion
> machines at http://www.newscientist.com/ns/980321/features.html,
> along with the extra influences that would stop them working.

<RANT>

Paul Davies sez: "It is easy to dream up scenarios that produce
unphysical or paradoxical consequences--building perpetual motion
machines, or even travelling backwards in time. To physicists, these are
alarming notions."

Who knows, Davies may turn out to be right; but even so, this strikes me
as the sort of thing that future historians call a 'blind spot', like the
unbreakability of the sound barrier or the impossibility of
heavier-than-air flight. "Despite numerous demonstrations that their own
theories permitted the possibility of time travel / negative energy /
naked singularities etc., twentieth-century physicists still refused to
believe, calling them 'alarming notions'." This rant is more about time
travel than about thermodynamics, but real physical laws don't look like
they're about to be violated every second Tuesday. Sometimes physics
fights back, as when it turns out that the Alcubierre warp drive required
more energy than existed in the entire Universe, but sometimes the Cosmic
Censorship patches turn out to be equally flawed themselves - the
Alcubierre drive was replaced by the Van Den Broeck drive, and so on. The
original "Cosmic Censorship" hypothesis preventing naked singularities
also turned out to be flawed, as I recall.

Real physical laws, like conservation of mass and energy, are not
preserved by an accumulation of Cosmic Censorship patches; the global law is the
result of the absolute firmness of the local law. Paul Davies talks about
"permissible fluctuations in entropy", but did you ever hear of a
permissible fluctuation in mass-energy?

If current physical theory offends a physicist, so what? God does play
dice with the Universe, Einstein, get over it. Even the more imaginative
physicists, like David Deutsch, suffer from the same disease, confining
closed timelike curves to quantum universes and so on. Time travel,
*including* global causality violation, is explicitly permitted by General
Relativity. Physicists will just need to learn to deal with it.

Oh, never mind. I'm not a physicist. You can get personally annoyed with
physical theories, but you're supposed to buy the right to do so with a
physics doctorate. This system works so well generally that I would have
no real objection to being slapped down by it personally. Not my job.

</RANT>

> I would take issue with the description of phase space
> here. Suppose you have a world in which particles can be
> created and destroyed. This doesn't mean that phase
> space changes in size each time that happens, it means that
> phase space is the union of zero-particle phase space,
> one-particle phase space, two-particle phase space, etc.;
> and when a particle is created, you move from the N-particle
> region to the (N+1)-particle region.

This is an interesting way of looking at it. It looks to me like
particles being created and destroyed - in particular, negative-energy and
positive-energy particles annihilating, as opposed to matter-antimatter
interactions which give rise to gamma rays or other new particles - are
intrinsically many-to-one or one-to-many rather than one-to-one. As I
understand it, there is no way to predict, from the current state of the
Universe, where a negative and positive particle pair will come into
existence; thus the current phase-space point of the Universe can give
rise to multiple points at the next instant - one possible alternate point
for each place a virtual particle pair can pop up. Similarly, a virtual
pair ("virtual pair" in the negative-positive, not matter-antimatter,
sense) can annihilate either over *here* or over *there*, and the state of
the Universe will be the same either way, since neither particle exists any
more - a many-to-one interaction. It's an interesting question whether
this same objection applies to Feynman diagrams; that is, whether a
positron and electron can collide in two different ways and give rise to
precisely the same gamma photons. I myself don't know.

(As Hal Finney from "Extropians" put it:

> However I'm not sure it makes sense to explain that the whole idea works
> because of increasing the volume of phase space due to adding particles.
> If all it took to violate thermodynamic laws were to change the number
> of particles, it would happen all the time. Chemical reactions change
> the number of molecules, which is often what we count in thermodynamics,
> and nuclear reactions can change the number of subatomic particles.

)

Anyway.

There are two possible ways that entropy could be conserved: either the
method of manufacturing negative matter could intrinsically create as much
additional entropy (through one-to-many interactions) as could be
destroyed by annihilating the matter (through many-to-one interactions);
or, it could be demonstrated that phase space really is unified and all
interactions involved are one-to-one; that is, no two different
annihilations will give rise to the same state of post-annihilation
Universe.
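
A counting sketch of why one-to-one versus many-to-one is the whole
question - my own illustration, using a made-up eight-state toy system:
reversible dynamics is a permutation of microstates and preserves the
count W of occupiable states, while any many-to-one map can shrink W,
and with it the Boltzmann entropy log W.

    import math

    # Toy system: the microstates are the integers 0..7.
    one_to_one = {s: (s + 3) % 8 for s in range(8)}   # a permutation
    many_to_one = {s: s // 2 for s in range(8)}       # pairs merge

    def boltzmann_entropy(states):
        """log W for a set of equally likely microstates."""
        return math.log(len(states))

    occupied = set(range(8))                       # W = 8, S = log 8
    after_1to1 = {one_to_one[s] for s in occupied}
    after_Nto1 = {many_to_one[s] for s in occupied}

    print(boltzmann_entropy(occupied))    # log 8
    print(boltzmann_entropy(after_1to1))  # log 8 - W is preserved
    print(boltzmann_entropy(after_Nto1))  # log 4 - entropy destroyed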

It's hard to tell which theory is represented by Paul Davies's Censorship
objections, but I would guess the former, since he does talk about
"permissible fluctuations in entropy". Paul Davies's objections seem to
show that if you try to take a beam of negative energy and use it to cool
something off, or if you try to go somewhere where there's a lot of
negative energy, you'll get hit by at least as much positive energy as
negative. This is not too surprising - conservation of mass-energy says
that positive energy has to show up somewhere. My specific suggestion
relied on actually manufacturing negative *matter*, not just beams or
regions of negative energy - which, admittedly, is a much less analyzable
proposal, since there's much more real analysis of negative *energy* than
of negative *matter*. Nonetheless, physicists do make proposals relying on
negative matter, not just negative energy, and I recall that some
manufacturing proposals have been made. The lynchpin of my proposal is
not to just cool down matter with a negative-energy beam; it is to use the
negative and positive matter as actual heat sinks for the entropy of your
laced positive-matter and negative-matter Solar-System-sized computer,
then throw the matter away. In particular, it would be interesting to see
what Paul Davies would make of a proposal to transfer entropy to the
negative energy or positive energy itself before it annihilated, rather
than directing negative energy at an external target.

> - The quantum collapse PMM
>
> The proposal here is pretty vague. It's just, 'what if
> entropy-increasing processes somehow acquired very low
> probability'.
>
> There is a 'quantum thermodynamics', it's called statistical
> mechanics, and entropy still increases there. There's interesting
> recent work in quantum information theory which suggests that
> entropy and entanglement are related, and maybe there will be
> distinctively quantum ways to *locally* decrease entropy ... but
> there's no inkling of a global violation of the second law.
>
> Once again on phase space ... You can conceive of a
> quantum state as a wavefunction on a classical state space,
> or as a point in Hilbert space, but in neither case does
> 'phase space'(*) itself change size when a wavefunction collapses.
> In the first case, the wavefunction is suddenly restricted to
> a small region of state space; in the second case, the state
> vector jumps to a different point in Hilbert space.
> (*) Technically, I'd rather say 'configuration space', since
> phase space refers to a space which has a position *and*
> a momentum coordinate for each degree of freedom, and neither
> Hilbert space nor the space upon which a wavefunction is based
> is like that.

The key feature of the proposal is not that the phase space changes sizes;
that is simply one way of looking at the underlying "problem", which is
that multiple points in phase space move to single points in phase space.
E.g., an electron with a .8 probability amplitude of being at A and a .6i
amplitude of being at B, and an electron with a .6 amplitude of being at A
and a .8i amplitude of being at B, can both collapse to having a 1
amplitude of being at A.
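
To make that concrete - my own sketch, using the amplitudes above: two
*distinct* normalized states map onto the identical post-measurement
state, which no one-to-one dynamics could do.

    import numpy as np

    # Two different normalized one-electron states over positions {A, B}.
    psi1 = np.array([0.8, 0.6j])  # amplitude .8 at A, .6i at B
    psi2 = np.array([0.6, 0.8j])  # amplitude .6 at A, .8i at B

    for psi in (psi1, psi2):
        assert np.isclose(np.vdot(psi, psi).real, 1.0)  # both normalized
        print(abs(psi[0]) ** 2)  # Born probability of A: 0.64, then 0.36

    # If the measurement comes up A, *both* states collapse to the same
    # post-measurement state - a many-to-one map on the space of states.
    collapsed = np.array([1.0 + 0j, 0j])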

Hal Finney's objection in the parallel Universe of "Extropians" is quoted
here:

> If you think of a many worlds model, any such attempt would not actually
> blip any other universes out of existence, but rather it simply allows
> you to learn where you are in the multiverse. Learning where you are
> can't change the overall statistics of what happens. So I think this
> blipping-out is a bad model and misleading about what could happen.

The problem is, of course, that the many-worlds model just isn't true. If
quantum collapse were an observational-effect illusion, it would make no
difference what the probability amplitudes were; we would have the same
observed chance of winding up in a .2 probability Universe as a .8
probability Universe. In this case, of course, my proposed method for
entropy disposal would totally fail.

Hal Finney's objection could be resurrected by pointing out that
decreasing the probability of high-entropy Universes, or increasing the
probability of low-entropy Universes, could require that a superposed
state learn how much entropy it has, which - in turn - could turn out to
generate more entropy than could be disposed of. After all, getting
particular states to add up or cancel out has to happen *before* the
quantum collapse. That would be a classic Censorship patch. It might be
possible, however, to generate all the self-observation entropy in the
high-entropy superposed states that are cancelling themselves out, thus
getting around that objection as well.

My mental visualization here is not a Maxwell's demon situation, but
something more "emergent" - some way of ensuring, e.g., that electrons
which all happen to be moving in the same direction have amplitudes that
build up, while electrons which happen to be moving in different
directions have amplitudes that cancel out. From the internal,
pre-collapse perspective, no entropy violation has occurred. Let's say
there are two electrons. There are four superposed states where the
electrons are moving in the same direction, and four superposed states
where the electrons are moving in opposite directions. It's just that the
first four superposed states all have amplitudes of .8i, while the second
four superposed states have amplitudes of .8i, -.8, -.8i, and .8
respectively, cancelling each other out (ignoring overall normalization;
it's the relative phases that matter). In which case you could create a
crystal in which quantum effects turned heat directly into electricity,
although it might not be direct current, or at least not direct current of
a predictable direction, as that would violate CPT.
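
The cancellation arithmetic, normalized - my own numbers, with 1/sqrt(8)
in place of the illustrative .8's: the four same-direction amplitudes
are in phase and reinforce, while the four opposite-direction amplitudes
sum to exactly zero.

    import numpy as np

    a = 1 / np.sqrt(8)  # equal magnitude for all eight branches;
                        # 8 * a**2 = 1, so the state is normalized

    same = np.array([a*1j, a*1j, a*1j, a*1j])    # in phase: build up
    opposite = np.array([a*1j, -a, -a*1j, a])    # phased to cancel

    print(abs(same.sum()) ** 2)      # 2.0 - constructive interference
    print(abs(opposite.sum()) ** 2)  # 0.0 - destructive interference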

> - The time-travelling Maxwell's-demon PMM
>
> This I haven't quite heard before, although I daresay someone
> who studies wormholes has thought about thermodynamics in
> wormhole spacetimes.
>
> But there's a hidden energy cost in waiting for the low-entropy
> states to come along. It takes energy to register the current
> state and decide whether it's low entropy. So entropy will be
> generated by the selection process. This is the parable of
> Maxwell's demon.

Maxwell's demon applies to individual particles, rather than whole
systems. Different rules might apply if you're willing to wait a
"decillion" years to get your low-entropy state. In other words, it could
be that the entropy cost of Maxwell's demon making one "selection" is
constant, and that it's performing the repeated discrimination on each and
every particle that defeats the process. Of course, it's just as possible
that constructing a demon which will actually last for a "decillion" -
I'll just say "zillion" - years would cost far more than any useful work
that could be gained from the system.
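
Back-of-the-envelope, assuming the standard Landauer bound of kT ln 2
per bit erased - the particle count and temperature here are arbitrary:
the classic demon pays per particle measured and forgotten, while the
constant-cost "one selection" speculated above would not scale with N
at all.

    import math

    k_B = 1.380649e-23           # Boltzmann constant, J/K
    T = 300.0                    # room temperature, K
    bit = k_B * T * math.log(2)  # ~2.9e-21 J to erase one bit

    N = 6.022e23                 # roughly a mole of gas particles

    per_particle = N * bit  # classic demon: ~1.7 kJ, scales with N
    one_shot = 1 * bit      # hypothesized single selection: constant

    print(per_particle, one_shot)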

But your objection is probably correct; e.g., performing a discrimination,
or continually checking for one, on an entire physical system, could
easily turn out to have entropy costs far in excess of entropy gained.

A more interesting idea is this: Perform the Maxwell's Demon
discrimination, extract the information, send the information backwards in
time, and then don't perform the costly process that got you the
information in the first place. This throws away entropy into a future
that gets wiped out of existence by tampering with the past.

> As for phase space here ... the Hamiltonian framework describes
> the state of the universe by a point in phase space, and the
> history of the universe by a path in phase space. A universe
> with time travel is probably better described in some other
> way, since it likely can't be divided up into a simple series
> of 'spacelike surfaces', it will have some more complicated
> topology.

Yes, that's what I see as a possible opportunity for violating the usual
"conservation of phase space volume" rules.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


