## List of MacBook Hardware Problems

December 28, 2007

My MacBook is about 16 months old now, and already I’ve had to deal with the following issues:

• Two hard disk failures
• Splitting of handrest area
• Failure of internal microphone (it seemed to work for a minute after resetting the SMC, but quickly stopped responding to my test noises again, and has failed to work after subsequent resets of both the SMC and NVRAM)

And why, why am I able to use Gmail Chat in Safari for one Gmail account, but not for another? I can’t even change anything under the settings for the latter account to enable Gmail chat when I’m using Safari — the ‘Chat’ options tab is not available.

I am almost sure that my next laptop will be running Linux only.

## The Overhyped Cosmological Arrow of Time

December 23, 2007

Something had been bugging me about Sean Carroll’s Arrow of Time FAQ, and it was probably because his answers were too pat. For example, after acknowledging that the entropy of the universe is not well-defined, he writes:

If you don’t understand entropy that well, how can you even talk about the arrow of time?

We don’t need a rigorous formula to understand that there is a problem, and possibly even to solve it. One thing is for sure about entropy: low-entropy states tend to evolve into higher-entropy ones, not the other way around. So if state A naturally evolves into state B nearly all of the time, but almost never the other way around, it’s safe to say that the entropy of B is higher than the entropy of A.

It sounds like here he’s saying that entropy can be defined as that which always increases in time. Akin, perhaps, to Boltzmann’s assertion in his Lectures on Gas Theory that the direction in which entropy increases is always the “future” in the same way that the direction pointing away from the centre of the earth is always “up”. But if that is the case, we’d hardly need a cosmological explanation of the second law. After all, even if entropy had undergone a monotonic decrease from the Big Bang until now, we shouldn’t (by Sean’s argument) interpret it as decreasing with time. We’d either redefine entropy to be something else that we conveniently found to at least mostly increase with time, or invert our direction of time — interpret time to be increasing towards the Big Bang.

Either way, we can’t be sure that there is a problem that needs to be solved by a cosmological solution. If we can redefine entropy once we find that it has not been increasing with time the way we want it to, then why can’t we now just redefine entropy to be something else other than the inconvenient thing that seems to increase in time persistently for all systems despite our microdynamically time-reversible laws? If we can invert our interpretation of the direction of time to fit the direction of increasing entropy, then there is just no need for a cosmological solution — just reinterpret the local entropy minimum to be when time ‘started’!

This is why we shouldn’t blithely commit ourselves to saying that entropy’s principal characteristic is that it increases with time. Presumably there are other characteristics of entropy we hope to retain as we search for a definition of the universe’s entropy. So the fact that we can’t find a proper definition for the universe’s entropy should bother us — if we can’t find anything that resembles entropy in aspects other than its increase with time, then we can’t use our makeshift definitions of the universe’s entropy to back up the claim that the beginning of the universe was a low-entropy state. It certainly couldn’t be anything other than a low-entropy state if we’ve defined our criteria for a definition of entropy to be one which characterizes the Big Bang as a low-entropy state.

As I commented over at Cosmic Variance, I recommend this John Earman paper for an account of how all the cosmological definitions of entropy to date have been unsatisfactory.

## Classical and Quantum Distinguishability

December 23, 2007

I’ve written before on the spuriousness of the claim that one needs quantum mechanics to understand the ‘correction’ of a factor of N! to the usual expression for the entropy of an ideal gas. Van Kampen, Jaynes and Swendsen have independently made this argument, though with different approaches.[1] The upshot is that if you take a probabilistic definition of entropy, it doesn’t matter whether you calculate the entropy of a classical ideal gas by assuming that its particles are distinguishable or by assuming that its particles are indistinguishable: you get the same, correct Sackur-Tetrode entropy either way.
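For reference, the Sackur-Tetrode entropy of a monatomic ideal gas of N particles with total energy U in a volume V (which both calculations should reproduce) is

```latex
S = N k_B \left[\, \ln\!\left( \frac{V}{N} \left( \frac{4 \pi m U}{3 N h^2} \right)^{3/2} \right) + \frac{5}{2} \,\right]
```

where m is the particle mass and h is Planck’s constant; the 1/N inside the logarithm is exactly the piece the N! ‘correction’ supplies.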

But if the distinguishability of classical particles has nothing to do with the resolution of the Gibbs paradox, then what lies behind the difference between classical statistics and quantum statistics? Why does ‘classical indistinguishability’ have no effect on classical statistics, while quantum indistinguishability leads to completely different statistics? According to Simon Saunders,[2] the answer lies in the discretization of the phase space of quantum systems.

When we construct a probability measure over the phase space of a classical system, the most natural choice is that of the volume of phase space. A probability measure over the phase space of a quantum system, though, would be a discrete measure that counts the possible distinct number of states allowed. Consider, then, a very simple quantum system consisting of two indistinguishable particles with three orthogonal states. This system has six possible orthogonal two-particle states, because (2, 1) for example is the same state as (1, 2) if the particles are indistinguishable.
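To make the counting concrete, here is a quick sketch (my own illustration, not from Saunders’ paper) that enumerates the two-particle states:

```python
from itertools import combinations_with_replacement, product

# Single-particle states labelled 1, 2, 3.
levels = [1, 2, 3]

# Distinguishable particles: ordered pairs, so (1, 2) and (2, 1)
# count as different two-particle states.
distinguishable = list(product(levels, repeat=2))

# Indistinguishable particles: unordered pairs, so (1, 2) and (2, 1)
# count as one and the same state.
indistinguishable = list(combinations_with_replacement(levels, 2))

print(len(distinguishable))    # 9
print(len(indistinguishable))  # 6
```

The six surviving states are the three ‘diagonal’ pairs (1, 1), (2, 2), (3, 3) plus the three unordered off-diagonal pairs.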

Now consider the classical analogue of this quantum system. Since a classical system has a continuous phase space, the classical analogue is that of a system with its 2-D phase space coarse-grained on each ‘particle’ dimension into three different sections (all figures from Saunders’ paper):

When we consider indistinguishable classical particles, however, we have to halve the above phase space diagram, since if two states with particles swapped are to count as one and the same state, then the diagram is symmetric about its diagonal:

This halved diagram then correctly represents the phase space representation of a classical system of two indistinguishable particles. Note, though, that even though the two particles are indistinguishable, if we take the usual volume-of-phase-space probability measure, the ‘diagonal’ states (1, 1), (2, 2) and (3, 3) are half as likely as the non-diagonal states. And this is exactly what we would have gotten had we considered the particles distinguishable instead and counted (1, 2) and (2, 1) as two distinct states, since there would be twice as many non-diagonal combinations as diagonal combinations. So taking the probability measure on the reduced phase space of the second diagram gives us the same results as considering permutations of distinct states on the full phase space of the first diagram. Since entropy is, in the modern formulation, a measure of the probability of a system’s macrostate, the entropy of a classical system is not affected by considerations of distinguishability.
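The halving is easy to check by brute force. In this sketch (again my own, not the paper’s), we fold the nine equally weighted ordered cells onto unordered states; each diagonal state inherits one cell’s worth of volume while each off-diagonal state inherits two:

```python
from collections import Counter
from itertools import product

levels = [1, 2, 3]

# Uniform (volume) measure on the full phase space of ordered pairs;
# identifying swapped pairs means each unordered state inherits the
# total weight of the ordered cells that map onto it.
weights = Counter(tuple(sorted(pair)) for pair in product(levels, repeat=2))

total = sum(weights.values())  # 9 equally weighted cells in all
for state in sorted(weights):
    print(state, weights[state], "/", total)
```

Running this gives weight 1/9 to each diagonal state and 2/9 to each off-diagonal state, exactly the distinguishable-particle counting.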

Quantum systems, on the other hand, are impervious to the classical effects of ‘halving of the phase space’, since as discrete systems they are not reliant on the volume of phase space for their probability measure. In the above example, if the system is quantum with distinguishable particles, then we have nine possible states each with equal probability. If the system has indistinguishable particles, we have only six possible states, each one equally weighted. So moving from distinguishable particles to indistinguishable particles does result in a change in statistics (and entropy) in the quantum case.

Naturally, Saunders has a more mathematically detailed exposition than this, but I thought the simple phase space diagrams provided the most immediate and natural feel for the crux of the issue.

The phase space diagrams also allow us to see why Pauli spoke of Bose-Einstein statistics causing particles to “condense into groups” of the same kind. In the quantum indistinguishable system, the diagonal states, containing pairs of particles in the same states, are weighted just as much as the non-diagonal, heterogeneous states. In the classical system, the diagonal states are weighted less. So groups of particles in the same states are favoured in quantum statistics relative to classical statistics.
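The favouring of same-state pairs drops straight out of the two counting schemes (an illustrative calculation of mine, not the paper’s):

```python
from fractions import Fraction
from itertools import combinations_with_replacement, product

levels = [1, 2, 3]

ordered = list(product(levels, repeat=2))                   # classical cells
unordered = list(combinations_with_replacement(levels, 2))  # quantum states

# Probability that both particles occupy the same single-particle state.
classical_same = Fraction(sum(a == b for a, b in ordered), len(ordered))
quantum_same = Fraction(sum(a == b for a, b in unordered), len(unordered))

print(classical_same)  # 1/3
print(quantum_same)    # 1/2
```

Equal weighting over six quantum states puts probability 1/2 on a same-state pair, against 1/3 under the classical volume measure: Pauli’s ‘condensation into groups’.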

There are some other interesting points in the paper about how distinguishability relates to the persistence of properties over time, which I might comment on later.

[1] E. T. Jaynes, “The Gibbs Paradox”, in Maximum-Entropy and Bayesian Methods, C. R. Smith, G. Erickson, and P. Neudorfer, eds. (Kluwer, Dordrecht, 1992), pp. 1-22; R. H. Swendsen, “Statistical mechanics of classical systems with distinguishable particles”, J. Stat. Phys. 107:1143 (2002); and N. G. van Kampen, “The Gibbs Paradox”, in Essays in Theoretical Physics, W. E. Parry, ed. (Pergamon, Oxford, 1984), pp. 303-312.
[2] S. Saunders, “On the explanation for quantum statistics”, Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, Vol. 37, No. 1. (March 2006), pp. 192-211.

## More Confirmation of the Impending Death of US Experimental Particle Physics

December 20, 2007

I think many people have long suspected something like this would happen sooner or later. After all, we have some precedent for Congress abruptly cutting HEP funds despite previous positive signs, and the funding trends of the last decade don’t bode well for the field. Read the gory details here.

The brutal summary:

• HEP funding for FY08 has been cut 10% from last year, despite recommendations from the Bush administration to increase it by a few percent.
• The cuts coming abruptly three months into FY08 mean that the International Linear Collider, which had not anticipated this, has already spent all of its [revised] FY08 budget.
• NOvA was allocated no funds at all, and it has already spent some money.

I learnt many, many things as an undergrad doing an experimental particle physics project, but the lesson to avoid the field like the plague (even though it has its fun aspects) might turn out to be the most valuable of all. You could smell the negative vibes from grad students in the field — most were preparing to get out of science and become quants instead.

Update:
More from the Chicago Tribune. It seems that Dennis Hastert’s resignation had something to do with the HEP budget cuts falling squarely on Fermilab.

## Aargh, My Eyes

December 18, 2007

As if we hadn’t had enough with Paul Davies’ embarrassing attempt at philosophizing, the NYT hands us another confused article, from the usually sensible Dennis Overbye. The usual quotes from naive Platonist theoretical physicists are standard fare, but paragraphs like the following are bad even by the usual standards of science reporting:

Plato and the whole idea of an independent reality, moreover, took a shot to the mouth in the 1920s with the advent of quantum mechanics. According to that weird theory, which, among other things, explains why our computers turn on every morning, there is an irreducible randomness at the microscopic heart of reality that leaves an elementary particle, an electron, say, in a sort of fog of being everywhere or anywhere, or being a wave or a particle, until some measurement fixes it in place.

In that case, according to the standard interpretation of the subject, physics is not about the world at all, but about only the outcomes of experiments, of our clumsy interactions with that world. But 75 years later, those are still fighting words. Einstein grumbled about God not playing dice.

Hmm, I wasn’t aware that those who adhere to the ‘standard’ (Copenhagen, or some variant thereof, I assume) interpretation of quantum mechanics were all staunch empiricists. I should think there are plenty of realists or semi-realists who adhere to the ‘standard’ interpretation of QM.

Overbye goes on, commenting on the ‘intrinsic randomness’ in quantum mechanics:

I love this idea of intrinsic randomness much for the same reason that I love the idea of natural selection in biology, because it and only it ensures that every possibility will be tried, every circumstance tested, every niche inhabited, every escape hatch explored. It’s a prescription for novelty, and what more could you ask for if you want to hatch a fecund universe?

Since when does natural selection ensure that every possibility will be tried and every niche inhabited? Even natural selection with mutations can’t do that — organisms still get stuck in local fitness maxima. But never mind that — it’s patently untrue that quantum mechanical randomness would ensure that every possibility will be tried, for any reasonable reading of ‘every possibility’. QM, if we ignore those pesky Bohmians and such, says only that some events are intrinsically random. That can be true even if the universe (or multiverse, if you’re Max Tegmark) does not explore every physically possible outcome.

And to top it off, we have the ridiculous Feynman quote that first caught my eye on Brian Leiter’s blog:

These kinds of speculation are fun, but they are not science, yet. “Philosophy of science is about as useful to scientists as ornithology is to birds,” goes the saying attributed to Richard Feynman, the late Caltech Nobelist, and repeated by Dr. Weinberg.

I don’t care to dispute the claim that philosophy of science is useless to scientists, because I think it may well be true, although that doesn’t mean it isn’t a worthwhile enterprise. What I do wish to dispute, though, is that the material covered in the article is an accurate representation of philosophy of science. Tellingly, of the academics quoted in the article, ten were physicists by profession and only one was a philosopher of science. That isn’t a knockdown criticism of the article’s claim to be about philosophy of science, though, since it’s quite possible for scientists to do philosophy of science, and it’s certainly true that the physicists interviewed veer closer to it than do most of their colleagues. But any philosopher of science worth her salt would wince at the crude description of the nature of physical laws on the first page of the article; the complexities of the philosophical debate over the status of physical laws are only hinted at by the quote from Steven Weinstein on the second page.

The crude, unjustified Platonism of Tegmark and Weinberg is also hardly philosophy of science. Bald assertions of belief are not philosophy of any sort. A Platonist philosopher of science would have plausible reasons for being a Platonist. For that same reason, Anton Zeilinger’s speculation that “reality is ultimately composed of information” is not philosophy of science either. Philosophy of science consists of a lot more than assorted metaphysical assertions; the meat of philosophy of science, and of philosophy in general, is in its arguments. Philosophy of science examines how science works and attempts to draw conclusions from logical arguments and the history and current goings-on of science. Calling assorted Platonist assertions ‘philosophy of science’ is like calling a five-year-old’s construction of a Lego sculpture civil engineering. There is some resemblance, but the label is far more misleading than enlightening.

## Slow Skim

December 16, 2007

I’ve tried Googling for instances of this problem, but can’t find anyone else who’s having it. Skim, a PDF reader-cum-annotator for Mac OS X, reads some PDFs, particularly (it seems) PDFs from JSTOR, painfully slowly. Top tells me that when I’m scrolling through a JSTOR PDF with Skim, Skim occupies up to 80% of the CPU. Skim’s annotation system is significantly superior to Preview’s annotation system pre-Leopard, but its slowness might just compel me to return to the much leaner Preview. I’m using Tiger 10.4.11 on a 2GHz, 512MB RAM MacBook. There seems to be no reason why Skim should need so much of my CPU.

Late Update: It turns out this is because JSTOR encrypts all its PDFs and Apple’s PDFKit is slow at decoding that encryption. No solution, sadly.

## A Compulsory Show

December 14, 2007

The third movement of Berio’s Sinfonia is on YouTube, in two parts. Beckett draped over a Mahler skeleton, interlaced with references to Ravel, Stravinsky, Debussy, and Boulez (amongst many others). What could be better?

## Mahler’s 1st Symphony With Becks Bottles

December 14, 2007

The most famous part of the symphony, at least.

For reference, here’s Abbado rehearsing the original with the Berlin Philharmonic.

## Orchestral Politics

December 11, 2007

I’ve been enjoying the blog of Michael Hovnanian, double bassist in an elite Midwestern orchestra, for some time. After one of his colleagues complained about his use of the name (and abbreviated versions thereof) of their orchestra, the blog has become even more amusing. The combined avoidance of the names of the orchestra and its former music director creates a suitably Stalinesque atmosphere for the discussion of orchestra politics. And the latest ‘XSO’ burrito image was what triggered me to write this post.

## Thermodynamic Reversibility

December 4, 2007

A long-stalled project of mine was to go around collecting definitions of thermodynamic reversibility from the peer-reviewed and pedagogical literature. This came about when I started noticing that the word ‘irreversible’ was often thrown out as part of an argument when it is not at all clear that the process in question is irreversible, or in what sense it could be said to be irreversible. Jos Uffink has a lengthy review and discussion of the various definitions of reversibility amongst the pioneers of thermodynamics. But the usage of the term isn’t any more standardised in modern-day discourse. Callen’s textbook defines a reversible process as one for which the entropy of the initial and final states is the same. Fermi and Huang define it as one for which the intervening states differ infinitesimally from equilibrium states (and hence can be considered to ‘be’ equilibrium states).

A typical cavalier use of ‘reversibility’ occurs in Van Kampen’s paper “The Gibbs Paradox” (in fact, it usually occurs most strikingly in papers that talk about the entropy of mixing).[1] Van Kampen writes:

If two vessels contain different gases A, B the entropy of the combined system is defined by [the convention of the additivity of entropy of formerly isolated systems]. When a channel is opened up an irreversible mixing occurs, so that this device cannot be utilized to define the entropy S(P, T, NA, NB) of the resulting mixture.

I’m still in the process of figuring out how important this is to his larger argument, but it seems that there is probably some circularity of reasoning involved in using irreversibility to make a point about definitions of the entropy of mixing. Obviously, if he has in mind Callen’s definition of entropy, he can’t then use the fact about irreversibility to infer facts about initial and final entropies, since irreversibility is defined by differing initial and final entropies. If he has in mind Fermi’s definition of entropy, then he must be assuming that the mixing process passes through states that are significantly far from equilibrium. And it’s not clear that the concept of ‘far from equilibrium’ can be articulated without a prior notion of the entropy of mixing.

[1] The paper can be found in this volume of essays.