## Tomb Raider Quantum Mechanics

February 27, 2007

Terry Tao explains QM with a Tomb Raider metaphor.

Seven years after deciding that understanding QM was high on my list of Things to Do, I still haven’t gotten very far. Contrary to my expectations, traditional undergraduate physics education hasn’t helped. This quarter’s mathematical physics course is the first in which Hilbert space has been mentioned in connection with QM: the compulsory QM sequence took the mind-numbingly calculational, physically uninteresting tack of Griffiths’ popular text, which has perhaps two pages on Hilbert space. So it’s back to knuckling down to Shankar on my own, just as I was trying to do three years ago.

## Never Write ‘1=1’

February 27, 2007

In the first proof-based math course I took, the instructor passed out a guide on how to do proofs, with a cover page that had just one phrase on it: “NEVER WRITE 1=1”. At that time I had found it merely amusing, not being one of those who assumed the conclusion of a proof and proceeded to prove a tautology, but now that I have to grade homework assignments of students who have never taken proof-based courses, it’s becoming more frustrating than amusing. Why do 80% of students assume the conclusion and prove the equivalent of 1=1 (while the other 20% simply don’t even get off the ground)? I suspect the answer lies with conventional pre-college mathematical training, where equalities that are presented in questions are normally premises that students are free to use in their calculations. When questions involving proofs introduce equalities that are hypothetical and meant to be proved, their calculating mechanisms fail to flag these as different, and they happily incorporate them in their calculations, at the end thinking they have ‘achieved’ something when they conclude ‘1=1’.

This would probably explain other examples of assuming the conclusion. For example, they can never differentiate between questions that ask them to prove continuity and questions that introduce continuity as a premise. In both cases they will assume that ∀ ε > 0, ∃ δ > 0 s.t. if |x-a| < δ, then |f(x)-f(a)| < ε. If you’re supposed to prove continuity, then that assumption will lead you to conclude, majestically, something along the lines of ‘1=1’.
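To make the failure concrete, here are the circular route and a correct route side by side, for a toy function of my own choosing, $f(x) = 3x$:

```latex
% Circular (assumes the conclusion):
%   ``Suppose $|x-a| < \delta$ implies $|f(x)-f(a)| < \epsilon$.
%     Then $3|x-a| < \epsilon$, so $\epsilon = \epsilon$.''  (i.e., 1 = 1)

% Correct (produces a $\delta$ for an arbitrary $\epsilon$):
Let $\epsilon > 0$ be given, and set $\delta = \epsilon/3$.
Then for all $x$ with $|x - a| < \delta$,
\[
  |f(x) - f(a)| = |3x - 3a| = 3\,|x - a| < 3\delta = \epsilon,
\]
so $f$ is continuous at $a$.
```

The difference is purely one of direction: the correct proof starts from an arbitrary ε and manufactures a δ; the circular one starts from the very implication it was asked to establish.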

These aren’t idiotic students by any means. Quite likely they are capable of sophisticated arguments in their humanities papers. It seems that when their visual system is presented with equations, they automatically plunge into math mode and start plugging everything they see on paper into their acquired stable of mechanistic calculational procedures without distinguishing between premises and conclusions. This ties in with the common perception that mathematical ability is an innate something that some people just lack — once one takes that attitude, then one frees oneself from the responsibility of actually attempting to think about math the way one thinks about other subjects.

Incidentally, I had to retype that last paragraph about three times since WordPress.com’s LaTeX system doesn’t accept ‘>’ and ‘<’ in LaTeX very well — the combination of the two kept wiping out that last paragraph, due to interference with the HTML. Good to know, so I (and hopefully others) won’t waste time trying to type them in LaTeX when HTML code can do those perfectly well.

It’s nearly 2am, and judging by the number of pages left, I’m not even halfway through this week’s morass of awful proofs. At least this time I’m not having to deal with things like $\frac{\infty}{\infty}$ or $0 \times \infty$, which the instructor unwittingly encourages when he writes stuff like that sloppily when he teaches and tells them to not write it like that in the homework (but they do anyway).

Ugh.

## Don’t Say “Quantize”

February 24, 2007

Another thought-provoking perspective from Bob Geroch, which I will attempt to paraphrase below.

Roughly speaking, given a classical system that has a configuration space $C$, $L^2\left(C\right)$ gives the Hilbert space of its quantum mechanical states. However, this gets the real situation backwards. Reality is quantum mechanical, and, neglecting historical factors, classical mechanics is really an offshoot of it. That classical mechanics was established first is a historical, not a physical, fact. So saying we derive a quantum mechanical description from a classical description is like “painting a smiley face on the wrong thing to get the right thing”. Therefore, it is a gross misinterpretation to say that we “quantize” classical mechanics. Classical mechanics should be taught in history class; its predominance in physics instruction is a social construct due to Newton’s fame and various other factors; and “quantize” should be eliminated from the English language.
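For concreteness, a few stock instances of the $C \mapsto L^2(C)$ recipe (my own standard examples, stated loosely, ignoring questions of measure and operator domains):

```latex
\begin{itemize}
  \item A particle on a line: $C = \mathbb{R}$, so the state space is
        $L^2(\mathbb{R})$, the square-integrable wavefunctions $\psi(x)$.
  \item A particle in space: $C = \mathbb{R}^3$, giving $L^2(\mathbb{R}^3)$.
  \item A particle on a ring: $C = S^1$, giving $L^2(S^1)$, i.e.\ wavefunctions
        satisfying $\psi(\theta + 2\pi) = \psi(\theta)$.
\end{itemize}
```

Geroch’s point, as I understand it, is that the recipe runs in the historically convenient but physically backwards direction.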

——–

My take on this:

I find it difficult to imagine that quantum mechanics could have been developed without classical mechanics. Perhaps that says nothing about which is more “fundamental”, and is another historical artifact to do with the way humans gather information and reason. Perhaps my imagination is simply limited by my years of marination in traditional physics instruction.

Also, for all we know, QM might turn out to be another “social construct”.

## Why do we need the whole Hilbert space?

February 24, 2007

This came up in our mathematical physics class. It is often said that quantum mechanical states are described by vectors in Hilbert space. This, however, is slightly inaccurate, since any scalar multiple of a vector describes the same state. Therefore, strictly speaking, quantum mechanical states are actually described by “rays” in Hilbert space. Despite this, it seems that one still needs the whole Hilbert space to do quantum mechanics — one cannot do it by using “rays” alone. In Geroch’s words, we have to “drag the vectors through the mud” in order to do quantum mechanics, and it is not clear why this is the case.
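A minimal numerical sketch of the “rays” point, assuming nothing beyond standard linear algebra (the observable $A$ and the state $\psi$ below are arbitrary choices of mine):

```python
import numpy as np

# Check that a state vector and any nonzero scalar multiple of it make
# identical physical predictions: both lie on the same ray.
A = np.array([[2.0, 1.0], [1.0, 3.0]])     # a Hermitian "observable"
psi = np.array([1.0 + 2.0j, 3.0 - 1.0j])   # an unnormalised state
phi = (0.5 - 4.0j) * psi                   # same ray, different vector

def expectation(A, v):
    """Normalised expectation value <v|A|v> / <v|v>."""
    return np.vdot(v, A @ v).real / np.vdot(v, v).real

assert np.isclose(expectation(A, psi), expectation(A, phi))
print(expectation(A, psi))  # prints 2.8
```

The scalar cancels in every normalised expectation value, which is the precise sense in which only the ray matters; yet the vectors, with their linear structure, are what we actually compute with.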

## The Barenboim Hula, and Others

February 24, 2007

A YouTube video setting Barenboim’s characteristic hip-swaying movement to appropriate music. (Via Marc Geelhoed)

The Leipzig Gewandhaus Orchestra was here last night to perform Mahler’s 5th (with Chailly). After being treated to two separate performances of that piece by the CSO last season, I was sceptical that the Gewandhaus could better them. Indeed, their brass was more ragged than the CSO’s, but they gave one of the most sensitive accounts of the fourth movement that I’ve heard, and the cellos’ heartbreaking rendition of the introspective recitative amidst the bursts of rage that fill the second movement was enough to justify my attending yet another performance of that symphony amidst the usual crush of work and an inadvertent over-scheduling of concerts. (Not having bothered to check my calendar before buying tickets on a few separate occasions, I scheduled myself to attend Thibaudet’s recital on the 18th, a CSO performance on the 22nd, the Gewandhaus yesterday, and Così fan tutte at the Lyric Opera today.)

I had always been quite satisfied with Bernstein’s recording of Mahler’s 5th with the Vienna Philharmonic, but in recent weeks, I’ve noticed that there is something odd about the volume. The strings sound really muffled, which is a pity since they are my favourite section of that orchestra. There also seems to be insufficient contrast — the climaxes aren’t significantly louder than the rest.

## It is worse than you think

February 21, 2007

I suspect it doesn’t help that Chicago has a [I think undeserved] reputation for being ‘theoretical’. This probably leads a greater proportion of theoretically inclined students to apply here. It is absurd that 75% of entering grad students want to be theorists. It’s as though 75% of entering philosophy grad students expected to land an academic position at a research university. That does not happen, because philosophy grad students are a lot more aware of the state of the philosophy job market. In the sciences students tend to assume that jobs are not hard to come by, but unfortunately theoretical physics is a major exception. An exception that, sadly, continues to attract the best and brightest minds in all of science into cutthroat competition.

## The Costs of Massive Modularity: a paraphrase

February 17, 2007

A more philosophical way to express the concerns I raised about evolutionary psychology’s massive modularity hypothesis:

Evolutionary psychologists argue that modules must exist because for a given problem, it is an evolutionary advantage to have a specialised function in the brain that addresses the problem according to its specific characteristics, and not just as any kind of problem out of the millions that could arise in an organism’s lifetime. Thus, to detect cheaters, it is faster and hence more ecologically rational to do so via a cheater detection module rather than a general reasoning mechanism. However, even if we grant them that for any particular problem it is better, for the purposes of solving that particular problem, to have a brain programme that specialises in solving that type of problem alone, it does not follow that, in order to solve the collection of problems an organism experiences, it is better to have a collection of brain programmes each tailored to solve a different type of problem in the array of problems. For, as I hypothesised, there could be emergent effects in having a collection of functionally separate brain programmes, effects that could themselves create new problems.

## Intuition in mathematics and physics

February 17, 2007

This post on how the best thinkers in physics are those who can most ably explain technical concepts in non-technical language encapsulates why I think the current mathematical physics course is the best math and physics course I’ve taken in my life. The best math course I’ve taken, other than this, was similar, in that the teacher excelled at instilling understanding of abstract mathematical concepts by using highly accessible, intuitive language. At the same time, rigour isn’t sacrificed — the idea is to feel one’s way towards the solution while working entirely in the land of intuition, but to write the solution entirely in non-intuitive technical language.

Although there are no course notes and no official textbook for Geroch’s course, the solutions and comments on the problem sets constitute excellent resources. They represent both ends of the spectrum that one needs to be a successful mathematician or theoretical physicist. The solutions stick to the formalism alone, providing the standard proofs one sees in math textbooks. The comments represent the other end of the spectrum — they explain what the proof really means, and how one feels one’s way to it intuitively. An example of the kind of language the comments use:

Non-measurable sets are so frothy that they have excessive measure. Here, we know that X is virtually froth-free: Since $\mu^* \left(X\right) + \mu^* \left(A-X\right) = \mu^* \left(A\right)$, where’s the froth? We want to show that X must be measurable, i.e., that frothlessness is a sufficient condition for measurability.

None of the textbooks on Lebesgue measure I’ve skimmed through explain the problem of excess outer measure in ‘frothy’ sets this way. This way, though, is infinitely more enlightening than the formal, axiom-by-axiom, inference-by-inference proofs that populate math textbooks. Eventually, one wants to express the idea of frothiness formally. But what comes first is an intuitive idea of what frothiness is like.
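For comparison, the fully formal counterpart of ‘frothlessness’ (the Carathéodory criterion for an outer measure $\mu^*$, as I understand it) reads:

```latex
X \text{ is measurable}
\iff
\mu^*(A) \;=\; \mu^*(A \cap X) \;+\; \mu^*(A \setminus X)
\quad \text{for every set } A.
```

By subadditivity the left side never exceeds the right, so the ‘froth’ is precisely the possible strict inequality $\mu^*(A) < \mu^*(A \cap X) + \mu^*(A \setminus X)$; frothlessness rules it out for every test set $A$.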

(Disclosure: Part of the reason I wrote this entry was to test the new implementation of LaTeX in WordPress.com posts. Now to avoid using equations as a crutch in areas where I lack sufficient intuition…)

## Smoothing out a rug

February 17, 2007

An apt metaphor describing the process of philosophizing, which Jim Conant credited to one of his advisors. Imagine a rug on a flat surface. It’s crinkled at certain points at the edges. You try to smooth the crinkles out systematically, perhaps proceeding around the rug clockwise, or doing it corner by corner. When you get to the last corner, though, you find that as a consequence of your other smoothing outs, it is a topological mess. Perhaps you somehow manage to smooth it out anyway. Look further afield, though, and you’ll find that the opposite corner has turned into a mess as a result. Perhaps the rug just isn’t the right shape. Or perhaps there is a particular sequence of smoothing outs that will do the trick.

Musing about this in the bath (where else), I realised that the rug metaphor could well describe my gravitation towards the philosophical aspects of physics. Physicists just want the rug to do a minimal job — as long as it covers a reasonable amount of floor area, and is plush and durable and whatever good rugs are, they don’t mind having a few crinkles here and there. Those make for good conversation pieces when they have non-physics company (“that bump in that corner has an interesting history, let me tell you about it…”). But by and large the crinkles don’t get in the way of doing physics.

I am like an annoying guest who notices all the crinkles and cannot help but be finicky about them. “Yes, I know it doesn’t get in the way of the physics,” I tell them, “but there is just something about crinkles that really gets under my skin.”

If we can’t have a perfect rug, can’t we at least try?

## The Costs of Massive Modularity

February 6, 2007

When sociobiology passed on its mantle to evolutionary psychology, it seemed that the emphasis shifted away from rigorous mathematical theories in population genetics to plausibility arguments for psychological mechanisms that do not incorporate selection as thoroughly as the theoretical biologists did. The clean-cut equations of Maynard Smith, Williams, Hamilton, Trivers and Price et al are taken as part of the argument for why evolutionary psychology is plausible. However, the more specific hypotheses constituting evolutionary psychology are not supported by similarly rigorous evolutionary reasoning. The standard Tooby-Cosmides argument for massive modularity in our mental architecture considers all the advantages modularity has over domain-generality, but fails to discuss any possible additional energy costs modularity might incur, or whether the details of genetics and molecular biology could possibly conspire such that it is more difficult to acquire many specific adaptational mental modules rather than one monolithic general reasoning device.

I have not found any literature discussing the energy disadvantages a modular brain might have compared to a domain-general brain. I think it is plausible that there are such energy disadvantages, simply because with massive modularity, most specialised mechanisms are not operating most of the time, coming into action only in certain environmental conditions. These mechanisms are therefore, in some sense, lying dormant most of the time. A domain-general brain, on the other hand, would not have such specialised mechanisms lying dormant. In short, at any given point in time, a domain-general brain has less redundancy — it might not be using its full range of computational resources, but it doesn’t have a massive suite of information-processing programmes that are just standing around twiddling their thumbs. Since our brain is not a solid-state flash drive, merely maintaining these programmes in a dormant state consumes energy, so a massively modular brain would have to channel energy towards the upkeep of these programmes even when they are not working, which most of them aren’t most of the time. The domain-general brain would seem to be free of this burden. So goes my plausibility argument for why massive modularity could be more costly than domain generality. I have tried Googling for discussions of the energy costs of massive modularity, but have come up blank. Robert Richards, too, says he doesn’t know of any arguments on this score. I suspect that sociobiologists, had they rather than psychologists been the ones spearheading the evolutionary psychology movement, would not have tolerated an evolutionary theory that does not take into account the costs as well as the benefits of a particular adaptation.

I also could not find any research on whether it might have been ‘easier’ (more probable) for organisms to acquire massive modularity than to acquire domain-general information processing mechanisms. My intuition is that there must be some bias, in the landscape of genotypes, towards a particular class of mental structures. In other words, I would be very surprised if, in the space of genotypes, the measure of genotypes with massively modular mental architectures was equal to the measure of genotypes with domain-general mental architectures. Now, of course, natural selection would, if we assume equal energy costs for both alternatives and accept the evolutionary psychologists’ arguments for the evolutionary advantages of massive modularity, tend to favour the genotypes with massive modularity, so measure alone, without consideration of fitness functions, cannot tell us everything. But I do not think it is entirely implausible that massive modularity could have a negligible measure compared to domain generality. If this were so, it would be plausible that we might attain a local fitness maximum on the set of genotypes coding for domain generality and remain there, simply because the set of genotypes coding for massive modularity was, metaphorically speaking, located in an obscure, tiny and relatively inaccessible part of the landscape of possible genotypes, such that even millions of years of recombination and mutation have failed to transport humans to that location.
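The measure intuition can be caricatured in a toy model (everything here — the bit-string genotypes, the fitness numbers — is invented purely for illustration): give a single ‘massively modular’ genotype the highest fitness, place it behind a fitness valley, and count how much of genotype space a greedy climber reaches it from.

```python
from itertools import product

# Toy genotype landscape: genotypes are bit strings; the "domain-general"
# optimum is all-zeros, while a single "massively modular" genotype
# (all-ones) has twice its fitness but sits at the end of a fitness valley.
N = 10

def fitness(g):
    ones = sum(g)
    if ones == N:          # the modular genotype: globally best...
        return 2 * N
    return N - ones        # ...but otherwise, fewer ones is better

def hill_climb(g):
    """Greedy best-improvement climb by single-bit flips."""
    g = list(g)
    while True:
        neighbours = []
        for i in range(N):
            h = g.copy()
            h[i] ^= 1
            neighbours.append(h)
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(g):
            return tuple(g)
        g = best

# Exhaustively measure the modular optimum's basin of attraction.
basin_modular = sum(hill_climb(g) == (1,) * N
                    for g in product((0, 1), repeat=N))
print(basin_modular, 2 ** N)  # prints: 11 1024
```

Even though the modular genotype is the global fitness maximum, only the eleven genotypes within one mutation of it ever reach it; the other 99% of the space settles on the domain-general optimum. A real argument would need actual population genetics, of course, but the toy makes the measure worry tangible.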

Such considerations I find more important than the kind of things the psychologists go on about. I do think that evolutionary psychology could do with an injection of good old fashioned mathematical population genetics. Fuzzy arguments make my head spin.

Addendum: The Sperber article I linked to above discusses the problem of allocating energy amongst modules. The idea is that we do not need all modules to be active all the time, so it makes sense to have an energy-allocation algorithm whereby energy is allocated preferentially to the modules that would deliver the largest cognitive benefits. This brings up another possible way in which massive modularity could be more energy-consuming — it demands that the brain have an informational-relevance monitoring system and an energy-consumption prediction system to allocate energy between modules, and these systems naturally consume energy themselves.