## Frisch on reliable theories

February 12, 2012

There’s a really confusing passage on p. 42 of Mathias Frisch’s book on inconsistency in classical electrodynamics. He suggests, in response to the “problem” of inconsistency in classical electrodynamics, that we modify our account of theory acceptance:

this problem disappears if in accepting a theory, we are committed to something weaker than the truth of the theory’s empirical consequences. I want to suggest that in accepting a theory, our commitment is only that the theory allows us to construct successful models of the phenomena in its domain, where part of what it is for a model to be successful is that it represents the phenomenon at issue to whatever degree of accuracy is appropriate in the case at issue. That is, in accepting a theory we are committed to the claim that the theory is reliable, but we are not committed to its literal truth or even just of its empirical consequences. This does not mean that we have to be instrumentalists. Our commitment might also extend to the ontology or the ‘mechanisms’ postulated by the theory. Thus, a scientific realist might be committed to the reality of electrons and of the electromagnetic field, yet demand only that electromagnetic models represent the behavior of these ‘unobservables’ reliably, while an empiricist could be content with the fact that the models are reliable as far as the theory’s observable consequences are concerned.

If acceptance involves only a commitment to the reliability of a theory, then accepting an inconsistent theory can be compatible with our standards of rationality, as long as inconsistent consequences of the theory agree approximately and to the appropriate degree of accuracy… our commitment can extend to mutually inconsistent subsets of a theory as long as predictions based on mutually inconsistent subsets agree approximately.*

What confuses me about this is that I do not know what Frisch could mean by a theory being reliable apart from its consistently producing predictions that agree with experiment. Frisch wants to avoid instrumentalism by claiming that in accepting a theory, we are committed not just to the observable consequences of the theory, but also possibly to the reality of its ontology and mechanisms. That is, in accepting the theory of electrodynamics, we might also be committed to the claim that electromagnetic models represent the behavior of ‘unobservables’ like electrons and the electromagnetic field reliably. But what does it mean to represent reliably, apart from being a representation that reliably leads to predictions that agree with experiment? What does Frisch mean in the excerpt above by “represents the phenomenon at issue to whatever degree of accuracy is appropriate”? How can degrees of accuracy be attributed to representations over and above the accuracy of their experimental predictions?

Incidentally, I’m appalled at how expensive Frisch’s book is now. I bought it for $9 on Amazon when OUP slashed prices after having decided to stop printing it. Now it costs $60. The Kindle Edition costs $53.72!

* Frisch, M. (2005). Inconsistency, Asymmetry, and Non-Locality: A Philosophical Investigation of Classical Electrodynamics. Oxford University Press, USA.

## Tyndall on the Value of Science

February 15, 2011

The old applications vs intrinsic value debate again. But I just love the way Tyndall writes:

Thus, in brief outline, have been brought before you a few of the results of recent enquiry. If you ask me what is the use of them, I can hardly answer you, unless you define the term use. If you meant to ask whether those dark rays which clear away the Alpine snows, will ever be applied to the roasting of turkeys, or the driving of steam-engines — while affirming their power to do both, I would frankly confess that they are not at present capable of competing profitably with coal in these particulars. Still they may have great uses unknown to me; and when our coal-fields are exhausted, it is possible that a more ethereal race than we are may cook their victuals, and perform their work, in this transcendental way. But is it necessary that the student of science should have his labours tested by their possible practical applications? What is the practical value of Homer’s Iliad? You smile, and possibly think that Homer’s Iliad is good as a means of culture. There’s the rub. The people who demand of science practical uses, forget, or do not know, that it also is great as a means of culture — that the knowledge of this wonderful universe is a thing profitable in itself, and requiring no practical application to justify its pursuit.

But while the student of Nature distinctly refuses to have his labours judged by their practical issues, unless the term practical be made to include mental as well as material good, he knows full well that the greatest practical triumphs have been episodes in the search after pure natural truth. The electric telegraph is the standing wonder of this age, and the men whose scientific knowledge, and mechanical skill, have made the telegraph what it is, are deserving of all honour. In fact, they have had their reward, both in reputation and in those more substantial benefits which the direct service of the public always carries in its train. But who, I would ask, put the soul into this telegraphic body? Who snatched from heaven the fire that flashes along the line? This, I am bound to say, was done by two men, the one a dweller in Italy,* the other a dweller in England,** who never in their enquiries consciously set a practical object before them — whose only stimulus was the fascination which draws the climber to a never-trodden peak, and would have made Caesar quit his victories for the sources of the Nile. That the knowledge brought to us by those prophets, priests, and kings of science is what the world calls ‘useful knowledge’, the triumphant application of their discoveries proves. But science has another function to fulfil, in the storing and the training of the human mind; and I would base my appeal to you on the specimen which has this evening been brought before you, whether any system of education at the present day can be deemed even approximately complete, in which the knowledge of Nature is neglected or ignored.

That was from Fragments of Science, vol. 1, pp. 94-5.

*Volta.
**Faraday.

## Epistemic opacity in simulations

January 10, 2011

This post is the result of reading Wittgenstein and the philosophy of simulation literature in close temporal proximity.

Here is Paul Humphreys on epistemic opacity in computer simulations:

a process is epistemically opaque relative to a cognitive agent X at time t just in case X does not know at t all of the epistemically relevant elements of the process. A process is essentially epistemically opaque to X if and only if it is impossible, given the nature of X, for X to know all of the epistemically relevant elements of the process. For a mathematical proof, one agent may consider a particular step in the proof to be an epistemically relevant part of the justification of the theorem, whereas to another, the step is sufficiently trivial to be eliminable. In the case of scientific instruments, it is a long-standing issue in the philosophy of science whether the user needs to know details of the processes between input and output in order to know that what the instruments display accurately represents a real entity.

The charge is that simulations bring something new to philosophy of science because they are epistemically opaque, unlike, say, the process of solving an equation analytically.

However, I’m not sure I understand how simulations are any more epistemically opaque than physical experiments or non-automated calculations in mathematics. First, consider experiments. It seems to me that the checks we make to ensure that the results of experiments are reliable are almost completely analogous to those we make to ensure that the results of simulations are reliable. Allan Franklin has a good list of the kinds of checks we make to ensure that experiments produce reliable results. All seven criteria on his list seem to be used to validate simulations as well as physical experiments. We do check that the simulation reproduces known results and artifacts. We do try to eliminate plausible sources of error. If the simulation produces a striking pattern that can’t be explained by plausible sources of error, we do use that pattern itself to argue for the validity of that pattern as a legitimate result. If multiple independently corroborated theories account for the results of a simulation, that does add to the validity of the results. Simulations are often based on well-corroborated theories. Finally, statistical arguments are used to argue that patterns seen in simulations are real.

So what is epistemically relevant in simulations that humans cannot know, that can be known in the case of physical experiments and mental or pen-and-paper mathematical calculations? I’m guessing that what Humphreys takes to be epistemically relevant in simulations but inaccessible to human knowledge is something like the results of each computational step in the simulation, or whether the mechanistic workings of the simulating apparatus produce mathematically correct results. But are the results of each computational step epistemically relevant? Here is one reason to think not. In a physical experiment, one never has a complete working theory of the apparatus that tells us the exact consequences of every step in the experiment. It seems to me that demanding that the result of every computational step in the simulation be epistemically accessible to humans is analogous to demanding that every step in the experiment be justified by a theory that describes every aspect of the apparatus.

What if Humphreys considers the reliability of the simulating apparatus, that is, whether it is producing mathematically correct results, as the epistemically relevant aspect of simulations that is essentially inaccessible to humans? As noted above, just as one can validate the reliability of experiments without having a complete theory of the experimental setup, we have ways of validating the reliability of simulations. But these are not foolproof, of course. Suppose we take seriously the possibility that our methods of validation still leave out epistemically relevant information. It is possible that even though our checks show that the results are reliable in a large variety of situations, some hocus-pocus is going on which can be discovered only by going through every single step in the simulation, which humans cannot do. But there is an analogous “problem” when it comes to mental or pen-and-paper arithmetic. One’s belief that one is calculating 2098×98723 correctly, if one is doing it for the first time, is based on one’s past success in calculating various other things correctly. Of course some hocus-pocus could be going on just this time, for the new calculation, a kind of hocus-pocus which did not show itself in previous calculations. But this possibility does not lead us to say that there is something epistemically missing from the new calculation. If one really wants to be paranoid, one could always doubt the results of mental or pen-and-paper calculations, because after all we do not know, mechanistically, how the human mind consistently applies arithmetical rules, and whether it always correctly applies them. We act as though it always consistently applies them because of prior evidence of its reliability, but this evidence does not suffice to ensure with certainty that it will always consistently apply them. How is this different from the case of simulations? In simulations, we also only have the prior results of simulations, and the backing of mathematics and physical theories relating to the mechanics of the simulation, to assure us that this time the simulation will also be reliable.

Humphreys’ ascription of epistemic opacity to machine calculations but not human calculations is an interesting inversion of one point of view that Wittgenstein discusses at various points in his philosophy of mathematics. Wittgenstein attributes the philosopher of mathematics’ love for axiomatic reductions of mathematics to the idea of “mechanical insurance against contradiction” (RFM, p. 107e, his emphasis). The idea is that by reducing mathematics to a set of rules that even a machine can follow, one excludes mistakes from mathematics:

We may trust ‘mechanical’ means of calculating or counting more than our memories. Why? — Need it be like this? I may have miscounted, but the machine, once constructed by us in such-and-such a way, cannot have miscounted. Must I adopt this point of view? — “Well, experience has taught us that calculating by machine is more trustworthy than by memory. It has taught us that our life goes smoother when we calculate with machines.” But must smoothness necessarily be our ideal (must it be our ideal to have everything wrapped in cellophane?) (RFM, 106e)

Humphreys, P. (2008). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626. DOI: 10.1007/s11229-008-9435-2
Wittgenstein, L. (1967). Remarks on the Foundations of Mathematics, ed. G. H. von Wright, R. Rhees, and G. E. M. Anscombe, trans. G. E. M. Anscombe. MIT Press.

## Truesdell on European versus American science

June 19, 2010

I don’t know if Clifford Truesdell was just deluded when he said this in 1966, or if things have really changed that much since then:

The European assistant who disagrees openly with his professor risks losing all chance of going on with his research, not to mention failure ever to get a decent job. In the United States, a paper is no more esteemed if it appears within covers sealed by an academy or professional society, no less so if it has been rejected by such a body before being published in a private journal, and for the young giant, trampling upon his professors is a more honorable path to fame, promotion, and such modest prosperity as the scientific trade allows than is the fawning filial piety the European professor expects and receives from his disciples as long as he lives. Our academic life presents to the foreigner a lamentable scene of chaos. No-one knows who is on top.

– C. Truesdell, Early Kinetic Theories of Gases, in Essays in the History of Mechanics, Springer-Verlag, New York, 1968.

## Discrete observations and classical confidence intervals

May 31, 2010

In particle physics, experimentalists often aim to set limits on certain physical quantities, in part to verify theories. Say a theory predicts that a particle called Gobbledygook has a $10^{-8}$ chance of decaying into two Gooks and a $1-10^{-8}$ chance of decaying into three Gobbles. Often, the ratio between these two decay modes is closely related to important parameters in the theory. Experiments that try to set limits on the ratios of these decays can therefore give us an idea of the range of values in which those parameters fall. The fraction of total decays that a particular decay mode takes up is called the branching ratio of that decay mode.

These experiments proceed by creating a huge number of Gobbledygook decays, and counting the number of these decays that (say) result in two Gooks. The eventual count is therefore a discrete quantity — one cannot count a fractional number of decays. The branching ratio itself, which is what the experimenters try to set a limit on, is not a discrete quantity. So the limits that experimenters put on branching ratios are not subject to the restriction of discreteness — they can take on a range of continuous values.

In classical statistics, confidence intervals have the following significance. A 90% confidence interval means that if I carry out a large number of experiments and set a 90% confidence interval in each experiment about the quantity I’m measuring, then 90% of those confidence intervals will contain the actual value of the quantity I’m measuring. That is, classical confidence intervals say something about the expected coverage of the actual value by intervals generated by a particular method of constructing confidence intervals.

So let’s say I want to put an upper limit on the branching ratio of a particular decay mode. I count the number of such decays in my sample, $n_0$, and find that $n_0=0$. I know that the decay mode is a Poisson process with unknown true mean $u_t$, i.e. $P(n|u_t) = u_t^n e^{-u_t} / n!$. To set a 90% confidence level upper limit on $u_t$, I set $n=0$, $P(n|u_t)=0.1$ and solve for $u_t$. This gives me the upper limit $u_2 = 2.3$.
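
To make the arithmetic explicit: for $n_0 = 0$ the condition is just $e^{-u_t} = 0.1$, so $u_2 = \ln 10 \approx 2.30$. Here is a minimal numerical sketch of the same calculation (my own illustration, not code from Cousins’ paper), which also gives the $n_0 = 1$ limit that appears further down:

```python
from scipy import optimize, stats

def poisson_upper_limit(n_obs, cl=0.9):
    """Classical upper limit u_2 on a Poisson mean: the value of u for which
    P(n <= n_obs | u) = 1 - cl."""
    return optimize.brentq(lambda u: stats.poisson.cdf(n_obs, u) - (1 - cl), 1e-9, 100)

print(poisson_upper_limit(0))  # ~2.30, i.e. ln(10), since e^{-u} = 0.1
print(poisson_upper_limit(1))  # ~3.89, the limit when a single decay is observed
```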

Up to this point, we haven’t considered uncertainties due to the experimental setup. If there are no uncertainties whatsoever, that is, if the experimental apparatus and data analysis are of infinite precision, then the above method of constructing a 90% confidence interval, if repeated, will in fact lead to 90% of confidence intervals constructed this way covering $u_t$.

However, no experiments have infinite precision, so we have to take uncertainties into account. But the classical 90% confidence interval we get when we take experimental uncertainties into account in fact leads (in the above example) to $u_2 < 2.3$, a tighter limit than the limit that an experiment with infinite precision would lead us to set! This, as Robert Cousins writes, is unacceptable since

if two experiments each find $n_0=0$ and have the same $\hat{s}$, the poorly calibrated one will report a more restrictive limit than the superbly calibrated one.

That is, we’d expect that the “more precise” experiment would allow us to place a stricter limit on the branching ratio, yet it turns out that with classical confidence intervals, the less precise experiment gives us a stricter limit!

Here’s how that happens. For the infinitely precise experiment, the 90% confidence interval is as described above. We want to measure the branching ratio $R_t = u_t / s_t$, where $s_t$ is the true sensitivity of the experiment. In the infinitely precise experiment, there is no uncertainty in $s_t$. Thus 90% of confidence intervals about the measured branching ratio $\hat{R}$ will cover $R_t$. 10% will not.

Now suppose we don’t know the true sensitivity $s_t$. We can only estimate it by $\hat{s} \pm \sigma$. Suppose $\sigma = 0.1 \hat{s}$. Suppose further that $u_t =2.28$ or $u_t = 2.32$, that is, $u_t$ is close to 2.3 relative to $\sigma$. Then the percentage of experiments that will observe $n_0 \geq 1$ is very close to 90%. When we construct the confidence intervals about $\hat{R}$ from these experiments, their upper limit will be $3.9 / \hat{s}$ or greater, so nearly all of the 90% will cover $R_t$. In the remaining 10% of experiments where $n_0=0$, about half of the confidence intervals will cover $R_t$ — due to the $\pm \sigma$ term in the sensitivity. Thus the total coverage of $R_t$ will be approximately (90+5)%=95% — not 90%! A 90% confidence interval for the experiment with uncertainty $\sigma=0.1 \hat{s}$, according to Cousins, would result in an upper limit of $2.0/ \hat{s}$, stricter than the $2.3 / \hat{s}$ that one gets in the infinitely precise experiment!
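
Here is a rough Monte Carlo sketch of that coverage arithmetic (my own illustration, under the simplest reading of the setup: true sensitivity $s_t = 1$, true mean $u_t = 2.3$, $\hat{s}$ drawn as a Gaussian around $s_t$ with $\sigma = 0.1 s_t$, and each pseudo-experiment reporting the upper limit $u_2(n_0)/\hat{s}$ on the branching ratio):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

def poisson_upper_limit(n_obs, cl=0.9):
    # Classical upper limit on the Poisson mean given n_obs counts (90% CL by default).
    return optimize.brentq(lambda u: stats.poisson.cdf(n_obs, u) - (1 - cl), 1e-9, 200)

limits = {n: poisson_upper_limit(n) for n in range(60)}   # covers any count we will see

s_t, u_t = 1.0, 2.3                      # assumed true sensitivity and true Poisson mean
R_t = u_t / s_t                          # true branching ratio
sigma = 0.1 * s_t                        # 10% uncertainty on the estimated sensitivity

n_exp = 100_000
n_obs = rng.poisson(u_t, n_exp)          # observed counts in each pseudo-experiment
s_hat = rng.normal(s_t, sigma, n_exp)    # estimated sensitivity in each pseudo-experiment

upper = np.array([limits[n] for n in n_obs]) / s_hat   # reported upper limits on R
print((R_t <= upper).mean())             # close to 0.95, not 0.90: overcoverage
```

This only checks the claimed roughly 95% coverage of the naive $u_2(n_0)/\hat{s}$ construction; it does not reproduce the corrected $2.0/\hat{s}$ limit, which comes from Cousins’ paper.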

Cousins says that this strange result is due to the discrete nature of observations in a Poisson process. I think of it intuitively this way. The discreteness of the observations means that with $u_t \approx 2.3$, about 10% of experiments will throw up the result $n_0=0$. Because of the symmetric uncertainty about $\hat{s}$, about half of these will cover $R_t$. Now, if $n_0$ were a continuous variable (excuse this rather dubious counterfactual), many of these incidences of $n_0=0$ would instead be spread over a range of low values, and their limits would fall below the $2.3 / \hat{s}$ one gets for $n_0 = 0$, so fewer of them would cover $R_t$ than in the discrete case. Thus, the discrete nature of the observations leads to overcoverage: the count cannot fluctuate below zero, so the limit cannot fall below $2.3 / \hat{s}$.

Note that the occurrence of overcoverage does not depend on $u_t$ being close to 2.3. But the effect is magnified the closer $u_t$ is to 2.3.

Cousins uses this anomaly — that a “more precise” experiment can actually lead to less stringent limits on branching ratios — to argue that particle physicists should employ Bayesian statistics instead. But Bayesian statistics comes with its own collection of problems, the most obvious one being the need to choose a prior. This can sometimes be an “advantage”. In experimental particle physics, the Particle Data Group is a particularly important organisation. Every year, it publishes a Review of Particle Physics that is the “bible” for experimental particle physicists — among other things, it contains all the “accepted” values of physical constants and parameters relevant to particle physics. When Cousins wrote his paper, the PDG’s weighted average over experiments for the squared mass of the neutrino, with a central 68% classical confidence interval, was $m^2 = (-54 \pm 30)\ \mathrm{eV}^2$. That is, the entire confidence interval was in an “unphysical” region! If one uses a prior that is zero for values of $m^2 < 0$, then one can rule out such “unphysical” confidence intervals. But this still leaves the question of whether the prior for the “physical” region should be uniform in $m$, $m^2$, or something else. Cousins reports that “the consensus view settled on $m^2$, but the fact that the upper limit depends on this choice remains unsettling to many”.
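
For concreteness, here is a minimal sketch of the kind of truncated-prior calculation at issue (my own illustration, not a reproduction of anything in the PDG or Cousins’ paper): treat the quoted measurement $m^2 = (-54 \pm 30)\ \mathrm{eV}^2$ as a Gaussian likelihood, multiply by a prior that is flat in $m^2$ above zero and zero below, and read off a 90% credible upper limit.

```python
from scipy import optimize, stats

m2_meas, sigma = -54.0, 30.0   # measured m^2 and its uncertainty, in eV^2

# Posterior with a prior flat in m^2 for m^2 >= 0 and zero below:
# p(m^2 | data) is proportional to the Gaussian likelihood, truncated at zero.
prob_above_zero = 1 - stats.norm.cdf(0, loc=m2_meas, scale=sigma)

def posterior_cdf(m2):
    return (stats.norm.cdf(m2, loc=m2_meas, scale=sigma)
            - stats.norm.cdf(0, loc=m2_meas, scale=sigma)) / prob_above_zero

m2_up = optimize.brentq(lambda x: posterior_cdf(x) - 0.9, 0, 1e4)
print(m2_up, m2_up ** 0.5)     # 90% upper limit on m^2 (in eV^2) and on m (in eV)
```

Repeating the exercise with a prior flat in $m$ rather than $m^2$ shifts the limit, which is exactly the dependence on the choice of prior that Cousins reports as unsettling to many.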

What I find most interesting about this statistical curiosity are the tensions at work in the desiderata for published limits on quantities like branching ratios. On the one hand, it would be nice to have a pithy description that is uniform for all the branching ratios listed in the Review of Particle Physics — all with a weighted average and the appropriate uncertainty associated with a standardised confidence level. That would be of great utility for those looking for a quick overview of the experimental situation, say in order to jot down some rough pen-and-paper estimates in a related calculation. On the other hand, these pithy descriptions leave out the intricacies described in Cousins’ paper, imparting a perhaps misleading objectivity to the reported values. Recall that Cousins balks at accepting a method that leads to an experiment with infinite precision being less stringent with its limits than one with finite precision. I suspect that’s because he’s acknowledging the experiment as imparting authority to its reported mean value and confidence interval in its own right, not as just another statistic in the hypothetical ensemble of experiments that together satisfy the requirements of classical confidence intervals. If one takes the ensemble point of view seriously, then it’s not clear that Cousins’ worry matters. Of course, there is a whole other question about whether we should really be thinking in terms of large ensembles of experiments in experimental particle physics, given that the difficulty and expense of such experiments ensure that we do not have such large ensembles in practice.

Cousins, R. (1995). Why isn’t every physicist a Bayesian? American Journal of Physics, 63(5). DOI: 10.1119/1.17901

## The purpose of physics graduate classes

January 26, 2010

I’m taking a graduate statistical mechanics course this semester. My first physics course in more than two years, and my first graduate physics course.

One reason I started disliking physics courses when I was an undergraduate was that class time was spent almost entirely on going through the details of the derivations in the textbook. If there’s anything guaranteed to send me to sleep, especially if it’s a 9.30am class, it’s someone at the board moving symbols here and there and reciting the arithmetical rules he’s using to move those symbols. Furthermore, the vast majority of derivations are mathematically straightforward and can be understood from a close reading of the textbook. I don’t need someone to go through what I can glean from reading the textbook on my own.

For whatever reason, I thought that graduate classes in physics would be better. Well, this one isn’t. The professor is going through nearly every single line in Pathria’s text. What’s more, he actually tells us to read the textbook beforehand because he doesn’t want us to be looking at the textbook figuring out the math while he’s “teaching”. But if I read the textbook beforehand (which I do), I understand the derivation, so I get bored when he comes to class and goes through the exact same derivation, except more slowly and in more painful detail. So far I’ve always ended up working on my problem sets during class instead, which I find a much more productive use of my time.

One point that Eric Mazur makes in this excellent talk on science education is that in a humanities class, it’s standard to expect students to do the assigned reading. The class then proceeds with the assumption that students have done the reading. The instructor does not hold your hand and lead you through every line of the reading. It is also understood that if you don’t do the readings, it is pointless to go to class, because the class is going to assume you’ve at least grappled with them, and start off on that higher level.

The opposite happens in science classes. In the vast majority of physics classes I had, even if the professor recommends that you read certain parts of the textbook or lecture notes, s/he still leads you by the hand through the very material that you’ve just read. You could skip the reading entirely and still get the contents of what you were supposed to have read from the lecture alone. (The one class I had that did not follow this mould, which was also my favourite class in any subject ever, was taught by someone who would receive negative feedback from most students about his more Socratic teaching style. People complained that he did not follow a textbook, but for me, it is exactly when someone follows a textbook that his lectures fail to add value.) The learning style encouraged by such teaching seems to be passive rather than active, compared to humanities classes.

All this may be excusable as a sop to undergraduates who are either too stupid or lazy to read textbooks on their own, but firstly, the same undergraduates are not treated as such by their humanities instructors, and secondly, why is this still going on in graduate classes? Why do graduate students have to be hand-held through a textbook? If you can’t read a textbook like Pathria on your own, should you even be in a physics PhD programme?

A useful contrast is with my philosophy graduate seminars. There, as with undergraduate humanities classes, you’re expected to do the readings before class. Discussions in class proceed on the assumption that you have done your readings. Students are typically asked to present some of the readings, and the presentations will have some sort of summary of the contents of the readings, but it’s nothing like the presenter going through the arguments in the readings step by step, the way physics teachers go through the textbook derivations step by step. And of course, there is much discussion of the readings.

I discussed this with someone before, and the response was that in science, there is typically a definite “right answer” to questions, whereas in the humanities, most of the important answers are still unknown. The implication is that if there is a “right answer”, then this should be told to the students. In contrast, if the right answer is not known, then discussion might somehow bring one closer to it.

I suspect many people think of science education this way, and I think they’re deeply mistaken. The objective of science education as I see it is not to tell people the right answers. The objective is understanding. People may know the right answers without understanding why they are right. In physics, the analogue would be a student who could do all the problems set by her instructor by the simple expedient of applying certain formulaic rules, but did not understand why those rules hold. This was my situation for most of my physics classes, and was a huge contributor to why I became frustrated with physics. Of course, I did go to my professors outside of class to try to get a deeper understanding, but most of the time they could not answer my “conceptual” questions. They seemed to be prepared only to tell students how to apply certain rules, without being able to justify those rules themselves.

The peer discussions that Mazur and an increasing number of people who study science education are advocating go some way to helping students get to the answer on their own and, on the way, gaining a deeper understanding of why the answer is what it is. It makes science education more like humanities education.

Pushing the idea further, I see no reason why a graduate class in physics cannot be run like a graduate class in philosophy. Have the students read the relevant section of the textbook beforehand. This should be just as obligatory as doing the readings for humanities classes is. In class, ask if anyone has had any problems understanding the assigned readings. If someone has, try to straighten her out. Do not go through every step of the proofs in class, since it could well (and should) be the case that most students understand the proofs already. Class time should be used instead to discuss interesting implications of the proofs, setting the context for them, considering the assumptions they use and what implications those assumptions have, and so on. In fact, I also see no reason why the model of having students present on the assigned readings cannot be applied. Do physics professors have such low expectations of their students that they think they cannot learn on their own and have to be spoonfed just like undergraduates? Or is it I who has overly high expectations of physics graduate students?

## Reductivism, simulations and materiality

November 11, 2009

One of Jerry Fodor’s arguments against reductivism in his 1974 paper is that bridge laws reducing, say, economics to physics are undermined by the possibility of simulations of economic systems that instantiate the same economic ‘laws’ but have a very different physical basis from economic systems that are composed of interacting humans. It’s difficult to imagine what bridge laws would carry out the reduction successfully for both the simulated economy and the real human economy.

Someone in class pointed out that this would seem to be more of a problem for economics than for something like chemistry. After all, it seems like the physical basis for chemical systems has to be electrons, protons, and so on — the ontology of quantum mechanics, the science to which it allegedly reduces. What other physical basis could there be?

But then there are such things as simulations of chemical systems. If we take these to be parallel to the case with simulated economic systems, then bridge laws relating chemistry to quantum mechanics would have to cover not only the cases where the physical properties are instantiated in an actual physical system, but also those where they aren’t actually instantiated but represented in a computer simulation.

I take it that it would be an unacceptable move to say that the laws of chemistry don’t apply to simulated chemical systems, and in any case it’s difficult to see why one should be allowed to say that and not allowed to say that the laws of economics don’t apply to economic systems that aren’t composed of humans.

The only other way at present I can think of to get out of this bind is to say we should somehow regard the chemical simulation as instantiating the physical properties it represents. But the awkwardness of that phrase (what does it matter how we regard it — it either instantiates them or it doesn’t, regardless of our regard) is telling: I can’t find a way to make this sound like a good move. And in any case we run into the same problem: why should we do this for the chemistry case but not for the economics case?

Still, I have this intuition that somehow chemistry is essentially about actual atoms and molecules, while economics is more removed from the material nature of the systems it is applied to. But I can’t think of a way to justify it.

## The Purpose of Undergrad Labs

January 31, 2009

Chad Orzel asks what experimental setup he should use for an undergrad photoelectric effect lab:

…we have two different set-ups for doing a photoelectric effect experiment. One of these is a PASCO apparatus with the phototube wired to a circuit inside an actual black box. You shine light into the tube, press a button, and the output of the box rises to the stopping potential for that frequency in a more-or-less exponential manner. This gives very nice results, often within 1% of the accepted value of Planck’s Constant.

The other is an old-school lab, using a homemade monochromator and a phototube with an external voltage generator supplying the stopping potential. For each color of light, the students watch the output of the phototube on an oscilloscope, measure the output voltage for a handful of applied voltages, and extrapolate to find the stopping potential. This is much closer to the way the experiments were originally done, but it also tends to give results that differ from the accepted value by 20-30%.

Your answer would depend a lot on what you think the purpose of the lab should be. My view, like that of many of the commenters at Uncertain Principles, is that the purpose of labs is to let students learn how to conduct experiments. By this I don’t mean how to use the specific equipment involved (though it’s useful to do so), but how to calculate and justify experimental errors, how to explain why your data is evidence for/against a model, general principles about the points at which to take data when one is calibrating equipment versus when one is taking the actual measurements being used to test the model, etc. From this perspective, the old-school setup seems like a clear winner — it seems doable but still challenging enough, methodologically, to test the students’ experimental skills. Chad, however, says that

…the purpose of the lab is to show that experimental measurements of the photoelectric effect agree well with the Einstein model. The more complicated version doesn’t really add to that understanding, and in fact, the complication tends to obscure the physics. Students spend so much time fretting over the experimental details that they lose track of what it’s supposed to show.

You can argue that they’re learning lab skills in the process, but I’m not all that impressed. The only really useful thing they get out of it is how to use an oscilloscope, and there are other ways to teach that. There’s some fuzzy data-selection heuristic stuff going on in deciding exactly what to use as the stopping potential for any given point, but it’s hard to explain that in such a way that they don’t leave the lab thinking “it’s ok to fiddle with the data to get something closer to the target value.” That’s not only not what we’d like them to learn, but is actively harmful.

The thing is, I don’t see how doing the experiment with the PASCO black-box detectors would “show that experimental measurements of the photoelectric effect agree well with the Einstein model.” Suppose I was skeptical that Einstein’s model has been experimentally validated. Would I be convinced that it has by doing the PASCO experiment? No, because I don’t know that the apparatus accurately converts the energies it measures into stopping potentials, and that the values output by the apparatus actually are those of the stopping potentials. It’s natural to suspect that what the black box is doing isn’t what my instructor claims it’s doing, since the instructor has a vested interest in telling lies-to-children about what the box does. (This suspicion may not be justified. But we can see that inferring the model’s goodness from the black box experiment involves an extra epistemic step, so it isn’t obviously unreasonable to be less convinced by the black box version than by the old school version.)

To summarise:

1. From a teaching-the-methodology point of view, the ‘historical’ experiment wins for me.
2. From a convincing-students-the-model-is-right point of view, the historical experiment wins too, because the black box experiment isn’t any more convincing to a skeptic of the model.

Incidentally, the comments to Chad’s post highlighted to me how terrible my undergrad physics labs were. A few commenters, including Chad, say that they give the students the equipment in bits and leave them to figure out how to put them together in order to conduct the experiment. Except for introductory labs in which we did extremely simple experiments like measuring g using inclined ramps and shit, I don’t remember having to do any major assembly work for my labs. And what little assembly work we had to do would be laid out in painstaking detail in the lab manual.

## Taking physics seriously, again

September 9, 2008

Came across the following, from a book that sounds similar to Maudlin’s The Metaphysics Within Physics:

Consider also Lewis’s discussion of the distinction between internal and external relations in his (1986). He asks us at one point to ‘consider a (classical) hydrogen atom, which consists of an electron orbiting a proton at a certain distance’ (62). There are not, nor were there ever, any ‘classical hydrogen atoms’. At the same time that physicists came to believe in protons, they also became aware that the laws of classical mechanics could not apply to electrons orbiting them. Indeed the notion of an electronic orbit has about as much relation to the common-sense notion of an orbit as the mathematical notion of compactness has to the everyday notion of compactness, which is to say hardly any. Lewis thus encourages his readers to think that his metaphysics is addressed to the scientific image rather than the manifest one, but he gives the game away because ‘classical’ here means nothing other than ‘commonsensical’. Note that we are not arguing that what Lewis goes on to do with his account of internal and external relations is affected one way or the other by how he chooses to introduce the distinction; he could of course have used another example. Our point is that the rhetorical effect of his fictitious example is to suggest that his metaphysics has something to do with science when it does not.

When it comes to debates about the nature of matter in contemporary metaphysics it tends to be assumed that there are two possibilities: either there are atoms in the sense of partless particles, or there is ‘gunk’ in the sense of matter whose every part has proper parts (infinitely divisible matter). This debate is essentially being conducted in the same terms as it was by the pre-Socratic philosophers among whom the atomists were represented by Democritus and the gunkists by Anaxagoras. In early modern philosophy Boyle, Locke and Gassendi lined up for atomism against gunkists Descartes and Leibniz. It is preposterous that in spite of the developments in the scientific understanding of matter that have occurred since then, contemporary metaphysicians blithely continue to suppose that the dichotomy between atoms and gunk remains relevant, and that it can be addressed a priori. Precisely what physics has taught us is that matter in the sense of extended stuff is an emergent phenomenon that has no counterpart in fundamental ontology. Both the atoms in the void and the plenum conceptions of the world are attempts to engage in metaphysical theorizing on the basis of extending the manifest image. That metaphysicians continue to regard the world as a spatial manifold comprising material objects that must either have smallest spatial parts or be made of infinitely divisible matter is symptomatic of their failure to escape the confines of the domestic realm.

Browsing the book through Amazon reveals many other similarly juicy passages, but I have not the patience to type out more excerpts.

(I have been reading too much contemporary metaphysics lately and was getting somewhat frustrated with it.)

## More Confirmation of the Impending Death of US Experimental Particle Physics

December 20, 2007

I think many people have long suspected something like this would happen sooner or later. After all, we have some precedent of Congress abruptly cutting HEP funds despite previous positive signs, and the funding trends of the last decade don’t bode well for the field. Read the gory details here.

The brutal summary:

• HEP funding for FY08 has been cut 10% from last year, despite recommendations from the Bush administration to increase it by a few percent.
• The cuts coming abruptly three months into FY08 mean that the International Linear Collider, which had not anticipated this, has already spent all of its [revised] FY08 budget.
• NOvA was allocated no funds at all, and it has already spent some money.

I learnt many, many things as an undergrad doing an experimental particle physics project, but the lesson to avoid the field like the plague (even though it has its fun aspects) might turn out to be the most valuable of all. You could smell the negative vibes from grad students in the field — most were preparing to get out of science and become quants instead.

Update:
More from the Chicago Tribune. It seems that Dennis Hastert’s resignation had something to do with the HEP budget cuts falling squarely on Fermilab.