This post is the result of reading Wittgenstein and the philosophy of simulation literature in close temporal proximity.

Here is Paul Humphreys on epistemic opacity in computer simulations:

a process is epistemically opaque relative to a cognitive agent X at time t just in case X does not know at t all of the epistemically relevant elements of the process. A process is essentially epistemically opaque to X if and only if it is impossible, given the nature of X, for X to know all of the epistemically relevant elements of the process. For a mathematical proof, one agent may consider a particular step in the proof to be an epistemically relevant part of the justification of the theorem, whereas to another, the step is sufficiently trivial to be eliminable. In the case of scientific instruments, it is a long-standing issue in the philosophy of science whether the user needs to know details of the processes between input and output in order to know that what the instruments display accurately represents a real entity.

The charge is that simulations bring something new to philosophy of science because they are epistemically opaque, unlike, say, the process of solving an equation analytically.

However, I’m not sure I understand how simulations are any more epistemically opaque than physical experiments or non-automated calculations in mathematics. First, consider experiments. It seems to me that the checks we make to ensure that the results of experiments are reliable are almost completely analogous to those we make to ensure that the results of simulations are reliable. Allan Franklin has a good list of the kinds of checks we make to ensure that experiments produce reliable results, and all seven criteria he describes seem to be used to validate simulations as well as physical experiments. We do check that a simulation reproduces known results and artifacts. We do try to eliminate plausible sources of error. If a simulation produces a striking pattern that cannot be explained by plausible sources of error, we do use that pattern itself to argue that it is a legitimate result. If multiple independently corroborated theories account for the results of a simulation, that does add to the validity of the results. Simulations are often based on well-corroborated theories. Finally, statistical arguments are used to argue that patterns seen in simulations are real.
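To make the first of these checks concrete, here is a minimal illustrative sketch (the model, numbers, and tolerance are my own, chosen purely for illustration) of validating a simulation by confirming that it reproduces a known result: a forward-Euler simulation of exponential decay compared against the closed-form solution.

```python
import math

def simulate_decay(n0, k, dt, steps):
    """Forward-Euler simulation of dN/dt = -k*N."""
    n = n0
    for _ in range(steps):
        n += -k * n * dt
    return n

# Validation check: does the simulation reproduce the known
# analytic result N(t) = N0 * exp(-k*t)?
n0, k, t = 1000.0, 0.5, 2.0
steps = 10_000
simulated = simulate_decay(n0, k, t / steps, steps)
analytic = n0 * math.exp(-k * t)
rel_error = abs(simulated - analytic) / analytic
assert rel_error < 1e-3  # within tolerance: the check passes
```

The point of the analogy: the check compares inputs and outputs against independently known results; it does not require inspecting every intermediate state of the computation, just as validating an experiment does not require a complete theory of the apparatus.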

So what is epistemically relevant in simulations that humans cannot know, but that can be known in the case of physical experiments and mental or pen-and-paper mathematical calculations? I’m guessing that what Humphreys takes to be epistemically relevant in simulations but inaccessible to human knowledge is something like the results of each computational step in the simulation, or whether the mechanistic workings of the simulating apparatus produce mathematically correct results. But are the results of each computational step epistemically relevant? Here is one reason to think not. In a physical experiment, one *never* has a complete working theory of the apparatus that specifies the exact consequences of every step in the experiment. It seems to me that demanding that the result of every computational step in the simulation be epistemically accessible to humans is analogous to demanding that every step in the experiment be justified by a theory that describes *every* aspect of the apparatus.

What if Humphreys considers the reliability of the simulating apparatus, that is, whether it is producing mathematically correct results, to be the epistemically relevant aspect of simulations that is essentially inaccessible to humans? As noted above, just as one can validate the reliability of experiments without having a complete theory of the experimental setup, we have ways of validating the reliability of simulations. But of course they are not foolproof. Suppose we take seriously the possibility that our methods of validation still leave out epistemically relevant information. It is possible that even though our checks show that the results are reliable in a large variety of situations, some hocus-pocus is going on which can be discovered only by going through every single step in the simulation, which humans cannot do. But there is an analogous “problem” when it comes to mental or pen-and-paper arithmetic. One’s belief that one is calculating 2098×98723 correctly, if one is doing it for the first time, is based on one’s past success in calculating various other things correctly. *Of course* some hocus-pocus could be going on just *this* time, for the new calculation, a kind of hocus-pocus which did not show itself in previous calculations. But this possibility does not lead us to say that there is something epistemically missing from the new calculation. If one really wants to be paranoid, one could always doubt the results of mental or pen-and-paper calculations, because after all *we do not know, mechanistically, how the human mind consistently applies arithmetical rules, or whether it always applies them correctly*. We act as though it always applies them consistently because of prior evidence of its reliability, but that evidence does not suffice to ensure with certainty that it will always do so. How is this different from the case of simulations? In simulations, too, we have only the prior results of simulations, and the backing of mathematics and physical theories relating to the mechanics of the simulation, to assure us that *this* time the simulation will also be reliable.
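The analogy can be made concrete with a sketch. Just as we validate simulations without stepping through every computation, a pen-and-paper calculation like 2098×98723 is typically checked not by re-deriving every step but by an independent consistency test. Here is the standard “casting out nines” check (the factors are from the post; the claimed result is one I have worked out, used here for illustration):

```python
def digit_root(n):
    """Repeated digit sum; equivalent to n mod 9, with 9 for multiples of 9."""
    return 1 + (n - 1) % 9 if n else 0

a, b = 2098, 98723
claimed = 207_120_854  # result of a pen-and-paper calculation

# Independent check ("casting out nines"): the digit root of the product
# must equal the digit root of the product of the factors' digit roots.
# It can catch many slips without redoing any step of the calculation.
consistent = digit_root(claimed) == digit_root(digit_root(a) * digit_root(b))
assert consistent         # the check passes
assert claimed == a * b   # and, here, the claimed result is in fact correct
```

Like the validation of a simulation, the check raises our confidence in the result without giving us step-by-step epistemic access to the process that produced it.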

Humphreys’ ascription of epistemic opacity to machine calculations but not human calculations is an interesting inversion of a point of view that Wittgenstein discusses at various points in his philosophy of mathematics. Wittgenstein traces the philosopher of mathematics’ love of axiomatic reductions of mathematics to the idea of “*mechanical* insurance against contradiction” (RFM, p. 107e, his emphasis). The idea is that by reducing mathematics to a set of rules that even a machine can follow, one excludes mistakes from mathematics:

We may trust ‘mechanical’ means of calculating or counting more than our memories. Why? — Need it be like this? I may have miscounted, but the machine, once constructed by us in such-and-such a way, cannot have miscounted. Must I adopt this point of view? — “Well, experience has taught us that calculating by machine is more trustworthy than by memory. It has taught us that our life goes smoother when we calculate with machines.” But must smoothness necessarily be our ideal (must it be our ideal to have everything wrapped in cellophane)? (RFM, p. 106e)

Humphreys, P. (2008). The philosophical novelty of computer simulation methods. *Synthese*, 169(3), 615–626. DOI: 10.1007/s11229-008-9435-2

Wittgenstein, L. (1967). *Remarks on the Foundations of Mathematics*, ed. G. H. von Wright, R. Rhees, and G. E. M. Anscombe, trans. G. E. M. Anscombe. MIT Press.

Not necessarily… even if the intermediate results are not epistemically relevant, the algorithms used to produce those results certainly are epistemically relevant. Sometimes, simulations are undertaken using software whose licence agreement contains a phrase like “You may not reverse engineer, disassemble, or decompile…”. In this case, the user does not and cannot (lawfully) know what algorithms are being employed.

A layman says…

User =/= human*s*.

Whether or not the user knows what algorithms are being employed, it does not change the fact that the software and those algorithms are based on something, known to someone somewhere.

I don’t really get what is unknown here, as a whole, to humans.

Hi,

Great blog you have. I would like to discuss some of the points you bring up later in the week, since I have to finish up some things. But you are raising certain questions on simulation and epistemic opacity that I have been thinking about in my own (still inchoate) epistemological developments, though I suspect my direction may differ from yours. I hope we can have discussions on some of the overlapping research interests we share.

Sure. Look forward to your comments!

A quibble with one of Humphreys’s examples: When one is writing down a mathematical proof and decides that a step is trivial enough to be left out, one merely leaves the step out of the description of the proof, not the logic behind it. Everyone reading the proof should know that, when you go from step A to step C directly, step B exists between them. Step B remains epistemically relevant (if I understand that term) even if you don’t explicitly state it, because your audience understands what it is.

I just found your blog, and I’m enjoying it. I like your nom de plume too.