Chapter 15: The No-Miracles Argument

Introduction

The no-miracles argument (NMA) is widely regarded as the strongest single argument for scientific realism. Its intuitive force is immediate: our best scientific theories yield spectacularly accurate predictions, enable powerful technologies, and unify disparate phenomena under common frameworks. The realist claims that the best — indeed, the only satisfactory — explanation for this success is that our theories are approximately true. To deny this, the realist argues, is to make the success of science into a cosmic miracle.

But the argument faces a formidable opponent: the pessimistic meta-induction (PMI), which marshals the history of science to show that successful theories are regularly overturned. If past successful theories were false, what grounds do we have for believing present ones are true? This chapter traces the dialectic between these two powerful arguments and examines the sophisticated responses they have generated.

Putnam’s Formulation

The classic statement of the no-miracles argument comes from Hilary Putnam in Mathematics, Matter and Method (1975):

“The positive argument for realism is that it is the only philosophy that doesn’t make the success of science a miracle. That terms in mature scientific theories typically refer (this formulation is due to Richard Boyd), that the theories accepted in a mature science are typically approximately true, that the same term can refer to the same thing even when it occurs in different theories — these statements are viewed by the scientific realist not as necessary truths but as part of the only scientific explanation of the success of science, and hence as part of any adequate scientific description of science and its relations to its objects.”

— Hilary Putnam, Mathematics, Matter and Method (1975), p. 73

The argument has a clear abductive structure — it is an inference to the best explanation (IBE):

  1. Mature scientific theories are remarkably successful (novel predictions, technological applications, unification).
  2. The best explanation of this success is that our theories are approximately true and their central theoretical terms refer.
  3. Therefore, we should believe our best theories are approximately true.

The argument is especially compelling when theories make novel predictions: predictions about phenomena the theory was not designed to accommodate. When Mendeleev’s periodic table predicted the existence and properties of then-unknown elements (gallium, germanium, scandium), and these predictions were confirmed, this seemed powerful evidence that the periodic table was “onto something real.” When general relativity accounted for the anomalous precession of Mercury’s perihelion with extraordinary accuracy, or when the Higgs boson, predicted decades in advance, was finally detected, these successes cry out for explanation.

The anti-realist must either deny that such successes require explanation or offer an alternative explanation. Van Fraassen takes the first route: the success of science no more requires explanation than the survival of organisms requires a designer. Just as evolutionary processes produce organisms “adapted” to their environments without any miraculous intervention, so the social process of science selects for empirically adequate theories without any need for truth.

Boyd’s Abductive Defense of Realism

Richard Boyd developed the no-miracles argument into a more sophisticated form. Boyd’s argument focuses not just on the predictive success of individual theories but on the instrumental reliability of scientific methodology. Scientists use background theories to design experiments, calibrate instruments, interpret data, and judge which new theories are worth pursuing. This methodology is remarkably reliable — it regularly produces successful theories. Boyd argues that the best explanation of this methodological reliability is the approximate truth of the background theories.

Boyd’s argument proceeds in stages:

  1. Theory-dependent methodology: Scientists rely on accepted theories at every stage of inquiry — in designing experiments, choosing instruments, assessing evidence, and formulating new hypotheses.
  2. Instrumental reliability: This theory-dependent methodology is instrumentally reliable; it regularly leads to empirically successful new theories.
  3. Abductive step: The best explanation of this reliability is that the background theories are approximately true. If they were systematically false, it would be miraculous that using them as guides to further inquiry would regularly produce successful results.

Critics charge that Boyd’s argument is circular: it uses IBE (inference to the best explanation) to justify realism, but IBE is a form of inference whose reliability is precisely what is at issue. If the anti-realist rejects IBE as a guide to truth about unobservables, then using IBE to argue for realism begs the question. Boyd responds that some circularity is unavoidable in the justification of fundamental epistemic principles, and that his argument exhibits only a benign rule-circularity (it employs the very rule of inference it defends, but its conclusion does not appear among its premises), not a vicious premise-circularity.

The Pessimistic Meta-Induction

The most powerful challenge to the no-miracles argument is Larry Laudan’s pessimistic meta-induction, presented in his 1981 paper “A Confutation of Convergent Realism.” Laudan attacks the realist’s central assumption: that empirical success is a reliable indicator of approximate truth.

“The history of science furnishes vast evidence of empirically successful theories that were subsequently rejected, and whose central theoretical terms are now taken not to refer.”

— Larry Laudan, “A Confutation of Convergent Realism” (1981)

The argument has a simple inductive structure:

  1. Many past scientific theories were empirically successful.
  2. These theories are now regarded as false, and their central terms are taken not to refer.
  3. Therefore (by induction), our current successful theories are probably also false, and their central terms probably do not refer.

Laudan’s Historical List

Laudan provides a now-famous list of theories that were once empirically successful yet are now considered fundamentally false:

  • The crystalline spheres of ancient and medieval astronomy
  • The humoral theory of medicine
  • The effluvial theory of static electricity
  • The phlogiston theory of combustion
  • The caloric theory of heat
  • The vibratory theory of heat
  • The vital force theories of physiology
  • The electromagnetic ether
  • The optical ether
  • The theory of circular inertia
  • Theories of spontaneous generation

Each of these theories had genuine empirical successes — they made accurate predictions, guided fruitful research, and were accepted by the scientific community of their day. Yet we now believe their central theoretical terms (“phlogiston,” “caloric,” “ether”) fail to refer. If success did not indicate truth for these theories, why should we trust it as an indicator of truth for our current theories?

The meta-induction is “pessimistic” because it leads to a gloomy conclusion about our epistemic situation. It is a “meta-induction” because it is an induction over the history of science itself: an induction, from the track record of past theories, about the reliability of the inference from success to truth.

The Base Rate Fallacy Objection

P.D. Magnus and Craig Callender (2004) raised a subtle but powerful objection to the no-miracles argument: it commits a base rate fallacy. The no-miracles argument reasons that if a theory is approximately true, it is very likely to be empirically successful. Therefore (by IBE), an empirically successful theory is probably approximately true. But this inference is fallacious unless we know the base rate of approximately true theories.

Consider an analogy. A highly reliable medical test returns a positive result. Should you believe you have the disease? Not necessarily — it depends on the base rate of the disease in the population. If the disease is extremely rare, even a very reliable test will produce mostly false positives. Similarly, even if approximate truth reliably produces success, the probability that a successful theory is approximately true depends on what fraction of theories are approximately true to begin with.
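
To make the analogy concrete, here is a worked calculation with illustrative numbers (they are hypothetical, not drawn from Magnus and Callender): suppose the disease afflicts 1 in 1,000 people, the test detects it 99% of the time, and it returns a false positive 5% of the time. Then

$$P(\text{disease} \mid \text{positive}) = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.05 \times 0.999} \approx 0.02,$$

so roughly 98% of positive results are false positives, even though the test itself is highly reliable.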

The realist argues: P(success | truth) is high. But the NMA needs P(truth | success) to be high, and by Bayes’ theorem:

$$P(\text{truth} \mid \text{success}) = \frac{P(\text{success} \mid \text{truth}) \cdot P(\text{truth})}{P(\text{success})}$$

If the prior probability of a theory being approximately true — P(truth) — is low, then P(truth | success) can be low even when P(success | truth) is high. The NMA simply assumes that P(truth) is not negligibly small, but this is precisely what is at issue in the realism debate.
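
The dependence on the prior can be made vivid with a short calculation. The sketch below is a purely illustrative toy model, not anything proposed by Magnus and Callender: it holds the likelihoods fixed at hypothetical values, P(success | truth) = 0.95 and P(success | not-truth) = 0.05, and computes P(truth | success) for several priors.

```python
# Toy illustration of the base rate point: how P(truth | success)
# varies with the prior P(truth), with the likelihoods held fixed.
# All numbers are hypothetical stand-ins, chosen only for illustration.

def posterior_truth_given_success(prior_truth,
                                  p_success_given_truth=0.95,
                                  p_success_given_false=0.05):
    """Bayes' theorem: P(truth | success)."""
    p_success = (p_success_given_truth * prior_truth
                 + p_success_given_false * (1 - prior_truth))
    return p_success_given_truth * prior_truth / p_success

for prior in (0.001, 0.01, 0.1, 0.5):
    print(f"P(truth) = {prior:<5}  ->  "
          f"P(truth | success) = {posterior_truth_given_success(prior):.2f}")
```

With a prior of 0.001 the posterior is about 0.02; it climbs to roughly 0.16, 0.68, and 0.95 as the prior rises to 0.01, 0.1, and 0.5. Nothing in a theory’s empirical success, by itself, tells us which of these priors is the right one.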

The realist might respond that the base rate objection proves too much: if we take it seriously, it undermines all instances of IBE, not just the NMA. Since IBE is a legitimate and widely used form of inference across science and everyday life, we should not abandon it based on abstract base rate worries. The debate continues over whether this response is adequate.

Structural Continuity Through Theory Change

John Worrall’s structural realism offers a direct response to the pessimistic meta-induction. Worrall argues that Laudan’s list is misleading because it focuses on changes in ontology (what entities are posited) while ignoring structural continuity (what mathematical relations are preserved). When we look carefully at the history of science, we find that the mathematical structure of successful theories is typically preserved through theory change, even when the ontology changes dramatically.

Worrall’s key example is the Fresnel-to-Maxwell transition. Fresnel’s equations for the reflection and refraction of polarized light survived almost intact in Maxwell’s electromagnetic theory. The ontology changed (from mechanical ether to electromagnetic field), but the structure was preserved. This pattern, Worrall argues, is typical:

  • Fresnel → Maxwell: Equations preserved; ether replaced by electromagnetic field.
  • Newton → Einstein: Newtonian mechanics is recovered as a limiting case of special relativity at low velocities. The structural relations are preserved in the appropriate limit.
  • Classical → Quantum: The correspondence principle ensures that quantum mechanics reduces to classical mechanics in the appropriate limit.

This pattern of structural preservation explains both the success of past theories (they got the structure approximately right) and their eventual replacement (they got the ontology wrong). It vindicates the no-miracles intuition at the structural level while conceding the pessimistic meta-induction at the ontological level.

Critics ask whether structural continuity is always genuine or sometimes artificial — imposed by the historian seeking patterns. Hasok Chang has argued that the continuity between Fresnel and Maxwell is less straightforward than Worrall suggests, and that in many cases the “structural preservation” involves significant reinterpretation of the equations’ meaning and domain of application.

The Debate’s Current State

The realism debate shows no signs of resolution, but it has become considerably more nuanced. Several developments characterize the current landscape:

  • Selective realism: Most contemporary realists have abandoned “wholesale” realism in favor of selective or partial realism, identifying which specific parts of theories merit belief. Psillos’s “divide et impera” strategy, Kitcher’s distinction between working and presuppositional posits, and Saatsi’s minimal realism all exemplify this trend.
  • Stanford’s new induction: Kyle Stanford has proposed a “new induction” from the history of science — the problem of unconceived alternatives. Scientists have repeatedly failed to conceive of the theories that would eventually replace their current ones. This suggests that we are currently in the same position: there may be radically different theories, not yet conceived, that would be equally successful.
  • Deployment realism and its critics: Timothy Lyons has scrutinized “deployment realism,” the view that realist commitment should attach only to the theoretical constituents actually deployed in generating successful novel predictions (predictions of previously unknown phenomena), rather than to empirical adequacy in general, and has pressed historical cases against it.
  • Semi-realism: Anjan Chakravartty’s “semi-realism” combines entity realism with structural realism, arguing that the detection properties of entities (those involved in causal detection) are real, while auxiliary properties may not be.
  • The move to local debates: Increasingly, philosophers have turned from the global question (“should we be realists?”) to local questions (“should we be realists about this theory in this domain?”). The answer may vary across sciences and across levels of description.

Perhaps the deepest lesson of the debate is that the relationship between success and truth is more complex than either naive realism or blanket anti-realism suggests. Science does seem to be getting something right about the world — the convergence of independent methods, the stunning accuracy of novel predictions, and the power of technology all point in that direction. But what exactly science gets right, and how to articulate this in a way that withstands historical counterexamples, remains one of philosophy’s most challenging open questions.

Novel Predictions and Use-Novelty

Many philosophers have argued that the no-miracles argument is most compelling when applied to novel predictions — predictions about phenomena that were not known when the theory was formulated, or were not used in constructing the theory. The success of a theory in accommodating data it was designed to fit is relatively unsurprising; the success of a theory in predicting previously unknown phenomena is what truly demands explanation.

Several notions of novelty have been distinguished:

  • Temporal novelty: The phenomenon was not known when the theory was proposed. (Mendeleev’s prediction of undiscovered elements.)
  • Use-novelty: The phenomenon was known but was not used in constructing the theory. (General relativity’s explanation of Mercury’s perihelion precession, which was known but not used by Einstein in developing the theory.)
  • Heuristic novelty (Worrall): The phenomenon played no role in the heuristic path by which the theory was developed.

This emphasis on novelty is central to what Timothy Lyons calls “deployment realism”: the view that realist commitment should attach only to the theoretical constituents actually deployed in generating successful novel predictions, not to theories that are merely empirically adequate with respect to already-known data. Restricting the NMA in this way narrows its scope and, its defenders argue, makes it more defensible, though Lyons himself has pressed historical cases in which even deployed constituents proved false.

However, Peter Lipton has raised a concern: if we restrict the NMA to novel predictions, we may weaken it excessively. Many theories that we want to count as approximately true do not make strikingly novel predictions but rather explain and organize existing data in illuminating ways. Darwin’s theory of evolution by natural selection, for instance, was primarily an explanation of known facts about biogeography, comparative anatomy, and embryology, rather than a source of novel predictions. Yet it would be strange to deny realism about natural selection on this basis.

Realist Responses to the Pessimistic Meta-Induction

Realists have developed several strategies for responding to Laudan’s challenge:

  • Challenge the list: Some items on Laudan’s list (the crystalline spheres, humoral medicine) were not genuinely successful in the relevant sense. They did not generate surprising novel predictions or exhibit the kind of empirical success that the NMA requires. If we restrict attention to genuinely mature, genuinely successful theories, the list shrinks dramatically.
  • Selective realism: As discussed above, the working posits of even “false” theories were typically preserved through theory change. The caloric theory’s successful laws survived in thermodynamics; Fresnel’s ether-based equations survived in Maxwell’s electromagnetic theory. What was abandoned were the idle wheels, not the parts responsible for success.
  • Approximate truth is compatible with falsity: The realist claims approximate truth, not exact truth. Newton’s theory is strictly false (Einstein showed this), but it is approximately true — it gives the right answers to an extraordinary degree of precision in its domain of application. The PMI assumes that “false” means “not even approximately true,” but this is not what the realist claims.
  • Contemporary science is qualitatively different: The methodological sophistication of contemporary science — with controlled experiments, statistical analysis, computer modeling, peer review, and systematic error correction — may make it qualitatively different from earlier science. The track record of phlogiston-era chemistry may not be a good guide to the reliability of twenty-first-century chemistry.

Each response has its strengths and weaknesses, and the anti-realist has rejoinders to each. The debate has become increasingly refined, with both sides developing more nuanced positions that acknowledge the partial validity of the other’s arguments.

The Dialectical Structure

  • Realist (NMA): The success of science would be miraculous if our theories weren’t approximately true.
  • Anti-realist (PMI): History shows that successful theories are regularly overthrown. Past success did not indicate truth.
  • Realist (selective realism): Only the working posits — the parts responsible for success — are preserved; idle wheels are discarded.
  • Anti-realist (Stanford): We cannot identify the working posits in advance. Unconceived alternatives may succeed for different reasons.
  • Realist (structural realism): Mathematical structure is preserved through theory change, explaining both success and revision.
  • Anti-realist (base rate objection): The NMA commits a base rate fallacy; success may not indicate truth without knowing the prior probability of truth.

Key Readings

  • Putnam, H. (1975). Mathematics, Matter and Method. Cambridge University Press. [pp. 69–74]
  • Laudan, L. (1981). “A Confutation of Convergent Realism.” Philosophy of Science, 48(1), 19–49.
  • Boyd, R. (1985). “Lex Orandi est Lex Credendi.” In P. Churchland & C. Hooker (Eds.), Images of Science. University of Chicago Press.
  • Worrall, J. (1989). “Structural Realism: The Best of Both Worlds?” Dialectica, 43, 99–124.
  • Magnus, P.D. & Callender, C. (2004). “Realist Ennui and the Base Rate Fallacy.” Philosophy of Science, 71, 320–338.
  • Stanford, K. (2006). Exceeding Our Grasp. Oxford University Press.
  • Lyons, T. (2006). “Scientific Realism and the Stratagema de Divide et Impera.” British Journal for the Philosophy of Science, 57, 537–560.
  • Saatsi, J. (2005). “Reconsidering the Fresnel-Maxwell Theory Shift.” Studies in History and Philosophy of Science, 36, 509–538.

Timeline of the Debate

1975: Putnam formulates the no-miracles argument in Mathematics, Matter and Method.
1980: Van Fraassen publishes The Scientific Image, offering constructive empiricism as an alternative.
1981: Laudan publishes “A Confutation of Convergent Realism,” launching the pessimistic meta-induction.
1983: Boyd develops the abductive defense of realism. Hacking publishes Representing and Intervening.
1989: Worrall proposes structural realism as “the best of both worlds.”
1999: Psillos develops selective realism in Scientific Realism: How Science Tracks Truth.
2004: Magnus & Callender raise the base rate fallacy objection to the NMA.
2006: Stanford introduces the problem of unconceived alternatives.
2007: Chakravartty develops semi-realism; Ladyman & Ross defend ontic structural realism.

Discussion Questions

  1. Does the no-miracles argument commit the base rate fallacy? If so, can the realist reformulate it to avoid this objection?
  2. Are the items on Laudan’s list genuinely analogous to our current best theories? Or are contemporary theories “mature” in a way that phlogiston and caloric were not?
  3. Is structural continuity a satisfying form of realism? Or is it too thin — does it give up too much of what realism originally promised?
  4. How should we understand van Fraassen’s evolutionary analogy? Is the selectionist explanation of scientific success as good as the realist’s truth-based explanation?
  5. Can the realism debate be settled empirically, or is it an inherently philosophical question?

Further Reflection: Is the Debate Resolvable?

After decades of increasingly sophisticated argument, some philosophers have begun to wonder whether the realism debate is fundamentally resolvable. Each side has powerful arguments, and the dialectic has a tendency to cycle: the realist advances a new version of the no-miracles argument, the anti-realist finds a counterexample or a fallacy, the realist refines the argument, and the cycle continues.

Anjan Chakravartty has suggested that the impasse may reflect a genuine underdetermination at the philosophical level: the evidence from scientific practice and the history of science is compatible with both realist and anti-realist interpretations. If so, the choice between realism and anti-realism may ultimately depend on one’s broader philosophical commitments — one’s views about truth, explanation, epistemic values, and the aims of inquiry.

Whether or not the debate is ultimately resolvable, engaging with it sharpens our understanding of what science is, what it achieves, and what its limits are. The no-miracles argument and the pessimistic meta-induction will continue to be studied, debated, and refined by future generations of philosophers — and this ongoing engagement is itself one of philosophy’s most valuable contributions to our understanding of science.