Chapter 16: The D-N Model of Explanation

Introduction

In 1948, Carl Hempel and Paul Oppenheim published “Studies in the Logic of Explanation,” one of the most influential papers in the history of philosophy of science. Their deductive-nomological (D-N) model provided the first rigorous, formal account of scientific explanation. Though the model has been extensively criticized and largely superseded, it remains the starting point for all subsequent discussion. Understanding its elegance and its failures is essential to understanding the philosophy of explanation.

The core idea is breathtakingly simple: to explain a phenomenon is to show that it was to be expected, given the laws of nature and the relevant initial conditions. Explanation, on this view, is a species of logical deduction — the phenomenon follows logically from laws plus conditions.

The Structure of D-N Explanation

A D-N explanation has the logical form of a deductive argument. The premises constitute the explanans (that which does the explaining), and the conclusion is the explanandum (that which is to be explained):

Explanans:

L₁, L₂, ..., Lₙ     (Laws of nature)

C₁, C₂, ..., Cₖ     (Initial/boundary conditions)

────────────────────

Explanandum:

E                 (The phenomenon to be explained)

Hempel and Oppenheim specified four conditions of adequacy:

  1. Deductive validity: The explanandum must be a logical consequence of the explanans.
  2. Lawlikeness: The explanans must contain at least one general law that is actually required for the deduction.
  3. Empirical content: The explanans must have empirical content — it must be testable.
  4. Truth: The sentences constituting the explanans must be true.

A Classic Example

Why did this particular copper rod expand when heated? The D-N explanation proceeds:

Law: All metals expand when heated (at constant pressure).

Condition: This rod is made of copper (a metal).

Condition: This rod was heated (at constant pressure).

────────────────────

Explanandum: This copper rod expanded.

The explanation subsumes the particular event under a general law. The event was to be expected — indeed, deductively inevitable — given the law and the initial conditions. This is Hempel’s key insight: explanation is nomic expectability. To explain an event is to show that it was nomically expected.
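The deductive subsumption can be mimicked in a few lines of code (a toy sketch; the predicate and fact names are mine, not Hempel's):

```python
# Law: all metals expand when heated (constant pressure assumed implicitly).
# Conditions: facts about a particular object. The explanandum follows
# deductively once law and conditions are in place.
def expands_when_heated(x, facts):
    """The law, encoded as a conditional over the stated facts."""
    return ("metal", x) in facts and ("heated", x) in facts

facts = {("metal", "rod"), ("heated", "rod")}
assert expands_when_heated("rod", facts)        # the explanandum is derivable
assert not expands_when_heated("plank", facts)  # no conditions, no derivation
```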

The Symmetry of Explanation and Prediction

A striking feature of the D-N model is what Hempel called the structural identity thesis (or symmetry thesis): explanation and prediction have the same logical structure. The only difference is pragmatic — temporal. If we know the explanans before E occurs, we have a prediction. If E has already occurred and we construct the argument afterward, we have an explanation. The logical structure is identical in both cases.

“An explanation is not fully adequate unless its explanans, if taken account of in time, could have served as a basis for predicting the phenomenon under consideration.”

— Carl Hempel, Aspects of Scientific Explanation (1965)

This thesis has immediate consequences. It means that every genuine explanation is potentially a prediction, and every prediction (if the premises are true) is potentially an explanation. This seems right in many cases: we can explain why the solar eclipse occurred and predict when the next one will occur, using the same astronomical laws and initial conditions.

But the symmetry thesis faces serious difficulties, as we shall see. There appear to be explanations that are not predictions and predictions that are not explanations — a problem that reveals deep flaws in the D-N model.

Problems with the D-N Model

The Flagpole Problem (Asymmetry)

Sylvain Bromberger’s flagpole problem is perhaps the most famous counterexample to the D-N model. Consider a flagpole of height h standing in sunlight. The sun is at angle θ above the horizon. The shadow has length s. Using the laws of optics and trigonometry:

Law: Light travels in straight lines.

Conditions: The flagpole has height h; the sun is at angle θ.

────────────────────

Explanandum: The shadow has length s = h/tan(θ).

This is a perfectly good D-N explanation of the shadow’s length. But now consider the reverse: given the same law and the shadow’s length, we can deduce the flagpole’s height. This also satisfies all four conditions of the D-N model. But intuitively, the shadow does not explain why the flagpole has the height it does. The flagpole’s height explains the shadow’s length, not vice versa.

The D-N model cannot capture this asymmetry because it treats explanation as a purely logical relation, and in cases like this the derivation runs equally well in both directions: the same law yields s from h and h from s. The problem suggests that explanation involves something beyond logical derivation — something like causal direction that the D-N model cannot accommodate.
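The invertibility that generates the problem is easy to make vivid (a toy sketch; the function names are mine): the same trigonometric law lets us derive the shadow from the pole or the pole from the shadow, and nothing in the mathematics marks one direction as explanatory.

```python
import math

def shadow_from_pole(h, theta):
    """s = h / tan(theta): derive the shadow's length from the pole's height."""
    return h / math.tan(theta)

def pole_from_shadow(s, theta):
    """h = s * tan(theta): the same law, run in the unexplanatory direction."""
    return s * math.tan(theta)

theta = math.radians(45)
s = shadow_from_pole(10.0, theta)               # 10.0, since tan(45 deg) = 1
assert abs(pole_from_shadow(s, theta) - 10.0) < 1e-9  # the deduction round-trips
```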

The Hexed Salt Problem (Irrelevance)

Wesley Salmon devised this elegant counterexample to show that the D-N model allows irrelevant information to figure in “explanations.” Consider:

Law: All hexed salt dissolves in water. (This is true, because all salt dissolves in water, hexed or not.)

Condition: This sample of salt was hexed by a witch.

Condition: This sample was placed in water.

────────────────────

Explanandum: This sample dissolved.

This argument satisfies all four conditions of the D-N model: it is deductively valid, it contains a lawlike generalization, the premises are empirically testable, and they are true. But it is absurd to say that the hexing explains why the salt dissolved. The hexing is entirely irrelevant — the salt would have dissolved regardless.

The problem is that the D-N model has no mechanism for excluding irrelevant information. As long as the deduction goes through, the model is satisfied. This shows that logical derivability from laws is not sufficient for explanation — the premises must also be relevant to the explanandum in some substantive way.

Other Problems

The birth control pills problem: John Jones takes birth control pills regularly; no man who takes birth control pills becomes pregnant; therefore, Jones did not become pregnant. This satisfies the D-N schema but does not explain why Jones (a man) did not become pregnant.

The barometer problem: A falling barometer reading can be used (with appropriate laws) to derive that a storm is coming. The D-N model would count this as an explanation. But the falling barometer does not explain the storm — both are effects of a common cause (a drop in atmospheric pressure).

The problem of explanatory depth: The D-N model treats all valid derivations from laws as equally good explanations. But intuitively, some explanations are deeper or more illuminating than others. Explaining the ideal gas law from statistical mechanics seems deeper than explaining it from a more superficial regularity.

Inductive-Statistical (I-S) Explanation

Hempel recognized that many scientific explanations are probabilistic rather than deductive. We explain why Jones recovered from his infection by noting that he took a particular antibiotic, and that most patients who take this antibiotic for this infection recover. The explanation confers high probability on the outcome but does not entail it deductively.

The inductive-statistical (I-S) model captures this type of explanation:

Explanans:

P(R | A & I) is high     (Statistical law)

Jones took antibiotic A for infection I     (Initial condition)

════════════════════ [makes highly probable]

Explanandum:

Jones recovered

Note the crucial difference: the double line represents inductive support, not deductive entailment. The explanandum is made probable, not certain, by the explanans.

The Requirement of Maximal Specificity

The I-S model faces an immediate problem: the reference class. Suppose 90% of patients with Jones’s infection who take antibiotic A recover. But suppose Jones also has a compromised immune system, and only 10% of immunocompromised patients with this infection who take antibiotic A recover. Which reference class should we use?

Hempel proposed the requirement of maximal specificity (RMS): the statistical law in the explanation must refer to the narrowest reference class for which we have statistical information. We must use all available relevant information. This requirement prevents cherry-picking favorable reference classes, but it raises difficult questions about when we have enough information and how to individuate reference classes.
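RMS can be sketched in code (the reference classes and recovery rates below are invented for illustration, following the Jones example): among the classes for which we have statistics, use the narrowest one the individual belongs to, i.e. the class defined by the largest set of his known properties.

```python
# Recovery statistics keyed by reference class, where each class is the set
# of properties that defines it. Figures are illustrative, not medical data.
stats = {
    frozenset({"infection_I", "took_A"}): 0.90,
    frozenset({"infection_I", "took_A", "immunocompromised"}): 0.10,
}

def maximally_specific_probability(individual_properties, stats):
    """Use the narrowest applicable reference class: among classes whose
    defining properties the individual all has, pick the most specific."""
    applicable = [cls for cls in stats if cls <= individual_properties]
    narrowest = max(applicable, key=len)
    return stats[narrowest]

jones = {"infection_I", "took_A", "immunocompromised"}
assert maximally_specific_probability(jones, stats) == 0.10  # not 0.90
```

Cherry-picking the 90% class for Jones is exactly what RMS forbids: once we know he is immunocompromised, the broader class is no longer admissible.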

Salmon’s Statistical Relevance Model

Wesley Salmon proposed a radical alternative to the D-N and I-S models: the statistical relevance (S-R) model. Salmon rejected Hempel’s assumption that explanation requires high probability. Consider: we can explain why Jones contracted paresis by noting that he had untreated latent syphilis, even though only about 25% of people with untreated syphilis develop paresis. The probability is low, but the syphilis is statistically relevant — it raises the probability of paresis above the base rate.

“Statistical relevance rather than high probability is the key explanatory relationship.”

— Wesley Salmon, Statistical Explanation and Statistical Relevance (1971)

On the S-R model, explaining an event involves:

  1. Partitioning the reference class into the most homogeneous sub-classes possible.
  2. Identifying which sub-class the event falls into.
  3. Citing the probability of the event in that sub-class.

A factor is statistically relevant to an outcome if it changes the probability of the outcome. Formally, factor C is relevant to outcome E in reference class A if P(E | A & C) ≠ P(E | A). This captures the intuition that explanatory factors must “make a difference” to the probability of the outcome.
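Salmon's criterion is simple to state in code (the probabilities below are illustrative stand-ins, not real epidemiological figures): a factor is relevant exactly when conditioning on it shifts the probability.

```python
def statistically_relevant(p_e_given_a_and_c, p_e_given_a, tol=1e-12):
    """C is relevant to E in reference class A iff P(E | A & C) != P(E | A)."""
    return abs(p_e_given_a_and_c - p_e_given_a) > tol

# Hexed salt: hexing leaves the probability of dissolving untouched.
assert not statistically_relevant(p_e_given_a_and_c=1.0, p_e_given_a=1.0)

# Paresis: untreated syphilis raises the probability far above the base
# rate, even though the raised probability is still low (illustrative numbers).
assert statistically_relevant(p_e_given_a_and_c=0.25, p_e_given_a=0.0001)
```

Note how this handles both of Hempel's problem cases at once: the hexing fails the relevance test, while the low-probability syphilis factor passes it.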

Salmon later came to regard the S-R model as incomplete. Statistical relevance relations, he realized, are merely symptoms of underlying causal relationships. The syphilis is statistically relevant to paresis because it causes paresis. This realization led Salmon to develop his causal-mechanical model of explanation, which we examine in the next chapter.

Kitcher’s Unificationist Account

Philip Kitcher proposed a radically different approach to explanation: the unificationist account. On this view, explanation is not fundamentally about deriving phenomena from laws (Hempel) or citing causes (Salmon), but about unification — showing that diverse phenomena can be derived from a small number of argument patterns.

The basic idea is that we explain by reducing the number of independent assumptions needed to account for the phenomena. Newton’s gravitational theory was explanatory because it unified an enormous range of phenomena — falling bodies, projectile motion, planetary orbits, tides, the shape of the Earth — under a single argument pattern. Before Newton, each of these required separate explanations; after Newton, they were all consequences of universal gravitation.

“Science advances our understanding of nature by showing us how to derive descriptions of many phenomena, using the same patterns of derivation again and again, and, in demonstrating this, it teaches us how to reduce the number of types of facts we have to accept as ultimate.”

— Philip Kitcher, “Explanatory Unification” (1981)

Formally, Kitcher defines an argument pattern as a schematic argument with filling instructions specifying what can be substituted into the schema. A systematization of our knowledge is a set of derivations of accepted sentences from argument patterns. The best systematization is the one that uses the fewest argument patterns to derive the most conclusions. The explanatory store — the set of explanations — is the best systematization.
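Kitcher's schematic arguments with filling instructions can be caricatured in a few lines (a toy sketch; the schema, slot names, and legal fillers are invented for illustration):

```python
# A toy argument pattern: a schema with placeholders plus filling
# instructions saying what may be substituted for each placeholder.
pattern = {
    "schema": [
        "{X} is a metal.",
        "All metals expand when heated.",
        "Therefore, {X} expands when heated.",
    ],
    "filling_instructions": {"X": {"this copper rod", "this iron bar"}},
}

def instantiate(pattern, **fillers):
    """Derive a concrete argument by substituting legal fillers into the schema."""
    for slot, value in fillers.items():
        assert value in pattern["filling_instructions"][slot], "illegal filler"
    return [line.format(**fillers) for line in pattern["schema"]]

argument = instantiate(pattern, X="this copper rod")
assert argument[-1] == "Therefore, this copper rod expands when heated."
```

One pattern, many derivations: the unificationist's measure of explanatory power is roughly how many accepted conclusions can be reached from how few such patterns.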

The unificationist account has several attractive features:

  • It captures the intuition that deeper explanations are more unifying.
  • It handles the asymmetry problem: deriving the flagpole’s height from the shadow’s length does not belong to the best systematization, because the reverse derivation is part of a more general, more unifying pattern.
  • It accounts for the explanatory value of theories like natural selection, which unify diverse biological phenomena under common patterns.

Critics argue that unification is neither necessary nor sufficient for explanation. Some genuine explanations (explaining a particular car crash) do not involve unification. And some unifying derivations (deriving everything from an arbitrary conjunction of all truths) are not genuinely explanatory. The unificationist must specify what counts as a “legitimate” argument pattern in a way that is not ad hoc.

Van Fraassen’s Pragmatic Theory of Explanation

Bas van Fraassen offered a radically different approach to explanation in The Scientific Image (1980). Where Hempel sought a purely logical account and Salmon a purely causal one, van Fraassen argued that explanation is fundamentally pragmatic — dependent on context, interests, and the specific question being asked.

On van Fraassen’s account, an explanation is an answer to a why-question, and a why-question has three components: a topic (the fact to be explained), a contrast class (the set of alternatives to the topic), and a relevance relation (what kind of answer is being sought). The question “Why did Jones contract paresis?” is ambiguous until we specify the contrast class: “rather than Smith” (answer: because Jones had untreated syphilis) vs. “rather than remaining healthy” (answer: because his syphilis progressed to a rare complication).
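The three-component structure of a why-question can be rendered as a small data structure (a sketch; the field names and example strings are mine):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class WhyQuestion:
    """A why-question in van Fraassen's sense: a topic, a contrast class,
    and a relevance relation specifying what kind of answer is sought."""
    topic: str
    contrast_class: Tuple[str, ...]
    relevance: str

# Same topic, different contrast classes: different questions, different answers.
q1 = WhyQuestion("Jones contracted paresis",
                 ("Jones contracted paresis", "Smith contracted paresis"),
                 "etiological")
q2 = WhyQuestion("Jones contracted paresis",
                 ("Jones contracted paresis", "Jones remained healthy"),
                 "etiological")
assert q1.topic == q2.topic and q1 != q2  # the topic alone does not fix the question
```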

This pragmatic approach explains several puzzling features of explanatory practice:

  • The same phenomenon can receive different explanations depending on the context of inquiry.
  • An explanation can be good in one context and bad in another.
  • Explanatory relevance is not fixed by the phenomenon alone but depends on the question being asked.

Critics argue that van Fraassen’s pragmatism goes too far: it makes explanation entirely dependent on the interests of the questioner, leaving no room for objective explanatory relations. The flagpole objectively explains its shadow, not vice versa, regardless of who is asking. Van Fraassen might reply that this objectivity is itself context-dependent: in contexts where we seek causal explanations, the asymmetry holds, but in other contexts (e.g., inferring the flagpole’s height from its shadow for practical purposes), the reverse “explanation” may be perfectly appropriate.

Contemporary Developments

The philosophy of explanation continues to evolve. Several recent developments deserve mention:

Contrastive explanation: Peter Lipton developed a contrastive account of explanation inspired by van Fraassen but retaining objective causal commitments. On Lipton’s view, we explain why P rather than Q by citing a causal difference between the actual situation (where P obtains) and the counterfactual situation (where Q would obtain). This combines the pragmatic insight that explanations are contrastive with the realist commitment that explanatory factors are objective causes.

Explanatory pluralism: Many philosophers now embrace explanatory pluralism — the view that there are irreducibly different modes of explanation (causal, constitutive, mathematical, functional, historical) and that no single model captures all of them. Different sciences and different phenomena may call for different types of explanation.

Explanation in the age of big data: Machine learning and data-driven science raise new questions about explanation. Neural networks can make accurate predictions without providing humanly intelligible explanations. Does prediction without explanation represent a new mode of science? Or does the lack of explanation represent a genuine epistemic deficit? The growing field of “Explainable AI” (XAI) suggests that explanation remains an important scientific value even in the age of algorithmic prediction.

The D-N Model’s Legacy

Despite its problems, the D-N model’s influence cannot be overstated. It established explanation as a central topic in philosophy of science, provided a precise framework that could be rigorously analyzed and criticized, and generated a research program that has produced some of the deepest philosophical work of the past seventy years. Every subsequent account of explanation defines itself in relation to the D-N model — either refining it, rejecting it, or replacing it.

The model’s failures are as instructive as its successes. The flagpole problem showed that explanation is asymmetric in a way that logical deduction is not — pointing toward causal accounts. The hexed salt problem showed that relevance matters — pointing toward statistical relevance models. The barometer problem showed that common causes must be distinguished from common effects — pointing toward causal process accounts. Together, these failures map the terrain that any adequate account of explanation must navigate.

Key Readings

  • Hempel, C. & Oppenheim, P. (1948). “Studies in the Logic of Explanation.” Philosophy of Science, 15(2), 135–175.
  • Hempel, C. (1965). Aspects of Scientific Explanation. Free Press. [Title essay, pp. 331–496]
  • Salmon, W. (1971). Statistical Explanation and Statistical Relevance. University of Pittsburgh Press.
  • Kitcher, P. (1981). “Explanatory Unification.” Philosophy of Science, 48(4), 507–531.
  • Van Fraassen, B. (1980). The Scientific Image. Oxford University Press. [Chapter 5]
  • Lipton, P. (2004). Inference to the Best Explanation. 2nd ed. Routledge.
  • Strevens, M. (2008). Depth: An Account of Scientific Explanation. Harvard University Press.

Summary: Models of Explanation

Model          | Core Idea                              | Strength                     | Weakness
D-N            | Derive from laws + conditions          | Precise, formal              | Asymmetry, irrelevance
I-S            | High probability from statistical laws | Handles indeterminism        | Reference class problem
S-R            | Cite statistically relevant factors    | No high probability needed   | Merely symptomatic
Unificationist | Reduce argument patterns               | Captures explanatory depth   | Overly global
Pragmatic      | Answer context-dependent why-questions | Captures context-sensitivity | Too subjective?

Discussion Questions

  1. Is explanation fundamentally a logical relation (as the D-N model claims) or a causal relation? Can there be non-causal explanations?
  2. Does the flagpole problem show that causation is essential to explanation? Or can it be solved within a non-causal framework?
  3. Is Kitcher right that explanation is unification? Are there explanations that do not unify?
  4. Should we require high probability for statistical explanation, or is statistical relevance sufficient?
  5. Can there be a single, unified account of scientific explanation, or do different sciences explain in fundamentally different ways?