Chapter 20: The Problem of Induction

The deepest challenge to the rational foundations of empirical science: can the leap from observed regularities to universal laws ever be justified?

Every time a scientist generalises from a finite sample to a universal law, every time we predict that the sun will rise tomorrow on the basis of its having risen every day in the past, we rely on inductive reasoning. Unlike deduction, where the conclusion follows necessarily from the premises, induction involves an ampliative leap — the conclusion goes beyond what is strictly contained in the evidence. The problem of induction asks whether this leap can ever be rationally justified.

David Hume, writing in the 18th century, argued with devastating clarity that it cannot. His argument is disarmingly simple and, many philosophers believe, irrefutable. Every attempt to justify induction either appeals to deductive principles (which cannot yield ampliative conclusions) or appeals to induction itself (which is circular). If Hume is right, the entire edifice of empirical science rests on a foundation that reason cannot secure.

The philosophical responses to Hume span three centuries and constitute one of the richest debates in all of epistemology. From Kant’s transcendental idealism to Goodman’s new riddle, from Popper’s deductivism to Bayesian probabilism, the problem of induction has been a crucible in which theories of knowledge are forged and tested.

Hume’s Original Argument

Hume’s argument, presented in A Treatise of Human Nature (1739–40) and An Enquiry Concerning Human Understanding (1748), proceeds in two stages. First, he distinguishes between relations of ideas (demonstrative reasoning) and matters of fact (probable reasoning). All inductive inferences concern matters of fact, not relations of ideas. The negation of any matter of fact is always conceivable and hence logically possible.

“That the sun will not rise to-morrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise. We should in vain, therefore, attempt to demonstrate its falsehood.” — David Hume, An Enquiry Concerning Human Understanding, Section IV (1748)

Second, Hume asks what grounds our expectation that the future will resemble the past. All inductive reasoning, he argues, rests on the principle of the uniformity of nature (PUN): that the course of nature will continue uniformly the same. But how do we know that PUN is true?

The argument can be formalised as a dilemma:

1. The justification for induction must be either deductive or inductive.

2. It cannot be deductive, because the conclusion of an inductive argument is not a logical consequence of its premises (the denial of the conclusion is logically consistent with the premises).

3. It cannot be inductive, because that would be circular — we would be using induction to justify induction.

4. Therefore, induction has no rational justification.

Hume’s conclusion is not that we should stop using induction — he recognises that we cannot help doing so. Rather, inductive reasoning is a product of custom or habit, not of reason. The mind is naturally disposed to expect regularity, but this disposition has no rational foundation.

The Circularity of Inductive Justification

The most natural response to Hume is to point out that induction has worked in the past. The laws of physics discovered by induction have enabled us to build bridges, cure diseases, and send spacecraft to other planets. Surely this track record justifies continued reliance on induction?

Hume’s devastating reply is that this argument is itself inductive:

Premise: Induction has worked in the past.

Conclusion: Induction will work in the future.

This is an inductive argument for the reliability of induction — a flagrant circularity.

Max Black (1954) attempted to defend this kind of “inductive justification of induction” by distinguishing between premise-circularity (where the conclusion appears among the premises) and rule-circularity (where the rule of inference used is the one being justified). Black argued that rule-circularity is not vicious, because the rule is being used, not asserted as a premise.

However, as Wesley Salmon (1957) showed, an exactly parallel argument could be constructed for any rival inductive rule, including counter-induction (the rule that infers the opposite of what has been observed). Counter-induction has failed in the past, so by counter-induction, it will succeed in the future. If rule-circular arguments can justify any rule, they justify none.

Strawson’s Dissolution: Induction as Constitutive of Rationality

P.F. Strawson (1952) offered a radically different response: the demand for a justification of induction is itself confused. To ask “Is induction rational?” is like asking “Is the law legal?” Induction is not a practice that stands in need of justification by reference to some independent standard of rationality. Rather, induction is what we mean by rational reasoning about matters of fact.

“It is an analytic proposition that it is reasonable to have a degree of belief in a statement which is proportional to the strength of the evidence in its favour; and it is an analytic proposition, though not a proposition of mathematics, that, other things being equal, the evidence for a generalisation is strong in proportion as the number of favourable instances, and the variety of circumstances in which they have been found, is great.” — P.F. Strawson, Introduction to Logical Theory (1952)

On Strawson’s view, asking for a justification of induction is a category mistake. Standards of evidence and rational belief are constituted by inductive practice; they cannot be applied to evaluate that practice from outside.

Critics have found this response unsatisfying. Salmon (1957) objected that Strawson’s argument proves too much: a soothsayer could equally claim that crystal-gazing is constitutive of her epistemic practice, and thus self-justifying by her own standards. The question is why our standards are better than hers — and Strawson’s dissolution seems to block us from answering.

Moreover, Strawson’s argument seems to conflate two questions: (1) Is it rational to use induction? and (2) Will induction lead to true beliefs? Even if we grant that induction is constitutive of rationality, we might still want to know whether rational methods are truth-conducive. Strawson’s dissolution offers no answer to this second question.

Reichenbach’s Pragmatic Vindication

Hans Reichenbach (1938) offered an ingenious pragmatic defence of induction that concedes Hume’s point but argues that induction is nonetheless our best bet. The argument proceeds as follows:

1. Either nature is uniform (in the sense that observed frequencies converge to stable limiting relative frequencies) or it is not.

2. If nature is uniform, the “straight rule” of induction (setting the probability equal to the observed frequency) will converge to the true probability in the long run.

3. If nature is not uniform, no method will succeed.

4. Therefore, if any method will succeed, the straight rule will. We lose nothing by using it.

Formally, if the limiting relative frequency of $A$s among $B$s exists and equals $p$, then $\lim_{n \to \infty} f_n(A|B) = p$, where $f_n$ is the observed relative frequency after $n$ trials. The straight rule, which always posits the current observed frequency, therefore converges to $p$ by definition. No other method can be guaranteed to do better.

The vindication is pragmatic rather than epistemic: it does not show that induction will work but only that it is the rational strategy given our uncertainty. It is analogous to Pascal’s wager — a dominance argument showing that induction weakly dominates all alternatives.

Critics have raised several objections. Salmon himself later noted that infinitely many rules converge to the correct limit (for example, the straight rule plus any correction term that vanishes in the limit), so the argument does not uniquely vindicate the straight rule. Moreover, the vindication offers no guidance for finite samples — and all actual scientific practice involves finite samples.
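Salmon’s objection can be illustrated with a short simulation. This is only a sketch: the limiting frequency p = 0.3, the random seed, and the number of trials are illustrative assumptions, not anything from the text. The straight rule and a rival rule that adds a correction term vanishing in the limit both converge to the same value, so long-run convergence alone cannot single out the straight rule.

```python
import random

# Sketch of Salmon's point. The limiting frequency p, the seed, and the
# number of trials are illustrative assumptions.
random.seed(0)
p = 0.3  # the unknown limiting relative frequency being estimated

successes = 0
for n in range(1, 100_001):
    successes += random.random() < p
    f_n = successes / n        # straight rule: posit the observed frequency
    rival = f_n + 1.0 / n      # rival rule: add a correction that vanishes as n grows

# Both rules converge to p in the limit, so convergence alone
# cannot discriminate between them.
print(round(f_n, 3), round(rival, 3))
```

Any correction term that shrinks to zero yields another convergent rule, which is why infinitely many rules share the straight rule’s asymptotic guarantee.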

Goodman’s “New Riddle of Induction”

Nelson Goodman’s Fact, Fiction, and Forecast (1955) transformed the problem of induction by showing that even if we could justify induction in general, we would still face the problem of determining which inductive inferences are legitimate. Goodman introduced the now-famous predicate “grue”:

An object is grue if and only if it is examined before some future time $t$ and found to be green, or is not so examined and is blue.

Similarly, “bleen” applies to things examined before $t$ and found to be blue, or not examined and green. Every emerald we have ever observed is both green and grue. Therefore, the evidence equally supports:

$H_1$: All emeralds are green.

$H_2$: All emeralds are grue.

Yet these hypotheses make incompatible predictions about emeralds examined after $t$: $H_1$ predicts they will be green, while $H_2$ predicts they will be blue. The problem is not that induction is unjustified, but that it is too easy to justify: the same evidence confirms incompatible hypotheses.
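The definition lends itself to a toy formalisation. The cutoff year and the observation dates below are arbitrary illustrations: every emerald examined before $t$ and found green satisfies both predicates, so the observed evidence cannot distinguish the two hypotheses.

```python
# A toy encoding of Goodman's "grue" predicate. The cutoff year T and the
# observation dates are arbitrary illustrations, not from the text.
T = 2030

def is_grue(colour, year_examined):
    """Grue: examined before t and found green, or not so examined and blue."""
    if year_examined < T:
        return colour == "green"
    return colour == "blue"

# Every emerald examined so far (before t) has been green...
observed = [("green", y) for y in (1900, 1950, 2000, 2024)]

# ...and each such emerald is also grue, so the evidence supports
# "all emeralds are green" and "all emeralds are grue" equally.
all_green = all(c == "green" for c, _ in observed)
all_grue = all(is_grue(c, y) for c, y in observed)
print(all_green, all_grue)  # both True

# The hypotheses diverge only for emeralds examined after t:
# "all green" predicts green, "all grue" predicts blue.
```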

“Regularities are where you find them, and you can find them anywhere.” — Nelson Goodman, Fact, Fiction, and Forecast (1955)

Goodman’s point is that mere regularity is not enough for induction — we need a criterion that distinguishes projectible regularities from non-projectible ones. His own solution appeals to entrenchment: predicates that have a long history of use in successful inductions (like “green”) are projectible; novel, gerrymandered predicates (like “grue”) are not.

Many philosophers have found the entrenchment solution unsatisfying because it is conservative and potentially circular — we project the predicates we have always projected. It offers no independent criterion for projectibility and seems to privilege linguistic habit over rational justification. Nevertheless, Goodman’s problem has proved extraordinarily fruitful, stimulating work on natural kinds, similarity, and the metaphysics of properties that continues to this day.

Popper’s Deductivism: Dissolving or Evading the Problem?

Karl Popper (1959, 1972) claimed to have solved the problem of induction by showing that science does not use induction at all. On Popper’s falsificationist account, scientific reasoning is purely deductive: scientists propose bold conjectures and then attempt to refute them. The logic of refutation is modus tollens, which is deductively valid:

$$\text{If } H \text{ then } E. \quad \neg E. \quad \therefore \neg H.$$

“I have solved the problem of induction. ... The solution is that there is no induction, because universal theories are not derivable from singular statements. They are never verifiable, but they can be falsified.” — Karl Popper, Objective Knowledge (1972)

On this view, we never have reason to believe that a universal theory is true; we can only have reason to believe that it has not yet been falsified and that it has survived severe tests. The most we can say of a well-tested theory is that it is corroborated — but corroboration is explicitly not a measure of probability or degree of confirmation.

Critics have argued that Popper merely relocates the problem rather than solving it. Even granting that falsification is deductive, scientists must still make practical decisions based on unfalsified theories. Why should we rely on well-corroborated theories rather than poorly corroborated ones? Popper’s answer — that corroboration measures past performance, not future reliability — seems to require precisely the inductive assumption it was meant to avoid: that past performance is a guide to the future.

Moreover, as Putnam (1974) and Salmon (1981) argued, Popper needs some account of why we should act on well-corroborated theories. If a bridge is designed using Newtonian mechanics (a well-corroborated theory), why is it rational to drive across it? Popper’s answer that we should “prefer” well-corroborated theories for action seems indistinguishable from an inductive inference about their likely reliability.

Bayesian Approaches to Induction

Bayesianism offers a distinctive approach to the problem of induction. The Bayesian does not attempt to justify induction in the way Hume demands; rather, the framework explicates inductive reasoning as conditionalization on evidence using Bayes’ theorem.

The key insight is that the probability calculus itself constrains how evidence bears on hypotheses. Given a prior probability distribution and new evidence, Bayes’ theorem determines the posterior. There is no separate “inductive rule” that needs justification — conditionalization follows from the axioms of probability and the definition of conditional probability:

$$P_{\text{new}}(H) = P_{\text{old}}(H|E) = \frac{P(E|H) \cdot P_{\text{old}}(H)}{P(E)}$$
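A single conditionalization step can be sketched over a two-membered hypothesis space; the prior and likelihood values below are illustrative assumptions, not from the text.

```python
# One step of conditionalization over a two-hypothesis space.
# The prior and likelihood values are illustrative assumptions.
priors = {"H": 0.5, "not-H": 0.5}
likelihoods = {"H": 0.9, "not-H": 0.3}  # P(E | hypothesis)

# Total probability of the evidence: P(E) = sum over h of P(E|h) * P(h)
p_e = sum(likelihoods[h] * priors[h] for h in priors)

# Bayes' theorem: P_new(h) = P(E|h) * P_old(h) / P(E)
posteriors = {h: likelihoods[h] * priors[h] / p_e for h in priors}
print(posteriors)  # evidence that H makes more probable raises P(H) above 0.5
```

The update is fixed entirely by the probability axioms once the priors and likelihoods are given; no further inductive rule is invoked.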

However, the Bayesian approach does not fully dissolve Hume’s problem. The choice of prior remains unjustified (or justified only by pragmatic or aesthetic considerations). And the convergence theorems that promise long-run agreement depend on assumptions (like absolute continuity) that themselves require something like an inductive justification.

Perhaps the most honest Bayesian response to Hume is that of Howson (2000): Bayesianism does not solve the problem of induction but shows that, given a prior probability distribution, there is a uniquely rational way to update beliefs in light of evidence. The problem of where the priors come from remains — but this, Howson argues, is not a problem for the logic of induction so much as for the metaphysics of rational belief.

The Problem of Underdetermination

Closely related to the problem of induction is the underdetermination of theory by evidence. The Duhem-Quine thesis holds that any body of evidence is logically compatible with multiple — indeed, infinitely many — mutually incompatible theories. If this is correct, then even perfect inductive reasoning cannot uniquely determine which theory to accept.

Pierre Duhem (1906) argued that no hypothesis can be tested in isolation; every test involves auxiliary assumptions (about instruments, background conditions, and so on). When a prediction fails, logic alone cannot determine whether to blame the hypothesis or an auxiliary assumption. W.V.O. Quine (1951) radicalised this into the claim that “any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system.”

“The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges.” — W.V.O. Quine, “Two Dogmas of Empiricism” (1951)

Larry Laudan and Jarrett Leplin (1991) have argued that the underdetermination thesis is weaker than it appears. While it is true that any finite body of evidence is logically compatible with multiple theories, the evidential relevance of data changes as science progresses. New auxiliary assumptions, new experimental techniques, and new background theories can break previously existing ties between rival hypotheses.

The Bayesian responds to underdetermination by noting that, even if evidence does not logically determine a unique theory, it does differentially support theories via the likelihood function. Two theories may both be compatible with the evidence, but one may make the evidence far more probable than the other. Prior probabilities and theoretical virtues (simplicity, unification, fertility) can further discriminate among empirically equivalent theories.
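The point about differential support can be made concrete with an iterative sketch; all the probabilities below are illustrative assumptions. Two theories each assign every observation a nonzero probability, so neither is refuted, yet repeated conditionalization drives the posterior toward the theory with the higher likelihood.

```python
# Two theories, both logically compatible with each observation (neither
# assigns it probability zero), updated repeatedly on the same kind of
# evidence. All numbers are illustrative assumptions.
p_t1, p_t2 = 0.5, 0.5
lik_t1, lik_t2 = 0.8, 0.4  # P(each observation | theory)

for _ in range(10):  # ten independent observations
    p_e = lik_t1 * p_t1 + lik_t2 * p_t2
    p_t1 = lik_t1 * p_t1 / p_e
    p_t2 = lik_t2 * p_t2 / p_e

# The posterior odds shift toward T1 by the likelihood ratio (2:1)
# with each observation, even though T2 is never falsified.
print(round(p_t1, 3))
```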

Contemporary Perspectives

The problem of induction remains open, but contemporary philosophy has developed a richer understanding of its contours. Several important developments deserve mention:

  • Reliabilism: Alvin Goldman and others argue that induction is justified because it is a reliable belief-forming process — one that produces a high proportion of true beliefs. Critics object that we can only know induction is reliable by using induction.
  • The computational turn: Formal learning theory (Gold, Putnam, Kelly) reconceptualises the problem as one of convergence to the truth. On this view, inductive methods are justified not by a priori reasoning but by their formal property of converging to the correct answer in the limit.
  • Natural kinds: If nature comes pre-carved into natural kinds (water, gold, electrons), this may help explain why some predicates are projectible and others are not. If “green” picks out a natural property and “grue” does not, there is a metaphysical basis for the distinction.
  • Evolutionary epistemology: Our inductive capacities may be explained (though perhaps not justified) by evolutionary selection. Organisms whose inductive dispositions tracked environmental regularities were more likely to survive and reproduce.

Perhaps the deepest lesson of the problem of induction is that the demand for a foundational justification of our most basic cognitive practices may itself be misguided. As Wittgenstein suggested in On Certainty, some beliefs are not conclusions from evidence but rather the framework within which evidence is assessed. The uniformity of nature may be one such framework belief — not provable, not empirical, but constitutive of the practice of empirical inquiry itself.
