Chapter 8: Corroboration & Verisimilitude
Having rejected induction, confirmation, and the probability of hypotheses, Popper needed alternative concepts to account for the rationality of scientific practice and the progressiveness of scientific change. His answers were corroboration, a backward-looking measure of how well a theory has survived testing, and verisimilitude, the idea that science progresses by replacing theories with ones that are closer to the truth. Both concepts face serious philosophical difficulties that have occupied epistemologists for decades.
Why Popper Rejected “Confirmation”
The standard inductivist picture holds that scientific theories are confirmed by evidence: the more confirming instances a theory accumulates, the more probable it becomes, and the more warranted we are in believing it. Carnap's inductive logic, Hempel's confirmation theory, and contemporary Bayesianism all share this basic structure, differing only in the formal details.
Popper objected to this picture on multiple grounds. First, he held that Hume's problem of induction had never been solved: no amount of confirming evidence can logically guarantee the truth of a universal statement, and no formal apparatus can bridge the gap between “observed” and “unobserved.” The so-called solutions to the problem of induction (pragmatic vindications, ordinary language dissolutions, probabilistic reformulations) all either beg the question or change the subject.
Second, Popper argued that “confirmation” is far too easy to obtain. Any theory, no matter how absurd, can be “confirmed” by evidence if we are allowed to seek out favorable instances. Astrology, for example, produces innumerable “confirmations” every day: astrologers can always find cases where their predictions appear to have come true. What makes a theory scientific is not that it can be confirmed but that it can be refuted.
“It is easy to obtain confirmations, or verifications, for nearly every theory — if we look for confirmations. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.”
— Karl Popper, Conjectures and Refutations (1963), p. 36
Third, and most fundamentally, Popper argued that the concept of “probable hypotheses” is incoherent when applied to universal scientific theories. As we saw in the previous chapter, the logical probability of a universal theory (given any finite body of evidence) is zero. Since good scientific theories are those with high empirical content, and high empirical content entails low logical probability, seeking “highly probable” theories is the opposite of what science should do.
Corroboration: Past Performance, Not Future Reliability
In place of confirmation, Popper introduced the concept of corroboration. A theory is corroborated to the extent that it has survived severe tests — tests that were genuinely capable of refuting it. Corroboration is emphatically not a measure of the probability that the theory is true, nor a guarantee of its future performance. It is a report on how the theory has fared so far, nothing more.
This might seem like a distinction without a difference: if a theory has survived many severe tests, isn't that reason to think it is likely to continue surviving tests? Popper insisted that it is not. To infer future reliability from past performance would be to invoke induction, which Popper had rejected. We prefer well-corroborated theories not because we believe they are more likely to be true (that would be an inductive inference) but because they are the best-tested theories available — the ones that have, so far, withstood the most serious attempts at refutation.
The Formal Definition of Degree of Corroboration
Popper attempted to give a formal measure of the degree of corroboration of a theory. Let C(h, e, b) represent the degree of corroboration of hypothesis h by evidence e, given background knowledge b. Popper's definition, in simplified form, is:
C(h, e, b) = [P(e|h,b) - P(e|b)] / [P(e|h,b) - P(e·h|b) + P(e|b)]
The key features of this measure are: (1) corroboration is high when the evidence was improbable given the background knowledge alone (P(e|b) is low) but probable given the hypothesis (P(e|h,b) is high); (2) the maximum attainable degree of corroboration increases with the severity of the test; (3) a theory that is compatible with any possible evidence, and hence predicts nothing, has a corroboration of zero, since for such a theory P(e|h,b) = P(e|b) and the numerator vanishes.
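To see how the measure behaves in practice, here is a minimal sketch in Python. The probability values are hypothetical inputs chosen only to illustrate feature (1); nothing here is Popper's own calculation.

```python
def corroboration(p_e_hb, p_e_b, p_eh_b):
    """Popper's degree of corroboration C(h, e, b), per the formula above.

    p_e_hb : P(e|h,b), the likelihood of the evidence given the hypothesis
    p_e_b  : P(e|b),   how expected the evidence is on background alone
    p_eh_b : P(e.h|b), the joint probability of evidence and hypothesis
    """
    return (p_e_hb - p_e_b) / (p_e_hb - p_eh_b + p_e_b)

# A severe test: evidence very unlikely on background knowledge alone,
# but strongly predicted by the hypothesis (hypothetical numbers).
print(corroboration(0.95, 0.05, 0.02))   # ~0.92: high corroboration

# A weak test: evidence that was expected anyway.
print(corroboration(0.95, 0.90, 0.40))   # ~0.03: almost no corroboration
```

On these toy numbers the same evidence-likelihood yields sharply different corroboration depending on how surprising the evidence was beforehand, which is precisely the severity feature just described.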
Several features distinguish Popper's corroboration measure from Bayesian confirmation measures. Most importantly, corroboration does not obey the probability calculus. It is not a probability, and it cannot be used to calculate the probability that a hypothesis is true. This is by design: Popper wanted to capture the idea that surviving a severe test is epistemically significant without committing himself to the view that theories can be assigned probabilities.
Critics have questioned whether Popper's formal measure of corroboration succeeds in its aims. David Miller has argued that the measure is essentially a variant of Bayesian confirmation and that Popper cannot, in practice, avoid the probabilistic reasoning he officially rejects. Others have argued that the concept of corroboration, stripped of any inductive import, cannot do the work Popper needs it to do: if corroboration tells us nothing about the future behavior of a theory, why should we prefer well-corroborated theories for practical decisions?
The Pragmatic Problem of Corroboration
The most pressing objection to Popper's concept of corroboration was articulated by several critics, most forcefully Wesley Salmon and Hilary Putnam. The objection is simple but devastating: if corroboration provides no reason to think that a well-tested theory will continue to succeed in the future, then it provides no rational basis for relying on well-tested theories in practical decision-making.
When an engineer builds a bridge using Newtonian mechanics, or a doctor prescribes a drug that has survived clinical trials, they are betting that the theory will continue to work in the future. If Popper is right that corroboration says nothing about future performance, then the engineer and the doctor have no more reason to rely on well-tested theories than on untested or refuted ones. This seems absurd.
“If ‘corroboration’ is no indication of future performance, then it is of no use to the engineer, the physician, or to anyone who wishes to apply scientific knowledge. And if it is an indication of future performance, then it is simply a different name for ‘confirmation’.”
— Wesley Salmon, “Rational Prediction” (1981)
Popper responded to this challenge in various ways over the years, none entirely satisfactory. At times he argued that we prefer well-corroborated theories because they have survived the most severe criticism and are therefore the best candidates for truth — but this seems to smuggle in inductive reasoning. At other times he appealed to a “metaphysical research programme” that assumes the world has regularities and that well-tested theories are more likely to have latched onto them — but this too has an inductive flavor.
The pragmatic problem of corroboration remains one of the most serious difficulties for Popper's philosophy. It suggests that Popper's attempt to avoid induction entirely may be too radical: some form of inductive reasoning seems to be indispensable for connecting science to practical life.
Verisimilitude: Truth-Likeness and Scientific Progress
Popper's falsificationism, taken strictly, seems to imply a bleak picture of scientific knowledge: we can never know that a theory is true, and every theory will eventually be falsified and replaced. How, then, can we account for the intuition that science progresses — that Einstein's physics is somehow better than Newton's, which in turn was better than Aristotle's?
Popper's answer was the concept of verisimilitude or truth-likeness. Even though we may never reach the truth, we can approach it: a later theory can be “closer to the truth” than an earlier one, even if both are strictly false. Scientific progress consists in replacing less verisimilar theories with more verisimilar ones.
Popper defined verisimilitude in terms of the truth content and falsity content of a theory. The truth content of a theory is the set of its true logical consequences; the falsity content is the set of its false logical consequences. A theory A has greater verisimilitude than a theory B if:
- The truth content of A is greater than or equal to the truth content of B, and
- The falsity content of A is less than or equal to the falsity content of B
with at least one of these inequalities being strict. In other words, a more verisimilar theory captures more truths and fewer falsehoods than its predecessor.
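To make the definition concrete, here is a minimal sketch in Python. It is a toy semantic model rather than Popper's own syntactic formulation: worlds are truth assignments to two atoms, a theory is identified with the set of worlds compatible with it, a consequence is any proposition (set of worlds) that includes all of the theory's worlds, and the choice of “actual” world is stipulated for illustration. In the unproblematic case, where both theories are true, the definition behaves exactly as intended.

```python
from itertools import combinations, product

# Worlds are truth assignments to two atoms; the "actual" world is stipulated.
WORLDS = list(product([False, True], repeat=2))
ACTUAL = (True, True)

# A proposition (and a theory) is the set of worlds at which it holds;
# theory T entails proposition P exactly when T's worlds are a subset of P's.
PROPS = [frozenset(c) for r in range(len(WORLDS) + 1)
         for c in combinations(WORLDS, r)]

def truth_content(theory):
    """Entailed propositions that hold at the actual world."""
    return {p for p in PROPS if theory <= p and ACTUAL in p}

def falsity_content(theory):
    """Entailed propositions that fail at the actual world."""
    return {p for p in PROPS if theory <= p and ACTUAL not in p}

def popper_closer(a, b):
    """Popper's criterion: a has greater verisimilitude than b."""
    ta, tb = truth_content(a), truth_content(b)
    fa, fb = falsity_content(a), falsity_content(b)
    return tb <= ta and fa <= fb and (tb < ta or fa < fb)

whole_truth = frozenset({ACTUAL})   # the strongest true theory
tautology = frozenset(WORLDS)       # true, but completely uninformative
print(popper_closer(whole_truth, tautology))  # True: more truths, no new falsehoods
print(popper_closer(tautology, whole_truth))  # False
```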
“The idea of verisimilitude is most important in cases where we know that we have to work with theories which are at best approximations — that is to say, theories of which we actually know that they cannot be true. (This is often the case in the social sciences.) In these cases we can still speak of better or worse approximations to the truth (and we therefore do not need to interpret these cases in an instrumentalist sense).”
— Karl Popper, Conjectures and Refutations (1963), p. 235
The concept of verisimilitude allowed Popper to maintain a realist position — to hold that science aims at truth and that it makes progress toward truth — without claiming that we can ever know that we have reached the truth. It provided a framework for saying that Newton's theory, though false, was closer to the truth than Aristotle's physics, and that Einstein's theory is closer to the truth than Newton's.
The Tichý-Miller Refutation
In 1974, Pavel Tichý and David Miller independently proved that Popper's formal definition of verisimilitude is fatally flawed. Their result, which came as a shock to Popper and his followers, demonstrated that on Popper's definition, no false theory can have greater verisimilitude than any other false theory.
The proof exploits a feature of Popper's definition involving the comparison of truth content and falsity content. Consider two false theories A and B, where A is supposed to be closer to the truth than B. Tichý and Miller showed that if A has greater truth content than B, then A must also have greater falsity content than B — and vice versa. The two conditions in Popper's definition can never be jointly satisfied for false theories.
The Core of the Argument
The argument proceeds as follows. Suppose theories A and B are both false, and suppose A has greater truth content than B. Let p be a true statement that is a consequence of A but not of B, and let q be any false consequence of A (since A is false, such a q exists). Now consider the conjunction “p and q”. Because q is false, “p and q” is false; because A entails both conjuncts, it is a consequence of A. But it cannot be a consequence of B, since B would then entail p. So “p and q” belongs to A's falsity content and not to B's: in gaining a new truth, A has inevitably gained a new falsehood, violating the requirement that A's falsity content not exceed B's. A mirror-image argument, using a disjunction of the form “f or not-q” (where f is a false consequence of B that A lacks), shows that whenever A has smaller falsity content than B, B has some true consequence that A lacks. The two conditions in Popper's definition can never be jointly satisfied for false theories.
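The theorem can also be verified mechanically. The sketch below repeats the toy semantics of the earlier block (theories and propositions as sets of worlds, with a stipulated actual world) and searches every pair of consistent false theories; Popper's criterion never ranks one above the other.

```python
from itertools import combinations, product

# Same toy semantics as before: worlds, propositions, and the two contents.
WORLDS = list(product([False, True], repeat=2))
ACTUAL = (True, True)
PROPS = [frozenset(c) for r in range(len(WORLDS) + 1)
         for c in combinations(WORLDS, r)]

def contents(theory):
    """Return (truth content, falsity content) as sets of propositions."""
    cn = {p for p in PROPS if theory <= p}        # all consequences
    return ({p for p in cn if ACTUAL in p},       # the true ones
            {p for p in cn if ACTUAL not in p})   # the false ones

def popper_closer(a, b):
    ta, fa = contents(a)
    tb, fb = contents(b)
    return tb <= ta and fa <= fb and (tb < ta or fa < fb)

# Every consistent false theory: a nonempty set of worlds excluding ACTUAL.
others = [w for w in WORLDS if w != ACTUAL]
false_theories = [frozenset(c) for r in range(1, len(others) + 1)
                  for c in combinations(others, r)]

# Exhaustive check of the Tichý-Miller result in miniature.
print(any(popper_closer(a, b)
          for a in false_theories
          for b in false_theories))  # False: no false theory beats another
```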
Miller put the result even more starkly: on Popper's definition, one theory can be closer to the truth than another only if the more verisimilar theory is true. Since the cases that matter for scientific progress involve comparing false theories, the definition leaves them all incomparable. The concepts of truth content and falsity content, as Popper defined them, do not behave in the way his definition of verisimilitude requires.
“Popper's qualitative account of verisimilitude is untenable. For any false theory, there exist rival false theories with more truth-content; but these same rivals always have more falsity-content as well. The notion of ‘being closer to the truth’ cannot be cashed out in terms of a simple comparison of truth-content and falsity-content.”
— David Miller, “Popper's Qualitative Theory of Verisimilitude” (1974)
The Tichý-Miller result does not show that the intuitive idea of one theory being closer to the truth than another is incoherent. It shows only that Popper's particular formalization of this idea fails. But it does deprive Popper's philosophy of a crucial formal tool, and the search for an adequate definition of verisimilitude has occupied many philosophers since.
Later Developments: Niiniluoto, Oddie, and Beyond
The failure of Popper's definition did not end work on verisimilitude. On the contrary, it stimulated a rich research programme in formal epistemology and philosophy of science. Several alternative approaches have been developed, each attempting to capture the intuitive idea of truth-likeness in a technically adequate way.
Niiniluoto's Truthlikeness
Ilkka Niiniluoto developed a sophisticated quantitative account of truthlikeness based on the distance between possible states of affairs. Rather than comparing truth content and falsity content (as Popper did), Niiniluoto defines truthlikeness as a measure of the “distance” between a theory and the truth in a suitably defined logical space. A theory is more truthlike the closer it brings us to the true state of the world — where closeness is measured using a metric on the set of possible worlds or state-descriptions.
Niiniluoto's approach avoids the Tichý-Miller problem because it does not rely on the problematic decomposition of theories into truth content and falsity content. Instead, it directly compares theories with respect to their overall fit with the truth. The framework is technically demanding, requiring a prior specification of the relevant logical space and a distance metric, but it provides a rigorous foundation for comparative judgments of truthlikeness.
Oddie's Likeness to Truth
Graham Oddie proposed a related but distinct approach based on the idea of “average distance” between a theory and the truth. Where Niiniluoto measures the distance from a theory to the truth by focusing on the closest possible world compatible with the theory, Oddie argues that we should consider the average distance of all worlds compatible with the theory from the actual world. This gives a more balanced picture: a theory that is compatible with worlds very close to the truth and worlds very far from it may be less truthlike than one compatible only with worlds moderately close to the truth.
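The disagreement between the two approaches can be made concrete with a toy model. The sketch below is illustrative only: Hamming distance between truth assignments stands in for the distance metric, the bare minimum is a deliberate simplification of Niiniluoto's actual min-sum measure, and the example theories are hypothetical.

```python
from itertools import product

# Worlds are truth assignments to four atoms; the "truth" is stipulated.
WORLDS = list(product([0, 1], repeat=4))
ACTUAL = (1, 1, 1, 1)

def hamming(w):
    """Distance of a world from the actual world (a crude stand-in metric)."""
    return sum(a != b for a, b in zip(w, ACTUAL))

def min_distance(theory):
    """Niiniluoto-style closeness (simplified): the nearest compatible world."""
    return min(hamming(w) for w in theory)

def avg_distance(theory):
    """Oddie-style closeness: average distance of all compatible worlds."""
    return sum(hamming(w) for w in theory) / len(theory)

# A scattered theory: one world adjacent to the truth, one maximally remote.
scattered = [(1, 1, 1, 0), (0, 0, 0, 0)]
# A moderate theory: only worlds at middling distance from the truth.
moderate = [(1, 1, 0, 0), (0, 0, 1, 1)]

print(min_distance(scattered), avg_distance(scattered))  # 1 2.5
print(min_distance(moderate), avg_distance(moderate))    # 2 2.0
# The minimum measure favours the scattered theory; the average favours the
# moderate one -- exactly the disagreement described in the text.
```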
The Content-Likeness Approach
A more recent approach, developed by Theo Kuipers and others, distinguishes between two dimensions of verisimilitude: likeness (how close the theory's claims are to the truth) and content (how much the theory says). On this view, a theory can become more verisimilar either by making more accurate claims (increasing likeness) or by making more claims that are at least approximately true (increasing content). This two-dimensional picture captures important aspects of scientific progress that the simpler approaches miss.
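Purely to illustrate the two-dimensional idea, here is a toy scoring sketch; the two functions below are ad hoc stand-ins, not Kuipers' actual definitions. A bold theory can outscore a cautious one on content while losing on likeness, so the dimensions genuinely pull apart.

```python
from itertools import product

WORLDS = list(product([0, 1], repeat=3))
ACTUAL = (1, 1, 1)

def likeness(theory):
    """Accuracy dimension: 1 minus the normalised average distance to truth."""
    total = sum(sum(a != b for a, b in zip(w, ACTUAL)) for w in theory)
    return 1 - total / (len(theory) * len(ACTUAL))

def content(theory):
    """Content dimension: the fraction of possible worlds the theory excludes."""
    return 1 - len(theory) / len(WORLDS)

bold_but_off = [(0, 0, 1)]                   # says a lot, most of it wrong
cautious_but_close = [(1, 1, 1), (1, 1, 0)]  # hedged, but near the truth

for name, t in [("bold", bold_but_off), ("cautious", cautious_but_close)]:
    print(name, round(likeness(t), 2), round(content(t), 2))
# bold 0.33 0.88  /  cautious 0.83 0.75: each wins on one dimension.
```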
Despite these advances, the problem of verisimilitude remains open. There is no consensus on the correct formal definition of truth-likeness, and some philosophers have argued that the project of defining verisimilitude in purely logical or formal terms is misguided. Perhaps what we mean by “closer to the truth” is irreducibly context-dependent and cannot be captured by a single formal measure.
Can We Measure Progress Without Verisimilitude?
The difficulties with verisimilitude have led some philosophers to seek alternative accounts of scientific progress that do not rely on the concept of truth-likeness at all. Several such alternatives have been proposed.
Problem-solving effectiveness: Larry Laudan argued that scientific progress consists in the increasing ability of theories to solve empirical and conceptual problems. A theory is progressive if it solves more problems than its predecessor, and a research tradition is progressive if its latest theories are more problem-solving effective than its earlier ones. This account avoids the difficulties of verisimilitude but at the cost of severing the connection between scientific progress and truth.
Predictive success: Some philosophers have argued that progress should be measured simply by the increasing predictive accuracy of scientific theories. A theory that makes more accurate predictions across a wider range of phenomena is progressive relative to one that makes less accurate or less wide-ranging predictions. This approach is attractive in its simplicity but raises questions about the relationship between predictive success and truth.
Structural preservation: A more sophisticated approach, associated with structural realism, argues that what is preserved across theory change — and what constitutes progress — is the mathematical structure of theories rather than their specific ontological claims. When Newton's theory was superseded by Einstein's, the mathematical relationships it described were approximately preserved (as a limiting case), even though the ontology changed dramatically.
The debate over how to characterize scientific progress remains one of the central questions in the philosophy of science. Verisimilitude, despite its technical difficulties, captures a powerful intuition: that science gets progressively closer to the truth about the natural world. Whether this intuition can be vindicated formally, or whether it must be replaced by a different account of progress, is an open question.
Summary and Key Takeaways
- Popper rejected confirmation and the probability of hypotheses, replacing them with corroboration — a measure of how well a theory has survived severe testing.
- Corroboration is backward-looking: it reports past performance without guaranteeing future reliability. This creates a serious pragmatic problem for the application of scientific knowledge.
- Popper introduced verisimilitude to account for scientific progress: science advances by replacing theories with ones closer to the truth.
- Tichý and Miller proved that Popper's formal definition of verisimilitude fails: no false theory can be shown to be closer to the truth than another using Popper's criteria.
- Later work by Niiniluoto, Oddie, and others has produced more sophisticated accounts of truthlikeness, but no consensus has been reached.
- Alternative accounts of progress (problem-solving, predictive success, structural preservation) avoid the difficulties of verisimilitude but raise questions of their own.
- The fundamental tension remains: can we make sense of scientific progress without some notion of approaching the truth?