Chapter 9: Criticisms of Falsificationism

Falsificationism, for all its elegance and influence, has faced sustained and powerful criticism from multiple directions. Philosophers of science have challenged both its logical foundations and its descriptive adequacy as an account of scientific practice. This chapter surveys the major criticisms, from the Duhem-Quine problem that threatens the very possibility of conclusive falsification, through Kuhn's and Lakatos's historical challenges, to Feyerabend's radical methodological anarchism.

The Duhem-Quine Problem

The most fundamental logical challenge to falsificationism is the Duhem-Quine thesis, which holds that no scientific hypothesis can be tested in isolation. Every empirical test of a hypothesis involves a complex web of auxiliary assumptions — about the reliability of instruments, the correctness of calibration, the absence of interfering factors, and much else. When a prediction derived from a hypothesis fails, the failure might be due to the hypothesis itself or to any one of the auxiliary assumptions.
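The logical point can be stated schematically (a standard textbook reconstruction, not Duhem's own notation):

```latex
% Naive falsification: a failed prediction refutes the hypothesis.
H \rightarrow O, \quad \neg O \;\vdash\; \neg H

% Duhem-Quine: in practice H entails O only together with
% auxiliary assumptions A_1, \dots, A_n, so a failed prediction
% refutes only the conjunction.
(H \wedge A_1 \wedge \dots \wedge A_n) \rightarrow O, \quad
\neg O \;\vdash\; \neg(H \wedge A_1 \wedge \dots \wedge A_n)
\;\equiv\; \neg H \vee \neg A_1 \vee \dots \vee \neg A_n
```

The conclusion is a disjunction: logic alone tells the scientist that something in the conjunction is false, but not which disjunct to reject.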

Pierre Duhem articulated this point in The Aim and Structure of Physical Theory (1906), arguing that in physics, a hypothesis is never tested alone but always in conjunction with a large body of theoretical and practical assumptions:


“The physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed.”
— Pierre Duhem, The Aim and Structure of Physical Theory (1906), p. 187

W.V.O. Quine radicalized Duhem's thesis in “Two Dogmas of Empiricism” (1951), arguing that any statement can be held true “come what may, if we make drastic enough adjustments elsewhere in the system.” For Quine, the unit of empirical significance is not the individual statement but the whole of science taken as a single interconnected web of beliefs.

The implications for falsificationism are severe. If no hypothesis can be tested in isolation, then no hypothesis can be conclusively falsified by observation. When a predicted observation fails to materialize, the scientist always has the logical option of blaming one of the auxiliary assumptions rather than the hypothesis under test. The asymmetry between verification and falsification, which is the foundation of Popper's philosophy, breaks down in practice.

Popper's Response

Popper was aware of the Duhem-Quine problem and attempted to address it through methodological conventions. He argued that while it is always logically possible to save a hypothesis by modifying auxiliary assumptions, such modifications are often ad hoc — that is, they are introduced solely to save the hypothesis and have no independent testable consequences. Popper proposed that ad hoc modifications should be prohibited by methodological convention: a modification of auxiliary hypotheses is acceptable only if it leads to new, independently testable predictions.

Critics have found this response insufficient. The distinction between ad hoc and non-ad hoc modifications is not always clear, and Popper's own criterion for ad hocness (that the modification must generate new testable predictions) is itself difficult to apply in practice. Moreover, some apparently ad hoc modifications have turned out to be genuinely progressive: the postulation of the planet Neptune to save Newtonian mechanics from the anomalous orbit of Uranus was “ad hoc” in the sense that it was introduced solely to explain the anomaly, yet it turned out to be correct.

Lakatos: Naive vs. Sophisticated Falsificationism

Imre Lakatos, Popper's most brilliant student, developed the most nuanced internal critique of falsificationism. In his influential paper “Falsification and the Methodology of Scientific Research Programmes” (1970), Lakatos distinguished three versions of falsificationism, each more sophisticated than the last.

Dogmatic (or naturalistic) falsificationism holds that theories can be conclusively falsified by observation because observation statements are infallible. Lakatos showed that this position is untenable because all observation statements are theory-laden and fallible. No observation statement is immune from revision.

Naive methodological falsificationism (which Lakatos attributed to Popper's early work) acknowledges the theory-ladenness of observation but maintains that certain “basic statements” can be accepted by conventional agreement as the basis for falsification. A theory is falsified when an accepted basic statement contradicts it. Lakatos argued that this version is too simplistic: it cannot account for the fact that scientists regularly ignore apparently falsifying instances and continue to work on theories that have been “falsified” by this criterion.

Sophisticated methodological falsificationism (which Lakatos developed as his own position) holds that a theory is “falsified” only when a better theory is available — one that explains everything the old theory explained plus some additional facts. On this view, falsification is not a two-place relation between a theory and an observation but a three-place relation between two rival theories and a body of evidence.

“For the sophisticated falsificationist a scientific theory T is falsified if and only if another theory T′ has been proposed with the following characteristics: (1) T′ has excess empirical content over T: that is, it predicts novel facts, that is, facts improbable in the light of, or even forbidden, by T; (2) T′ explains the previous success of T, that is, all the unrefuted content of T is included (within the limits of observational error) in the content of T′; and (3) some of the excess content of T′ is corroborated.”
— Imre Lakatos, “Falsification and the Methodology of Scientific Research Programmes” (1970), p. 116

Lakatos's sophisticated falsificationism represents a significant improvement over naive falsificationism, but it comes at a cost: it is no longer clear that it deserves the name “falsificationism” at all. If theories are only “falsified” relative to better alternatives, then the core Popperian idea — that theories can be refuted by observation — has been substantially diluted.

Kuhn: Scientists Don't Abandon Paradigms at First Anomaly

Thomas Kuhn's The Structure of Scientific Revolutions (1962) presented a historical challenge to falsificationism that complemented the logical challenges of Duhem and Quine. Kuhn argued that the actual history of science reveals a pattern fundamentally at odds with Popper's methodology.

During periods of “normal science,” Kuhn observed, scientists do not attempt to falsify their paradigm. On the contrary, they assume the paradigm is correct and treat anomalies as puzzles to be solved within the framework, not as potential falsifiers. When a prediction fails, the scientist's first response is to look for errors in the experimental setup, to modify auxiliary hypotheses, or simply to set the anomaly aside as a problem for future work. Only rarely, and only after a prolonged period of crisis, do scientists consider abandoning their paradigm.

“No process yet disclosed by the historical study of scientific development at all resembles the methodological stereotype of falsification by direct comparison with nature... If any and every failure to fit were ground for theory rejection, all theories ought to be rejected at all times.”
— Thomas Kuhn, The Structure of Scientific Revolutions (1962), p. 77

Kuhn's point is not merely that scientists are psychologically reluctant to abandon their theories (though they are). His deeper point is that normal science requires tenacious commitment to a paradigm: without it, the detailed, puzzle-solving work that constitutes the bulk of scientific activity could not proceed. If scientists abandoned their theories at every anomaly, as a strict falsificationist methodology would require, no theory would ever be developed in sufficient detail to reveal its true potential.

This critique strikes at the heart of Popper's methodology. If good science requires the very tenacity that Popper condemns as “dogmatic” and “unscientific,” then falsificationism as a normative methodology is fundamentally misguided. Kuhn did not conclude that science is irrational; rather, he argued that the rationality of science is more complex and historically situated than Popper's philosophy allows.

The Role of Ad Hoc Hypotheses

Popper condemned ad hoc hypotheses as the enemy of good science. A modification of a theory that serves only to accommodate a recalcitrant observation, without generating new testable predictions, reduces the theory's falsifiability and is therefore methodologically illegitimate. But the history of science reveals a more complex picture.

Consider the following cases where apparently “ad hoc” modifications turned out to be scientifically progressive:

  • Neptune: When Uranus's orbit deviated from the predictions of Newtonian mechanics, Adams and Le Verrier independently hypothesized the existence of an unseen planet whose gravitational influence was perturbing Uranus. This was “ad hoc” in the sense that it was introduced solely to save the theory, but it led to the dramatic confirmation of Neptune's existence in 1846.
  • The neutrino: When beta decay appeared to violate conservation of energy, Pauli postulated an undetectable particle (later named the neutrino) to save the conservation law. This was maximally ad hoc — the particle was literally defined by its ability to save the theory — yet the neutrino was eventually detected and is now a central part of particle physics.
  • Prout's hypothesis: William Prout hypothesized in 1815 that all atomic weights are whole-number multiples of hydrogen's weight. When Stas and others produced precise measurements showing that chlorine's atomic weight was approximately 35.5, the hypothesis appeared falsified. But Prout's supporters dismissed the anomalous values as artifacts of impure samples and imperfect technique; decades later, the discovery of isotopes showed that chlorine's non-integer weight reflects a mixture of isotopes, vindicating their refusal to abandon the hypothesis.

These examples suggest that the distinction between legitimate and illegitimate theory-saving modifications cannot be drawn in advance by methodological rules. Sometimes saving a theory from apparent falsification is the right thing to do; sometimes it is not. The judgment depends on contextual factors — the overall track record of the research programme, the availability of alternatives, the nature of the anomaly — that cannot be captured by a simple prohibition on ad hoc hypotheses.

Lakatos's Refinement

Lakatos attempted to refine Popper's prohibition on ad hoc hypotheses by distinguishing three types. An ad hoc₁ modification has no new empirical content; an ad hoc₂ modification has new content but none of it has been confirmed; an ad hoc₃ modification is not motivated by the heuristic of the research programme. While this taxonomy is more nuanced than Popper's blanket prohibition, it still faces the problem that the legitimacy of a modification often cannot be determined until long after it has been made.

Feyerabend's Critique: Methodological Anarchism

Paul Feyerabend, who had been a student of Popper's before becoming his most flamboyant critic, developed the most radical challenge to falsificationism (and to all other methodologies of science). In Against Method (1975), Feyerabend argued that the history of science reveals no universal methodological rules, and that every proposed rule has been violated at some point in ways that proved scientifically productive.

Feyerabend's central example was Galileo's advocacy of the Copernican system. He argued that Galileo violated virtually every methodological rule that Popper (and other methodologists) would endorse. Galileo used propaganda, aesthetic arguments, and outright distortion of the evidence to advance his cause. He ignored apparently falsifying observations (the absence of stellar parallax), relied on a theory of the telescope that was itself unconfirmed, and introduced auxiliary hypotheses that were ad hoc by any criterion. Yet Galileo was right, and his methods were productive.

“The only principle that does not inhibit progress is: anything goes... Given any rule, however ‘fundamental’ or ‘necessary’ for science, there are always circumstances when it is advisable not only to ignore the rule, but to adopt its opposite.”
— Paul Feyerabend, Against Method (1975), pp. 23–24

Feyerabend drew from this analysis a sweeping conclusion: there are no universal methodological rules of science, and any attempt to impose such rules will impede scientific progress. Falsificationism, like inductivism and all other methodological doctrines, is a Procrustean bed that distorts the rich, complex, and often messy reality of scientific practice.

Feyerabend's “methodological anarchism” provoked outrage among many philosophers and scientists, who accused him of irrationalism and relativism. But Feyerabend's position is more subtle than his rhetoric sometimes suggests. He did not deny that certain methods are useful in certain contexts; he denied only that any method is universally applicable. His target was not rationality as such but the pretension of philosophers to legislate rules that all scientists must follow.

The debate between Feyerabend and the methodologists raises a fundamental question: is there a fixed scientific method, or do the methods of science change with its theories and its historical context? If the latter, then the philosophy of science cannot be a purely a priori discipline but must engage with the history and sociology of science in ways that Popper's approach does not.

Historical Test Cases

Mercury's Perihelion

The anomalous precession of Mercury's perihelion provides a rich case study for evaluating falsificationism. By the mid-nineteenth century, it was known that Mercury's orbit precessed by about 43 arc-seconds per century more than Newtonian theory predicted. Was this a falsification of Newtonian mechanics?

On a strict falsificationist reading, the answer is yes: the theory made a prediction, the prediction was wrong, and the theory should have been abandoned. But this is not what happened. Instead, scientists proposed various ad hoc modifications to save Newton's theory: an unseen planet (Vulcan) orbiting between Mercury and the Sun; a slightly non-spherical shape of the Sun; interplanetary dust affecting Mercury's orbit. None of these proposals succeeded, but they were legitimate scientific responses to an anomaly, not the dogmatic theory-saving that Popper condemned.

The anomaly was eventually explained by Einstein's general relativity, which predicted exactly the observed precession. But the resolution came fifty years after the anomaly was identified. During that half-century, the scientific community was right to continue working with Newtonian mechanics despite the apparent falsification. A strict application of Popper's methodology would have required abandoning Newton's theory long before a viable alternative was available — a methodological disaster.

Prout's Hypothesis Revisited

The case of Prout's hypothesis illustrates another difficulty for falsificationism. Prout proposed in 1815 that all elements are composed of hydrogen, implying that all atomic weights should be whole-number multiples of hydrogen's weight. Increasingly precise measurements by Stas and others revealed that many atomic weights were not whole numbers (chlorine's weight of 35.5 being the most famous example), apparently falsifying the hypothesis.

Yet Prout's hypothesis was essentially correct. The non-integer atomic weights turned out to be the result of isotopic mixtures, as Soddy showed in 1913. If scientists had followed Popper's methodology and abandoned Prout's hypothesis at the first “falsification,” they would have thrown away a fundamental insight. The moral is that apparently falsifying evidence is not always what it seems, and the decision to persist with a “falsified” theory can sometimes be vindicated by later developments.
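The isotope resolution can be checked with simple arithmetic. Using approximate modern abundance figures for chlorine's two stable isotopes (round numbers assumed here for illustration), the weighted average of the mixture lands near the measured 35.5:

```python
# Back-of-the-envelope check of the isotope explanation for chlorine's
# "anomalous" atomic weight. Mass numbers and natural abundances are
# approximate round figures, assumed for illustration only.
isotopes = [(35, 0.758),  # chlorine-35, roughly 75.8% abundant
            (37, 0.242)]  # chlorine-37, roughly 24.2% abundant

# Weighted average over the isotopic mixture.
average_weight = sum(mass * fraction for mass, fraction in isotopes)
print(round(average_weight, 2))  # 35.48, close to the measured 35.5
```

Each isotope individually satisfies Prout's whole-number rule; only the mixture's average does not. This is why the precise measurements looked like a falsification while the underlying hypothesis was sound.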

Is Falsificationism Itself Falsifiable?

A favorite objection to falsificationism, often raised in introductory philosophy courses, is that falsificationism is self-refuting: the claim that “a theory is scientific only if it is falsifiable” is not itself a scientific claim, and it is not clear what observations could falsify it. If falsificationism is not falsifiable, then by its own criterion it is not scientific; and if it is not scientific, why should we accept it?

This objection, while clever, rests on a misunderstanding. Popper never claimed that falsifiability is a universal criterion of rational acceptability; he claimed that it is a criterion of scientific status. Falsificationism itself is not a scientific theory but a philosophical or methodological proposal — a meta-scientific claim about the nature of science. As such, it is not subject to its own criterion any more than the rules of chess are subject to the rules of chess.

However, a more sophisticated version of this objection has real force. If falsificationism is a philosophical proposal rather than a scientific theory, then how should it be evaluated? Popper suggested that it should be judged by its fruitfulness — its ability to illuminate the history of science and to provide useful methodological guidance. But this opens the door to precisely the kind of evaluation that has led many philosophers to conclude that falsificationism is inadequate: the history of science does not, in fact, conform to the falsificationist model.

“If the game of science is defined as the game of falsification, then some of the greatest achievements of science are not part of the game of science. If we define the game differently, then Popper's arguments against inductivism lose their force.”
— Hilary Putnam, “The ‘Corroboration’ of Theories” (1974)

In sum, while the simple self-refutation objection can be deflected, the deeper question remains: what kind of justification can a philosophy of science offer for itself? If it appeals to the history of science, it must contend with the historical evidence that scientists do not follow falsificationist methodology. If it appeals to logic alone, it must acknowledge the Duhem-Quine problem and the other logical difficulties examined in this chapter. The status of methodological norms in the philosophy of science remains one of the field's most challenging problems.

The Constructive Legacy of These Criticisms

The criticisms surveyed in this chapter have not destroyed falsificationism so much as refined and contextualized it. The core insight — that science progresses through the critical examination and rejection of inadequate theories — remains powerful, even if the details of Popper's formulation are problematic.

Several constructive lessons emerge from the debate:

  • Falsification is holistic: We cannot test hypotheses in isolation; we can only test them as parts of larger theoretical systems. The Duhem-Quine thesis shows that falsification is always directed at a conjunction of hypotheses, never at a single one.
  • Methodology must be historically informed: Kuhn and Feyerabend showed that a priori prescriptions for scientific method must be tested against the actual history of science. A methodology that condemns the greatest achievements of science as methodologically deficient has something wrong with it.
  • The unit of appraisal is larger than the individual theory: Lakatos's research programmes, Kuhn's paradigms, and Laudan's research traditions all suggest that scientific evaluation operates at the level of extended theoretical frameworks, not individual hypotheses.
  • Tenacity has a role in science: The decision to persist with a theory in the face of anomalies is not always dogmatic; it can be a rational response to the difficulties of premature theory abandonment.
  • Ad hocness is contextual: Whether a modification is ad hoc depends on contextual factors that cannot be specified in advance by methodological rules.

These lessons have shaped the post-Popperian philosophy of science, leading to more sophisticated and historically nuanced accounts of scientific rationality. The debates between Popper, Kuhn, Lakatos, and Feyerabend remain among the most illuminating in the history of the discipline, and their legacy continues to influence how we think about the nature and methods of science.