Chapter 7: Karl Popper & Falsifiability

Karl Raimund Popper (1902–1994) is widely regarded as one of the greatest philosophers of science of the twentieth century. His doctrine of falsificationism — the idea that what distinguishes science from non-science is the possibility of refutation, not the accumulation of confirming instances — transformed how philosophers, scientists, and the general public understand the nature of scientific knowledge. This chapter traces Popper's intellectual development, examines the logical foundations of falsifiability, and explores its implications for scientific methodology.

Popper's Intellectual Biography

Vienna: The Formative Years

Karl Popper was born in Vienna in 1902 to a prosperous and intellectually vibrant family. His father, Simon Siegmund Carl Popper, was a lawyer with a large personal library; his mother, Jenny Schiff, was a talented musician. Young Karl grew up in an atmosphere of intense intellectual engagement, absorbing the cultural richness of fin-de-siècle Vienna.

In the turbulent years after World War I, Popper was exposed to a remarkable range of intellectual movements. He briefly flirted with Marxism as a teenager before becoming disillusioned after a violent clash between communist demonstrators and police in which several young workers were killed. He attended Alfred Adler's lectures on individual psychology and worked in Adler's child-guidance clinics. He heard Einstein lecture in Vienna and was profoundly impressed by the physicist's willingness to specify conditions under which his general theory of relativity would be refuted.

These early experiences shaped Popper's philosophical outlook in decisive ways. The contrast between Einstein's approach and those of Marx, Freud, and Adler became the genesis of his demarcation criterion. As Popper later recalled:

“I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory.”
— Karl Popper, Conjectures and Refutations (1963), p. 34

The Vienna Circle and Logik der Forschung

Although Popper was never a member of the Vienna Circle, he engaged intensively with their ideas. Members of the Circle, including Rudolf Carnap and Herbert Feigl, recognized the originality of his thought and facilitated the publication of his first major work, Logik der Forschung (1934), in a series edited by Moritz Schlick and Philipp Frank. The book presented Popper's falsificationist philosophy as an alternative to the verificationism of the Circle, though the positivists tended to interpret Popper as a sort of fellow traveler who differed from them only on points of detail.

Popper himself was at pains to distinguish his position from that of the positivists. He rejected the verification principle of meaning, denied that metaphysics was meaningless, and criticized the positivists' reliance on induction. His demarcation criterion was intended not as a criterion of meaning (as the verification principle was) but as a criterion of the scientific status of theories.

New Zealand, London, and The Open Society

With the rise of Nazism, Popper fled Austria in 1937, taking a position at Canterbury University College in Christchurch, New Zealand. During the war years, he wrote The Open Society and Its Enemies (1945), a magisterial attack on historicism and totalitarianism in political philosophy that traced the intellectual roots of authoritarianism from Plato through Hegel to Marx.

In 1946, Popper moved to the London School of Economics (LSE), where he spent the rest of his career. At LSE, he built a remarkable department that included Imre Lakatos, Joseph Agassi, Paul Feyerabend (who had been a student of Popper's in Vienna), and Alan Musgrave. The debates between Popper and his students — particularly Lakatos and Feyerabend — produced some of the most important work in twentieth-century philosophy of science.

The Asymmetry of Verification and Falsification

The logical foundation of Popper's philosophy rests on a simple but profound asymmetry between universal statements and existential statements. A universal statement of the form “All swans are white” (formally: ∀x(Sx → Wx)) cannot be conclusively verified by any finite number of observations. No matter how many white swans we observe, the next swan might be black. However, the statement can be conclusively falsified by a single observation of a non-white swan: the existential statement “There exists a swan that is not white” (∃x(Sx ∧ ¬Wx)) contradicts the universal claim.

This asymmetry is a straightforward consequence of deductive logic. From the premises “All swans are white” and “This bird is a swan,” we can deduce “This bird is white.” If we then observe that the bird is black, we have a contradiction, and by modus tollens we can conclude that at least one of our premises is false. This is the logical backbone of falsification.
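The deductive skeleton of this argument can be set out explicitly. The schema below uses standard first-order notation for the swan example (the constant a names the observed bird); it is a reconstruction for this chapter, not Popper's own symbolism.

```latex
\begin{align*}
&\text{Theory:}      & &\forall x\,(Sx \rightarrow Wx)        & &\text{all swans are white}\\
&\text{Instance:}    & &Sa \rightarrow Wa                     & &\text{universal instantiation}\\
&\text{Observation:} & &Sa \wedge \neg Wa                     & &\text{a black swan } a\\
&\text{Hence:}       & &\neg(Sa \rightarrow Wa)               & &\text{the instance fails}\\
&\text{Hence:}       & &\neg\,\forall x\,(Sx \rightarrow Wx)  & &\text{modus tollens: the theory is false}
\end{align*}
```

No finite chain of the form Wa, Wb, Wc, … runs in the opposite direction: the universal statement never follows from its instances, which is exactly the asymmetry Popper exploits.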

“My proposal is based upon an asymmetry between verifiability and falsifiability; an asymmetry which results from the logical form of universal statements. For these are never derivable from singular statements, but can be contradicted by singular statements. Consequently it is possible by means of purely deductive inferences (with the help of the modus tollens of classical logic) to argue from the truth of singular statements to the falsity of universal statements.”
— Karl Popper, The Logic of Scientific Discovery (1959), p. 19

Popper took this asymmetry to be the key to the demarcation problem. Whereas the positivists had tried (and failed) to characterize science in terms of verification and confirmation, Popper proposed that science is characterized by falsifiability. A theory is scientific to the extent that it makes claims that could, in principle, be shown to be false by observation.

It is crucial to note that falsifiability, for Popper, is a logical property of theories, not a practical one. A theory is falsifiable if there exist possible observation statements that would contradict it. Whether anyone has actually attempted to falsify the theory, or whether the technology exists to do so, is irrelevant to its scientific status. What matters is the logical relationship between the theory and potential observations.

Falsifiability as a Criterion of Demarcation

Popper's demarcation criterion must be carefully distinguished from the positivists' verification principle. The verification principle was a criterion of meaning: it held that a non-tautological statement is meaningful only if it is verifiable by experience. Statements that fail this test — metaphysical, theological, ethical — were declared meaningless, mere “pseudo-sentences.”

Popper rejected this approach entirely. He held that many metaphysical claims are perfectly meaningful — indeed, that some metaphysical ideas (such as atomism) have been enormously fruitful in the history of science. His criterion of falsifiability was not a criterion of meaning but a criterion of demarcation: it distinguishes scientific theories from non-scientific ones without passing judgment on the meaningfulness or importance of the latter.

Examples: Einstein vs. Freud

Popper's favorite example of a genuinely scientific theory was Einstein's general theory of relativity. In 1919, Arthur Eddington led an expedition to observe a solar eclipse and test Einstein's prediction that light from distant stars would be bent by the sun's gravitational field by a precisely specified amount. Had the observations not matched the prediction, the theory would have been falsified. This is what Popper meant by a “risky prediction”: the theory stuck its neck out and could have been refuted.

By contrast, Popper argued that Freudian psychoanalysis and Adlerian individual psychology were unfalsifiable. Whatever behavior a person exhibited, it could be explained within the framework of the theory. A man who pushes a child into the water intending to drown it might be explained by the Freudian as suffering from repression, while a man who sacrifices his life to save a drowning child might be explained as having achieved sublimation. The theory, Popper claimed, could accommodate any observation whatsoever and therefore had no empirical content.

Falsifiability Is Not Falsification

A common misunderstanding of Popper's position conflates falsifiability with actual falsification. Popper did not claim that a theory must have been falsified to be scientific; he claimed that it must be falsifiable in principle. A theory that has survived every test thrown at it is still scientific, provided there exist tests that could have refuted it.

Nor did Popper claim that falsified theories are worthless. A theory that has been refuted may still contain important insights and may serve as the foundation for better theories. Newton's mechanics was falsified by the anomalous precession of Mercury's perihelion and by other relativistic effects, but it remains enormously useful and approximately true in a wide range of applications. The point is that science progresses by learning from its mistakes — by discovering where theories fail and developing better ones.

Degrees of Falsifiability and Testability

Falsifiability is not an all-or-nothing property for Popper. Theories can be more or less falsifiable, and Popper regarded higher falsifiability as a scientific virtue. A theory is more falsifiable when it excludes a greater range of possible observations — when it says more about the world and therefore exposes itself to more potential refutation.

Popper formalized this intuition through the notion of potential falsifiers. The set of potential falsifiers of a theory consists of all the basic statements (observation reports) that would contradict it. A theory T1 is more falsifiable than a theory T2 if the set of potential falsifiers of T1 is a proper superset of the set of potential falsifiers of T2.

For example, the statement “All planets move in ellipses” is more falsifiable than “All planets move in conic sections” because everything that could falsify the latter could also falsify the former, but not vice versa. Any observation of a non-elliptical orbit falsifies the first statement, but only an observation of a non-conic-section orbit falsifies the second.
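The superset relation can be made concrete with a toy model in which a theory is identified with the set of observation reports it forbids. The observation space and orbit labels below are invented for illustration; nothing here is Popper's own formalism.

```python
# Toy model: a theory's potential falsifiers = the set of possible
# observation reports it forbids. (Orbit labels invented for illustration.)
observations = {"ellipse", "parabola", "hyperbola", "spiral", "square"}

# "All planets move in ellipses" forbids every non-elliptical orbit.
falsifiers_ellipse = observations - {"ellipse"}

# "All planets move in conic sections" forbids only non-conic orbits.
falsifiers_conic = observations - {"ellipse", "parabola", "hyperbola"}

# The bolder theory has strictly more potential falsifiers:
# it is more falsifiable in Popper's sense.
assert falsifiers_conic < falsifiers_ellipse  # proper subset

print(sorted(falsifiers_ellipse))  # ['hyperbola', 'parabola', 'spiral', 'square']
print(sorted(falsifiers_conic))    # ['spiral', 'square']
```

The proper-subset check is exactly Popper's comparison: every falsifier of the conic-section theory is also a falsifier of the ellipse theory, but not conversely.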

“The more a theory forbids, the more it tells us about the world of experience. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.”
— Karl Popper, Conjectures and Refutations (1963), p. 36

This framework leads naturally to a methodological prescription: scientists should prefer bolder, more falsifiable theories over weaker, less falsifiable ones. A bold theory that survives a severe test tells us more about the world than a timid theory that survives an easy one. This is why Popper valued Einstein's precise numerical predictions over the vague qualitative “predictions” of psychoanalysis.

Basic Statements and the Empirical Basis

Falsification requires that we identify basic statements — singular observation statements that can serve as potential falsifiers. But what counts as a basic statement? This is what Popper called “the problem of the empirical basis,” and his solution reveals a surprising element of conventionalism in his otherwise rigorously logical framework.

Popper recognized that observation statements are themselves theory-laden. When a physicist reports that “the pointer on the ammeter reads 4.7,” this observation presupposes a theoretical understanding of ammeters, electrical current, measurement scales, and much else. There is no theory-free language of pure observation from which to construct basic statements.

Popper's solution was to acknowledge that the acceptance of basic statements involves a conventional element. The scientific community must decide, by agreement, which observation statements to accept as the basis for testing theories. These decisions are provisional and revisable — a basic statement accepted today may be questioned tomorrow — but at any given time, science requires a stable empirical basis against which theories can be tested.

“The empirical basis of objective science has thus nothing ‘absolute’ about it. Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or ‘given’ base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being.”
— Karl Popper, The Logic of Scientific Discovery (1959), p. 94

This metaphor of science as a building on piles driven into a swamp is one of Popper's most memorable images. It acknowledges that science has no absolutely certain foundation while maintaining that it can still be objective and rational. The objectivity of science lies not in the certainty of its foundations but in the intersubjective testability of its claims — the fact that different scientists can independently check and criticize each other's work.

Bold Conjectures and Severe Tests

Popper's methodology can be summarized in a single prescription: propose bold conjectures and subject them to severe tests. The bolder the conjecture — the more it risks, the more it forbids — the more informative it is if it survives testing. And the more severe the test — the more likely it was to falsify the theory if the theory were false — the more impressive the theory's survival.

A severe test is one that the theory is unlikely to pass unless it is at least approximately true. The Eddington eclipse observations of 1919 constituted a severe test of general relativity because the predicted light-bending effect was specific and quantitative: Einstein predicted a deflection of 1.75 arc-seconds for light grazing the sun, exactly twice the value predicted by Newtonian theory. The observation could easily have shown no deflection, or a deflection of the Newtonian value, or some other value entirely.

By contrast, a test is not severe if the theory could easily pass it even if it were false. Observing that the sun rises every morning does not constitute a severe test of celestial mechanics, because virtually any theory of the heavens would predict (or at least be compatible with) this observation. Severe tests are those that discriminate between the theory under test and its competitors.
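One rough way to put numbers on this contrast is a likelihood comparison: a test is severe when passing it is improbable unless the theory is at least approximately true. This is a later, probabilistic gloss rather than Popper's own formalism, and every probability below is invented for illustration.

```python
# Rough numerical gloss on severity: compare how likely a passing
# result is if the theory is true versus if it is false.
# All probabilities here are invented for illustration.

def severity(p_pass_if_true: float, p_pass_if_false: float) -> float:
    """Likelihood ratio: how strongly a passing result favors the theory."""
    return p_pass_if_true / p_pass_if_false

# Eddington 1919: the rival Newtonian account predicts half the
# deflection, so observing 1.75 arc-seconds would be very unlikely
# if general relativity were false.
eclipse_test = severity(p_pass_if_true=0.9, p_pass_if_false=0.05)

# "The sun rose this morning": almost any celestial theory passes.
sunrise_test = severity(p_pass_if_true=1.0, p_pass_if_false=0.99)

assert eclipse_test > sunrise_test  # the eclipse test discriminates; sunrise does not
```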

The Method of Conjectures and Refutations

Popper described the method of science as a process of “conjectures and refutations”:

  1. Begin with a problem or puzzle that demands explanation.
  2. Propose a bold conjecture — a tentative theory that goes beyond the available evidence.
  3. Deduce testable predictions from the conjecture, especially predictions that are “risky” in the sense that they are unlikely to be true unless the theory is at least approximately correct.
  4. Attempt to refute the theory by testing these predictions against observation and experiment.
  5. If the theory is refuted, learn from the failure and propose a new, improved conjecture.
  6. If the theory survives the test, it is “corroborated” — but never confirmed or proven.
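The six steps can be sketched as a control-flow skeleton. Every name here (propose_conjecture, derive_predictions, run_test) is a hypothetical placeholder standing in for creative and experimental work that no algorithm captures; only the shape of the cycle is meant to mirror Popper's steps.

```python
# Schematic sketch of the conjecture-and-refutation cycle.
# The callables are placeholders for creative and experimental work;
# only the control flow mirrors the six steps above.

def conjectures_and_refutations(problem, propose_conjecture,
                                derive_predictions, run_test, rounds=10):
    theory = propose_conjecture(problem)          # step 2: bold conjecture
    for _ in range(rounds):
        predictions = derive_predictions(theory)  # step 3: risky predictions
        failures = [p for p in predictions if not run_test(p)]  # step 4
        if failures:
            # Step 5: refuted -- learn from the failure, conjecture anew.
            theory = propose_conjecture((problem, failures))
        # Step 6: otherwise the theory is corroborated, never proven;
        # testing simply continues on the next round.
    return theory  # the best-corroborated conjecture so far
```

Note that the loop has no success exit: a surviving theory is only corroborated so far, never verified, which is the point of step 6.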

“The way in which knowledge progresses, and especially our scientific knowledge, is by unjustified (and unjustifiable) anticipations, by guesses, by tentative solutions to our problems, by conjectures. These conjectures are controlled by criticism; that is, by attempted refutations, which include severely critical tests.”
— Karl Popper, Conjectures and Refutations (1963), p. vii

This method is explicitly non-inductive. We do not arrive at our conjectures by generalizing from observations; rather, we invent them creatively and then test them deductively. The source of a hypothesis is irrelevant to its scientific status — it may come from a dream, a metaphysical speculation, or a mathematical analogy. What matters is whether it is testable and whether it survives severe tests.

Comparison with Logical Positivism

Although Popper's philosophy is sometimes conflated with logical positivism, the differences are fundamental. Understanding these differences is essential for grasping the distinctiveness of Popper's contribution.

Issue | Logical Positivism | Popper's Falsificationism
Criterion of demarcation | Verifiability (also a criterion of meaning) | Falsifiability (not a criterion of meaning)
Status of metaphysics | Meaningless pseudo-statements | Meaningful but not scientific
Induction | Central to scientific method | Rejected entirely; science is deductive
Confirmation | Theories gain probability through confirming instances | Theories are only “corroborated,” never confirmed
Goal of science | Verified or highly probable knowledge | Bold conjectures, approaching verisimilitude
Problem of induction | Attempt to justify or dissolve | Bypassed: science does not use induction

Perhaps the most fundamental difference concerns the role of induction. The positivists regarded induction as the backbone of scientific reasoning: we observe regularities in nature and generalize from them to form scientific laws. Hume's problem of induction was a challenge to be solved or dissolved, not a reason to abandon induction altogether.

Popper, by contrast, embraced Hume's critique and drew the radical conclusion that science does not and should not use induction. Scientific theories are not generalizations from experience but conjectures — creative inventions of the human mind — that are tested by deducing their consequences and comparing them with observation. The logic of science is not inductive but deductive: we deduce predictions from theories and use modus tollens to eliminate those theories whose predictions fail.

Popper on Probability and Scientific Theories

Popper made a counterintuitive but logically compelling argument about the relationship between probability and scientific content. A universal theory with great empirical content is logically improbable — the more a theory asserts, the more ways it can go wrong, and thus the lower its logical probability. A tautology has probability 1 but says nothing about the world; a bold universal theory has low probability but high informative content.
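The point can be checked with a toy “possible worlds” calculation, taking the logical probability of a statement to be the fraction of possible outcomes in which it holds. The die-roll sample space is an assumption chosen for illustration, not Popper's example.

```python
from fractions import Fraction

# Toy sample space: one roll of a fair six-sided die.
# Logical probability of a statement = fraction of possible worlds
# (outcomes) in which it holds.
worlds = range(1, 7)

def prob(statement):
    return Fraction(sum(statement(w) for w in worlds), len(worlds))

tautology  = lambda w: True    # forbids nothing, says nothing
weak_claim = lambda w: w <= 5  # forbids one outcome
bold_claim = lambda w: w == 3  # forbids five outcomes

assert prob(tautology) == 1                 # maximal probability, zero content
assert prob(weak_claim) > prob(bold_claim)  # more content, lower probability
```

The ordering is fully general: whatever a statement forbids, adding further prohibitions can only shrink the set of worlds in which it holds, so content and logical probability vary inversely.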

This means that the goal of science, on Popper's view, is not to find highly probable theories but to find highly improbable ones that nonetheless survive testing. The best scientific theories are those that are logically improbable (because they make very specific claims about the world) but empirically well-corroborated (because they have survived severe attempts at refutation).

“We do not seek highly probable theories but explanations; that is to say, powerful and improbable theories.”
— Karl Popper, The Logic of Scientific Discovery (1959), p. 399

This argument is central to Popper's rejection of Bayesian and inductivist approaches to confirmation theory. If good scientific theories are logically improbable, then an account of science based on the probability of hypotheses given evidence will systematically favor uninformative theories over informative ones — precisely the opposite of what science actually does.

Legacy and Assessment

Popper's falsificationism remains one of the most widely known and discussed positions in the philosophy of science. Among working scientists, “Is it falsifiable?” has become a standard criterion for evaluating claims to scientific status. The demarcation of science from pseudoscience via falsifiability has been invoked in legal proceedings, most notably in the U.S. Supreme Court's Daubert v. Merrell Dow Pharmaceuticals decision (1993), which cited falsifiability as one criterion for the admissibility of scientific expert testimony.

However, Popper's philosophy faces serious challenges, which will be explored in the subsequent chapters. The Duhem-Quine thesis threatens the possibility of conclusive falsification; Kuhn's historical work suggests that scientists do not in fact abandon theories at the first sign of falsification; and the failure of Popper's formal definition of verisimilitude undermines his account of scientific progress.

Nevertheless, the core insight of falsificationism — that scientific theories gain their empirical content from what they forbid, not from what they allow; that the willingness to risk refutation is the mark of genuine science; and that knowledge grows through the critical examination and elimination of error — continues to shape how we think about science and its place in human knowledge.