Chapter 13: Scientific Realism

What Is Scientific Realism?

Scientific realism is the view that science aims at — and to a significant extent succeeds in providing — a literally true account of the world, including its unobservable aspects. When physicists speak of electrons, biologists of genes, and geologists of tectonic plates, the realist maintains that these terms refer to real entities with the properties science ascribes to them. Our best scientific theories are not merely useful fictions or convenient instruments for prediction: they are approximately true descriptions of a mind-independent reality.

The realist position seems like common sense to many working scientists. As the physicist Steven Weinberg has remarked, it would be very strange if the success of our theories had nothing to do with their being on the right track about the underlying structure of reality. Yet scientific realism is a philosophically substantive thesis that requires careful articulation and defense.

Stathis Psillos, one of realism’s most thorough defenders, characterizes the position through three interlocking commitments: a metaphysical thesis about the existence of a mind-independent world, a semantic thesis about the literal interpretation of scientific theories, and an epistemic thesis about our justified belief in those theories.

The Three Dimensions of Realism

1. The Metaphysical Dimension

The world has a definite mind-independent structure. Objects, properties, and relations exist regardless of whether anyone observes or theorizes about them. The electron existed before J.J. Thomson discovered it in 1897; the structure of DNA existed before Watson and Crick modeled it in 1953. Science discovers rather than invents.

This metaphysical commitment distinguishes realism from idealism (the world is mind-dependent) and certain forms of social constructivism (scientific "facts" are socially constructed). It is the most widely shared commitment — even many anti-realists accept that there is a mind-independent world. The real disagreements concern the semantic and epistemic dimensions.

2. The Semantic Dimension

Scientific theories should be taken at face value — they are literally true or false descriptions of the world. When quantum mechanics says electrons have spin-1/2, or when evolutionary theory says species share common ancestors, these claims should be understood as straightforward assertions about reality, not as convenient shorthand for claims about observations.

This commitment opposes instrumentalism, which treats theoretical claims as mere tools for deriving predictions about observables. It also opposes the logical positivist program of "rational reconstruction," which sought to reduce theoretical statements to statements about observations through correspondence rules.

The semantic commitment carries a crucial corollary: theoretical terms genuinely refer. "Electron" picks out a real entity in the world, just as "table" does. This referential commitment is what makes the realism debate philosophically deep — it connects philosophy of science to philosophy of language and the theory of reference.

3. The Epistemic Dimension

Mature, successful scientific theories are approximately true, and successive theories in the same domain represent greater approximations to the truth. We are epistemically justified in believing what our best theories tell us about the world, including the unobservable parts.

This is the most contested dimension. Even if one grants that the world is mind-independent and that theories should be literally interpreted, the question remains: are we justified in believing our current theories are approximately true? The anti-realist argues we are not — the history of science shows that even our best theories are eventually overturned. The realist responds that approximate truth is compatible with revision, and that later theories typically preserve the successes of earlier ones.

Entity Realism vs Theory Realism

Not all realists make the same claims. An important distinction separates entity realism from theory realism. Theory realism holds that our best theories are approximately true as a whole — their laws, mechanisms, and theoretical descriptions are roughly correct. Entity realism makes the more modest claim that we have good reason to believe in the existence of certain unobservable entities, even if our theories about them may be significantly wrong.

The most famous defense of entity realism comes from Ian Hacking in his landmark work Representing and Intervening (1983). Hacking argues that we have the strongest grounds for believing in entities that we can manipulate and use as tools in experimental investigation. His slogan captures the idea memorably:

“So far as I’m concerned, if you can spray them then they are real.”

— Ian Hacking, Representing and Intervening (1983)

Hacking’s argument focuses on experimental practice rather than theoretical success. When physicists use electrons as tools to investigate other entities — for example, spraying a niobium ball with electrons or positrons to alter its charge in the search for free quarks — they must believe the electrons are real. One does not use a fiction as a tool. The experimenter’s confidence in the reality of electrons does not depend on the truth of any particular theory of the electron (quantum electrodynamics, the Standard Model). It depends on the practical ability to manipulate electrons reliably.

Nancy Cartwright offers a related argument: we should infer the existence of a theoretical entity when it provides the best causal explanation of an observed phenomenon. She calls this “inference to the most probable cause.” Like Hacking, Cartwright is skeptical of high-level theoretical laws while remaining realist about the entities those theories invoke.

Hacking’s Experimental Argument in Detail

Hacking’s argument unfolds through a close analysis of actual experimental physics. Consider the PEGGY II experiment at SLAC (Stanford Linear Accelerator Center), which Hacking discusses at length. The experiment used polarized electrons to investigate parity violation in weak neutral currents. The physicists needed to produce beams of electrons with specific polarization properties.

Hacking’s key insight is that the experimenters used several different, theoretically independent techniques to produce and detect polarized electrons. Each technique depended on different aspects of electron behavior — different theoretical descriptions. Yet they all converged on the same results. This convergence of independent methods provides powerful evidence that there is a real entity — the electron — that all these methods are latching onto, even if no single theory gets everything right about it.

The structure of Hacking’s argument can be formalized:

  1. Scientists routinely manipulate putative unobservable entities to investigate other phenomena.
  2. Successful manipulation requires a real entity to manipulate — one cannot causally interact with a fiction.
  3. Multiple independent methods converge on the same causal properties of the entity.
  4. Therefore, we have strong grounds for believing in the reality of such entities.

Critics have raised several objections. Van Fraassen argues that Hacking’s criterion is too vague — what counts as "manipulation" or "causal interaction" already presupposes a theoretical framework. Alan Musgrave has questioned whether entity realism is a stable position: if we believe entities are real, don’t we need some true theoretical description of them to pick them out?

Structural Realism

Worrall’s Compromise: Epistemic Structural Realism

In a celebrated 1989 paper, John Worrall proposed structural realism as “the best of both worlds” — a position that respects both the no-miracles intuition and the pessimistic meta-induction. His key example is the transition from Fresnel’s ether theory of light to Maxwell’s electromagnetic theory.

Fresnel’s theory posited a mechanical ether through which light propagated as a wave. The theory was spectacularly successful: it predicted the bright spot at the center of a circular shadow (the Poisson/Arago spot), a startling prediction confirmed by experiment. Yet the ether does not exist. The pessimistic meta-induction seems vindicated.

But Worrall noticed something crucial: Fresnel’s equations were retained — almost exactly — in Maxwell’s theory. The mathematical structure was preserved even as the ontology (the nature of the entities) changed radically. What was "carried over" from Fresnel to Maxwell was not a description of the ether but a set of structural relations captured in the mathematics.

“On the structural realist view, what Newton really discovered are the relationships between phenomena expressed in the mathematical equations of his theory.”

— John Worrall, “Structural Realism: The Best of Both Worlds?” (1989)

Epistemic structural realism thus claims: we can know the structure of the unobservable world (the relations between entities, as expressed in mathematical equations) but not the nature or intrinsic properties of the entities that instantiate that structure. This explains both why theories are successful (they get the structure right) and why ontologies change (they get the natures wrong).

Ontic Structural Realism

James Ladyman and Steven French have radicalized Worrall’s position into ontic structural realism (OSR). Where Worrall claims we can only know the structure, Ladyman and French argue that structure is all there is. There are no individual objects with intrinsic properties underlying the relations; the relations are ontologically fundamental.

OSR draws motivation from modern physics, particularly quantum mechanics. The indistinguishability of quantum particles (fermions and bosons) and quantum entanglement suggest that the notion of an "individual object" with an intrinsic identity may be physically untenable. In quantum field theory, particles are excitations of fields; it is the field structure, not the individual "particles," that is fundamental.

Critics have raised what Chakravartty calls the “nature of structure” problem: can we make sense of relations without relata? Doesn’t a relation require something to stand in the relation? OSR proponents respond that relations can be ontologically prior to their relata — a view supported by structuralist approaches in mathematics and certain interpretations of quantum mechanics.

The Success of Science as Evidence for Realism

The realist’s most powerful resource is the remarkable predictive and explanatory success of mature science. Consider some striking examples:

  • Novel predictions: General relativity predicted the bending of light by gravitational fields, confirmed by Eddington’s 1919 eclipse expedition. The theory also predicted gravitational waves, detected by LIGO in 2015 — a century after the prediction.
  • Convergence: The value of Avogadro’s number has been independently determined by over a dozen different methods (Brownian motion, X-ray crystallography, electrolysis, radioactive decay, etc.), all converging on the same value. This is what Perrin called “the miracle of convergence.”
  • Technological applications: Semiconductor physics, quantum mechanics, and molecular biology have enabled technologies (transistors, lasers, genetic engineering) that would be inexplicable if the underlying theories were fundamentally wrong.
  • Unification: Maxwell’s unification of electricity and magnetism, the electroweak unification, the Standard Model — these theoretical unifications successfully predict phenomena at the intersection of previously separate domains.

The realist argues that the best explanation of this success is approximate truth. Richard Boyd develops this as an abductive argument: realism is the best explanation of the instrumental reliability of scientific methodology. Scientists use background theories to design experiments, choose instruments, and interpret data. That this methodology reliably produces successful theories is best explained by the approximate truth of the background theories.

Selective Realism: Which Parts Should We Believe?

One of the most sophisticated contemporary realist strategies is selective realism — the view that we should believe the parts of our theories that are genuinely responsible for their empirical success, while remaining agnostic about the rest. Not all parts of a successful theory contribute equally to its success. Some elements are “idle wheels” that play no essential role in generating predictions.

Psillos develops this approach with his “divide et impera” strategy, distinguishing a theory’s “working posits” — those that play an indispensable role in the derivation of successful predictions — from its “presuppositional posits,” background assumptions that are not essential to that success. We should be realists about the working posits but not necessarily about the presuppositional ones.

Consider caloric theory. The theory’s success in predicting heat conduction and specific heats depended on certain structural features of the caloric — its conservation and its ability to flow from hot bodies to cold. These features were approximately preserved in thermodynamics. What was not preserved was the posit that caloric is a material substance. But this latter claim, Psillos argues, was a presuppositional posit that was not essential to the theory’s empirical success.

Philip Kitcher, who introduced the “working posit” terminology, makes the point historically: identify the posits that genuinely drive a theory’s success, and you will find that these are typically preserved through theory change. The pessimistic meta-induction trades on the false assumption that entire theories stand or fall together.

Critics, notably Kyle Stanford, object that selective realism faces a “problem of unconceived alternatives.” Even if we can identify the working posits of current theories, we have no guarantee that radically different theoretical frameworks — not yet conceived — might account for the same successes with entirely different working posits. History shows that revolutionary new theories often succeed for reasons their predecessors could not have anticipated.

The Challenge of Theory Change

The realist must grapple with the fact that scientific theories change — sometimes radically. If our current best theories are approximately true, what does it mean when they are eventually replaced by different theories? The realist has several options:

Convergent realism (Boyd, Putnam) holds that later theories are typically better approximations to the truth than earlier ones. Science converges on truth through a process of successive approximation. Newton’s theory was approximately true, and Einstein’s theory is a better approximation. This view requires showing that there is a meaningful notion of “approximate truth” and that it can be operationalized.

The notion of approximate truth (or “verisimilitude”) has proved technically challenging. Karl Popper attempted to define verisimilitude as a measure of a theory’s closeness to the truth, but Pavel Tichý and David Miller independently showed that Popper’s definition failed: no false theory can be closer to the truth than any other false theory, on Popper’s account. Subsequent attempts by Oddie, Niiniluoto, and others have developed more sophisticated measures of approximate truth, but the problem remains technically formidable.
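The Miller–Tichý result can be sketched compactly. In the standard notation (where $A_T$ and $A_F$ are the sets of true and false consequences of a theory $A$), Popper’s comparative definition and the argument against it run as follows:

```latex
% Popper's comparative definition of verisimilitude:
\[
A <_{v} B \;\iff\; A_T \subseteq B_T \ \text{ and } \ B_F \subseteq A_F,
\quad\text{with at least one inclusion proper.}
\]
% Miller--Tichy argument: suppose $B$ is false, with false consequence $f$,
% and suppose $A <_{v} B$.
%
% Case 1: $A_T \subsetneq B_T$. Pick $t \in B_T \setminus A_T$.
%   Then $t \wedge f$ is a false consequence of $B$ but not of $A$
%   (otherwise $A \vDash t$), so $B_F \not\subseteq A_F$. Contradiction.
%
% Case 2: $B_F \subsetneq A_F$. Pick $g \in A_F \setminus B_F$.
%   Then $f \to g$ is true (since $f$ is false) and follows from $A$
%   (since $g$ does), but not from $B$ (else $B \vDash g$),
%   so $A_T \not\subseteq B_T$. Contradiction.
%
% Hence no theory exceeds a false theory in Popper's verisimilitude:
% false theories are mutually incomparable.
```

Either way the definition’s conditions cannot jointly hold, which is why Popper’s account cannot rank one false theory above another.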

Referential continuity provides another realist strategy. Even when theories change, the central terms may continue to refer to the same entities. “Electron” referred to the same entity in Thomson’s plum pudding model, Bohr’s planetary model, and modern quantum mechanics, despite radical changes in the theories. The causal theory of reference (Kripke, Putnam) supports this claim: “electron” was introduced by a baptismal event involving causal contact with electrons, and subsequent uses of the term inherit this reference regardless of changes in theory.

Philip Kitcher develops a nuanced position with his notion of reference potentials. Different uses of a theoretical term may have different reference potentials — some securing reference through description, others through causal contact. Some reference potentials are “working” (contributing to the theory’s empirical success) and others are “idle.” Referential continuity of the working reference potentials explains the success of science, while referential failure of idle potentials explains the changes in ontology.

The Convergence Argument

One of the most compelling arguments for scientific realism is the convergence of independent methods. When multiple independent experimental techniques, each based on different theoretical principles, converge on the same result, this provides powerful evidence that they are all detecting the same underlying reality.

Jean Perrin’s classic work on Brownian motion provides the paradigmatic example. Perrin used thirteen different methods to determine Avogadro’s number — methods based on Brownian motion, radioactive decay, blackbody radiation, X-ray diffraction, electrolysis, and other phenomena. All converged on approximately the same value (around 6 × 10²³). The anti-realist must explain why thirteen independent methods would all converge on the same “fictional” number. The realist has a simple explanation: atoms are real, and Avogadro’s number is the actual number of atoms in a mole.
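The quantitative sense of “convergence” here can be illustrated with a toy calculation. The method names below follow Perrin’s list, but the numerical values are hypothetical stand-ins near the modern value of Avogadro’s number, not Perrin’s published figures; the point is only that independent estimates agreeing within a few percent of their mean is what convergence amounts to:

```python
# Toy illustration of convergence across independent methods.
# Values are illustrative stand-ins, not historical data.
estimates = {
    "brownian_motion":     6.1e23,
    "radioactive_decay":   6.0e23,
    "blackbody_radiation": 6.2e23,
    "xray_diffraction":    6.0e23,
    "electrolysis":        5.9e23,
}

# Pooled estimate: simple mean across methods.
mean = sum(estimates.values()) / len(estimates)

# Maximum relative deviation of any single method from the mean.
max_dev = max(abs(v - mean) / mean for v in estimates.values())

print(f"mean estimate: {mean:.2e}")
print(f"max relative deviation: {max_dev:.1%}")
```

If the methods were tracking nothing real, there would be no reason to expect the deviations to be small; agreement at the few-percent level across causally independent techniques is the realist’s datum.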

“The concordance of values obtained by entirely different methods is perhaps the most convincing evidence we have for the existence of atoms.”

— Wesley Salmon, Scientific Explanation and the Causal Structure of the World (1984)

This argument generalizes: whenever independent lines of evidence converge — independent measurements of the electron charge, independent confirmations of the age of the universe, independent evidence for plate tectonics — the realist has a natural explanation (they are all tracking the same truth), while the anti-realist must treat the convergence as coincidence or explain it in other terms.

Realism and Contemporary Physics

Contemporary physics poses distinctive challenges for scientific realism. Quantum mechanics, in particular, raises deep questions about what it means for a theory to be “approximately true.”

The measurement problem, the role of the observer, quantum entanglement, and the multiplicity of interpretations (Copenhagen, many-worlds, Bohmian, collapse theories) all complicate the realist picture. If we cannot even agree on what quantum mechanics says about the world, how can we claim it is approximately true? The realist might respond that the mathematical formalism of quantum mechanics — the Hilbert space structure, the Schrödinger equation, the Born rule — constitutes the structural content that is approximately true, regardless of which interpretation is correct.

String theory and other speculative programs in fundamental physics raise further questions. Theories that are mathematically elegant but lack empirical testability challenge the realist’s claim that success is the hallmark of truth. If string theory is not empirically testable (at present energies), can the realist be warranted in believing it? These challenges have led some philosophers to distinguish between “confirmed realism” (about empirically successful theories) and “aspirational realism” (about promising but untested theoretical frameworks).

Summary of Realist Positions

  • Full realism: our best theories are approximately true in their entirety (Boyd, Psillos)
  • Entity realism: manipulable entities are real, even if theories about them may be wrong (Hacking, Cartwright)
  • Epistemic structural realism: we know the structure, not the nature, of unobservables (Worrall)
  • Ontic structural realism: structure is all there is; there are no objects with intrinsic natures (Ladyman, French)
  • Selective realism: only the “working posits” of theories merit belief (Kitcher, Psillos)

Realism and Epistemic Values

The realism debate connects to broader questions about the role of values in science. Realists argue that truth is the central epistemic value — the aim of science is to discover truths about the world. Anti-realists respond that other values — empirical adequacy, predictive power, simplicity, fruitfulness, explanatory power — are more appropriate evaluative criteria for scientific theories.

Larry Laudan has argued that the focus on truth is misguided; science aims at empirical adequacy and problem-solving effectiveness, and these aims can be achieved without approximate truth. Bas van Fraassen holds that the “theoretical virtues” that guide theory choice (simplicity, elegance, unification) are pragmatic rather than epistemic — they make theories more useful but do not indicate truth. The realist responds that the persistent, robust success of theories that exemplify these virtues is best explained by their being truth-tracking features of the world.

Key Readings

  • Boyd, R. (1983). “On the Current Status of the Issue of Scientific Realism.” Erkenntnis, 19, 45–90.
  • Hacking, I. (1983). Representing and Intervening. Cambridge University Press. [Chapters 1, 5, 6, 16]
  • Psillos, S. (1999). Scientific Realism: How Science Tracks Truth. Routledge. [Chapters 4, 6, 7]
  • Worrall, J. (1989). “Structural Realism: The Best of Both Worlds?” Dialectica, 43, 99–124.
  • Ladyman, J. (1998). “What is Structural Realism?” Studies in History and Philosophy of Science, 29, 409–424.
  • Chakravartty, A. (2007). A Metaphysics for Scientific Realism. Cambridge University Press. [Chapters 2–4]
  • Kitcher, P. (1993). The Advancement of Science. Oxford University Press. [Chapter 5]
  • Stanford, K. (2006). Exceeding Our Grasp. Oxford University Press. [Chapters 1–3]

Discussion Questions

  1. Is entity realism a stable position? Can we believe in entities without believing some theory about them is approximately true?
  2. Does structural realism genuinely capture what we care about when we say science discovers truths about the world? Or is “knowing only the structure” too thin?
  3. Can we identify the “working posits” of a theory in a non-circular way — that is, without already knowing which parts of the theory are true?
  4. What would it take to convince you that an unobservable entity is real? Is Hacking’s manipulation criterion sufficient?
  5. How should the realist respond to Stanford’s problem of unconceived alternatives?

Historical Note: The Realism Revival

Scientific realism experienced a dramatic revival in the 1960s and 1970s after the collapse of logical positivism. Several factors contributed. The failure of the observational/theoretical distinction (as shown by Putnam, Achinstein, and others) undermined the positivist strategy of reducing theoretical claims to observational ones. Kuhn’s work on scientific revolutions, while often read as anti-realist, actually helped realism by showing that theory change involved genuine cognitive content rather than mere shifts in convention.

J.J.C. Smart’s Philosophy and Scientific Realism (1963) was an early manifesto, arguing that it would be a “cosmic coincidence” if the entities posited by science did not exist. Hilary Putnam’s articulation of the no-miracles argument in 1975 gave realism its most powerful tool. Richard Boyd provided the philosophical infrastructure with his sophisticated defense of abductive reasoning about unobservables. By the late 1970s, scientific realism was the dominant position in anglophone philosophy of science — a dominance challenged but never overthrown by van Fraassen’s constructive empiricism.

Today, the debate continues to evolve, with most participants adopting nuanced positions that acknowledge the partial validity of both sides. The “wholesale” realism of the 1970s has given way to selective, structural, and semi-realist positions that are more cautious about which aspects of our theories warrant belief. Similarly, contemporary anti-realism is more sophisticated than the instrumentalism of Mach and Duhem, acknowledging the genuine achievements of science while resisting the claim that it delivers truth about unobservable reality.