Chapter 23: Interpretations of Quantum Mechanics
The measurement problem, the nature of the wave function, and the competing visions of quantum reality that have divided physicists and philosophers for a century.
Quantum mechanics is the most empirically successful theory in the history of science. Its predictions have been confirmed to extraordinary precision – the anomalous magnetic moment of the electron, for instance, is predicted to better than ten parts per billion. Yet nearly a century after its formulation, there is no consensus on what quantum mechanics tells us about reality. The mathematical formalism is agreed upon; its interpretation is not.
At the heart of the interpretive problem lies the measurement problem: quantum mechanics describes physical systems evolving deterministically according to the Schrödinger equation, yet measurements appear to produce definite outcomes drawn randomly from the possible values. The formalism seems to require two incompatible dynamical rules – smooth, deterministic evolution between measurements and discontinuous, stochastic "collapse" upon measurement. But what counts as a measurement? Where does the boundary lie? These questions have generated a remarkable proliferation of interpretive frameworks.
This chapter surveys the major interpretations of quantum mechanics, from the Copenhagen orthodoxy through many-worlds and Bohmian mechanics to QBism and collapse theories, examining the philosophical commitments and challenges of each.
The Measurement Problem: Schrödinger's Cat
The measurement problem can be stated precisely. Quantum mechanics represents the state of a system by a vector $|\psi\rangle$ in a Hilbert space. The Schrödinger equation governs its time evolution:
$$i\hbar \frac{\partial}{\partial t}|\psi\rangle = \hat{H}|\psi\rangle$$
This equation is linear, which means that if $|\psi_1\rangle$ and $|\psi_2\rangle$ are solutions, so is any superposition $\alpha|\psi_1\rangle + \beta|\psi_2\rangle$. Consider an electron in a superposition of spin-up and spin-down:
$$|\psi\rangle = \frac{1}{\sqrt{2}}(|\uparrow\rangle + |\downarrow\rangle)$$
When this electron interacts with a measuring device, the linearity of quantum mechanics implies that the combined system evolves into an entangled superposition:
$$\frac{1}{\sqrt{2}}(|\uparrow\rangle|\text{device reads up}\rangle + |\downarrow\rangle|\text{device reads down}\rangle)$$
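The linearity argument can be checked directly in a toy model. The sketch below (a two-state "device" and a CNOT-style interaction unitary are illustrative modelling choices, not from the text) shows that unitary evolution forces the entangled superposition:

```python
import numpy as np

# Electron in an equal superposition of spin-up (|0>) and spin-down (|1>)
electron = np.array([1, 1]) / np.sqrt(2)
# Measuring device starts in a "ready" state |0>
device = np.array([1, 0])

# A CNOT-style unitary models the measurement interaction:
# the device flips to "reads down" exactly when the electron is spin-down
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])

joint = U @ np.kron(electron, device)

# Linearity yields (|up, reads up> + |down, reads down>)/sqrt(2),
# never a product state with a definite pointer reading
expected = np.array([1, 0, 0, 1]) / np.sqrt(2)
```

Because $U$ acts linearly, it cannot map a superposed input to a single definite pointer state; that is the measurement problem in miniature.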
But we never observe measuring devices in superpositions – we always see a definite outcome. Schrödinger dramatised this with his famous cat thought experiment (1935): couple the quantum system to a cat's survival, and quantum mechanics predicts a superposition of alive and dead cat.
"One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following diabolical device ... one would, according to the $\psi$-function of the entire system, have the living and the dead cat mixed or smeared out in equal parts." – Erwin Schrödinger (1935), translation by John D. Trimmer
The measurement problem is the problem of explaining how and why definite outcomes emerge from quantum superpositions. Every interpretation of quantum mechanics is, fundamentally, an attempt to solve (or dissolve) this problem.
The Copenhagen Interpretation: Bohr's Complementarity
The "Copenhagen interpretation," associated primarily with Niels Bohr and Werner Heisenberg, has been the textbook orthodoxy for most of the history of quantum mechanics. In reality, Bohr and Heisenberg held somewhat different views, and "the Copenhagen interpretation" is more a family of related positions than a single doctrine.
The core commitments include:
- Complementarity: Wave and particle descriptions are complementary aspects of quantum phenomena that cannot be simultaneously applied. The experimental arrangement determines which aspect is manifested.
- Classical/quantum divide: There is a necessary division between the quantum system (described by the wave function) and the classical measuring apparatus (described in ordinary language). Bohr insisted that measurement results must be expressed in classical terms.
- Anti-realism about the quantum state: The wave function does not describe the objective state of the system; it is a tool for calculating the probabilities of measurement outcomes.
"There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." – Attributed to Niels Bohr (as reported by Aage Petersen)
Critics object that the Copenhagen interpretation does not solve the measurement problem so much as stipulate it away. The division between classical and quantum domains is vague and ad hoc. And the refusal to provide an ontology for quantum systems leaves the interpretation explanatorily empty. As John Bell memorably asked: "What exactly qualifies some physical systems to play the role of 'measurer'?"
The EPR Argument and Bell's Theorem
In 1935, Einstein, Podolsky, and Rosen (EPR) published an argument intended to show that quantum mechanics is incomplete – that there exist "elements of reality" not captured by the wave function. Their argument exploits quantum entanglement: when two particles are prepared in an entangled state, a measurement on one particle instantaneously determines the outcome of a corresponding measurement on the other, regardless of their spatial separation.
EPR assumed two principles: locality (no instantaneous action at a distance) and realism (physical quantities have definite values independent of measurement). Given these assumptions, the correlations between entangled particles imply that the outcomes are predetermined by "hidden variables" not included in the quantum-mechanical description. Hence quantum mechanics is incomplete.
In 1964, John Bell proved a theorem of extraordinary philosophical significance. He showed that any local hidden-variable theory makes predictions satisfying an inequality – Bell's inequality – that quantum mechanics violates:
$$|E(a,b) - E(a,c)| \leq 1 + E(b,c)$$
where $E(a,b)$ is the correlation between measurements along directions $a$ and $b$. Experimental tests – from Alain Aspect's pioneering experiments (1982) to the loophole-free tests of 2015 – have consistently confirmed the quantum-mechanical predictions and violated Bell's inequality.
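The quantum violation can be reproduced from the formalism itself. The sketch below computes singlet-state correlations $E(a,b) = -\cos(a-b)$ and exhibits a violation of the inequality (the angle choices $a=0$, $b=\pi/3$, $c=2\pi/3$ are a standard illustrative example, not from the text):

```python
import numpy as np

# Pauli matrices and the spin singlet state (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(theta_a, theta_b):
    """Correlation <(sigma.a)(sigma.b)> for spin axes in the x-z plane."""
    def spin(theta):
        return np.cos(theta) * sz + np.sin(theta) * sx
    op = np.kron(spin(theta_a), spin(theta_b))
    return np.real(singlet.conj() @ op @ singlet)

a, b, c = 0.0, np.pi / 3, 2 * np.pi / 3
lhs = abs(E(a, b) - E(a, c))   # = 1.0
rhs = 1 + E(b, c)              # = 0.5
# lhs > rhs: Bell's inequality is violated by the quantum predictions
```

Any local hidden-variable model would have to satisfy `lhs <= rhs` for these angles, which the singlet correlations refuse to do.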
"Bell's theorem is the most profound discovery of science." – Henry Stapp (1975)
The philosophical upshot is that at least one of the EPR assumptions must be abandoned. Either locality fails (there is genuine nonlocal influence), or realism fails (physical quantities do not have definite values prior to measurement), or both. Different interpretations of quantum mechanics make different choices: Bohmian mechanics sacrifices locality; Copenhagen-style interpretations sacrifice realism; some approaches question the very framework of the argument.
The Many-Worlds Interpretation
Hugh Everett III proposed the "relative state" formulation of quantum mechanics in his 1957 doctoral thesis. The key move is radical: take the Schrödinger equation literally and universally. There is no collapse; the wave function of the universe evolves unitarily at all times. What appears to be collapse is actually branching: when a measurement occurs, the universe splits into branches, one for each possible outcome. In each branch, a copy of the observer sees a definite result.
On the many-worlds interpretation (MWI), after the spin measurement:
$$\frac{1}{\sqrt{2}}(|\uparrow\rangle|\text{observer sees up}\rangle + |\downarrow\rangle|\text{observer sees down}\rangle)$$
Both branches are equally real. In one branch, the observer sees spin-up; in the other, spin-down. There is no fact of the matter about which outcome "really" occurred – both did, in different branches.
The MWI has two principal virtues: it takes the formalism at face value (no ad hoc collapse postulate), and it is fully deterministic. But it faces serious challenges:
- The probability problem: If every outcome occurs, what does it mean to say that one outcome is more probable than another? The Born rule assigns probabilities $|\alpha|^2$ and $|\beta|^2$ to outcomes, but in a deterministic multiverse where all outcomes are actual, the meaning of these probabilities is obscure.
- The preferred basis problem: What determines the "branches"? Why does the universe split into spin-up and spin-down branches rather than some other decomposition? Decoherence is widely invoked to solve this, but whether it fully succeeds is debated.
- Ontological extravagance: The MWI postulates an enormous (possibly infinite) number of unobservable branches. Is this a violation of Ockham's razor? Defenders respond that the MWI is parsimonious in its laws, even if profligate in its ontology.
David Deutsch and David Wallace have developed sophisticated arguments that Bayesian decision theory, when applied to agents in an Everettian multiverse, recovers the Born rule. Whether these arguments succeed remains one of the most actively debated questions in the foundations of quantum mechanics.
Bohmian Mechanics: Hidden Variables and Nonlocality
David Bohm (1952) developed a deterministic, hidden-variable interpretation of quantum mechanics (building on earlier work by de Broglie in 1927). In Bohmian mechanics, particles always have definite positions, and their trajectories are guided by the wave function through the guidance equation:
$$\frac{d\mathbf{Q}_k}{dt} = \frac{\hbar}{m_k} \text{Im} \frac{\nabla_k \psi}{\psi}(\mathbf{Q}_1, \ldots, \mathbf{Q}_N, t)$$
The wave function $\psi$ evolves according to the Schrödinger equation as usual, but particles have definite positions $\mathbf{Q}_k$ at all times. If the initial positions are distributed according to $|\psi|^2$ (the "quantum equilibrium hypothesis"), Bohmian mechanics reproduces all the statistical predictions of standard quantum mechanics.
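As a sanity check on the guidance equation, a plane wave $\psi = e^{ikx}$ should give every particle the uniform velocity $\hbar k / m$. A minimal numerical sketch (one particle in one dimension, with units $\hbar = m = 1$ and $k = 2$ chosen purely for illustration):

```python
import numpy as np

hbar, m, k = 1.0, 1.0, 2.0
x = np.linspace(0, 10, 1001)
dx = x[1] - x[0]

# Plane wave psi = exp(i k x)
psi = np.exp(1j * k * x)

# Guidance equation: v = (hbar/m) * Im(grad psi / psi),
# with the gradient approximated by finite differences
grad_psi = np.gradient(psi, dx)
v = (hbar / m) * np.imag(grad_psi / psi)
# v is (numerically) constant and equal to hbar*k/m = 2.0
```

For less trivial wave functions (superpositions, wave packets) the same formula yields position-dependent velocities, which is where the characteristic Bohmian trajectories come from.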
Bohmian mechanics has several philosophical virtues: it provides a clear ontology (particles with definite positions), it is fully deterministic, and the measurement problem does not arise (measurements simply reveal pre-existing particle positions). There is no collapse; the wave function always evolves unitarily.
The principal cost is nonlocality. The guidance equation makes the velocity of each particle depend on the instantaneous positions of all other particles, regardless of distance. This is required by Bellâs theorem: any hidden-variable theory that reproduces quantum-mechanical predictions must be nonlocal.
"Is it not clear from the smallness of the scintillation on the screen that we have to do with a particle? And is it not clear, from the diffraction and interference patterns, that the motion of the particle is directed by a wave?" – John S. Bell, Speakable and Unspeakable in Quantum Mechanics (1987)
Bell himself was sympathetic to Bohmian mechanics, viewing it as proof that the measurement problem could be solved with a clear ontology, and that the widespread dismissal of hidden variables was based on confusions rather than genuine impossibility theorems.
Decoherence and Its Philosophical Significance
Decoherence is the process by which a quantum system becomes entangled with its environment, causing the interference terms in the density matrix to decay exponentially fast. For a system in a superposition of states $|a\rangle$ and $|b\rangle$, interaction with an environment initially in state $|E_0\rangle$ produces:
$$(\alpha|a\rangle + \beta|b\rangle)|E_0\rangle \to \alpha|a\rangle|E_a\rangle + \beta|b\rangle|E_b\rangle$$
When the environment states are (approximately) orthogonal ($\langle E_a | E_b \rangle \approx 0$), the reduced density matrix of the system becomes (approximately) diagonal:
$$\rho_{\text{system}} \approx |\alpha|^2 |a\rangle\langle a| + |\beta|^2 |b\rangle\langle b|$$
This looks like a classical probability distribution over definite outcomes. Decoherence explains why macroscopic superpositions are never observed in practice: the environment "measures" the system on fantastically short timescales (of order $10^{-20}$ seconds for a macroscopic object).
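The step from the entangled state to the diagonal reduced density matrix can be verified by an explicit partial trace. A minimal sketch (the amplitudes $\alpha = 0.6$, $\beta = 0.8$ and a two-dimensional environment are illustrative choices):

```python
import numpy as np

alpha, beta = 0.6, 0.8  # satisfies |alpha|^2 + |beta|^2 = 1

# System basis states |a>, |b> entangled with orthogonal environment states
E_a = np.array([1, 0], dtype=complex)
E_b = np.array([0, 1], dtype=complex)
state = (alpha * np.kron(np.array([1, 0]), E_a)
         + beta * np.kron(np.array([0, 1]), E_b))

# Full density matrix of system + environment (4x4)
rho = np.outer(state, state.conj())

# Partial trace over the environment: reshape to (s, e, s', e')
# and sum over the paired environment indices
rho_sys = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
# rho_sys is diagonal: diag(|alpha|^2, |beta|^2), with no interference terms
```

If `E_a` and `E_b` overlapped instead of being orthogonal, the off-diagonal terms of `rho_sys` would survive in proportion to $\langle E_a | E_b \rangle$, which is exactly the sense in which decoherence is a matter of degree.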
However, decoherence alone does not solve the measurement problem. The diagonal density matrix is mathematically equivalent to a classical mixture only if one of the outcomes actually obtains. Decoherence explains why interference is unobservable, but it does not explain why one outcome occurs rather than another (or whether all outcomes occur in different branches). As Wojciech Zurek, one of the principal architects of decoherence theory, himself acknowledges, decoherence must be supplemented by an interpretation – whether Everettian branching, Bohmian trajectories, or something else.
QBism: Quantum Bayesianism
QBism (originally "Quantum Bayesianism," now sometimes construed as "Quantum Bettabilitarianism"), developed by Christopher Fuchs, Rüdiger Schack, and Carlton Caves, offers a radically subjectivist interpretation. On the QBist view, the quantum state is not a description of an external physical system but a representation of an individual agent's degrees of belief about the outcomes of future measurements.
The Born rule probabilities are personal probabilities in the Bayesian sense – they express the agent's expectations, not objective features of the world. Measurement outcomes are personal experiences of the agent, not objective events in an external world that all agents must agree upon.
"A quantum state is not something out there in nature. A quantum state is a mathematical object that an agent uses to assign probabilities to his or her future experiences." – Christopher Fuchs (2010)
QBism dissolves the measurement problem by denying that collapse is a physical process. When an agent obtains a measurement result, they simply update their quantum state (their beliefs), just as a Bayesian agent updates probabilities upon learning new information. There is no physical mystery about "what happens during measurement" any more than there is a mystery about what happens to a probability when you learn that a coin landed heads.
Critics charge that QBism is solipsistic or idealist, that it renders the remarkable empirical success of quantum mechanics inexplicable, and that it abandons the scientific aspiration to describe an objective, mind-independent reality. QBists respond that their view is neither solipsistic (the external world exists; it just does not have quantum states) nor instrumentalist (quantum mechanics is used not merely for prediction but to guide the agentâs engagement with reality). The debate continues.
Collapse Theories: GRW
Ghirardi, Rimini, and Weber (1986) proposed a spontaneous collapse theory (GRW) that modifies the Schrödinger equation to include random, spontaneous localisation events. Each particle has a small probability per unit time ($\sim 10^{-16}$ per second) of undergoing a spontaneous "hit" that localises its wave function around a random point.
For a single particle, hits are so rare as to be unnoticeable. But a macroscopic object contains $\sim 10^{23}$ particles, so hits occur $\sim 10^{7}$ times per second. Since the particles are entangled in a macroscopic superposition, a hit on any one particle collapses the entire superposition. This naturally explains why macroscopic superpositions are never observed without requiring a measurement/observer to trigger collapse.
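The scaling argument is simple arithmetic, sketched here (the rate $\lambda \sim 10^{-16}\,\mathrm{s}^{-1}$ and particle count $\sim 10^{23}$ come from the text; the framing as Python is purely illustrative):

```python
# GRW back-of-the-envelope: the total hit rate scales with particle number
lam = 1e-16        # spontaneous localisation rate per particle, per second
N_macro = 1e23     # particles in a macroscopic object

rate_single = lam                 # one particle: ~1 hit per 10^16 seconds
rate_macro = lam * N_macro        # macroscopic object: ~10^7 hits per second
mean_time_macro = 1 / rate_macro  # superposition survives only ~10^-7 seconds
```

A single particle thus goes hundreds of millions of years between hits, while a cat-sized superposition collapses in a tenth of a microsecond: the theory interpolates smoothly between quantum microphysics and definite macroscopic outcomes.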
The GRW theory is a genuine rival to standard quantum mechanics, not merely an interpretation. It makes slightly different empirical predictions – in particular, the collapse process introduces a tiny amount of energy non-conservation and a slight broadening of wave packets. These effects are currently below experimental sensitivity, but ongoing experiments are approaching the relevant regime.
Philosophically, collapse theories have the advantage of providing a clear, precise, and observer-independent dynamics. Their cost is the need for new physical constants (the collapse rate and localisation width) that are put in by hand rather than derived from more fundamental principles. Whether the ontology involves a mass-density field (GRWm) or "flash" events in spacetime (GRWf) is a further philosophical question explored by Allori, Goldstein, Tumulka, and Zanghì.
The Ontology of the Wave Function
A cross-cutting question in the foundations of quantum mechanics concerns the ontological status of the wave function itself. Is $\psi$ a real physical entity (an ontic state), or is it merely a tool for calculation (an epistemic state)?
The wave function realist holds that $\psi$ is a real field, but one that lives not in ordinary three-dimensional space but in the $3N$-dimensional configuration space of all $N$ particles. This is a startling metaphysical conclusion: fundamental reality is not three-dimensional but has as many dimensions as there are degrees of freedom in the universe.
The primitive ontology approach (advocated by Goldstein, Dürr, Zanghì, Allori, and others) resists this conclusion. On this view, the fundamental ontology consists of entities in three-dimensional space (particles in Bohmian mechanics, mass-density fields in GRWm, flashes in GRWf). The wave function is a law-like entity – part of the nomological structure that governs the behaviour of the primitive ontology, analogous to the Hamiltonian in classical mechanics.
The PBR theorem (Pusey, Barrett, and Rudolph, 2012) showed that, under certain assumptions, $\psi$-epistemic models – in which the wave function represents incomplete knowledge of an underlying reality – are inconsistent with quantum-mechanical predictions. This does not settle the debate, as the assumptions can be questioned, but it narrows the space of viable interpretations.