Chapter 16: Lattice QFT
Lattice quantum field theory replaces continuous spacetime with a discrete lattice, providing the only known rigorous non-perturbative definition of quantum field theories like QCD. Path integrals become finite-dimensional integrals amenable to Monte Carlo simulation. This approach has yielded precision calculations of hadron masses, decay constants, and the QCD coupling — and provided first-principles evidence for confinement.
Derivation 1: Discretizing the Path Integral
Replace continuous spacetime with a hypercubic lattice of spacing $a$ and volume $V = (Na)^4$. Fields are defined at lattice sites $x = na$ (with $n_\mu = 0, 1, \ldots, N-1$). Derivatives become finite differences:
$\partial_\mu\phi(x) \to \frac{\phi(x + a\hat{\mu}) - \phi(x)}{a}$
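As a quick sanity check (my own illustration, not from the text), applying the forward difference to a smooth test function shows the expected $\mathcal{O}(a)$ convergence:

```python
import numpy as np

# Forward difference of phi(x) = sin(x) at x = 1.0; exact derivative is cos(1).
# For this one-sided scheme the error shrinks linearly with the spacing a.
x = 1.0
errs = []
for a in [0.1, 0.05, 0.025]:
    approx = (np.sin(x + a) - np.sin(x)) / a
    errs.append(abs(approx - np.cos(x)))
    print(f"a = {a:6.3f}   |error| = {errs[-1]:.2e}")
# halving a roughly halves the error, as expected for a first-order scheme
```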
For a scalar field with mass $m$ and $\lambda\phi^4$ interaction, the lattice action is:
$S = a^4 \sum_x \left[\sum_\mu \frac{(\phi(x+a\hat{\mu}) - \phi(x))^2}{2a^2} + \frac{m^2}{2}\phi^2(x) + \frac{\lambda}{4}\phi^4(x)\right]$
The path integral becomes a finite-dimensional (ordinary) integral:
$Z = \prod_x \int d\phi(x) \, e^{-S_E[\phi]}$
where $S_E$ is the Euclidean action (obtained by Wick rotation $t \to -i\tau$). The Euclidean path integral has the form of a statistical mechanics partition function with $e^{-S_E}$ playing the role of the Boltzmann weight.
The lattice serves as a UV regulator: the lattice spacing $a$ provides a natural cutoff $\Lambda \sim \pi/a$. Crucially, the lattice preserves exact gauge invariance at every lattice spacing — unlike other regularization schemes (e.g., dimensional regularization) which modify the theory in more subtle ways. The lattice also makes the path integral manifestly finite (a product of ordinary integrals), providing a mathematically rigorous definition of the quantum field theory.
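To make the "ordinary integral" point concrete, here is a toy example of my own: a periodic two-site 1D lattice, where $Z$ and $\langle\phi^2\rangle$ can be evaluated by brute-force quadrature (the parameters $a=1$, $m^2=1$, $\lambda=0.5$ are arbitrary illustrative choices):

```python
import numpy as np

# Toy model: periodic two-site 1D lattice, a = 1, m^2 = 1, lambda = 0.5.
# The path integral Z = ∫ dphi_0 dphi_1 exp(-S_E) is an ordinary 2D
# integral, evaluated here on a grid.
a, m2, lam = 1.0, 1.0, 0.5

def action(p0, p1):
    kin = ((p1 - p0) ** 2 + (p0 - p1) ** 2) / (2 * a)   # two periodic links
    pot = a * (0.5 * m2 * (p0**2 + p1**2) + 0.25 * lam * (p0**4 + p1**4))
    return kin + pot

grid = np.linspace(-5, 5, 401)
d = grid[1] - grid[0]
P0, P1 = np.meshgrid(grid, grid)
W = np.exp(-action(P0, P1))
Z = W.sum() * d * d
phi2 = (P0**2 * W).sum() * d * d / Z   # <phi^2> at a single site
print(f"Z = {Z:.4f}   <phi^2> = {phi2:.4f}")
```

The same Boltzmann weight $e^{-S_E}$ that appears here is what Monte Carlo methods sample when the lattice is too large for direct quadrature.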
Kenneth Wilson (1974): Wilson proposed the lattice formulation of gauge theories, introducing the link variable and plaquette action. This provided the first non-perturbative framework for studying confinement and earned him the 1982 Nobel Prize in Physics.
Derivation 2: The Wilson Gauge Action
Gauge fields on the lattice are not placed at sites but on links between sites. The fundamental variable is the link variable:
$U_\mu(x) = \mathcal{P}\exp\left(ig\int_x^{x+a\hat{\mu}} A_\mu(y)\,dy^\mu\right) \approx e^{igaA_\mu(x)} \in SU(N)$
The link variable is a group element, not a Lie algebra element. The smallest gauge-invariant object is the plaquette — the product of links around an elementary square:
$U_{\mu\nu}(x) = U_\mu(x) U_\nu(x+a\hat{\mu}) U_\mu^\dagger(x+a\hat{\nu}) U_\nu^\dagger(x)$
The Wilson gauge action is:
$S_W = \beta \sum_{x,\mu<\nu} \left(1 - \frac{1}{N}\text{Re}\,\text{Tr}\,U_{\mu\nu}(x)\right)$
where $\beta = 2N/g^2$. In the continuum limit ($a \to 0$), expanding $U_\mu \approx 1 + igaA_\mu + \ldots$ recovers the Yang-Mills action:
$S_W \to \frac{1}{4}\int d^4x \, F_{\mu\nu}^a F^{a\mu\nu} + \mathcal{O}(a^2)$
Exact gauge invariance: The lattice formulation preserves exact gauge invariance at every lattice spacing — not just in the continuum limit. Under a gauge transformation $\Omega(x) \in SU(N)$, $U_\mu(x) \to \Omega(x)U_\mu(x)\Omega^\dagger(x+a\hat{\mu})$, and the plaquette transforms as $U_{\mu\nu}(x) \to \Omega(x)U_{\mu\nu}(x)\Omega^\dagger(x)$, keeping the trace invariant.
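The transformation law above can be verified numerically. The sketch below (my construction) builds one plaquette from random SU(2) links and checks that $\text{Re}\,\text{Tr}\,U_{\mu\nu}$ is unchanged by a random gauge transformation at each corner:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    # Random SU(2) element from a normalized quaternion (a, b, c, d)
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    a, b, c, d = q
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# Four links around one plaquette; corners 0=x, 1=x+mu, 2=x+mu+nu, 3=x+nu
U = [random_su2() for _ in range(4)]   # U_mu(x), U_nu(x+mu), U_mu(x+nu), U_nu(x)
plaq = U[0] @ U[1] @ U[2].conj().T @ U[3].conj().T

# Gauge transformation: U_mu(x) -> Omega(x) U_mu(x) Omega(x+mu)^dagger
O = [random_su2() for _ in range(4)]
V = [O[0] @ U[0] @ O[1].conj().T,   # link x -> x+mu
     O[1] @ U[1] @ O[2].conj().T,   # link x+mu -> x+mu+nu
     O[3] @ U[2] @ O[2].conj().T,   # link x+nu -> x+mu+nu
     O[0] @ U[3] @ O[3].conj().T]   # link x -> x+nu
plaq2 = V[0] @ V[1] @ V[2].conj().T @ V[3].conj().T

# The plaquette transforms as Omega(x) U_{mu nu} Omega(x)^dagger: same trace
print(np.trace(plaq).real, np.trace(plaq2).real)
```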
Derivation 3: Monte Carlo Sampling
Expectation values on the lattice are computed as importance-sampled integrals:
$\langle\mathcal{O}\rangle = \frac{1}{Z}\int \mathcal{D}U \, \mathcal{O}[U] \, e^{-S[U]} \approx \frac{1}{N_\text{conf}}\sum_{i=1}^{N_\text{conf}} \mathcal{O}[U^{(i)}]$
where configurations $U^{(i)}$ are drawn from the probability distribution $P[U] \propto e^{-S[U]}$ using Markov Chain Monte Carlo.
Metropolis Algorithm
The Metropolis algorithm generates a Markov chain of configurations:
1. Propose: Select a site/link and propose a random update $\phi \to \phi'$
2. Compute: Calculate the change in action $\Delta S = S[\phi'] - S[\phi]$
3. Accept/reject: Accept with probability $\min(1, e^{-\Delta S})$
This satisfies detailed balance and ergodicity, guaranteeing convergence to the correct equilibrium distribution. For gauge theories, the Hybrid Monte Carlo (HMC) algorithm, which combines molecular dynamics with Metropolis accept/reject, is the method of choice.
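The three Metropolis steps can be sketched on a single-site toy action (the action $S = \phi^2/2 + \phi^4/4$ is my illustrative choice, not the full lattice problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def S(phi):
    # Toy single-site action: S = phi^2/2 + phi^4/4 (illustrative choice)
    return 0.5 * phi**2 + 0.25 * phi**4

phi, n_acc = 0.0, 0
chain = []
for _ in range(200_000):
    prop = phi + rng.uniform(-1.0, 1.0)          # 1. propose a random update
    dS = S(prop) - S(phi)                        # 2. compute Delta S
    if dS < 0 or rng.random() < np.exp(-dS):     # 3. accept with min(1, e^{-dS})
        phi, n_acc = prop, n_acc + 1
    chain.append(phi)

chain = np.array(chain[10_000:])                 # drop thermalization
print(f"acceptance = {n_acc / 200_000:.2f}   <phi^2> = {(chain**2).mean():.3f}")
```

The measured $\langle\phi^2\rangle$ converges to the exact quadrature value, which is the detailed-balance guarantee in action.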
Statistical errors: Monte Carlo estimates have statistical errors that decrease as $1/\sqrt{N_\text{conf}}$. Autocorrelations between successive configurations reduce the effective number of independent samples. Critical slowing down near phase transitions can severely impede convergence.
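The effect of autocorrelations can be demonstrated on a synthetic Monte Carlo history (an AR(1) chain, my stand-in for correlated configurations): binning the data into blocks longer than the autocorrelation time exposes the true error that the naive estimate misses.

```python
import numpy as np

# Synthetic correlated "Monte Carlo history": AR(1) chain with
# autocorrelation time of order 1/(1 - rho).
rng = np.random.default_rng(2)
rho, n = 0.95, 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = rho * x[i - 1] + rng.normal()

naive = x.std(ddof=1) / np.sqrt(n)               # ignores correlations
bins = x[: n - n % 500].reshape(-1, 500).mean(axis=1)
binned = bins.std(ddof=1) / np.sqrt(len(bins))   # blocks of 500 "sweeps"
print(f"naive error = {naive:.4f}   binned error = {binned:.4f}")
```

The binned error is several times the naive one here; quoting the naive error would badly overstate the precision.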
Derivation 4: Confinement on the Lattice
The Wilson loop provides a non-perturbative probe of confinement. For a rectangular loop of size $R \times T$:
$W(R,T) = \frac{1}{N}\langle\text{Tr}\,\mathcal{P}\prod_{(x,\mu)\in C} U_\mu(x)\rangle$
For large $T$, the Wilson loop measures the static quark-antiquark potential:
$W(R,T) \sim e^{-V(R)T}$
Wilson's confinement criterion: if $\langle W(R,T)\rangle \sim e^{-\sigma RT}$ (area law), then $V(R) = \sigma R$ — the potential grows linearly, indicating confinement. Lattice simulations of SU(3) gauge theory confirm the area law with string tension $\sigma \approx (440 \text{ MeV})^2$.
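The extraction of $V(R)$ from ratios of Wilson loops at successive $T$ can be illustrated on synthetic area-law data (an assumed functional form with a perimeter term, not real simulation output):

```python
import numpy as np

# Synthetic area-law data (assumed form, for illustration):
# W(R,T) = exp(-sigma*R*T - c*(R+T)) with a perimeter term c.
# The ratio of loops at successive T isolates the static potential.
sigma, c = 0.05, 0.2
W = lambda R, T: np.exp(-sigma * R * T - c * (R + T))

for R in range(1, 6):
    V = -np.log(W(R, 8) / W(R, 7))    # V(R) = -ln[W(R,T+1)/W(R,T)]
    print(f"R = {R}   V(R) = {V:.3f}   sigma*R + c = {sigma * R + c:.3f}")
```

The ratio cancels the $T$-dependent perimeter contribution, leaving the linearly rising potential $\sigma R$ (plus an $R$-dependent constant).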
Strong Coupling Expansion
At strong coupling ($\beta \ll 1$), the area law can be proven analytically:
$\langle W(R,T)\rangle \approx \left(\frac{\beta}{2N^2}\right)^{RT/a^2}$
giving $\sigma a^2 = -\ln(\beta/2N^2)$. Numerical simulations show that confinement persists to weak coupling (the continuum limit) with no phase transition in between, providing strong evidence that QCD confines.
Precision results: Modern lattice QCD calculations reproduce the hadron spectrum to percent-level accuracy. The proton mass, the light hadron spectrum, and the QCD coupling constant $\alpha_s(m_Z) = 0.1185 \pm 0.0008$ (FLAG 2021) are all determined from first-principles lattice simulations.
Derivation 5: The Continuum Limit
Physical results are obtained in the continuum limit $a \to 0$. This requires tuning the bare coupling to a critical point. For asymptotically free theories, the continuum limit is at $g \to 0$ ($\beta \to \infty$):
$a(\beta) \propto \Lambda_\text{lat}^{-1} \exp\left(-\frac{1}{2\beta_0 g^2}\right) \to 0 \text{ as } g \to 0$
Lattice artifacts (discretization errors) appear as corrections in powers of $a$:
$m_\text{lat} = m_\text{phys}\left(1 + c_1 (a\Lambda)^2 + c_2 (a\Lambda)^4 + \ldots\right)$
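The continuum extrapolation amounts to fitting measured quantities as a polynomial in $a^2$ and evaluating at $a^2 = 0$. A minimal sketch with mock, noise-free data (all numbers are illustrative, not real lattice results):

```python
import numpy as np

# Mock continuum extrapolation: "measured" masses at four lattice
# spacings, with a pure O(a^2) artifact built in by hand.
a = np.array([0.12, 0.09, 0.06, 0.04])      # lattice spacings (arbitrary units)
m_phys, c1 = 0.938, 1.5                     # target mass and artifact coefficient
m_lat = m_phys * (1 + c1 * a**2)            # a^2 scaling, no statistical noise

# Linear fit in a^2, extrapolated to a^2 = 0
coef = np.polyfit(a**2, m_lat, 1)
m_cont = np.polyval(coef, 0.0)
print(f"extrapolated mass = {m_cont:.4f} (input {m_phys})")
```

In a real analysis each point carries a statistical error and the fit is repeated with higher-order terms to estimate the systematic uncertainty.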
Symanzik Improvement
The leading discretization artifacts ($\mathcal{O}(a)$ for Wilson fermions, $\mathcal{O}(a^2)$ for the Wilson gauge action) can be reduced by adding higher-dimension terms to the lattice action. The Symanzik improvement program systematically constructs improved actions:
$S_\text{improved} = S_W + c_\text{SW} a^5 \sum_x \bar{\psi}\sigma_{\mu\nu}F^{\mu\nu}\psi + \ldots$
With the Sheikholeslami-Wohlert (clover) term tuned appropriately, the leading discretization errors are reduced from $\mathcal{O}(a)$ to $\mathcal{O}(a^2)$ for fermion actions, dramatically accelerating convergence to the continuum.
Computational challenge: State-of-the-art lattice QCD calculations use lattices of size $64^3 \times 128$ or larger, with lattice spacings down to $a \approx 0.04$ fm. Including dynamical fermions (sea quarks) requires inverting the Dirac operator — the dominant computational cost — making lattice QCD one of the largest consumers of supercomputer time worldwide.
Computational Analysis
This simulation implements a full Metropolis Monte Carlo for a 1D $\lambda\phi^4$ scalar field on the lattice. It generates thermalized configurations, measures the two-point correlator to extract the effective mass, monitors the Monte Carlo history for equilibration, and illustrates the continuum extrapolation procedure.
Lattice QFT: Monte Carlo Simulation & Continuum Limit
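Since the interactive code is not reproduced here, the following is a minimal stand-alone reconstruction of the described simulation: Metropolis updates for a 1D $\lambda\phi^4$ field, with the translation-averaged two-point correlator and an effective mass measured after thermalization. Lattice size and couplings are my illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
N, m2, lam = 32, 0.5, 0.1          # sites, bare mass^2, coupling (a = 1)

def local_action(phi, i, val):
    # Terms of S that contain site i, with phi[i] replaced by val
    l, r = phi[(i - 1) % N], phi[(i + 1) % N]
    return (0.5 * ((r - val)**2 + (val - l)**2)
            + 0.5 * m2 * val**2 + 0.25 * lam * val**4)

def sweep(phi, step=1.0):
    # One Metropolis sweep: propose a local update at every site
    for i in range(N):
        old = phi[i]
        new = old + step * rng.uniform(-1.0, 1.0)
        dS = local_action(phi, i, new) - local_action(phi, i, old)
        if dS < 0 or rng.random() < np.exp(-dS):
            phi[i] = new

phi = np.zeros(N)
for _ in range(500):                # thermalization sweeps
    sweep(phi)

corr = np.zeros(N)
n_meas = 4000
for _ in range(n_meas):
    sweep(phi)
    for t in range(N):              # translation-averaged C(t) = <phi(0) phi(t)>
        corr[t] += np.mean(phi * np.roll(phi, -t))
corr /= n_meas

# Effective mass from the correlator at small t (exponential-decay region)
m_eff = np.log(corr[1] / corr[2])
print(f"C(0..3) = {corr[:4].round(4)}   m_eff = {m_eff:.3f}")
```

With these parameters the effective mass comes out near the free-field value $m \approx \sqrt{m^2}$, shifted slightly by the interaction; a production code would also bin the measurements and fit a $\cosh$ to account for the periodic boundary.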
Applications of Lattice QFT
Hadron Spectrum from First Principles
Lattice QCD reproduces the observed hadron masses to percent-level precision. The proton mass $m_p = 938$ MeV arises almost entirely from QCD dynamics (gluon energy and quark kinetic energy) rather than the Higgs mechanism — the up and down quark masses contribute only ~10 MeV. A landmark calculation by the BMW collaboration (2008) determined the light hadron spectrum with sub-percent accuracy:
Proton: $m_p = 938.0 \pm 1.5$ MeV (lattice) vs 938.3 MeV (experiment)
Pion: $m_\pi = 135 \pm 2$ MeV (lattice) vs 135.0 MeV (experiment)
Omega baryon: $m_\Omega = 1672 \pm 3$ MeV (lattice) vs 1672.5 MeV (experiment)
CKM Matrix Elements
Lattice calculations of form factors and decay constants enable extraction of CKM matrix elements from experimental decay rates. Key quantities include:
$f_\pi = 130.2 \pm 0.8 \text{ MeV}, \quad f_K = 155.7 \pm 0.3 \text{ MeV}, \quad f_{B_s} = 230.3 \pm 1.3 \text{ MeV}$
The ratio $f_K/f_\pi$ determines $|V_{us}|$, and $B$ meson form factors determine $|V_{cb}|$ and $|V_{ub}|$. Tensions between inclusive and exclusive determinations of these CKM elements remain an active area of research.
QCD Phase Diagram
At finite temperature, lattice simulations probe the deconfinement transition. For physical quark masses, the transition is a smooth crossover at $T_c \approx 155$ MeV. The equation of state $p(T)$ is a crucial input for hydrodynamic simulations of the quark-gluon plasma created in heavy-ion collisions at RHIC and the LHC.
Muon $g-2$
The anomalous magnetic moment of the muon $a_\mu = (g-2)/2$ is sensitive to all sectors of the Standard Model. The hadronic vacuum polarization (HVP) contribution is the dominant source of theoretical uncertainty and is now being computed on the lattice with competitive precision. The BMW collaboration's result for the HVP contribution is consistent with the experimental measurement, though there remains tension with the data-driven determination.
Computational resources: A single gauge configuration on a $64^3 \times 128$ lattice with physical pion mass requires ~10,000 core-hours. A typical calculation needs $\sim 1000$ configurations, with analysis of systematic errors requiring multiple lattice spacings and volumes. Lattice QCD is one of the largest consumers of supercomputer time globally, and progress closely tracks advances in high-performance computing and algorithmic development.
The Fermion Doubling Problem
Naively discretizing the Dirac action on a lattice produces 16 fermion species in 4D instead of one — the fermion doubling problem. The Nielsen-Ninomiya no-go theorem (1981) states that a lattice fermion action cannot simultaneously be: (1) local, (2) have the correct continuum limit, (3) preserve chiral symmetry, and (4) have no doublers.
Different fermion discretizations sacrifice different properties:
Wilson fermions: Add a term $\sim a\bar{\psi}\Box\psi$ that lifts doublers to mass $\sim 1/a$, but explicitly breaks chiral symmetry ($\mathcal{O}(a)$ errors)
Staggered fermions: Distribute the 16 species among lattice sites, reducing them to 4 "tastes." Preserves a remnant chiral symmetry. The fourth root trick ($\det(D)^{1/4}$) removes extra tastes but is theoretically controversial.
Domain wall / overlap fermions: Use a fifth dimension (or its mathematical equivalent) to achieve exact chiral symmetry on the lattice at the cost of significantly higher computational expense
Ginsparg-Wilson relation (1982): The exact chiral symmetry condition on the lattice is $\{\gamma_5, D\} = aD\gamma_5 D$, where$D$ is the lattice Dirac operator. This is satisfied by the overlap operator (Neuberger, 1998), providing a mathematically rigorous definition of chiral fermions on the lattice. This resolved a decades-old problem in lattice field theory.
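The Ginsparg-Wilson relation can be checked numerically for the free overlap operator. The sketch below (my construction: a 2D free field in momentum space with $\gamma_1 = \sigma_1$, $\gamma_2 = \sigma_2$, $\gamma_5 = \sigma_3$, $a = 1$) builds $D = 1 + A(A^\dagger A)^{-1/2}$ from the Wilson kernel $A = D_W - 1$ and verifies $\{\gamma_5, D\} = D\gamma_5 D$ at every lattice momentum:

```python
import numpy as np

# Gamma matrices in 2D Euclidean space: gamma_1,2 = sigma_1,2, gamma_5 = sigma_3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
g5 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

def overlap(k1, k2, m0=-1.0, r=1.0):
    # Free momentum-space Wilson kernel A = D_W(m0); overlap D = 1 + A (A^dag A)^{-1/2}
    A = (1j * (np.sin(k1) * s1 + np.sin(k2) * s2)
         + (r * (2 - np.cos(k1) - np.cos(k2)) + m0) * I)
    w, V = np.linalg.eigh(A.conj().T @ A)       # A^dag A is positive definite here
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T
    return I + A @ inv_sqrt

L = 8
worst = 0.0
for n1 in range(L):
    for n2 in range(L):
        k1, k2 = 2 * np.pi * n1 / L, 2 * np.pi * n2 / L
        D = overlap(k1, k2)
        gw = g5 @ D + D @ g5 - D @ g5 @ D       # Ginsparg-Wilson defect
        worst = max(worst, np.abs(gw).max())
print(f"max |{{g5,D}} - D g5 D| over all momenta = {worst:.2e}")
```

The defect vanishes to machine precision, and one can also check that the doubler momenta (e.g. $k = (\pi, 0)$) acquire mass of order $1/a$ rather than staying light.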
Modern Developments and Future Directions
Sign Problem and Finite Density
At finite baryon chemical potential $\mu_B$, the fermion determinant becomes complex, making the path integral weight $e^{-S}$ non-positive — the infamous sign problem. This prevents direct Monte Carlo simulation of the QCD phase diagram at finite density, relevant for neutron stars and heavy-ion collisions. Approaches to circumvent the sign problem include Taylor expansion, analytic continuation from imaginary $\mu$, complex Langevin dynamics, and the Lefschetz thimble method.
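A toy model of my own makes the severity concrete: reweighting a Gaussian with a complex phase $e^{i\mu x}$ requires measuring the average phase, whose exact value $e^{-\mu^2/2}$ falls below the Monte Carlo noise already at moderate $\mu$:

```python
import numpy as np

# Sign-problem toy model: Z(mu) = ∫ dx exp(-x^2/2) exp(i mu x).
# Sampling the phase-quenched weight exp(-x^2/2) and reweighting
# means estimating <exp(i mu x)> = exp(-mu^2/2), which decays much
# faster than the 1/sqrt(N) statistical error shrinks.
rng = np.random.default_rng(3)
x = rng.normal(size=1_000_000)

results = {}
for mu in [1.0, 3.0, 5.0]:
    phase = np.exp(1j * mu * x)
    est = phase.mean().real
    err = phase.real.std() / np.sqrt(x.size)
    results[mu] = (est, err)
    print(f"mu = {mu}: <phase> = {est:+.2e} +- {err:.1e}"
          f"   exact = {np.exp(-mu**2 / 2):.2e}")
```

At $\mu = 5$ the exact average phase is already smaller than the statistical error of a million samples, which is the exponential signal-to-noise collapse that blocks direct simulation at finite density.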
Machine Learning on the Lattice
Machine learning techniques are increasingly applied to lattice field theory: normalizing flows for more efficient configuration generation, neural network estimators for observables, and generative models for reducing autocorrelation times. These methods show particular promise for theories suffering from critical slowing down near phase transitions.
Quantum Computing for Lattice Gauge Theories
Quantum computers offer a fundamentally different approach to simulating quantum field theories. While classical Monte Carlo works in Euclidean time (imaginary time), quantum computers can simulate real-time dynamics directly — potentially solving the sign problem and enabling calculations of scattering amplitudes, jet fragmentation, and other dynamical processes that are inaccessible to classical methods.
Current quantum hardware can simulate simple lattice models ($1+1$D Schwinger model, small lattice QED) on NISQ devices. Fault-tolerant quantum computers would be needed for realistic $3+1$D QCD simulations, likely requiring millions of logical qubits.
The big picture: Lattice field theory represents one of the most remarkable achievements in theoretical physics — a framework where the predictions of QCD can be computed from first principles with controlled uncertainties. From the hadron spectrum to the QCD coupling constant, from CKM matrix elements to the muon $g-2$, lattice methods provide essential non-perturbative input that connects fundamental theory to experimental observation.
Summary: Lattice QFT Essentials
Discretization
Fields on sites, gauge fields on links ($U_\mu(x) \in SU(N)$). Wilson plaquette action: $S = \beta\sum(1 - \frac{1}{N}\text{Re Tr}\,U_{\mu\nu})$.
Monte Carlo Methods
Importance sampling with $e^{-S_E}$ as weight. Metropolis or HMC algorithms. Statistical errors $\sim 1/\sqrt{N_\text{conf}}$.
Confinement
Area law for Wilson loops: $\langle W\rangle \sim e^{-\sigma RT}$ with string tension $\sqrt{\sigma} \approx 440$ MeV. Proven at strong coupling, confirmed numerically at all couplings.
Continuum Limit
$a \to 0$ at fixed physics via asymptotic freedom. Symanzik improvement reduces discretization errors. Physical results from $a^2 \to 0$ extrapolation.