Neural Circuits
Feedforward and recurrent architectures, excitatory-inhibitory balance, and neural oscillations
Circuit-Level Computation in the Brain
Individual neurons are computational units, but the real power of the brain emerges from circuits — interconnected populations of neurons that perform computations impossible for single cells. The cerebral cortex alone contains ~16 billion neurons connected by ~150 trillion synapses, forming circuits that implement sensory processing, motor control, decision-making, and cognition.
This chapter analyzes the fundamental circuit motifs — feedforward, recurrent, lateral inhibition — and their emergent dynamics. We derive the conditions for excitatory-inhibitory (E/I) balance that enable stable yet flexible computation, and explore how oscillatory dynamics arise from circuit interactions to coordinate neural activity across brain regions.
1. Feedforward Circuits
Feedforward circuits process information in a single direction: input → processing → output. The simplest example is the three-layer circuit in the retina (photoreceptors → bipolar cells → ganglion cells). Feedforward architectures implement feature detection, signal amplification, and coincidence detection.
Derivation 1: Feedforward Amplification and Signal-to-Noise Ratio
Consider a feedforward chain of $L$ layers, each with $N$ neurons. Layer $l$ receives input from all neurons in layer $l-1$ with synaptic weight $w/N$ (balanced scaling). The mean activity in layer $l$ follows:
$$\mu_l = f\left(w \cdot \mu_{l-1}\right)$$
where $f$ is the neuronal transfer function (e.g., $f(x) = [x]_+$ for a rectified linear unit). The variance propagation including both signal and noise gives:
$$\sigma_l^2 = f'(w\mu_{l-1})^2 \left(\frac{w^2 \sigma_{l-1}^2}{N} + \sigma_{\text{noise}}^2\right)$$
The signal-to-noise ratio at layer $l$, $\text{SNR}_l = \mu_l^2/\sigma_l^2$, then satisfies (for a linear transfer function through the origin, so $f' = \text{const}$):
$$\text{SNR}_l \approx \frac{N}{1 + N\sigma_{\text{noise}}^2 / (w^2 \sigma_{l-1}^2)} \cdot \text{SNR}_{l-1}$$
For $N \gg 1$, the SNR improves by a factor of $N$ per layer (averaging effect), but this is counteracted by noise injection at each stage. Synfire chains — feedforward networks with precisely time-locked volleys — can propagate signals across many layers when group size exceeds a critical threshold $N_c \sim \sigma_{\text{noise}}^2 / w^2$.
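The mean and variance recursions above can be iterated directly. A minimal numerical sketch, assuming a linear transfer function ($f(x) = x$, so $f' = 1$) and illustrative parameters:

```python
import numpy as np

# Iterate the mean/variance recursions from Derivation 1 across layers.
# Assumptions: linear transfer f(x) = x (so f' = 1); parameters illustrative.
def propagate_snr(mu0, var0, L=5, N=100, w=1.0, var_noise=0.01):
    """Return the SNR at each layer of a feedforward chain."""
    mu, var = mu0, var0
    snr = [mu**2 / var]
    for _ in range(L):
        mu = w * mu                             # mu_l = f(w * mu_{l-1})
        var = (w**2 * var) / N + var_noise      # variance recursion with noise
        snr.append(mu**2 / var)
    return snr

snr = propagate_snr(mu0=1.0, var0=0.25)
# The inherited variance shrinks by a factor of N each layer, so the SNR
# saturates near mu^2 / var_noise instead of growing without bound.
print(snr)
```

The run illustrates the trade-off in the text: averaging over $N$ inputs suppresses inherited variance, while the per-stage noise floor caps the achievable SNR.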
1.1 Lateral Inhibition
Lateral inhibition sharpens stimulus representations by suppressing responses to neighboring, similar stimuli. First described in the horseshoe crab (Limulus) eye by Hartline, the mechanism creates center-surround receptive fields and enhances contrast. The steady-state response of neuron $i$ in a laterally-inhibited network is:
$$r_i = f\left(s_i - \beta \sum_{j \neq i} W_{ij} r_j\right)$$
where $s_i$ is the external input, $\beta$ controls inhibition strength, and $W_{ij}$ typically decreases with distance (e.g., $W_{ij} = e^{-|i-j|^2 / 2\sigma_W^2}$). This acts as a spatial high-pass filter, enhancing edges and boundaries in the input.
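The steady state can be found by fixed-point iteration. A short sketch with a ReLU nonlinearity and a step-edge input; $\beta$, $\sigma_W$, and the network size are illustrative choices:

```python
import numpy as np

# Lateral inhibition: solve r_i = [s_i - beta * sum_{j!=i} W_ij r_j]_+
# by fixed-point iteration. All parameters below are illustrative.
N = 50
s = np.where(np.arange(N) < N // 2, 1.0, 2.0)    # step edge in the input
idx = np.arange(N)
W = np.exp(-(idx[:, None] - idx[None, :])**2 / (2 * 3.0**2))
np.fill_diagonal(W, 0.0)                          # exclude j = i
beta = 0.05                                       # weak enough to converge

r = s.copy()
for _ in range(200):
    r = np.maximum(s - beta * W @ r, 0.0)         # f = ReLU

# High-pass behaviour: the response overshoots just past the edge and
# undershoots just before it, enhancing the contrast boundary.
print(r[15], r[N // 2 - 1], r[N // 2], r[35])
```

Neurons just inside the high-activity region receive less inhibition (their low-side neighbors are weakly active) and so overshoot, which is exactly the contrast enhancement Hartline described.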
2. Recurrent Circuits
Recurrent circuits, where neurons feed back onto themselves or their inputs, are ubiquitous in cortex. Recurrence provides amplification, temporal integration, working memory, and attractor dynamics. The Wilson-Cowan model describes the mean-field dynamics of coupled excitatory and inhibitory populations.
Derivation 2: Wilson-Cowan Equations and Stability Analysis
The Wilson-Cowan (1972) model describes the dynamics of excitatory ($E$) and inhibitory ($I$) population firing rates:
$$\tau_E \frac{dE}{dt} = -E + f_E(w_{EE}E - w_{EI}I + I_{\text{ext}})$$
$$\tau_I \frac{dI}{dt} = -I + f_I(w_{IE}E - w_{II}I)$$
where $f_E$ and $f_I$ are sigmoidal activation functions of the form $f(x) = 1/(1 + e^{-a(x-\theta)})$, with population-specific gain $a$ and threshold $\theta$. Fixed points satisfy $dE/dt = dI/dt = 0$. Linearizing around a fixed point $(E^*, I^*)$, the Jacobian matrix is:
$$\mathbf{J} = \begin{pmatrix} -1/\tau_E + w_{EE}f_E'/\tau_E & -w_{EI}f_E'/\tau_E \\ w_{IE}f_I'/\tau_I & -1/\tau_I - w_{II}f_I'/\tau_I \end{pmatrix}$$
Stability requires: (1) $\text{Tr}(\mathbf{J}) < 0$ and (2) $\det(\mathbf{J}) > 0$. The trace condition gives:
$$w_{EE}f_E' < 1 + \tau_E(1 + w_{II}f_I')/\tau_I$$
When the trace condition is violated with $\det(\mathbf{J}) > 0$, a Hopf bifurcation occurs, giving rise to oscillations. When $\det(\mathbf{J}) < 0$, a saddle-node bifurcation creates bistability — the circuit can act as a switch or memory element. The system can exhibit winner-take-all dynamics, persistent activity, and hysteresis.
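These conditions are straightforward to check numerically. A sketch with illustrative weights and sigmoid slopes (not fitted values):

```python
import numpy as np

# Stability check for a Wilson-Cowan fixed point (Derivation 2).
# Weights, slopes, and time constants below are illustrative choices.
def jacobian(wEE, wEI, wIE, wII, fE_p, fI_p, tauE=10.0, tauI=5.0):
    """Jacobian of the linearized E/I dynamics, given the sigmoid
    slopes fE', fI' evaluated at the fixed point."""
    return np.array([
        [(-1 + wEE * fE_p) / tauE, -wEI * fE_p / tauE],
        [wIE * fI_p / tauI,        (-1 - wII * fI_p) / tauI],
    ])

J = jacobian(wEE=12.0, wEI=10.0, wIE=10.0, wII=2.0, fE_p=0.2, fI_p=0.2)
tr, det = np.trace(J), np.linalg.det(J)
eigs = np.linalg.eigvals(J)
stable = tr < 0 and det > 0                  # both stability conditions
oscillatory = bool(np.iscomplex(eigs).any()) # complex pair => damped oscillation
print(f"trace={tr:.3f} det={det:.4f} stable={stable} oscillatory={oscillatory}")
```

For this parameter choice the eigenvalues form a complex pair with negative real part: a stable spiral, i.e. the circuit rings at a damped frequency and sits just below the Hopf bifurcation.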
3. Excitatory-Inhibitory Balance
Cortical circuits maintain a tight balance between excitation and inhibition, with inhibitory currents closely tracking excitatory currents. This "balanced state" has profound implications for neural computation: it produces irregular, Poisson-like firing, fast response times, and high sensitivity to small changes in input.
Derivation 3: The Balanced Network and Fluctuation-Driven Firing
Consider a network of $N_E$ excitatory and $N_I$ inhibitory neurons, each receiving $K$ random connections. In the balanced regime, the mean excitatory and inhibitory inputs to an excitatory neuron are:
$$\mu_E = K(J_{EE}r_E - J_{EI}r_I + J_{\text{ext}}r_{\text{ext}})$$
Balance requires $\mu_E \sim O(1)$ even though individual terms scale as $O(K)$. This is achieved when excitation and inhibition cancel to leading order:
$$J_{EE}r_E + J_{\text{ext}}r_{\text{ext}} \approx J_{EI}r_I$$
The residual fluctuations have standard deviation $\sigma \sim \sqrt{K} \cdot J$. Neurons fire due to these fluctuations crossing threshold, producing irregular firing with rates determined by the diffusion approximation:
$$r = \left[\tau_{\text{ref}} + \tau_m \sqrt{\pi} \int_{(V_{\text{reset}}-\mu)/\sigma}^{(V_{\text{th}}-\mu)/\sigma} e^{u^2}(1 + \text{erf}(u)) \, du \right]^{-1}$$
This is the Siegert formula for the firing rate of a leaky integrate-and-fire neuron driven by Gaussian white noise. The balanced state is self-organizing: if excitation exceeds inhibition, inhibitory rates increase (driven by the excess excitation) until balance is restored, with a correction timescale of $\sim \tau_m / K$.
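The Siegert integral can be evaluated with ordinary trapezoidal quadrature. A sketch with illustrative LIF parameters (voltages in mV, times in seconds):

```python
import numpy as np
from math import erf

# Numerical evaluation of the Siegert formula above.
# Threshold, reset, and time constants are illustrative values.
def siegert_rate(mu, sigma, V_th=20.0, V_reset=10.0, tau_m=20e-3, tau_ref=2e-3):
    """Firing rate (Hz) of an LIF neuron in the diffusion approximation."""
    u = np.linspace((V_reset - mu) / sigma, (V_th - mu) / sigma, 2000)
    integrand = np.exp(u**2) * (1.0 + np.array([erf(x) for x in u]))
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(u)) / 2.0)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

# Fluctuation-driven regime: the mean input sits below threshold
# (mu < V_th), and the rate grows with the fluctuation amplitude sigma.
print(siegert_rate(mu=15.0, sigma=5.0), siegert_rate(mu=15.0, sigma=8.0))
```

The second call returns a higher rate: with the mean subthreshold, firing is driven entirely by fluctuations, the signature of the balanced state.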
Disruption of E/I balance is implicated in numerous neurological conditions, including epilepsy (excess excitation), autism spectrum disorders (altered E/I ratio), and schizophrenia (reduced inhibition). The ratio of excitatory to inhibitory neurons in cortex is approximately 4:1, a proportion roughly conserved across regions and species.
4. Neural Oscillations
Rhythmic activity is a hallmark of brain function, spanning frequencies from slow oscillations (~0.1 Hz during sleep) to ultra-fast ripples (~200 Hz in hippocampus). Oscillations coordinate neural activity across time and brain regions, implementing "communication through coherence" (Fries, 2005).
Derivation 4: PING (Pyramidal-Interneuron Network Gamma) Model
Gamma oscillations (30–100 Hz) arise from the interplay between excitatory pyramidal cells (E) and inhibitory interneurons (I) in the PING mechanism:
- E cells fire, exciting I cells
- I cells fire with a delay $\tau_d \approx 3\text{--}5$ ms, inhibiting E cells
- E cells are silenced for the duration of inhibition ($\tau_I \approx 10\text{--}20$ ms)
- As inhibition decays, E cells fire again, restarting the cycle
The oscillation period is approximately:
$$T_{\gamma} \approx \tau_d + \tau_I \cdot \ln\left(\frac{g_I}{g_I - g_{\text{th}}}\right)$$
where $g_I$ is the peak inhibitory conductance and $g_{\text{th}}$ is the conductance level at which E cells can fire. For typical parameters, $T_{\gamma} \approx 15\text{--}30$ ms (33–67 Hz), matching observed gamma frequencies.
The power and frequency of gamma oscillations depend on drive to E cells (stronger drive $\to$ higher frequency) and inhibitory synaptic kinetics (GABA$_A$ decay time sets the oscillation period). This provides a mechanism for stimulus-dependent modulation of oscillatory dynamics.
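Plugging illustrative values into the period formula (the conductances $g_I$ and $g_{\text{th}}$ and the time constants are assumptions, not measurements):

```python
import numpy as np

# Gamma period from the PING expression above; g_I and g_th are
# illustrative conductance values, times are in ms.
def gamma_period(tau_d=4.0, tau_I=12.0, g_I=10.0, g_th=7.0):
    """Synaptic delay plus the time for exponentially decaying
    inhibition to fall from g_I to the firing threshold g_th."""
    return tau_d + tau_I * np.log(g_I / (g_I - g_th))

T = gamma_period()
print(T, 1000.0 / T)  # period (ms) and frequency (Hz)
```

Shortening $\tau_I$ raises the frequency, consistent with the statement above that GABA$_A$ decay kinetics set the oscillation period.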
Derivation 5: Kuramoto Model of Neural Synchronization
The Kuramoto model describes synchronization among $N$ coupled oscillators with natural frequencies $\omega_i$:
$$\frac{d\theta_i}{dt} = \omega_i + \frac{K}{N}\sum_{j=1}^{N} \sin(\theta_j - \theta_i)$$
where $K$ is the coupling strength. The order parameter measuring synchronization is:
$$r \cdot e^{i\psi} = \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j}$$
where $r \in [0, 1]$ ($r = 0$: desynchronized, $r = 1$: fully synchronized). For a Lorentzian frequency distribution with half-width $\Delta$, synchronization emerges at a critical coupling:
$$K_c = 2\Delta$$
Above $K_c$, the order parameter grows as $r = \sqrt{1 - K_c/K}$. This phase transition from incoherence to synchrony has been observed in neural oscillation data, where coupling between brain regions (via long-range axonal connections) enables coherent oscillations that may support communication and binding of distributed neural representations.
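A direct simulation shows the transition at $K_c = 2\Delta$. Euler integration with illustrative $N$, $\Delta$, and seed:

```python
import numpy as np

# Kuramoto model: order parameter r below vs. above K_c = 2*Delta
# for Lorentzian-distributed natural frequencies. Parameters illustrative.
def simulate(K, N=2000, Delta=1.0, dt=0.01, steps=3000, seed=0):
    """Euler-integrate the phases and return the final order parameter r."""
    rng = np.random.default_rng(seed)
    omega = Delta * np.tan(np.pi * (rng.random(N) - 0.5))  # Lorentzian samples
    theta = 2.0 * np.pi * rng.random(N)
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))        # mean field: r * exp(i*psi)
        # (1/N) sum_j sin(theta_j - theta_i) == Im(z * exp(-i*theta_i))
        theta += dt * (omega + K * np.imag(z * np.exp(-1j * theta)))
    return np.abs(np.mean(np.exp(1j * theta)))

# Below K_c = 2 the population stays incoherent; above it, partial locking
# appears, approaching the mean-field prediction r = sqrt(1 - K_c/K).
print(simulate(K=1.0), simulate(K=4.0))
```

Rewriting the coupling sum through the complex mean field reduces the cost per step from $O(N^2)$ to $O(N)$, the standard trick for simulating the Kuramoto model at scale.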
5. Historical Development
- 1906: Cajal's neuron doctrine establishes that circuits are composed of discrete neurons communicating at synapses (Nobel Prize with Golgi).
- 1929: Berger records the first human EEG, discovering alpha oscillations.
- 1956: Hartline describes lateral inhibition in the Limulus eye, explaining contrast enhancement.
- 1972: Wilson and Cowan develop their mean-field model of interacting excitatory and inhibitory populations.
- 1996: van Vreeswijk and Sompolinsky develop the theory of balanced networks.
- 2005: Fries proposes "communication through coherence": oscillatory synchrony gates information flow between brain areas.
- 2010s: Optogenetic circuit dissection reveals cell-type-specific roles in E/I balance and oscillation generation.
6. Applications
Epilepsy Treatment
Understanding E/I balance enables targeted interventions. Anti-epileptic drugs enhance GABAergic inhibition or reduce glutamatergic excitation to restore balance. Closed-loop neurostimulation detects oscillatory biomarkers of seizure onset.
Deep Brain Stimulation
Pathological oscillations (beta band in Parkinson's disease) are disrupted by DBS, which desynchronizes abnormally coupled circuit elements. Understanding oscillatory dynamics guides optimal stimulation parameters.
Neural Network Architectures
Recurrent neural networks (RNNs, LSTMs, Transformers) in AI are inspired by biological recurrent circuits. E/I balance principles inform the design of stable recurrent architectures for sequence processing.
Brain-Machine Interfaces
Oscillatory signals (local field potentials) provide robust control signals for neuroprosthetics. Understanding circuit dynamics enables better decoding of population activity for motor control.
7. Computational Exploration
Neural Circuits: E/I Balance, Oscillations, and Synchronization
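A minimal sketch of one such exploration, assuming illustrative, weakly coupled parameters chosen so that the trace and determinant conditions of Derivation 2 guarantee a stable fixed point:

```python
import numpy as np

# Integrate the Wilson-Cowan equations (Section 2) and verify that the
# E rate settles to a fixed point. Parameters are illustrative: with
# gain a = 1 the sigmoid slope is at most 1/4, so w_EE * f' <= 0.5 < 1
# and the stability conditions hold everywhere.
def f(x, a=1.0, theta=1.0):
    """Sigmoidal activation with gain a and threshold theta."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def run(I_ext=1.0, T=500.0, dt=0.1, tauE=10.0, tauI=5.0,
        wEE=2.0, wEI=4.0, wIE=4.0, wII=1.0):
    E, I = 0.0, 0.0
    E_trace = []
    for _ in range(int(T / dt)):
        E += dt * (-E + f(wEE * E - wEI * I + I_ext)) / tauE
        I += dt * (-I + f(wIE * E - wII * I)) / tauI
        E_trace.append(E)
    return np.array(E_trace)

E_trace = run()
# After ~50 membrane time constants the rate has settled to a fixed point.
print(E_trace[-1], np.std(E_trace[-500:]))
```

From here one could raise $w_{EE}$ or the external drive to push the system through the Hopf bifurcation and watch the fixed point give way to a limit cycle.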
Chapter Summary
- Feedforward circuits with lateral inhibition enhance contrast and implement feature detection; synfire chains propagate temporally precise signals.
- Wilson-Cowan dynamics of E/I populations exhibit fixed points, limit cycles, and bistability depending on connection strengths and external drive.
- E/I balance produces fluctuation-driven firing: mean excitation and inhibition cancel, and neurons fire due to residual fluctuations scaling as $\sigma \sim \sqrt{K}$.
- Gamma oscillations arise from the PING mechanism with period $T \approx \tau_d + \tau_I \ln(g_I/(g_I - g_{\text{th}}))$.
- Kuramoto synchronization exhibits a phase transition at $K_c = 2\Delta$, modeling coherent oscillations between brain regions.