Synaptic Transmission
Quantal release, postsynaptic potentials, short-term and long-term plasticity, and spike-timing-dependent plasticity
The Synapse: Where Neurons Communicate
Synaptic transmission is the fundamental mechanism by which neurons communicate. At chemical synapses, an action potential arriving at the presynaptic terminal triggers the fusion of neurotransmitter-filled vesicles; the released transmitter binds to postsynaptic receptors to generate electrical signals. Katz and colleagues established the quantal hypothesis in the 1950s, showing that transmitter release occurs in discrete packets corresponding to individual vesicles.
This chapter develops the quantitative theory of synaptic transmission: the binomial model of vesicle release, the biophysics of postsynaptic potentials, and the remarkable plasticity mechanisms that allow synapses to strengthen or weaken depending on patterns of activity. These plastic changes underlie learning and memory throughout the nervous system.
1. Quantal Neurotransmitter Release
Bernard Katz's Nobel Prize-winning work at the neuromuscular junction (NMJ) established that neurotransmitter is released in discrete quanta, each corresponding to the contents of a single synaptic vesicle. Miniature end-plate potentials (mEPPs) occur spontaneously and represent single quantum events, while evoked EPPs are multiples of the quantal unit.
Derivation 1: The Binomial Model of Vesicle Release
At a synapse with $N$ readily releasable vesicles, each with release probability $p$ per action potential, the number of released quanta $k$ follows a binomial distribution:
$$P(k) = \binom{N}{k} p^k (1-p)^{N-k}$$
The mean quantal content is $m = Np$ and the variance is $\sigma^2 = Np(1-p)$. The coefficient of variation is:
$$\text{CV} = \frac{\sigma}{m} = \sqrt{\frac{1-p}{Np}}$$
When $N$ is large and $p$ is small (as in central synapses), the distribution approaches Poisson:
$$P(k) \approx \frac{m^k e^{-m}}{k!}, \quad \text{where } m = Np$$
The mean-variance relationship allows estimation of $N$ and $p$ from recordings across conditions that change $p$ (e.g., varying extracellular calcium). If responses are measured as amplitudes with quantal size $q$, the mean amplitude is $\mu = Npq$ and the variance is:
$$\sigma^2 = q \cdot \mu - \frac{\mu^2}{N}$$
Plotting $\sigma^2$ vs $\mu$ yields a parabola whose initial slope gives $q$ and whose curvature gives $N$ (check: $\sigma^2 = Np(1-p)q^2 = q\mu - \mu^2/N$). At the NMJ, $N \sim 200\text{--}300$ and $p \sim 0.1\text{--}0.3$; at central synapses, $N \sim 1\text{--}20$ and $p \sim 0.1\text{--}0.9$.
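A quick numerical check of the mean-variance analysis, as a minimal sketch with made-up parameters ($N = 10$ vesicles, $q = 0.5$ mV):

```python
import numpy as np

# Simulated mean-variance analysis: hypothetical synapse with N_true = 10
# releasable vesicles and quantal amplitude q = 0.5 mV (illustrative values)
rng = np.random.default_rng(0)
N_true, q = 10, 0.5
p_values = np.linspace(0.1, 0.9, 9)

means, variances = [], []
for p in p_values:
    k = rng.binomial(N_true, p, size=20000)   # quanta released per trial
    amp = q * k                               # response amplitude per trial
    means.append(amp.mean())
    variances.append(amp.var())
means, variances = np.array(means), np.array(variances)

# Fit the parabola sigma^2 = q*mu - mu^2/N to recover q and N
a, b, c = np.polyfit(means, variances, 2)     # a = -1/N, b = q
N_est, q_est = -1.0 / a, b
print(f"estimated N = {N_est:.1f}, q = {q_est:.3f}")
```

Varying $p$ across conditions traces out the parabola; the quadratic fit recovers both quantal parameters from purely macroscopic measurements.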
1.1 Calcium-Dependent Release
Vesicle release is triggered by calcium influx through voltage-gated Ca$^{2+}$ channels. The dependence of release rate on calcium concentration is highly nonlinear (cooperative), first characterized by Dodge and Rahamimoff (1967):
$$P_{\text{release}} \propto [\text{Ca}^{2+}]^n, \quad n \approx 3\text{--}4$$
This steep calcium dependence arises from the requirement that multiple Ca$^{2+}$ ions must bind the vesicle sensor (synaptotagmin) to trigger fusion. The local calcium concentration at the release site reaches 10–100 $\mu$M during an action potential, much higher than the resting concentration of ~100 nM.
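The power-law regime can be illustrated with a hypothetical Hill-type release model (cooperativity $n = 4$, half-activation $K = 50\ \mu$M are assumed values, not measurements); well below $K$, the log-log slope recovers $n$:

```python
import numpy as np

# Hypothetical Hill-type release model with n = 4 calcium binding sites
# and half-activation K = 50 uM (illustrative parameters)
n, K = 4, 50.0
ca = np.array([5.0, 10.0, 20.0])          # low-calcium regime (uM)
release = ca**n / (ca**n + K**n)

# Well below K the Hill curve reduces to a power law, so the
# log-log slope recovers the cooperativity n
slope = np.polyfit(np.log(ca), np.log(release), 1)[0]
print(f"log-log slope = {slope:.2f}")     # close to n = 4
```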
2. Postsynaptic Potentials
Released neurotransmitter binds to ionotropic or metabotropic receptors on the postsynaptic membrane. Ionotropic receptors (ligand-gated ion channels) produce fast postsynaptic potentials, while metabotropic receptors (G-protein coupled) produce slower, modulatory effects.
Derivation 2: Synaptic Conductance and the Reversal Potential
The postsynaptic current through a ligand-gated channel is:
$$I_{\text{syn}} = g_{\text{syn}}(t) \cdot (V - E_{\text{syn}})$$
where $g_{\text{syn}}(t)$ is the time-dependent synaptic conductance and $E_{\text{syn}}$ is the synaptic reversal potential. For glutamatergic AMPA receptors, $E_{\text{syn}} \approx 0$ mV (excitatory); for GABA$_A$ receptors, $E_{\text{syn}} \approx -70$ mV (inhibitory). The conductance time course is commonly modeled as a difference of exponentials:
$$g_{\text{syn}}(t) = \bar{g} \cdot B \left(e^{-t/\tau_d} - e^{-t/\tau_r}\right)$$
where $\tau_r$ is the rise time, $\tau_d$ is the decay time, and $B$ is a dimensionless normalization constant chosen so that the peak conductance equals $\bar{g}$. For AMPA: $\tau_r \approx 0.2$ ms, $\tau_d \approx 2$ ms. For NMDA: $\tau_r \approx 5$ ms, $\tau_d \approx 50\text{--}100$ ms.
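A sketch of the dual-exponential conductance with the peak normalized to $\bar{g}$ (the peak time follows from setting $dg/dt = 0$; the AMPA and NMDA time constants are the values quoted above):

```python
import numpy as np

def dual_exp(t, g_max, tau_r, tau_d):
    """Difference-of-exponentials conductance, peak-normalized to g_max."""
    # Time of the peak, from setting dg/dt = 0
    t_peak = tau_d * tau_r / (tau_d - tau_r) * np.log(tau_d / tau_r)
    B = 1.0 / (np.exp(-t_peak / tau_d) - np.exp(-t_peak / tau_r))
    return g_max * B * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

t = np.arange(0.0, 400.0, 0.01)            # ms
g_ampa = dual_exp(t, 1.0, 0.2, 2.0)        # fast AMPA kinetics
g_nmda = dual_exp(t, 1.0, 5.0, 80.0)       # slow NMDA kinetics
print(t[g_ampa.argmax()], t[g_nmda.argmax()])   # NMDA peaks much later
```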
The postsynaptic potential is found by integrating the cable equation. For a point neuron with membrane time constant $\tau_m$ and input resistance $R_{\text{in}}$:
$$V(t) = E_L + \frac{1}{C_m} \int_0^t g_{\text{syn}}(t') \cdot (E_{\text{syn}} - V(t')) \cdot e^{-(t-t')/\tau_m} \, dt'$$
where $C_m = \tau_m / R_{\text{in}}$ is the membrane capacitance. In the subthreshold regime where $g_{\text{syn}} \ll g_L$, the PSP amplitude for a brief conductance of duration $\tau_{\text{syn}} \ll \tau_m$ is approximately $\Delta V \approx \bar{g} \cdot R_{\text{in}} \cdot (E_{\text{syn}} - E_L) \cdot \tau_{\text{syn}}/\tau_m$. Typical EPSP amplitudes at central synapses are 0.1–2 mV, requiring summation of many inputs to reach threshold.
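A minimal forward-Euler integration of the subthreshold point-neuron equation, with the synaptic conductance expressed relative to the leak conductance (all parameters illustrative, not fitted to a particular cell type):

```python
import numpy as np

# Euler integration of the subthreshold PSP. Writing g_rel = g_syn/g_L keeps
# the equation in the form dV/dt = [(E_L - V) + g_rel*(E_syn - V)] / tau_m.
dt, tau_m = 0.01, 20.0                 # ms
E_L, E_syn = -70.0, 0.0                # mV
tau_r, tau_d, g_peak = 0.2, 2.0, 0.01  # small g_rel: subthreshold regime

t = np.arange(0.0, 100.0, dt)
g_rel = g_peak * (np.exp(-t / tau_d) - np.exp(-t / tau_r))
V = np.empty_like(t)
V[0] = E_L
for i in range(1, len(t)):
    dV = ((E_L - V[i-1]) + g_rel[i-1] * (E_syn - V[i-1])) / tau_m
    V[i] = V[i-1] + dt * dV

print(f"peak EPSP = {V.max() - E_L:.3f} mV")   # small, sub-mV EPSP
```

The resulting EPSP is a fraction of a millivolt, consistent with the 0.1–2 mV range quoted for central synapses.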
2.1 NMDA Receptor Biophysics
NMDA receptors have unique properties critical for synaptic plasticity: they are both ligand-gated (requiring glutamate and glycine) and voltage-dependent (Mg$^{2+}$ block at resting potential). The voltage-dependent Mg$^{2+}$ block is described by:
$$g_{\text{NMDA}}(V) = \bar{g}_{\text{NMDA}} \cdot \frac{1}{1 + [\text{Mg}^{2+}]_o \cdot \exp(-\gamma V) / K_d}$$
where $\gamma \approx 0.062$ mV$^{-1}$ and $K_d \approx 3.57$ mM. This makes NMDA receptors coincidence detectors: they open only when the presynaptic neuron releases glutamate and the postsynaptic neuron is sufficiently depolarized, implementing a Hebbian-type mechanism at the molecular level.
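The voltage dependence can be evaluated directly from the expression above (1 mM external Mg$^{2+}$ assumed):

```python
import numpy as np

def mg_unblocked(V, mg_o=1.0, gamma=0.062, K_d=3.57):
    """Fraction of NMDA conductance unblocked at membrane potential V (mV),
    assuming 1 mM external Mg2+."""
    return 1.0 / (1.0 + mg_o * np.exp(-gamma * V) / K_d)

for V in (-70.0, -40.0, 0.0):
    print(f"V = {V:5.0f} mV: unblocked fraction = {mg_unblocked(V):.3f}")
```

At rest (~-70 mV) only a few percent of the conductance is available; depolarization toward 0 mV relieves most of the block, which is the coincidence-detection mechanism in quantitative form.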
3. Short-Term Synaptic Plasticity
Synaptic efficacy changes on timescales of milliseconds to minutes in response to recent activity. Short-term facilitation and depression shape neural computation in real time, acting as dynamic filters on neural signals.
Derivation 3: The Tsodyks-Markram Model of Short-Term Plasticity
Tsodyks and Markram (1997) developed a phenomenological model with three state variables representing the fraction of synaptic resources in recovered ($x$), active ($y$), and inactive ($z$) states, with $x + y + z = 1$:
$$\frac{dx}{dt} = \frac{z}{\tau_{\text{rec}}} - u \cdot x \cdot \delta(t - t_{\text{sp}})$$
$$\frac{dy}{dt} = -\frac{y}{\tau_{\text{in}}} + u \cdot x \cdot \delta(t - t_{\text{sp}})$$
$$\frac{dz}{dt} = \frac{y}{\tau_{\text{in}}} - \frac{z}{\tau_{\text{rec}}}$$
where $u$ is the utilization parameter (analogous to release probability), $\tau_{\text{rec}}$ is the recovery time from depression, and $\tau_{\text{in}}$ is the inactivation time. The effective synaptic strength is proportional to $y$.
For facilitation, $u$ itself becomes dynamic:
$$\frac{du}{dt} = -\frac{u - U}{\tau_{\text{fac}}} + U(1-u) \cdot \delta(t - t_{\text{sp}})$$
where $U$ is the baseline utilization and $\tau_{\text{fac}}$ is the facilitation decay time. For regular firing at frequency $f$ (inter-spike interval $\Delta = 1/f$), the available resource at a purely depressing synapse converges to the steady state $x_{\text{ss}} = \frac{1 - e^{-\Delta/\tau_{\text{rec}}}}{1 - (1-U)\,e^{-\Delta/\tau_{\text{rec}}}}$, and the steady-state response amplitude is proportional to $U \cdot x_{\text{ss}}$. Depression dominates at high frequencies when $\tau_{\text{rec}}$ is long; facilitation dominates when $\tau_{\text{fac}} > \tau_{\text{rec}}$.
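An event-driven sketch of the model: between spikes, $x$ and $u$ relax exponentially in closed form; at each spike, $u$ jumps and a fraction $u \cdot x$ of resources is released. The depressing and facilitating parameter sets are illustrative:

```python
import numpy as np

def tm_responses(rate_hz, n_spikes, U, tau_rec, tau_fac):
    """Per-spike response amplitudes (proportional to u*x) for a regular train."""
    isi = 1000.0 / rate_hz              # inter-spike interval (ms)
    x, u = 1.0, 0.0
    amps = []
    for k in range(n_spikes):
        if k > 0:
            x = 1.0 - (1.0 - x) * np.exp(-isi / tau_rec)  # resource recovery
            u = U + (u - U) * np.exp(-isi / tau_fac)      # facilitation decay
        u = u + U * (1.0 - u)           # facilitation jump on spike arrival
        amps.append(u * x)              # fraction released = response amplitude
        x = x - u * x                   # resources consumed
    return np.array(amps)

dep = tm_responses(20.0, 10, U=0.5, tau_rec=800.0, tau_fac=10.0)   # depressing
fac = tm_responses(20.0, 10, U=0.1, tau_rec=100.0, tau_fac=500.0)  # facilitating
print(dep.round(3))   # amplitudes shrink across the train
print(fac.round(3))   # amplitudes grow across the train
```

Slow recovery with high $U$ produces depression; slow facilitation decay with low $U$ produces facilitation, matching the frequency-dependence described above.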
4. Long-Term Plasticity: LTP and LTD
Long-term potentiation (LTP) and long-term depression (LTD) are persistent changes in synaptic strength that last hours to months. Bliss and Lømo (1973) first described LTP at hippocampal synapses. NMDA receptor-dependent LTP requires coincident pre- and postsynaptic activity, providing a cellular substrate for Hebb's learning rule.
Derivation 4: Calcium-Based Plasticity Model (BCM Theory)
The Bienenstock-Cooper-Munro (BCM) theory proposes that the sign and magnitude of synaptic change depend on postsynaptic activity relative to a sliding threshold. The synaptic weight $w$ evolves according to:
$$\frac{dw}{dt} = \eta \cdot \phi(c) \cdot r_{\text{pre}}$$
where $\eta$ is the learning rate, $r_{\text{pre}}$ is the presynaptic rate, and $\phi(c)$ is the plasticity function that depends on postsynaptic calcium concentration $c$:
$$\phi(c) = \begin{cases} -A_{\text{LTD}} & \text{if } \theta_{\text{LTD}} < c < \theta_{\text{LTP}} \\ A_{\text{LTP}}(c - \theta_{\text{LTP}}) & \text{if } c > \theta_{\text{LTP}} \\ 0 & \text{otherwise} \end{cases}$$
The LTP threshold $\theta_{\text{LTP}}$ slides with average postsynaptic activity:
$$\theta_{\text{LTP}} = \theta_0 + \alpha \langle c \rangle^2$$
This sliding threshold is essential for stability: it prevents runaway potentiation by raising the bar for LTP when a neuron is very active, and lowering it when inactive. The BCM rule has been confirmed by experiments varying stimulation frequency at hippocampal synapses, showing LTD at low frequencies (1–5 Hz) and LTP at high frequencies (>10 Hz), with the crossover frequency shifting with prior activity history.
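A sketch of the piecewise plasticity function with a sliding threshold (all parameters illustrative) shows how the same calcium level can produce LTP in a quiet neuron but LTD in a highly active one:

```python
import numpy as np

# Illustrative parameters for the piecewise plasticity function
A_LTD, A_LTP, theta_LTD = 1.0, 0.5, 0.3

def phi(c, theta_LTP):
    """Plasticity function: LTD between the thresholds, graded LTP above."""
    c = np.asarray(c, dtype=float)
    out = np.zeros_like(c)
    out[(c > theta_LTD) & (c < theta_LTP)] = -A_LTD
    ltp = c >= theta_LTP
    out[ltp] = A_LTP * (c[ltp] - theta_LTP)
    return out

def sliding_theta(c_avg, theta_0=1.0, alpha=0.5):
    """Sliding LTP threshold: rises with average postsynaptic activity."""
    return theta_0 + alpha * c_avg**2

c = np.array([2.0])                      # same calcium transient in both cases
quiet = phi(c, sliding_theta(0.5))       # low recent activity: LTP
active = phi(c, sliding_theta(2.0))      # high recent activity: LTD
print(quiet, active)
```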
5. Spike-Timing-Dependent Plasticity
STDP, discovered by Markram et al. (1997) and Bi and Poo (1998), shows that the precise timing of pre- and postsynaptic spikes determines the sign and magnitude of synaptic change. When the presynaptic spike precedes the postsynaptic spike (pre→post), the synapse is potentiated; the reverse order (post→pre) produces depression.
Derivation 5: The STDP Learning Rule and Its Stability
The classic STDP window is described by:
$$\Delta w = \begin{cases} A_+ \exp(-\Delta t / \tau_+) & \text{if } \Delta t > 0 \text{ (pre before post)} \\ -A_- \exp(\Delta t / \tau_-) & \text{if } \Delta t < 0 \text{ (post before pre)} \end{cases}$$
where $\Delta t = t_{\text{post}} - t_{\text{pre}}$, and typical parameters are $\tau_+ \approx 20$ ms, $\tau_- \approx 20$ ms, $A_+ \approx 0.01$, $A_- \approx 0.012$. The slight asymmetry ($A_- > A_+$) ensures net depression for uncorrelated firing, providing stability.
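The window and the sign of the drift for uncorrelated firing can be checked directly with the quoted parameters:

```python
import numpy as np

# STDP window with the parameters quoted above
A_plus, A_minus = 0.010, 0.012
tau_plus = tau_minus = 20.0        # ms

def stdp(dt_ms):
    """Weight change for dt = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return A_plus * np.exp(-dt_ms / tau_plus)    # pre before post: LTP
    return -A_minus * np.exp(dt_ms / tau_minus)      # post before pre: LTD

print(stdp(+10.0), stdp(-10.0))
# Net drift for uncorrelated spiking is proportional to A+ tau+ - A- tau-
drift = A_plus * tau_plus - A_minus * tau_minus
print(f"A+ tau+ - A- tau- = {drift:.3f} ms (negative: net depression)")
```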
For Poisson pre- and postsynaptic neurons firing at rates $r_{\text{pre}}$ and $r_{\text{post}}$, the expected weight change per unit time is:
$$\frac{d\bar{w}}{dt} = r_{\text{pre}} r_{\text{post}} (A_+ \tau_+ - A_- \tau_-)$$
If pre- and postsynaptic spikes have a correlation function $C(\Delta t)$, then:
$$\frac{d\bar{w}}{dt} = \int_{-\infty}^{\infty} W(\Delta t) \cdot C(\Delta t) \, d(\Delta t)$$
where $W(\Delta t)$ is the STDP window function. This integral formulation shows that STDP selectively strengthens synapses whose presynaptic activity consistently predicts postsynaptic firing (positive correlations at short lags), implementing a causal Hebbian learning rule at the millisecond timescale.
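Numerically evaluating the integral for a hypothetical Gaussian correlation bump makes the causal asymmetry concrete: a bump at positive lags (pre predicts post) yields potentiation, while the same bump at negative lags yields depression:

```python
import numpy as np

# Evaluate dw/dt = integral of W(dt) * C(dt), with C a hypothetical
# Gaussian correlation bump of width 5 ms centered at `center` ms
A_plus, A_minus, tau = 0.010, 0.012, 20.0
step = 0.1
dts = np.arange(-200.0, 200.0, step)
W = np.where(dts > 0,
             A_plus * np.exp(-dts / tau),     # LTP side of the window
             -A_minus * np.exp(dts / tau))    # LTD side of the window

def drift(center):
    C = np.exp(-0.5 * ((dts - center) / 5.0) ** 2)
    return np.sum(W * C) * step               # simple Riemann sum

print(drift(+10.0), drift(-10.0))   # causal bump: > 0; acausal bump: < 0
```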
6. Historical Development
- 1897: Sherrington coins the term "synapse" to describe the junction between neurons.
- 1921: Loewi demonstrates chemical neurotransmission via the vagus nerve ("Vagusstoff").
- 1952: Fatt and Katz discover miniature end-plate potentials, establishing quantal transmission.
- 1973: Bliss and Lømo describe long-term potentiation at hippocampal synapses.
- 1982: Bienenstock, Cooper, and Munro propose the BCM theory of synaptic plasticity with a sliding threshold.
- 1997: Tsodyks and Markram develop the resource-based model of short-term plasticity.
- 1998: Bi and Poo publish quantitative measurements of the STDP learning window in hippocampal neurons.
- 2004: Bhatt et al. demonstrate that STDP window shape varies across brain regions, reflecting computational requirements.
7. Applications
Neuromorphic Computing
STDP rules are implemented in neuromorphic chips (Intel Loihi, IBM TrueNorth) for on-chip learning. Short-term plasticity dynamics enable adaptive filtering and temporal pattern recognition in hardware.
Drug Development
Understanding synaptic pharmacology guides development of treatments for neurological disorders. NMDA receptor modulators target synaptic plasticity in Alzheimer's disease (memantine) and depression (ketamine).
Transcranial Stimulation
TMS and tDCS protocols exploit LTP/LTD-like mechanisms to modulate cortical excitability. Theta-burst stimulation mimics natural plasticity-inducing patterns to produce lasting changes in neural circuit function.
Artificial Intelligence
Biologically inspired learning rules based on STDP and short-term plasticity dynamics are being explored as alternatives to backpropagation, potentially enabling more energy-efficient and continual learning in AI systems.
8. Computational Exploration
Synaptic Transmission: Quantal Release, Plasticity, and STDP
Chapter Summary
- Quantal release follows binomial statistics: $P(k) = \binom{N}{k}p^k(1-p)^{N-k}$ with mean $m = Np$ and highly nonlinear Ca$^{2+}$ dependence.
- Postsynaptic potentials are shaped by receptor kinetics; NMDA receptors serve as coincidence detectors through voltage-dependent Mg$^{2+}$ block.
- Short-term plasticity (Tsodyks-Markram model) implements dynamic gain control: depression filters sustained input, facilitation amplifies transient signals.
- BCM theory provides a stable learning rule with sliding threshold: $\phi(c)$ transitions from LTD to LTP as postsynaptic calcium increases.
- STDP implements causal Hebbian learning at millisecond precision: $\Delta w = A_+ e^{-\Delta t/\tau_+}$ for pre→post, enabling temporal sequence learning.