Part III

Continuous Channels

From discrete bits to the analog world: differential entropy, the Gaussian channel, and optimal power allocation in MIMO systems.

Part Overview

Shannon’s original theory was framed for discrete sources and channels. Real communication systems, however, operate in continuous time and amplitude. Part III extends information theory to the continuous setting, culminating in the Shannon-Hartley theorem: the formula that tells engineers the fundamental rate limit of every wired and wireless channel.

The key insight is that Gaussian noise is the worst-case additive noise for a given power level, and a Gaussian input distribution achieves capacity. This leads directly to the famous formula \( C = B\log_2(1 + S/N) \), where every decibel of SNR and every hertz of bandwidth has a measurable effect on the maximum achievable data rate.
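The Shannon-Hartley formula is simple enough to evaluate directly. A minimal sketch (the function name and the 1 MHz / 20 dB figures are illustrative, not from the text):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits per second."""
    snr_linear = 10 ** (snr_db / 10)  # convert decibels to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: 1 MHz of bandwidth at 20 dB SNR (S/N = 100)
c = shannon_capacity(1e6, 20)  # about 6.66 Mbit/s
```

Note the logarithm: doubling the bandwidth doubles the capacity, but doubling the SNR adds only about one bit per second per hertz.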

We then tackle MIMO: by deploying multiple antennas, a channel can be decomposed into parallel sub-channels via singular value decomposition (SVD). The water-filling algorithm then solves the constrained power allocation problem, distributing power where the channel is strong and withholding it where it is weak.
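The SVD decomposition step can be sketched numerically. Writing the channel matrix as \( H = U \Sigma V^H \), precoding the input with \( V \) and combining the output with \( U^H \) turns the matrix channel \( y = Hx + z \) into independent scalar sub-channels \( y_i = \sigma_i x_i + z_i \). A minimal sketch with an illustrative 3×3 channel matrix (the values are made up):

```python
import numpy as np

# Hypothetical 3x3 MIMO channel matrix (illustrative values only)
H = np.array([[0.9, 0.2, 0.1],
              [0.3, 0.8, 0.2],
              [0.1, 0.3, 0.7]])

# H = U @ diag(s) @ Vh; the singular values s are the gains
# of the parallel sub-channels after precoding/combining
U, s, Vh = np.linalg.svd(H)

# The decomposition reconstructs H exactly (up to floating point)
assert np.allclose(U @ np.diag(s) @ Vh, H)
```

NumPy returns the singular values sorted in descending order, so `s[0]` is the strongest sub-channel, the one water-filling will favor.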

Key Equations of Part III

Differential Entropy

\( h(X) = -\int f(x)\log f(x)\,dx \)
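A useful closed form to keep in mind: among all densities with variance \( \sigma^2 \), the Gaussian maximizes differential entropy, with

\( h(X) = \tfrac{1}{2}\log_2\!\bigl(2\pi e \sigma^2\bigr) \) bits for \( X \sim \mathcal{N}(0, \sigma^2) \).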

Shannon-Hartley

\( C = B\log_2\!\left(1+\tfrac{S}{N}\right) \)

Water-Filling

\( P_i^* = \bigl(\mu - \sigma_i^2/\lambda_i^2\bigr)^+ \)
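The water level \( \mu \) in the formula above is not known in closed form; it is chosen so the allocations sum to the power budget, which a simple bisection finds. A minimal sketch (function name and the three-channel example are illustrative, assuming per-channel noise \( \sigma_i^2 \) and gains \( \lambda_i \)):

```python
import numpy as np

def water_filling(gains, noise, total_power, iters=100):
    """Solve P_i = (mu - sigma_i^2 / lambda_i^2)^+ subject to sum(P_i) = total_power
    by bisecting on the water level mu."""
    inv_snr = np.asarray(noise) / np.asarray(gains) ** 2  # sigma_i^2 / lambda_i^2
    lo, hi = 0.0, inv_snr.max() + total_power             # mu is bracketed here
    for _ in range(iters):
        mu = (lo + hi) / 2
        power = np.maximum(mu - inv_snr, 0.0)             # (mu - sigma^2/lambda^2)^+
        if power.sum() > total_power:
            hi = mu  # water level too high: spent more than the budget
        else:
            lo = mu  # too low: budget not used up
    return power

# Illustrative example: three sub-channels, unit noise, power budget 10
P = water_filling(gains=[2.0, 1.0, 0.5], noise=[1.0, 1.0, 1.0], total_power=10.0)
```

In this example the strongest sub-channel receives the most power, and a sufficiently weak one would be switched off entirely, which is exactly the \( (\cdot)^+ \) in the formula.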

The Gaussian Channel Model

[Figure: block diagram of the AWGN channel. The Gaussian input X passes through the channel to produce Y = X + Z, with noise Z ~ N(0, N₀/2); the resulting capacity is C = B log₂(1 + S/N) bits per second, and the output Y is decoded.]

Chapters

Prerequisites

  • Parts I & II (discrete entropy, channel capacity, coding theorems)
  • Probability theory: Gaussian distribution, expectation, variance
  • Linear algebra: matrix multiplication, eigenvalues, SVD (for Ch 9)
  • Calculus: integration by parts, Lagrange multipliers