Numerical Linear Algebra
Algorithms, stability, and the art of computing with matrices
Historical Context
Numerical linear algebra was born from the computing revolution. Alan Turing analyzed the stability of Gaussian elimination (1948), James Wilkinson developed backward error analysis at the National Physical Laboratory, and Gene Golub pioneered computational SVD algorithms. The LINPACK (1979) and LAPACK (1992) libraries codified best practices, while iterative methods (Lanczos 1950, Hestenes-Stiefel conjugate gradients 1952) enabled solving systems too large for direct methods. Today, numerical linear algebra underpins scientific computing, machine learning, and every application of matrix computation.
4.1 Floating-Point Arithmetic
IEEE 754 double precision uses 64 bits: 1 sign bit, 11 exponent bits, and 52 mantissa bits, giving machine epsilon $\epsilon_{\text{mach}} \approx 2.2 \times 10^{-16}$. Every floating-point operation introduces a relative error bounded by $\epsilon_{\text{mach}}$: $\text{fl}(x \oplus y) = (x \oplus y)(1 + \delta)$ with $|\delta| \leq \epsilon_{\text{mach}}$, where $\oplus$ stands for any of the basic arithmetic operations.
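These limits are easy to observe directly; a short sketch using only the standard library:

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable double
eps = sys.float_info.epsilon
print(eps)                      # 2.220446049250313e-16

# The rounding model in action: a perturbation below eps/2 is absorbed entirely
print(1.0 + eps / 2 == 1.0)     # True
print(1.0 + eps == 1.0)         # False

# Decimal fractions like 0.1 are not exactly representable in binary
print(0.1 + 0.2 == 0.3)         # False
```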
Catastrophic cancellation occurs when subtracting nearly equal numbers, amplifying relative error. Algorithms must be designed to avoid such operations—this is the art of numerical stability.
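A classic illustration is the quadratic formula: for $x^2 - bx + c = 0$ with $b \gg c$, the root $(b - \sqrt{b^2 - 4c})/2$ subtracts two nearly equal numbers, while the algebraically equivalent form $2c/(b + \sqrt{b^2 - 4c})$ avoids the subtraction entirely (the coefficients below are a toy example):

```python
import math

b, c = 1e8, 1.0                  # roots are approximately 1e8 and 1e-8

disc = math.sqrt(b * b - 4 * c)
naive = (b - disc) / 2           # catastrophic cancellation: b and disc nearly equal
stable = 2 * c / (b + disc)      # rationalized form: no cancellation

print(naive)                     # ~7.45e-09 -- wrong by roughly 25%
print(stable)                    # 1e-08     -- correct to full precision
```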
4.2 Condition Numbers and Stability
Condition Number
The condition number $\kappa(A) = \|A\|\|A^{-1}\| = \sigma_{\max}/\sigma_{\min}$ bounds the worst-case amplification of input perturbations: $\frac{\|\delta x\|}{\|x\|} \leq \kappa(A) \frac{\|\delta b\|}{\|b\|}$.
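The bound is easy to see numerically; a sketch with NumPy, using a nearly singular $2 \times 2$ matrix as a toy example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])          # nearly singular
kappa = np.linalg.cond(A)              # sigma_max / sigma_min

b = A @ np.array([1.0, 1.0])           # exact solution x = [1, 1]
x = np.linalg.solve(A, b)

db = np.array([0.0, 1e-5])             # tiny perturbation of b
x_pert = np.linalg.solve(A, b + db)

print(kappa)                           # ~4e4
print(np.linalg.norm(x_pert - x))      # ~0.14: a 1e-5 change in b moved x by ~0.1
```

The relative perturbation of $b$ is about $10^{-5}$, yet $x$ moves by about $10^{-1}$, consistent with amplification by $\kappa(A) \approx 4 \times 10^4$.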
An algorithm is backward stable if the computed solution is the exact solution of a slightly perturbed problem. Gaussian elimination with partial pivoting, QR factorization via Householder reflections, and the standard SVD algorithm are all backward stable.
4.3 Iterative Methods
For large sparse systems, direct methods are too expensive. Iterative methods produce a sequence $x^{(k)} \to x^*$:
- Jacobi: $x^{(k+1)} = D^{-1}(b - (L+U)x^{(k)})$, converges for diagonally dominant matrices
- Gauss-Seidel: Uses updated components as soon as they are available, typically converging faster than Jacobi
- Conjugate gradients: Optimal for symmetric positive definite (SPD) systems; in exact arithmetic, terminates in at most $n$ steps
- GMRES: Generalized minimum residual for non-symmetric systems
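The Jacobi update above fits in a few lines; a minimal sketch on a toy diagonally dominant tridiagonal system:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])        # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])

d = np.diag(A)                         # diagonal D, stored as a vector
R = A - np.diag(d)                     # off-diagonal part L + U

x = np.zeros_like(b)
for k in range(60):                    # x^(k+1) = D^{-1} (b - (L+U) x^(k))
    x = (b - R @ x) / d

print(np.allclose(A @ x, b))          # True: Jacobi converged
```

Diagonal dominance guarantees the iteration matrix $D^{-1}(L+U)$ has spectral radius below 1, so the error shrinks geometrically.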
4.4 Eigenvalue Algorithms
- Power iteration: Finds dominant eigenvalue, convergence rate $|\lambda_2/\lambda_1|^k$
- Inverse iteration: Finds the eigenvalue nearest a shift $\mu$ by applying power iteration to $(A - \mu I)^{-1}$
- QR algorithm: Computes all eigenvalues by iterating $A_k = Q_kR_k$ (factor), $A_{k+1} = R_kQ_k$ (recombine); $O(n^3)$ overall
- Lanczos: For large sparse symmetric matrices, builds a tridiagonal approximation
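Power iteration is the simplest of these; a minimal sketch on a toy symmetric matrix whose dominant eigenvalue is $(5+\sqrt{5})/2 \approx 3.618$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 0.0])       # any start not orthogonal to the dominant eigenvector

for k in range(100):           # error shrinks like |lambda_2/lambda_1|^k
    w = A @ v
    v = w / np.linalg.norm(w)  # normalize to prevent overflow

lam = v @ A @ v                # Rayleigh quotient estimate of lambda_max
print(lam)                     # ~3.6180 = (5 + sqrt(5))/2
```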
4.5 Sparse Matrix Techniques
Sparse matrices store only nonzero entries in formats like CSR (compressed sparse row) and CSC (compressed sparse column). For a matrix with $\text{nnz}$ nonzeros, storage is $O(\text{nnz})$ vs $O(n^2)$ for dense. Sparse direct solvers use fill-reducing orderings (Cuthill-McKee, nested dissection) to minimize fill-in during factorization.
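A minimal CSR matrix-vector product, sketched with plain Python lists to make the three arrays explicit (toy $3 \times 3$ matrix, no library assumed):

```python
# Matrix: [[4, 0, 1],
#          [0, 3, 0],
#          [2, 0, 5]]
data    = [4.0, 1.0, 3.0, 2.0, 5.0]  # nonzero values, stored row by row
indices = [0, 2, 1, 0, 2]            # column index of each nonzero
indptr  = [0, 2, 3, 5]               # row i's nonzeros: data[indptr[i]:indptr[i+1]]

def csr_matvec(data, indices, indptr, x):
    """y = A @ x using CSR storage: O(nnz) work, no zeros touched."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # [5.0, 3.0, 7.0]
```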
Computational Laboratory
This simulation demonstrates floating-point pitfalls, condition number analysis, iterative solver convergence, the QR eigenvalue algorithm, and sparse vs dense performance comparison.