Machine Learning

Part II: Supervised Learning

Supervised learning is the foundation of applied machine learning: given labelled examples, find a function that generalises to new inputs. This part derives three landmark algorithms from first principles (linear regression, logistic regression, and support vector machines), establishing the statistical and geometric intuition that underpins every modern model.
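As a concrete preview of the first of these, linear regression on labelled examples admits a closed-form least-squares fit. A minimal sketch using NumPy (an assumed dependency; the book does not specify a code environment):

```python
import numpy as np

# Noise-free labelled examples generated from y = 1 + 2x (illustrative values).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x

# Design matrix with an intercept column; lstsq solves min ||Xw - y||^2.
X = np.column_stack([np.ones_like(x), x])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w)  # recovers the intercept and slope, approximately [1., 2.]
```

On noise-free data the fit recovers the generating parameters exactly (up to floating-point error); Part II derives why this least-squares solution is also the maximum-likelihood estimate under Gaussian noise.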

Prerequisites from Part I

Linear Algebra: Matrix inverses and pseudo-inverses, projections onto subspaces, SVD and rank
Probability: MLE derivation, Bernoulli and Gaussian distributions, Bayes' theorem
Optimisation: Gradient and Hessian, convexity, Lagrange multipliers, KKT conditions
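The linear-algebra prerequisites interlock directly: the pseudo-inverse computes the least-squares solution, whose fitted values are the orthogonal projection of the targets onto the column space. A hedged sketch, again assuming NumPy (example matrices are illustrative, not from the text):

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 5.0])

# The pseudo-inverse solution minimises ||Ax - b||; A @ x is then the
# orthogonal projection of b onto the column space of A.
x = np.linalg.pinv(A) @ b
residual = b - A @ x

# Orthogonality of the residual to every column certifies the projection.
print(np.allclose(A.T @ residual, 0.0))
```

This orthogonality condition is exactly the normal equations, and it reappears in the derivation of linear regression in the next chapter.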