Newton & Leibniz — The Calculus
Two independent inventions that changed the course of science forever
12.1 Isaac Newton (1642–1727)
Isaac Newton was born on December 25, 1642 (Old Style; January 4, 1643 New Style) in Woolsthorpe, Lincolnshire, England. Born prematurely, he was so small that, as his mother later recalled, he could have fit inside a quart mug. His father, an illiterate farmer, had died three months before Newton's birth. When his mother remarried, the young Isaac was left in the care of his grandmother, an arrangement that left deep psychological scars.
Newton attended the King's School in Grantham and then entered Trinity College, Cambridge, in 1661. There he encountered the works of Descartes, Kepler, and the emerging new philosophy. His intellectual appetite was voracious: he read Descartes' Géométrie, Wallis's Arithmetica Infinitorum, and Barrow's geometrical lectures, absorbing and surpassing them all.
In 1665–1666, the Great Plague forced Cambridge to close, and Newton returned to Woolsthorpe. During this period — his annus mirabilis (miracle year) — Newton made discoveries in three areas that would reshape science:
- Mathematics: The method of fluxions (calculus), the generalized binomial theorem, infinite series
- Optics: The decomposition of white light into the spectrum using a prism
- Mechanics and gravitation: The laws of motion and the inverse-square law of gravity
Newton returned to Cambridge in 1667, became Lucasian Professor of Mathematics in 1669 (succeeding Isaac Barrow), and over the next two decades produced the Philosophiae Naturalis Principia Mathematica (1687) — universally known as the Principia — widely considered the greatest scientific work ever written. In it, Newton unified terrestrial and celestial mechanics under a single set of laws, derived Kepler's laws of planetary motion from the law of gravitation, and laid the foundations of classical physics.
Newton was notoriously secretive and reluctant to publish. His mathematical discoveries, made in the 1660s and 1670s, remained largely in manuscript form for decades. This reluctance would lead to the most famous priority dispute in the history of science.
Newton: Key Dates
- 1642 — Born in Woolsthorpe, Lincolnshire
- 1661 — Enters Trinity College, Cambridge
- 1665–66 — Annus mirabilis: calculus, optics, gravitation
- 1669 — Lucasian Professor; circulates De Analysi
- 1671 — Writes Method of Fluxions (published posthumously 1736)
- 1687 — Principia Mathematica
- 1696 — Warden (later Master) of the Royal Mint
- 1703 — President of the Royal Society
- 1727 — Dies in London (March 31)
12.2 Newton's Method of Fluxions
Newton conceived of variables as fluents — quantities that flow or change continuously with time. The rate of change of a fluent was called its fluxion. If $x$ is a fluent, Newton denoted its fluxion by $\dot{x}$ (a dot over the letter). Thus $\dot{x}$ corresponds to what we now write as $\frac{dx}{dt}$. The inverse operation — finding the fluent from its fluxion — was the “inverse method of fluxions,” corresponding to integration.
Fluents and Fluxions
In Newton's framework:
- Fluent: A variable quantity $x$ that changes over time
- Fluxion: The rate of change $\dot{x}$ of the fluent
- Moment: An infinitely small increment $\dot{x} \cdot o$, where $o$ is an infinitely small interval of time
The fluxion of $x^n$ is $n x^{n-1} \dot{x}$, which in modern notation becomes $\frac{d}{dt}(x^n) = n x^{n-1} \frac{dx}{dt}$.
Newton's method for finding fluxions was essentially the same as Fermat's method of adequality, but placed in a kinematic framework. To find the fluxion of $y = x^n$, Newton replaced $x$ by $x + \dot{x} \cdot o$ and $y$ by $y + \dot{y} \cdot o$:
$$y + \dot{y} \cdot o = (x + \dot{x} \cdot o)^n$$
Expanding by the binomial theorem, subtracting $y = x^n$, dividing by $o$, and then discarding terms still containing $o$ gives $\dot{y} = n x^{n-1} \dot{x}$.
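This recipe can be checked numerically. The sketch below (the choice of fluent $x(t) = t^2$ and all names are illustrative assumptions, not Newton's) compares a finite-difference estimate of $\frac{d}{dt}(x^n)$ with the rule $n x^{n-1} \dot{x}$, using a small step $h$ in place of Newton's $o$:

```python
# Numerically illustrate the fluxion rule d(x^n)/dt = n x^(n-1) xdot
# for a hypothetical fluent x(t) = t**2 (so its fluxion is xdot = 2t).

def x(t):
    return t**2          # the fluent: a quantity flowing with time

def xdot(t):
    return 2 * t         # its fluxion dx/dt

n = 3
t = 1.5
h = 1e-6                 # a small time increment standing in for Newton's "o"

# Forward-difference estimate of d(x^n)/dt
numeric = (x(t + h)**n - x(t)**n) / h

# Newton's rule: n * x^(n-1) * xdot
rule = n * x(t)**(n - 1) * xdot(t)

print(numeric, rule)     # the two values agree closely for small h
```

Shrinking $h$ further drives the two values together, which is the modern limit reading of "discarding terms still containing $o$."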
The generalized binomial theorem. One of Newton's most powerful discoveries was the extension of the binomial theorem to arbitrary real (or even complex) exponents. The classical binomial theorem gives a finite expansion when the exponent is a positive integer:
$$(1 + x)^n = \sum_{k=0}^{n} \binom{n}{k} x^k \quad \text{(for } n \in \mathbb{N}\text{)}$$
Newton discovered that for any real number $\alpha$, there is an infinite series:
Newton's Generalized Binomial Series
For any real number $\alpha$ and $|x| < 1$:
$$(1 + x)^\alpha = 1 + \alpha x + \frac{\alpha(\alpha - 1)}{2!} x^2 + \frac{\alpha(\alpha-1)(\alpha-2)}{3!} x^3 + \cdots$$
More compactly: $(1+x)^\alpha = \sum_{k=0}^{\infty} \binom{\alpha}{k} x^k$, where the generalized binomial coefficient is:
$$\binom{\alpha}{k} = \frac{\alpha(\alpha-1)(\alpha-2)\cdots(\alpha - k + 1)}{k!}$$
Example: The Square Root Series
Setting $\alpha = 1/2$:
$$\sqrt{1+x} = (1+x)^{1/2} = 1 + \frac{1}{2}x - \frac{1}{8}x^2 + \frac{1}{16}x^3 - \frac{5}{128}x^4 + \cdots$$
The coefficients are $\binom{1/2}{0} = 1$, $\binom{1/2}{1} = \frac{1}{2}$, $\binom{1/2}{2} = \frac{(1/2)(-1/2)}{2!} = -\frac{1}{8}$, and so on.
Setting $\alpha = -1$:
$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + x^4 - \cdots$$
This is the geometric series, now seen as a special case of the binomial theorem.
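Both special cases are easy to verify with partial sums. A minimal sketch (the helper names `gen_binom` and `binom_series` are my own):

```python
def gen_binom(alpha, k):
    """Generalized binomial coefficient: alpha(alpha-1)...(alpha-k+1) / k!."""
    c = 1.0
    for i in range(k):
        c *= (alpha - i) / (i + 1)
    return c

def binom_series(alpha, x, terms=60):
    """Partial sum of Newton's binomial series for (1 + x)^alpha, |x| < 1."""
    return sum(gen_binom(alpha, k) * x**k for k in range(terms))

x = 0.2
print(binom_series(0.5, x), (1 + x) ** 0.5)   # sqrt(1.2) two ways
print(binom_series(-1.0, x), 1 / (1 + x))     # the geometric-series case
```

For $|x| < 1$ the partial sums converge rapidly; for $|x| \ge 1$ they diverge, anticipating the convergence questions discussed below.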
Newton used these infinite series expansions to compute integrals, solve differential equations, and calculate areas. His approach was computational and practical: he viewed infinite series as the natural generalization of polynomials, just as Descartes' algebraic curves generalized lines and circles. The question of convergence — for which values of $x$ does the series actually converge to the function? — was not addressed rigorously until the work of Abel, Cauchy, and Weierstrass in the nineteenth century.
12.3 Newton's Fundamental Theorem
The deepest insight of Newton's calculus was the recognition that the two central problems of the subject — finding tangent lines (differentiation) and finding areas (integration) — are inverse operations. This is the Fundamental Theorem of Calculus, which Newton discovered in 1665–1666, though he did not publish it in clear form until much later.
Newton's reasoning for the area under $y = x^n$ proceeded as follows. Consider the area function $A(x) = \int_0^x t^n \, dt$. The “fluxion” of the area (its rate of change with respect to $x$) is the height of the curve at $x$:
$$\dot{A} = x^n$$
This is because a small increment $\Delta x$ in $x$ adds an area of approximately $x^n \cdot \Delta x$ (a thin rectangle of height $x^n$ and width $\Delta x$).
Now, Newton needed to find a function whose fluxion is $x^n$. He knew that the fluxion of $x^{n+1}$ is $(n+1)x^n$. Therefore:
$$A(x) = \frac{x^{n+1}}{n+1}$$
Newton's Area Result
The area under the curve $y = x^n$ from $0$ to $a$ is:
$$\int_0^a x^n \, dx = \frac{a^{n+1}}{n+1}$$
This holds for all $n \ne -1$, including negative and fractional values of $n$.
Proof sketch. We want to show that the area under $y = x^n$ from $0$ to $a$ is $\frac{a^{n+1}}{n+1}$. Let $F(x) = \frac{x^{n+1}}{n+1}$. Then $F'(x) = x^n = f(x)$. The fundamental theorem says:
$$\int_0^a f(x)\,dx = F(a) - F(0) = \frac{a^{n+1}}{n+1} - 0 = \frac{a^{n+1}}{n+1}$$
Worked Example: Area Under y = x^2 from 0 to 3
$$\int_0^3 x^2\,dx = \frac{3^3}{3} = \frac{27}{3} = 9$$
We can verify this by the method of Riemann sums. Dividing $[0, 3]$ into $N$ equal subintervals of width $\Delta x = 3/N$:
$$\sum_{k=1}^{N} \left(\frac{3k}{N}\right)^2 \cdot \frac{3}{N} = \frac{27}{N^3} \sum_{k=1}^N k^2 = \frac{27}{N^3} \cdot \frac{N(N+1)(2N+1)}{6}$$
As $N \to \infty$, this approaches $\frac{27 \cdot 2}{6} = 9$. Confirmed!
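The same check is easy to replicate by direct summation. A short sketch (the helper name `riemann_right` is my own):

```python
def riemann_right(f, a, b, n):
    """Right-endpoint Riemann sum for the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(1, n + 1))

# The sums approach the exact area 9 as n grows (from above,
# since right endpoints overshoot an increasing function).
for n in (10, 100, 1000, 100000):
    print(n, riemann_right(lambda x: x**2, 0.0, 3.0, n))
```
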
Newton also used his infinite series to find areas that could not be computed in closed form. For instance, to find $\int_0^x \frac{1}{1+t^2} dt = \arctan(x)$, he expanded $\frac{1}{1+t^2} = 1 - t^2 + t^4 - t^6 + \cdots$ and integrated term by term:
$$\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots$$
Setting $x = 1$ gives the Leibniz–Gregory series for $\pi/4$ (discovered independently by several mathematicians):
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots$$
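This series is elegant but converges very slowly (the error after $n$ terms is on the order of $1/n$, by the alternating series bound), as a few partial sums show. A minimal sketch (the helper name is my own):

```python
from math import pi

def leibniz_pi4(n_terms):
    """Partial sum of the Leibniz-Gregory series 1 - 1/3 + 1/5 - ..."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# Compare 4 * partial sum against pi: the error shrinks only like 1/n,
# so even 100,000 terms give just a handful of correct digits.
for n in (10, 1000, 100000):
    print(n, 4 * leibniz_pi4(n), pi)
```
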
12.4 Newton's Method for Roots
Among Newton's many contributions is an iterative method for finding approximate roots of equations, now universally known as Newton's method (or the Newton–Raphson method, as Joseph Raphson independently published a similar procedure in 1690).
The idea. Suppose we want to find a root of $f(x) = 0$ and we have an initial approximation $x_0$. The tangent line to $y = f(x)$ at the point $(x_0, f(x_0))$ is:
$$y - f(x_0) = f'(x_0)(x - x_0)$$
This tangent line crosses the $x$-axis (where $y = 0$) at:
$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$$
We then use $x_1$ as our new approximation and repeat:
Newton's Method (Newton-Raphson Iteration)
Starting from an initial guess $x_0$, the sequence:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
converges (under suitable conditions) to a root of $f(x) = 0$. The convergence is quadratic: near the root, the number of correct digits roughly doubles with each iteration.
Worked Example: Find √2 (solve x^2 - 2 = 0)
Here $f(x) = x^2 - 2$ and $f'(x) = 2x$. The iteration is:
$$x_{n+1} = x_n - \frac{x_n^2 - 2}{2x_n} = \frac{x_n + 2/x_n}{2}$$
Starting with $x_0 = 1$:
- $x_1 = \frac{1 + 2}{2} = 1.5$
- $x_2 = \frac{1.5 + 2/1.5}{2} = \frac{1.5 + 1.333...}{2} = 1.41\overline{6}$
- $x_3 = 1.414215686...$
- $x_4 = 1.414213562...$
After just 4 iterations, we have $\sqrt{2} = 1.41421356...$ correct to 9 decimal places!
Worked Example: Solve x^3 - 2x - 5 = 0 (Newton's own example)
Newton himself used this equation in De Analysi. Here $f(x) = x^3 - 2x - 5$, $f'(x) = 3x^2 - 2$.
Trying $x_0 = 2$: $f(2) = 8 - 4 - 5 = -1$, $f'(2) = 10$.
$x_1 = 2 - \frac{-1}{10} = 2.1$
$f(2.1) = 9.261 - 4.2 - 5 = 0.061$, $f'(2.1) = 11.23$.
$x_2 = 2.1 - \frac{0.061}{11.23} \approx 2.09457$
Rapid convergence to the root $x \approx 2.09455...$
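Both worked examples can be reproduced in a few lines. The sketch below is a modern rendering of the iteration (the `newton` helper, its tolerance, and iteration cap are my own choices, not Newton's formulation):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# sqrt(2): solve x^2 - 2 = 0 starting from x0 = 1
root1 = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)

# Newton's own example: x^3 - 2x - 5 = 0 starting from x0 = 2
root2 = newton(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x * x - 2, 2.0)

print(root1, root2)   # ≈ 1.41421356... and ≈ 2.09455148...
```

Quadratic convergence means both roots reach full double precision in a handful of iterations.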
Newton's method remains one of the most widely used algorithms in computational mathematics, from engineering to economics to machine learning. Its quadratic convergence makes it extraordinarily efficient when the initial guess is close to the root.
12.5 Gottfried Wilhelm Leibniz (1646–1716)
Gottfried Wilhelm Leibniz was born on July 1, 1646, in Leipzig, Saxony. A prodigy who taught himself Latin at age eight and was reading scholastic philosophy by twelve, Leibniz earned a bachelor's degree in philosophy at 17 and a doctorate in law at 20. The University of Leipzig, considering him too young, turned him away, and he moved to Altdorf, where he was immediately offered a professorship, which he declined, preferring a life of service and scholarship.
Leibniz was a polymath of staggering range. He made fundamental contributions to philosophy (he was one of the great rationalists, alongside Descartes and Spinoza), logic (he envisioned a universal symbolic language and a “calculus of reasoning”), physics (he articulated the principle of conservation of kinetic energy), geology, linguistics, and law. He designed and built a calculating machine that could multiply and divide (improving on Pascal's Pascaline, which could only add and subtract). He independently invented the binary number system. He co-founded the Berlin Academy of Sciences.
Leibniz's mathematical education was relatively late compared to Newton's. During a diplomatic mission to Paris in 1672, he met Christiaan Huygens, who became his mathematical mentor. Over the next few years, Leibniz absorbed the works of Pascal, Descartes, Gregory, and Barrow with remarkable speed. By 1675, he had developed his own version of calculus, and by 1684, he had published it — eight years before Newton published anything on the subject.
Leibniz: Key Dates
- 1646 — Born in Leipzig
- 1666 — Doctorate in law; De Arte Combinatoria
- 1672 — Arrives in Paris; meets Huygens
- 1675 — Discovers calculus (manuscript notes)
- 1676 — Visits London; reads Newton's letters
- 1684 — Publishes first calculus paper in Acta Eruditorum
- 1686 — Publishes integral calculus paper
- 1700 — Founds the Berlin Academy of Sciences
- 1714 — Monadology
- 1716 — Dies in Hanover (November 14), largely forgotten
12.6 Leibniz's Notation
Leibniz's greatest gift to mathematics may have been his notation. Where Newton used dots ($\dot{x}, \ddot{x}$) tied to a kinematic (time-based) interpretation, Leibniz developed a purely symbolic notation of extraordinary power and flexibility.
Leibniz's Notation for Calculus
- $dx$: an infinitesimally small increment of $x$ (a “differential”)
- $dy$: the corresponding infinitesimal change in $y$
- $\frac{dy}{dx}$: the ratio of differentials, the derivative of $y$ with respect to $x$
- $\int y \, dx$: the integral (“sum”) of infinitesimal rectangles $y \cdot dx$
- The integral sign $\int$ is an elongated “S” for summa (Latin for “sum”)
The superiority of Leibniz's notation lies in its suggestiveness. The symbol $\frac{dy}{dx}$ looks like a fraction, and in many calculations it behaves like one. The chain rule, for instance, takes the natural form:
$$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$$
which looks like “canceling” the $du$ factors. Similarly, the substitution rule for integrals becomes transparent: if $u = g(x)$, then $du = g'(x) \, dx$, and:
$$\int f(g(x)) \cdot g'(x) \, dx = \int f(u) \, du$$
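The transparency of the substitution rule can be tested numerically: both sides of the identity are just integrals, so any quadrature rule should give matching values. A sketch (the `trapezoid` helper and the choices $f(u) = \sin u$, $g(x) = x^2$ are illustrative assumptions of mine):

```python
from math import sin, cos

def trapezoid(f, a, b, n=10000):
    """Composite trapezoid rule for the integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

# Substitution check on [0, 1] with f(u) = sin(u), g(x) = x^2, g'(x) = 2x:
lhs = trapezoid(lambda x: sin(x**2) * 2 * x, 0.0, 1.0)  # ∫ f(g(x)) g'(x) dx
rhs = trapezoid(lambda u: sin(u), 0.0, 1.0)             # ∫ f(u) du, u from g(0) to g(1)
print(lhs, rhs)   # both ≈ 1 - cos(1) ≈ 0.4597
```

In Leibniz's notation the bookkeeping $du = g'(x)\,dx$ does all of this automatically.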
Leibniz derived the basic rules of differentiation in his notation:
Leibniz's Differentiation Rules
Product Rule:
$$d(uv) = u \, dv + v \, du \quad \Longleftrightarrow \quad \frac{d(uv)}{dx} = u\frac{dv}{dx} + v\frac{du}{dx}$$
Quotient Rule:
$$d\left(\frac{u}{v}\right) = \frac{v \, du - u \, dv}{v^2}$$
Chain Rule:
$$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$$
Power Rule:
$$d(x^n) = n x^{n-1} \, dx$$
Leibniz Derives the Product Rule
Let $y = uv$. If $u$ changes to $u + du$ and $v$ changes to $v + dv$:
$$y + dy = (u + du)(v + dv) = uv + u \, dv + v \, du + du \, dv$$
Since $y = uv$, we get $dy = u \, dv + v \, du + du \, dv$.
The term $du \, dv$ is the product of two infinitesimals and hence “infinitely smaller” than the other terms. Dropping it: $dy = u \, dv + v \, du$.
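Leibniz's argument for dropping $du \, dv$ can be made concrete with finite increments: the discrepancy between the exact change in $uv$ and the rule $u \, dv + v \, du$ is exactly $du \, dv$, and it shrinks quadratically as the increments shrink. A small sketch (the specific numbers are arbitrary choices of mine):

```python
# Exact change in uv for finite increments du, dv, versus Leibniz's rule.
# The discrepancy is exactly the "doubly small" term du*dv, of order h^2.
u, v = 3.0, 5.0
for h in (1e-1, 1e-2, 1e-3):
    du, dv = 2 * h, 7 * h                     # arbitrary increments ~ h
    exact_change = (u + du) * (v + dv) - u * v
    leibniz_rule = u * dv + v * du
    print(h, exact_change - leibniz_rule)     # equals du * dv = 14 * h**2
```

Each tenfold shrinkage of the increments shrinks the discarded term a hundredfold, which is the content of "infinitely smaller."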
Leibniz's notation prevailed on the European continent and is the standard today. Newton's dot notation survives only in physics (where $\dot{x}$ means “time derivative of $x$”) and in some areas of applied mathematics. The triumph of Leibniz's notation had lasting consequences: Continental mathematicians, using the more flexible notation, developed calculus far more rapidly than their British counterparts, who clung to Newton's dots out of national loyalty.
12.7 The Fundamental Theorem of Calculus
Both Newton and Leibniz recognized, in their different frameworks, that differentiation and integration are inverse operations. In Leibniz's notation, this fundamental insight takes a particularly elegant form.
The Fundamental Theorem of Calculus
Part I: If $f$ is continuous on $[a, b]$ and we define:
$$F(x) = \int_a^x f(t) \, dt$$
then $F$ is differentiable and $F'(x) = f(x)$. In words: the derivative of the area function is the original function.
Part II: If $F$ is any antiderivative of $f$ (i.e., $F' = f$), then:
$$\int_a^b f(x) \, dx = F(b) - F(a)$$
In words: the definite integral equals the antiderivative evaluated at the endpoints.
Significance. Before the Fundamental Theorem, computing areas required laborious summation methods (exhaustion, Cavalieri's indivisibles, etc.). After it, computing an area reduces to finding an antiderivative — a problem in algebra rather than geometry. This single theorem unified the two great problems that had driven the development of calculus:
- The tangent problem: given a curve, find its tangent at each point (differentiation)
- The area problem: given a curve, find the area under it (integration)
Computing an Integral Using the FTC
Find $\int_1^4 (3x^2 + 2x) \, dx$.
An antiderivative of $3x^2 + 2x$ is $F(x) = x^3 + x^2$.
By the FTC:
$$\int_1^4 (3x^2 + 2x)\,dx = F(4) - F(1) = (64 + 16) - (1 + 1) = 80 - 2 = 78$$
Without the FTC, this would require evaluating an infinite sum of thin rectangles. With it, the answer is immediate.
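The brute-force summation the FTC replaces can still be run as a sanity check. A sketch (the `midpoint_rule` helper is my own):

```python
def midpoint_rule(f, a, b, n=100000):
    """Midpoint-rule approximation to the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

approx = midpoint_rule(lambda x: 3 * x**2 + 2 * x, 1.0, 4.0)
exact = (4**3 + 4**2) - (1**3 + 1**2)   # F(4) - F(1) with F(x) = x^3 + x^2
print(approx, exact)                     # ≈ 78.0 and 78
```
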
The FTC Relates Rates and Accumulations
If $v(t)$ is velocity (the rate of change of position), then the total displacement from $t = a$ to $t = b$ is:
$$s(b) - s(a) = \int_a^b v(t) \, dt$$
For example, if $v(t) = 9.8t$ m/s (free fall), the distance fallen from $t = 0$ to $t = T$ is:
$$\int_0^T 9.8t \, dt = 4.9T^2$$
12.8 The Priority Dispute
The question of who invented calculus first — Newton or Leibniz — became the most bitter scientific controversy of the eighteenth century, poisoning relations between British and Continental mathematics for over a century.
Timeline of the Dispute
- 1665–66 — Newton develops his method of fluxions (unpublished)
- 1669 — Newton circulates De Analysi in manuscript
- 1672–76 — Leibniz independently develops calculus in Paris
- 1676 — Newton and Leibniz exchange letters (via Oldenburg); Newton describes some results but hides the method in anagrams
- 1684 — Leibniz publishes “Nova Methodus pro Maximis et Minimis” in Acta Eruditorum — the first published account of calculus
- 1687 — Newton's Principia uses geometric methods (not fluxion notation) but credits Leibniz in a scholium
- 1693 — Relations still cordial; Newton writes friendly letter to Leibniz
- 1699 — Fatio de Duillier (Newton partisan) publicly accuses Leibniz of plagiarism
- 1704 — Newton publishes his fluxion method in an appendix to Opticks
- 1711–12 — Leibniz appeals to the Royal Society; Newton orchestrates an investigation
- 1713 — Royal Society publishes Commercium Epistolicum, finding for Newton
- 1716 — Leibniz dies, still under the cloud of the accusation
The modern historical consensus is clear: both Newton and Leibniz invented calculus independently. Newton was first chronologically (1665–66 vs. 1675), but Leibniz published first (1684 vs. 1693/1704). There is no credible evidence that Leibniz plagiarized Newton. The 1676 letters that Leibniz saw contained results (the binomial series, etc.) but not the general method of fluxions, and Leibniz's own development shows a clearly independent intellectual trajectory rooted in different sources (sums of differences, the harmonic triangle, transmutation of curves).
The consequences of the dispute were severe. British mathematicians, out of loyalty to Newton, rejected Leibniz's superior notation and the powerful analytical methods developed by Continental mathematicians like the Bernoullis, Euler, and d'Alembert. British mathematics stagnated for over a century until the reform movement led by Charles Babbage, John Herschel, and George Peacock in the 1810s–1820s reintroduced Continental methods at Cambridge.
12.9 The Bernoulli Family
The rapid development of calculus after Leibniz's 1684 publication owed much to the extraordinary Bernoulli family of Basel, Switzerland. Over three generations, the Bernoullis produced at least eight mathematicians of distinction, but the most important for the development of calculus were the brothers Jacob (Jacques) Bernoulli (1655–1705) and Johann (Jean) Bernoulli (1667–1748).
Jacob and Johann were among the first to master Leibniz's calculus and apply it to a wide range of problems. Their contributions include:
The brachistochrone problem (1696). Johann Bernoulli posed a famous challenge: find the curve along which a bead slides under gravity from point $A$ to point $B$ in the least time. The answer is a cycloid, the curve traced by a point on the rim of a rolling circle. Five solutions were received: from Johann himself, Jacob Bernoulli, Leibniz, L'Hôpital, and an anonymous solution that Johann immediately recognized as Newton's (“I know the lion by his claw”).
The catenary. Jacob Bernoulli and Leibniz independently determined the shape of a hanging chain (the catenary), which had been incorrectly identified as a parabola by Galileo. The catenary is described by the hyperbolic cosine function:
$$y = a \cosh\left(\frac{x}{a}\right) = \frac{a}{2}\left(e^{x/a} + e^{-x/a}\right)$$
Differential equations. The Bernoullis were pioneers in solving differential equations. Jacob studied the equation that now bears his name:
$$\frac{dy}{dx} + P(x)y = Q(x)y^n$$
(the Bernoulli equation), showing it can be reduced to a linear equation by the substitution $v = y^{1-n}$.
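The reduction is a one-line computation. Setting $v = y^{1-n}$ gives $\frac{dv}{dx} = (1-n) y^{-n} \frac{dy}{dx}$, so multiplying the Bernoulli equation through by $(1-n) y^{-n}$ yields

$$\frac{dv}{dx} + (1-n) P(x) \, v = (1-n) Q(x)$$

which is a linear first-order equation in $v$, solvable by standard methods.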
L'Hôpital's rule. The rule for evaluating limits of the form $0/0$ or $\infty/\infty$:
$$\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}$$
was actually discovered by Johann Bernoulli, who communicated it to the Marquis de L'Hôpital under a financial arrangement. L'Hôpital published it in the first calculus textbook, Analyse des Infiniment Petits (1696), with Bernoulli's grudging consent.
Applying L'Hôpital's Rule
Evaluate $\lim_{x \to 0} \frac{\sin x}{x}$.
This is of the form $\frac{0}{0}$. Applying L'Hôpital's rule:
$$\lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = \cos(0) = 1$$
Jacob Bernoulli also made foundational contributions to probability theory with his posthumous Ars Conjectandi (1713), which contained the first rigorous statement of the law of large numbers. Johann Bernoulli's son Daniel Bernoulli (1700–1782) would go on to make groundbreaking contributions to fluid dynamics (Bernoulli's principle) and mathematical physics.
12.10 Legacy
The invention of calculus was a watershed in human intellectual history, comparable to the invention of writing or the development of the axiomatic method. Its impact can be measured along several dimensions:
The language of science. Calculus became the universal language for expressing the laws of nature. Newton's second law $F = ma = m\frac{d^2x}{dt^2}$, the wave equation $\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}$, Maxwell's equations, the Schrödinger equation, Einstein's field equations — all are expressed in the language of calculus. Without it, modern physics, engineering, and technology would be unthinkable.
The unification of mathematics. Calculus united several previously separate threads: the tangent problem (Fermat, Descartes), the area problem (Archimedes, Cavalieri), the theory of infinite series (Newton, Gregory), and the algebraic tradition of the Renaissance. After Newton and Leibniz, mathematics was no longer a collection of techniques for specific problems but a unified science of change and accumulation.
Diverging traditions. The priority dispute created a rift between British and Continental mathematics. The Continentals, using Leibniz's notation and building on the work of the Bernoullis, raced ahead. Euler, d'Alembert, Lagrange, and Laplace developed calculus into a vast analytical machine. The British, hampered by inferior notation and isolationist pride, fell behind for over a century. This divergence is a sobering reminder that intellectual progress depends not only on genius but also on communication, notation, and community.
The question of rigor. Neither Newton nor Leibniz provided a rigorous foundation for calculus. Both used infinitesimals — quantities that are not zero yet smaller than any positive number — without a clear logical basis. The philosopher Bishop Berkeley famously mocked these as “ghosts of departed quantities.” The rigorous foundation of calculus would come only in the nineteenth century, through the work of Cauchy, Riemann, and Weierstrass, who replaced infinitesimals with the $\varepsilon$-$\delta$ definition of limits. (Interestingly, in the 1960s, Abraham Robinson showed that infinitesimals could be made rigorous through non-standard analysis, vindicating the intuitions of Newton and Leibniz.)
Summary: Newton vs. Leibniz
| Aspect | Newton | Leibniz |
|---|---|---|
| Developed | 1665–66 | 1675 |
| Published | 1693/1704 | 1684/1686 |
| Notation | $\dot{x}, \ddot{x}$ | $\frac{dy}{dx}, \int$ |
| Framework | Kinematic (fluents/fluxions) | Algebraic (differentials) |
| Approach | Geometric, physical | Symbolic, formal |
| Legacy notation | Used mainly in physics | Standard in mathematics |
Newton and Leibniz, for all their differences in temperament and approach, shared a common achievement: they created the most powerful mathematical tool the world had ever seen. Calculus made it possible to describe, predict, and control the natural world with unprecedented precision. The Age of Enlightenment that followed was, in large part, built on the mathematical foundations they laid.