Mathematical analysis began not with equations, but with a paradox that claimed motion was impossible. Zeno of Elea, a Greek philosopher from the 5th century BCE, argued that to travel from point A to point B, one must first travel half the distance, then half of the remaining distance, and so on forever, creating an infinite number of steps. This paradox, known as the dichotomy, implicitly contained the concept of an infinite geometric sum, a foundational idea that would eventually evolve into the rigorous study of limits. While Zeno used this logic to deny the existence of motion, later mathematicians like Eudoxus and Archimedes turned the paradox into a tool. Archimedes, working in the 3rd century BCE, employed the method of exhaustion to calculate the area inside a circle by inscribing regular polygons with an increasing number of sides. He did not use the word limit, yet his technique of approaching a value through an infinite process laid the groundwork for the entire field. In Asia, the Chinese mathematician Liu Hui applied similar methods in the 3rd century CE, while Indian scholars from the 4th century BCE had already derived formulas for arithmetic and geometric series, proving that the seeds of analysis were sown across the globe long before the Scientific Revolution.
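The two limiting processes above can be sketched numerically. Below is a minimal illustration, assuming a unit-radius circle for the Archimedean part: Zeno's halving distances form the geometric series 1/2 + 1/4 + 1/8 + ..., whose partial sums approach 1, and Archimedes' exhaustion doubles the sides of an inscribed polygon so that half its perimeter approaches pi. The function names are ours, chosen for illustration.

```python
import math

# Zeno's dichotomy: the distances 1/2 + 1/4 + 1/8 + ... form a geometric
# series whose partial sums approach 1 -- the traveler does arrive.
def zeno_partial_sum(steps: int) -> float:
    return sum(0.5 ** k for k in range(1, steps + 1))

# Archimedes' method of exhaustion (a sketch): inscribe a regular hexagon
# in a unit circle (side length 1), then repeatedly double the number of
# sides. The half-angle identity s' = sqrt(2 - sqrt(4 - s^2)) gives the
# new side length, and half the perimeter approaches pi from below.
def exhaustion_pi(doublings: int) -> float:
    sides, s = 6, 1.0                            # regular hexagon
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))  # side after doubling sides
        sides *= 2
    return sides * s / 2                         # half the perimeter

print(zeno_partial_sum(50))   # approaches 1
print(exhaustion_pi(15))      # approaches pi from below
```

Neither computation ever "reaches" its target in finitely many steps, which is precisely the gap between Archimedes' intuition and the formal limit concept developed two millennia later.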
The Calculus Wars
The formal birth of mathematical analysis occurred in the 17th century, driven by a fierce rivalry between two men who never met. Isaac Newton and Gottfried Wilhelm Leibniz independently developed infinitesimal calculus, the engine that would power the Scientific Revolution. Newton, working in England, focused on the physical applications of motion and change, while Leibniz, in Germany, developed the notation and algebraic framework that remains standard today. Their work built upon the analytic geometry of Pierre de Fermat and René Descartes, who had introduced the Cartesian coordinate system in 1637. This system allowed mathematicians to describe curves and functions using coordinates, bridging the gap between algebra and geometry. However, the early calculus was built on shaky logical foundations. Mathematicians relied on the principle of the generality of algebra, assuming that rules valid for finite numbers applied to infinite quantities without proof. This period saw the rise of applied work, including the calculus of variations and the study of ordinary and partial differential equations, as scientists sought to approximate discrete problems with continuous ones. The tension between the intuitive power of calculus and its lack of rigor would haunt the field for the next two centuries.
The Monster Hunters
By the 19th century, the mathematical community began to fear that their theories were built on sand. Bernard Bolzano introduced the modern definition of continuity in 1816, but his work remained obscure until the 1870s. It was Augustin-Louis Cauchy who began to put calculus on a firm logical foundation in 1821, rejecting the loose algebraic assumptions of Euler and formulating calculus in terms of geometric ideas and infinitesimals. Cauchy introduced the concept of the Cauchy sequence, a precursor to the rigorous (epsilon, delta)-definition of limits that would be perfected by Karl Weierstrass. As the field matured, mathematicians began to investigate pathological objects that defied intuition, known as monsters. These included functions that were continuous everywhere but differentiable nowhere, and space-filling curves that pass through every point of a square. Georg Cantor developed naive set theory to handle these complexities, while Henri Lebesgue revolutionized the field by introducing a new theory of integration that could handle functions Riemann's method could not. Richard Dedekind constructed the real numbers using Dedekind cuts, filling the gaps between rational numbers to create a complete continuum. This era of scrutiny transformed analysis from a collection of useful tricks into a disciplined science, where the existence of a number or function had to be proven before it could be used.
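The rigor this era achieved can be stated compactly. Weierstrass's (epsilon, delta)-definition of the limit and Cauchy's convergence criterion, as they are standardly written today, read:

```latex
% Weierstrass's (epsilon, delta)-definition of the limit:
\lim_{x \to a} f(x) = L
\iff
\forall \varepsilon > 0 \;\exists \delta > 0 :
\quad 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

% Cauchy's criterion: (a_n) is a Cauchy sequence when
\forall \varepsilon > 0 \;\exists N \in \mathbb{N} :
\quad m, n \ge N \implies |a_m - a_n| < \varepsilon
```

The link to Dedekind's construction is that completeness is exactly the property his cuts secure: in the real numbers, every Cauchy sequence converges to a limit that actually exists, which is false in the rationals alone.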