Limit (mathematics)
The modern concept of a limit originates from a geometric proposition written over two thousand years ago, yet its true mathematical power remained hidden for centuries. In Proposition X.1 of Euclid's Elements, the ancient Greek mathematician described a process where, from two unequal magnitudes, one subtracts a part greater than half from the greater, and repeats this process continually until the remainder is less than the lesser magnitude. This method of exhaustion, used by Archimedes to calculate areas and volumes, was the first glimpse of a value being approached without ever quite reaching it in a finite number of steps. It was not until the 17th century that Grégoire de Saint-Vincent provided the first explicit definition of a limit, or terminus, in his 1647 work Opus Geometricum. He described the terminus of a geometric series as the end of the series that no progression can reach, even if continued to infinity, but which it can approach nearer than any given segment. This was a radical shift from the static geometry of the past to a dynamic understanding of approaching values, laying the groundwork for the calculus that would follow.
Newton's Hidden Epsilon Argument
Isaac Newton possessed a sophisticated understanding of limits that historians have only recently begun to fully appreciate, predating the formal definitions by over a century. In the Scholium to his 1687 Principia Mathematica, Newton stated that ultimate ratios are not actually ratios of ultimate quantities, but limits which quantities can approach so closely that their difference is less than any given quantity. Bruce Pourciau argues that Newton actually provided the first epsilon argument, a technique that would not be formalized for another hundred years. While Newton's contemporaries relied on intuitive notions of fluxions and infinitesimals, he had already grasped the core logic of the modern definition: that a difference can be made smaller than any arbitrary quantity. This insight was buried in the philosophical language of his time, and it was not until the 19th century that mathematicians like Bernard Bolzano and Augustin-Louis Cauchy would strip away the metaphysics to reveal the rigorous algebraic structure underneath. Newton's work remained a precursor to the formalism that would eventually define the field, proving that the intuition of the limit was present long before the notation could support it.
The Epsilon Delta Revolution
The rigorous definition of a limit that defines modern calculus was forged in the 19th century through the work of Bernard Bolzano and Augustin-Louis Cauchy. In 1817, Bolzano developed the basics of the epsilon-delta technique to define continuous functions, yet his work remained unknown to other mathematicians until thirty years after his death. It was Cauchy in 1821, followed by Karl Weierstrass, who formalized the definition of the limit of a function which became known as the epsilon-delta definition. This definition states that for every real number epsilon greater than zero, there exists a delta greater than zero such that whenever x is within delta of the point of approach, but not equal to it, the function value is within epsilon of the limit. This formalism replaced the vague notion of infinitesimals with a precise logical structure that could be applied to any function. The modern notation of placing the arrow below the limit symbol was invented by John Gaston Leathem in 1905 and popularized by G. H. Hardy's 1908 textbook A Course of Pure Mathematics. This revolution transformed calculus from a collection of powerful but logically shaky tools into a discipline built on unshakeable foundations, allowing mathematicians to prove theorems with absolute certainty.
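Written out in the standard notation, the epsilon-delta definition described above reads:

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\, \exists \delta > 0 :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

The strict inequality 0 < |x - a| excludes the point a itself, which is why the limit can exist even where the function is undefined.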
Common questions
When did Grégoire de Saint-Vincent provide the first explicit definition of a limit?
Grégoire de Saint-Vincent provided the first explicit definition of a limit in his 1647 work Opus Geometricum. He described the terminus of a geometric series as the end that no progression can reach even if continued to infinity. This definition marked a shift from static geometry to a dynamic understanding of approaching values.
What did Isaac Newton state about ultimate ratios in his 1687 work?
Isaac Newton stated in the Scholium to his 1687 work that ultimate ratios are not actually ratios of ultimate quantities but limits which quantities can approach so closely that their difference is less than any given quantity. Bruce Pourciau argues that Newton provided the first epsilon argument over a century before it was formalized. This insight established that a difference can be made smaller than any arbitrary quantity.
Who formalized the epsilon-delta definition of a limit in 1821?
Augustin-Louis Cauchy formalized the definition of the limit of a function in 1821 following the work of Bernard Bolzano. Karl Weierstrass also contributed to formalizing this definition which became known as the epsilon-delta definition. This definition states that for every epsilon greater than zero there exists a delta greater than zero such that whenever x is within delta of the point of approach but not equal to it the function value is within epsilon of the limit.
What is the limit of the sequence 0.9, 0.99, 0.999, and so on?
The limit of the sequence 0.9, 0.99, 0.999, and so on, written 0.999..., is exactly 1. This sequence demonstrates that a value can be approached arbitrarily closely and in the limit the difference between the sequence and the target value becomes zero. The distance between 0.999... and 1 is not just small but non-existent in the limit.
How is the derivative defined in terms of limits?
The derivative is defined as the limit of the difference quotient as h approaches 0. This definition relies on the limit of the quotient (f(x + h) - f(x))/h as h approaches 0. The derivative measures the instantaneous rate of change but its existence depends entirely on the limit of the difference quotient.
What is a Cauchy sequence in the context of real numbers?
A Cauchy sequence of real numbers is defined such that for every real number epsilon greater than zero there is an n such that whenever m and k are greater than n the distance between the mth and kth terms is less than epsilon. For sequences of real numbers the two notions are equivalent: every Cauchy sequence is convergent and every convergent sequence is Cauchy. A metric space in which every Cauchy sequence is also convergent is known as a complete metric space.
The expression 0.999... represents one of the most counterintuitive results in mathematics, where the limit of the sequence 0.9, 0.99, 0.999, and so on is rigorously shown to be exactly 1. This sequence demonstrates that a value can be approached arbitrarily closely, and in the limit, the difference between the sequence and the target value becomes zero. The formal definition requires that for every real number epsilon greater than zero, there exists a natural number n such that for all m greater than n, the absolute value of the difference between the mth term and 1 is less than epsilon. This means that the distance between 0.999... and 1 is not just small, but non-existent in the limit. The concept extends to sequences of real numbers where a sequence with a limit is called convergent, while one without a limit is divergent. Not every sequence has a limit, and a convergent sequence has only one limit, ensuring the uniqueness of the value. This paradoxical equality challenges the intuition that 0.999... is always slightly less than 1, revealing that in the realm of limits, the distinction vanishes entirely.
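The formal definition above can be checked mechanically. The sketch below (the helper names are ours, not from any standard library) uses exact rational arithmetic to find, for any given epsilon, an index beyond which every term of 0.9, 0.99, 0.999, ... lies within epsilon of 1:

```python
from fractions import Fraction

def partial_nines(n):
    """n-th term of the sequence 0.9, 0.99, 0.999, ...: exactly 1 - 10**(-n)."""
    return 1 - Fraction(1, 10 ** n)

def index_within(epsilon):
    """Smallest n such that |1 - partial_nines(n)| < epsilon (epsilon a Fraction)."""
    n = 1
    while abs(1 - partial_nines(n)) >= epsilon:
        n += 1
    return n

# The gap at term n is exactly 10**(-n), so for epsilon = 10**(-6)
# every term from the 7th onward is within epsilon of 1.
print(index_within(Fraction(1, 10 ** 6)))  # 7
```

Using Fraction rather than floats keeps the comparison exact, so the index found is the true threshold demanded by the definition.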
Oscillation and the Infinite
Not all sequences converge to a finite value, and some exhibit behavior that defies simple categorization. A sequence can be divergent without tending to infinity, a state known as oscillatory. An example of an oscillatory sequence is 1, -1, 1, -1, which never settles on a single value but fluctuates between two points. In contrast, a sequence tends to infinity if for each real number M, known as the bound, there exists an integer n such that for each m greater than n, the mth term is greater than M. This means that for every possible bound, the sequence eventually exceeds it and stays above it. The notion of tending to negative infinity is defined by reversing the inequality. A sequence that is bounded above cannot tend to positive infinity, and a sequence that is bounded below cannot tend to negative infinity. In the context of metric spaces, the discussion extends to sequences valued in more abstract spaces, where the limit is an element such that the distance between the sequence terms and the limit becomes arbitrarily small. This abstraction allows mathematicians to study convergence in spaces far removed from the real number line, including topological spaces where limits may not be unique.
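A small numeric sketch (with illustrative helper names of our own) contrasts the two behaviors: consecutive terms of the oscillatory sequence 1, -1, 1, -1 always differ by 2, so the sequence can never stay inside an interval narrower than 2, while the sequence a_n = n eventually exceeds any bound:

```python
def oscillating(n):
    """Terms 1, -1, 1, -1, ...: (-1)**(n + 1) for n = 1, 2, 3, ..."""
    return (-1) ** (n + 1)

def first_index_above(bound):
    """For a_n = n, the first index whose term exceeds the given bound."""
    n = 1
    while n <= bound:
        n += 1
    return n

# Adjacent oscillating terms are always distance 2 apart, so no limit exists.
print(abs(oscillating(5) - oscillating(6)))  # 2
# For any bound, a_n = n eventually exceeds it, as the definition requires.
print(first_index_above(1000))  # 1001
```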
Functions and the Shape of Continuity
The definition of continuity at a point is given through limits: a function is continuous at a if the limit of the function as x approaches a exists and equals the value of the function at a. The limit itself can exist even where the function is not defined, as seen in the function f(x) = (x^2 - 1)/(x - 1), which is not defined at x = 1, yet as x moves arbitrarily close to 1, the function correspondingly approaches 2. Algebraically, f(x) equals x + 1 for all real numbers except 1, and since x + 1 is continuous at 1, the limit can be found by plugging in 1 for x, giving 2. In the most general setting of topological spaces, a continuous function preserves limits, meaning that if a sequence converges to a point, the image of the sequence under the function converges to the image of the point. This property is fundamental to real analysis, where a continuous function is defined as one that is continuous at every point of its domain. The concept extends to function spaces, where different notions of convergence, such as pointwise and uniform convergence, determine whether the limit of a sequence of functions retains properties like continuity.
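The removable singularity described above is easy to probe numerically. In this sketch, evaluating f at points ever closer to 1 (from both sides) shows the values approaching 2, even though f(1) itself would divide by zero:

```python
def f(x):
    """f(x) = (x**2 - 1) / (x - 1); undefined at x = 1 (division by zero)."""
    return (x ** 2 - 1) / (x - 1)

# Approaching 1 from above and below: both sides head toward 2.
for h in (0.1, 0.01, 0.001):
    print(f(1 + h), f(1 - h))
```

The printed pairs close in on 2 from above and below, matching the algebraic simplification f(x) = x + 1 away from x = 1.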
The Derivative as a Limit
The derivative is defined formally as a limit, serving as the cornerstone of differential calculus. In the scope of real analysis, the derivative at a point x is defined as the limit of the difference quotient (f(x + h) - f(x))/h as h approaches 0, or equivalently, as the limit of (f(y) - f(x))/(y - x) as y approaches x. If the derivative exists, it is commonly denoted by f'(x). The derivative measures the instantaneous rate of change, but its existence depends entirely on the limit of the difference quotient. This connection between limits and derivatives allows mathematicians to analyze the behavior of functions at specific points, determining slopes, tangents, and rates of change. The derivative is a limit that captures the essence of change, transforming the static concept of a function into a dynamic tool for understanding motion and variation. Without the rigorous definition of limits, the derivative would remain an intuitive concept without a solid logical foundation.
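As a numeric illustration (the helper name is ours), shrinking h in the difference quotient drives the secant slope toward the true derivative. For f(x) = x**2 at x = 3, the quotient works out to exactly 6 + h, so it tends to f'(3) = 6 as h approaches 0:

```python
def difference_quotient(f, x, h):
    """(f(x + h) - f(x)) / h: slope of the secant line through x and x + h."""
    return (f(x + h) - f(x)) / h

square = lambda x: x ** 2  # derivative is 2x, so the slope at x = 3 is 6

# Each smaller h brings the secant slope closer to the tangent slope 6.
for h in (0.1, 0.01, 0.001):
    print(difference_quotient(square, 3, h))
```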
Cauchy Sequences and Completeness
A property of convergent sequences of real numbers is that they are Cauchy sequences, defined such that for every real number epsilon greater than zero, there is an n such that whenever m and k are greater than n, the distance between the mth and kth terms is less than epsilon. Informally, for any arbitrarily small error, it is possible to find an interval of diameter epsilon such that eventually the sequence is contained within the interval. Cauchy sequences are closely related to convergent sequences, and for sequences of real numbers, the two notions are equivalent: any Cauchy sequence is convergent. In general metric spaces, it continues to hold that convergent sequences are also Cauchy, but the converse is not true. A classic counterexample is the rational numbers, where the sequence of decimal approximations to the square root of 2 is a Cauchy sequence but does not converge in the rationals. A metric space in which every Cauchy sequence is also convergent is known as a complete metric space. This property ensures that the space contains all its limit points, making it a robust environment for analysis. The concept of completeness is crucial in functional analysis, where spaces like Banach spaces and Hilbert spaces are defined by their completeness, allowing mathematicians to solve equations and analyze functions with confidence.
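The counterexample above can be generated concretely. This sketch (helper name assumed) builds the decimal truncations of the square root of 2 as exact rationals; terms truncated at m and k places differ by less than 10 to the power of minus min(m, k), so the sequence is Cauchy, yet every term is rational while the limit is not:

```python
from fractions import Fraction
from math import isqrt

def sqrt2_approx(n):
    """Decimal truncation of sqrt(2) to n places, as an exact rational."""
    scale = 10 ** n
    # isqrt gives the integer part of sqrt(2 * scale**2), i.e. the first
    # n decimal digits of sqrt(2) with no floating-point error.
    return Fraction(isqrt(2 * scale * scale), scale)

# 1.4, 1.41, 1.414, 1.4142, ... : each term is rational, the limit is not.
for n in (1, 2, 3, 4):
    print(sqrt2_approx(n))  # 7/5, 141/100, 707/500, 7071/5000
```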