Function (mathematics)
In 1734, Leonhard Euler introduced the notation f(x), which would become part of the universal language of mathematics, yet the concept of a function itself remained a vague, intuitive idea for well over a century afterward. Before the 19th century, most mathematicians took for granted that a function had to be a smooth, continuous curve that could be drawn with a single stroke of a pen. They assumed that any function describing physical reality, such as the position of a planet over time, must be differentiable, with no sharp corners or breaks. This rigid view confined mathematics to the study of smooth curves and differential equations, effectively excluding any function that behaved erratically or discontinuously. The history of the function concept is a slow evolution from geometric intuition to precise logical definition, a transformation that required the invention of set theory to articulate fully. The shift from viewing functions as geometric objects to viewing them as abstract rules of assignment fundamentally altered the trajectory of modern science and engineering.
The Formal Definition That Broke the Rules
The decisive shift in the understanding of functions came in the 19th century, when mathematicians realized that the old definition was insufficient for the new problems arising in analysis. In 1837, Peter Gustav Lejeune Dirichlet proposed a definition that stripped away the requirement for smoothness, stating that a function is simply an association of one output to each input, regardless of whether that association can be described by a formula or drawn as a curve. This definition allowed for the existence of functions that are nowhere continuous, such as the Dirichlet function, which equals 1 for rational numbers and 0 for irrational numbers, a mapping that earlier mathematicians would not have recognized as a function at all. The formalization of functions in terms of set theory, specifically as a subset of the Cartesian product of two sets, provided the rigorous foundation needed to handle these pathological cases. The set-theoretic approach defines a function as a binary relation satisfying two conditions: every element of the domain is related to at least one element of the codomain, and to at most one, so that each input determines exactly one output. This abstraction liberated mathematics from the constraints of geometry and calculus, allowing for the development of complex analysis, topology, and modern algebra.
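The two set-theoretic conditions are easy to check mechanically for finite sets. The sketch below, with hypothetical helper names, treats a function as a set of ordered pairs and tests totality (every domain element appears as a first coordinate) and uniqueness (no domain element is paired with two different outputs):

```python
def is_function(pairs, domain):
    """Check whether a set of ordered pairs is a function on `domain`:
    every domain element appears exactly once as a first coordinate."""
    firsts = [a for (a, _) in pairs]
    total = set(firsts) == set(domain)        # at least one pair per input
    unique = len(firsts) == len(set(firsts))  # at most one pair per input
    return total and unique

# The squaring map on {-3, ..., 3} is a function: one output per input.
square = {(x, x * x) for x in range(-3, 4)}
print(is_function(square, range(-3, 4)))   # True

# This relation pairs 1 with both 2 and 3, so it is not a function.
relation = {(1, 2), (1, 3), (2, 4)}
print(is_function(relation, {1, 2}))       # False
```

Note that the squaring map passes even though -3 and 3 share the output 9; the uniqueness condition constrains inputs, not outputs.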
The Invisible Domain of Partial Functions
A subtle but critical distinction in the study of functions is the concept of the partial function, which exists when the domain of definition is a proper subset of the intended set. In calculus, a real-valued function of a real variable is often a partial function because the formula defining it may be undefined for certain values, such as when a denominator becomes zero. The determination of the domain of definition for such functions can be as difficult as solving major open problems in mathematics. For instance, finding the domain of the multiplicative inverse of the Riemann zeta function is more or less equivalent to proving or disproving the Riemann hypothesis, one of the most famous unsolved problems in mathematics. In computability theory, the domain of a general recursive function is the set of inputs for which the associated algorithm eventually halts, and by the undecidability of the halting problem, there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether a specific value belongs to its domain. The existence of these partial functions highlights the limits of computation and the complexity of defining the boundaries of mathematical objects.
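The reciprocal x ↦ 1/x is the standard elementary example of a partial function: its formula is undefined where the denominator vanishes. A minimal sketch of one common convention, returning None outside the domain of definition rather than raising an error:

```python
def reciprocal(x):
    """Partial function x -> 1/x: defined only for nonzero x."""
    if x == 0:
        return None      # 0 lies outside the domain of definition
    return 1 / x

print(reciprocal(4))     # 0.25
print(reciprocal(0))     # None
```

Here the domain of definition is trivial to compute; the point of the paragraph above is that for functions like the inverse of the Riemann zeta function, the analogous computation is an open problem.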
Common questions
When did Leonhard Euler introduce the notation f(x) for functions?
Leonhard Euler introduced the notation f(x) in 1734. This notation became the universal language of mathematics and remains the standard today.
Who proposed the modern definition of a function in 1837?
Peter Gustav Lejeune Dirichlet proposed the modern definition of a function in 1837. He stated that a function is simply an association of one output to each input regardless of whether that association can be described by a formula or drawn as a curve.
What is the vertical line test used for in mathematics?
The vertical line test provides a simple geometric criterion to determine if a relation is a function. If any vertical line intersects the graph at more than one point, the relation is not a function.
When were the terms injective, surjective, and bijective coined?
The terms injective, surjective, and bijective were originally coined by the Bourbaki group in the second quarter of the 20th century. These terms replaced older phrases like one-to-one and onto in advanced mathematics.
What is the relationship between the domain of the Riemann zeta function and the Riemann hypothesis?
Finding the domain of the multiplicative inverse of the Riemann zeta function is more or less equivalent to proving or disproving the Riemann hypothesis. The Riemann hypothesis is one of the most famous unsolved problems in mathematics.
The Geometry of Graphs
The visual representation of a function, known as its graph, serves as a powerful tool for understanding the relationship between inputs and outputs. Formally, the graph of a function is the set of all ordered pairs consisting of an element from the domain and its corresponding image in the codomain. When the domain and codomain are sets of real numbers, these pairs can be plotted as points in a two-dimensional Cartesian plane, creating a curve that illustrates the behavior of the function. The vertical line test provides a simple geometric criterion to determine if a relation is a function: if any vertical line intersects the graph at more than one point, the relation is not a function. This geometric interpretation was central to the early development of calculus, where functions were visualized as smooth curves. However, the graph of a function is not limited to two dimensions; for multivariate functions, the graph may exist in higher-dimensional spaces, such as a surface in three dimensions. The ability to visualize functions through graphs has made them indispensable in science and engineering, allowing researchers to interpret complex data and predict future behavior based on observed trends.
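For a finite set of plotted points, the vertical line test reduces to a simple check: no fixed x-coordinate may appear with two different y-coordinates. A small sketch with a hypothetical helper name:

```python
def passes_vertical_line_test(points):
    """Return True if no vertical line meets the plotted points twice."""
    seen = {}                          # x -> the single y allowed at that x
    for x, y in points:
        if x in seen and seen[x] != y:
            return False               # the line at this x hits two points
        seen[x] = y
    return True

# A sampled parabola is the graph of a function.
parabola = [(x, x * x) for x in range(-3, 4)]
print(passes_vertical_line_test(parabola))   # True

# Points on a circle fail: the line x = 0 meets both (0, 1) and (0, -1).
circle = [(0, 1), (0, -1), (1, 0), (-1, 0)]
print(passes_vertical_line_test(circle))     # False
```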
The Language of Notation and Symbols
The way mathematicians write functions has evolved to accommodate the increasing complexity of the concepts being studied. Functional notation, which uses a letter followed by an argument in parentheses, such as f(x), was first used by Leonhard Euler in 1734 and remains the standard today. This notation allows for the concise expression of operations and the composition of functions, where the output of one function becomes the input of another. Arrow notation, which uses the symbol ↦ to denote the rule of a function without requiring a name, provides a way to define functions inline, such as x ↦ x + 1. Index notation is used for functions whose domain is the set of natural numbers, known as sequences, where f(n) is written fₙ and called the nth element of the sequence. Specialized notations have also emerged in sub-disciplines, such as bra–ket notation in quantum mechanics and lambda notation, from the lambda calculus, in logic and computer science. These notations allow for the precise expression of function abstraction and application, enabling the development of functional programming languages like Haskell. The evolution of notation reflects the need for clarity and precision in a field that deals with increasingly abstract and complex relationships.
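Arrow notation and composition both have direct programming counterparts: an anonymous function plays the role of x ↦ x + 1, and a compose helper (hypothetical name) chains one function's output into another's input:

```python
successor = lambda x: x + 1      # the inline rule x |-> x + 1

def compose(f, g):
    """(f o g)(x) = f(g(x)): the output of g becomes the input of f."""
    return lambda x: f(g(x))

double = lambda x: 2 * x
h = compose(successor, double)   # x |-> 2x + 1
print(h(5))                      # 11
```

Composition is not commutative: compose(double, successor) gives x ↦ 2(x + 1) instead, so the order of the arrows matters.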
The Properties of One-to-One and Onto
Functions possess specific properties that determine their behavior and their invertibility, leading to the classification of functions as injective, surjective, or bijective. A function is injective, or one-to-one, if distinct elements in the domain map to distinct elements in the codomain, meaning that no two inputs produce the same output. A function is surjective, or onto, if every element in the codomain is the image of at least one element in the domain, meaning that the range of the function equals its codomain. A function that is both injective and surjective is called bijective, or a one-to-one correspondence, and it admits an inverse function. These properties are fundamental to the study of functions, as they determine whether a function can be reversed and whether it preserves the structure of the sets involved. The terms injective, surjective, and bijective were originally coined by the Bourbaki group in the second quarter of the 20th century, replacing older terms like one-to-one and onto. The distinction between these terms is crucial in advanced mathematics, where the precise nature of the mapping determines the validity of theorems and the solvability of equations.
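When the domain and codomain are finite, these three properties can be checked directly from the definitions: injectivity by comparing the number of distinct outputs to the number of inputs, surjectivity by comparing the range to the codomain. A sketch with hypothetical helper names:

```python
def is_injective(f, domain):
    """Distinct inputs must yield distinct outputs."""
    images = [f(x) for x in domain]
    return len(set(images)) == len(images)

def is_surjective(f, domain, codomain):
    """The range must equal the whole codomain."""
    return {f(x) for x in domain} == set(codomain)

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

mod3 = lambda x: x % 3
print(is_injective(mod3, range(6)))              # False: 0 and 3 share output 0
print(is_surjective(mod3, range(6), range(3)))   # True: every residue is hit
print(is_bijective(mod3, range(3), range(3)))    # True: acts as the identity
```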
The Multi-Valued Mystery of Complex Functions
In the realm of complex analysis, the concept of a function expands to include multi-valued functions, which assign more than one value to a single input. This phenomenon arises when defining the inverse of a function, such as the square root, which has two values for any positive real number: a positive root and a negative root. When extending the domain of a complex function through analytic continuation, one may encounter different values depending on the path taken around a singularity, a phenomenon known as monodromy. To resolve this, mathematicians define a branch cut, a curve along which the function is discontinuous, to select a principal value. Alternatively, they consider the function as multi-valued, allowing it to jump between values when following a closed loop around a singularity. This complexity is not merely a theoretical curiosity but has practical implications in physics and engineering, where the behavior of complex functions is used to model fluid dynamics and electromagnetic fields. The study of multi-valued functions has led to the development of Riemann surfaces, which provide a geometric framework for understanding these functions.
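The principal-value convention is visible in Python's standard library: cmath.sqrt returns one of the two square roots of a complex number, selected by a branch cut along the negative real axis, while the other root is its negation:

```python
import cmath

z = -4 + 0j
w = cmath.sqrt(z)     # principal square root, chosen by the branch cut
print(w)              # 2j
print(-w)             # -2j, the other value of the multi-valued root
print(w * w)          # (-4+0j): both w and -w square back to z
```

The jump the text describes appears just below the branch cut, where cmath.sqrt(-4 - 0.0001j) is close to -2j rather than 2j.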
The Foundations of Computation and Logic
The concept of a function has become the cornerstone of computer science and the foundations of mathematics, serving as the basis for the theory of computation. In computer programming, a function is a subroutine that produces an output for each input, and functional programming is a paradigm that builds programs using only subroutines that behave like mathematical functions, meaning they have no side effects and depend only on their arguments. The lambda calculus, a theory that defines computable functions without using set theory, provides the theoretical background for functional programming and has been used to define the semantics of programming languages. The Church-Turing thesis claims that every philosophically acceptable definition of a computable function defines the same set of functions, a claim that has been supported by the equivalence of various models of computation, including general recursive functions, lambda calculus, and Turing machines. In the foundations of mathematics, functions are taken as primitive notions in type theory, and the study of function spaces allows for the use of algebraic and topological properties to solve differential equations. The function remains the central object of investigation in most fields of mathematics, bridging the gap between abstract logic and practical application.
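The claim that the lambda calculus defines computation without set theory can be made concrete with Church numerals, which encode the natural numbers purely as functions. A minimal sketch in Python, which supports lambda expressions directly:

```python
# Church numerals: the number n is "apply f, n times".
zero = lambda f: lambda x: x                        # f applied zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))     # one more application

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)

# Addition: apply f m times on top of n applications.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
print(to_int(add(two)(three)))   # 5
```

Every function here is pure, taking arguments and returning results with no side effects, which is exactly the discipline functional programming borrows from the mathematical notion of a function.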