Integer: the story on HearLore
Integer
The word integer originates from the Latin integer, meaning "whole" or literally "untouched," derived from the prefix in meaning "not" and the verb tangere meaning "to touch." For centuries, this term described only positive numbers, serving as a synonym for natural numbers before the concept of negative values was accepted into the mathematical mainstream. Leonhard Euler revolutionized this definition in 1765 with his work Elements of Algebra, explicitly expanding the scope of integers to include both positive and negative numbers. Before this pivotal moment, the phrase the set of the integers did not exist, and the very idea of subtracting a larger number from a smaller one to create a new type of number was largely rejected by mathematicians who viewed numbers as counts of physical objects. The history of the integer is a history of expanding the boundaries of what a number could be, moving from simple counting to a complex system capable of describing debt, direction, and abstract algebraic structures.
The German Letter Z
The symbol used to denote the set of all integers, the boldface Z, traces its lineage back to the German word Zahlen, which simply means "numbers." The notation is often attributed to the famous mathematician David Hilbert, yet its adoption was far from immediate or universal. The earliest known appearance of this specific notation in a textbook occurred in 1947, in the collective work Algèbre by Nicolas Bourbaki, but even then the mathematical community was slow to standardize the symbol. For decades, alternative notations persisted: some textbooks used the letter J, and a paper published in 1960 used Z to denote only the non-negative integers. It was not until 1961 that Z became the generally accepted standard in modern algebra texts for the set containing both positive and negative integers. The ambiguity of the symbol Z persists to this day, as some authors use it to denote the non-zero integers, while others use it for the set of integers modulo n, or even the set of p-adic integers, creating a landscape of notation that varies significantly between schools of thought.
The Ring of Zero
In the vast landscape of algebraic structures, the integers form the smallest ring containing the natural numbers, serving as the prototype for all rings. This unique status means that for any ring, there exists a unique ring homomorphism from the integers into that ring; in the language of category theory, the integers are the initial object in the category of rings. The integers are closed under addition and multiplication, meaning the sum and product of any two integers are always integers, but they are not closed under division. The quotient of two integers, such as 1 divided by 2, need not be an integer, which is why the integers do not form a field. The smallest field containing the integers as a subring is the field of rational numbers, a construction that mimics the familiar process of forming fractions from whole numbers. Indeed, only 1 and -1 have multiplicative inverses within the integers; this scarcity of inverses distinguishes the integers from fields and places them among the integral domains, where the product of two non-zero elements can never be zero.
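These closure properties, and the passage to the rationals, can be checked directly in a few lines of Python. This is a minimal sketch using the standard `fractions` module; the variable names are illustrative, not standard notation:

```python
from fractions import Fraction

# Closure: the sum and product of any two integers are again integers.
a, b = 7, -3
assert isinstance(a + b, int) and isinstance(a * b, int)

# No closure under division: 1 divided by 2 has no integer value; the
# quotient lives in the rationals, the smallest field containing Z.
half = Fraction(1, 2)
assert half * 2 == 1          # 2 gains a multiplicative inverse only in Q

# Integral domain: a product of two non-zero integers is never zero.
assert a * b != 0
```

The `Fraction` type mirrors the mathematical construction: a rational is a pair of integers (numerator, denominator) considered up to cancellation.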
Despite extending infinitely in both positive and negative directions, the set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. This counterintuitive property was formalized by Georg Cantor, who introduced the concepts of infinite sets and set theory at the end of the 19th century. The cardinality of the integers is denoted by aleph-null, the same cardinality as that of the natural numbers. A bijection, or one-to-one correspondence, can be constructed to map every integer to a natural number, proving that the integers are not a larger infinity than the naturals. This pairing allows mathematicians to treat the infinite set of integers with the same rigor as finite sets. The order structure of the integers is equally distinctive: they form the only nontrivial totally ordered abelian group whose positive elements are well-ordered. This ordering is compatible with the algebraic operations, ensuring that if a is less than b and c is less than d, then a plus c is less than b plus d.
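One such bijection can be written down explicitly by interleaving the integers as 0, -1, 1, -2, 2, and so on. The sketch below is one of many possible pairings, and the helper names `nat_to_int` and `int_to_nat` are choices made here for illustration:

```python
def nat_to_int(n: int) -> int:
    """Pair the naturals 0, 1, 2, 3, 4, ... with 0, -1, 1, -2, 2, ..."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def int_to_nat(z: int) -> int:
    """The inverse map: non-negative integers go to evens, negatives to odds."""
    return 2 * z if z >= 0 else -2 * z - 1

# Round-tripping every value shows the pairing is a genuine bijection.
assert all(int_to_nat(nat_to_int(n)) == n for n in range(1000))
assert all(nat_to_int(int_to_nat(z)) == z for z in range(-500, 500))
```

Because every integer is hit exactly once, the two sets have the same cardinality, exactly as Cantor's framework requires.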
The Subtraction Paradox
The formal construction of integers in modern set-theoretic mathematics resolves the paradox of subtraction by defining them as equivalence classes of ordered pairs of natural numbers. The intuition behind this construction is that the pair (a, b) stands for the result of subtracting b from a, allowing mathematicians to define arithmetic operations without any case distinction between positive and negative numbers. An equivalence relation is defined on these pairs such that (a, b) is equivalent to (c, d) precisely when a plus d equals b plus c. This method allows the negation of an integer to be obtained simply by reversing the order of the pair, and subtraction is defined as the addition of the additive inverse. Every equivalence class has a unique member that is of the form (n, 0) or (0, n), recovering the familiar representation of the integers as the set of all whole numbers, positive and negative. This abstract approach eliminates the tedious piecewise definitions required in traditional arithmetic, where each operation must be defined separately for positive numbers, negative numbers, and zero.
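The construction above is concrete enough to execute. In this sketch, an integer is modelled as a pair (a, b) read as "a minus b"; the function names (`equivalent`, `add`, `negate`, `subtract`) are chosen here for illustration:

```python
def equivalent(p, q):
    """(a, b) ~ (c, d) precisely when a + d == b + c."""
    (a, b), (c, d) = p, q
    return a + d == b + c

def add(p, q):
    """Componentwise addition -- no case split on signs is needed."""
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def negate(p):
    """Negation simply reverses the order of the pair."""
    a, b = p
    return (b, a)

def subtract(p, q):
    """Subtraction is addition of the additive inverse."""
    return add(p, negate(q))

# "3" represented as (5, 2), "-1" as (0, 1): their sum should be "2".
assert equivalent(add((5, 2), (0, 1)), (2, 0))
# 0 - 4 lands in the class of (0, 4), i.e. the integer -4.
assert equivalent(subtract((0, 0), (4, 0)), (0, 4))
```

Note that the same integer has many representatives: (5, 2), (3, 0), and (103, 100) all denote 3, and `equivalent` identifies them without ever normalizing to the canonical (n, 0) form.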
The Computer's Limit
In the realm of theoretical computer science, integers are often a primitive data type, yet practical computers can only represent a subset of all integers due to their finite capacity. The common two's complement representation used in programming languages like C, Java, and Algol 68 distinguishes between negative and non-negative values, but it cannot store the infinite set of all integers. Fixed-length integer approximation data types, denoted int or Integer, are limited to a number of bits that is typically a power of 2, such as 4, 8, 16, or 32 bits. Variable-length representations, known as bignums, can store any integer that fits within the computer's memory, but they require more complex algorithms to manage. Automated theorem provers and term rewrite engines use alternative constructions of integers, representing them as algebraic terms built from basic operations such as zero, succ, and pred. These constructions differ in the number of basic operations used and the types of arguments accepted; some tools use free constructors, which are simpler and more efficient to implement than the equivalence-class method used in pure mathematics.
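The contrast between the two representations can be demonstrated in Python, whose built-in `int` is itself a bignum. The helper below (`to_int32` is a name chosen for this sketch) simulates what a fixed 32-bit two's complement type would do with the same value:

```python
def to_int32(n: int) -> int:
    """Reduce an unbounded integer to 32-bit two's complement."""
    n &= 0xFFFFFFFF                    # keep only the low 32 bits
    return n - (1 << 32) if n & (1 << 31) else n   # reinterpret the sign bit

INT32_MAX = 2**31 - 1                  # 2147483647

# Python's bignum int does not overflow...
assert INT32_MAX + 1 == 2147483648

# ...but a fixed 32-bit type wraps around to the most negative value.
assert to_int32(INT32_MAX + 1) == -2**31
```

The wraparound is exactly the overflow behavior a fixed-width machine integer exhibits, which is why languages with only fixed-width types must guard arithmetic near the type's limits.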
The Fundamental Theorem
The integers possess a property known as Euclidean division, which states that given two integers a and b with b not equal to zero, there exist unique integers q and r such that a equals bq plus r, where the remainder r is non-negative and strictly less than the absolute value of b. This property implies that the integers form a Euclidean domain, which in turn implies that the integers are a principal ideal domain. The most profound consequence of this structure is the fundamental theorem of arithmetic, which states that any positive integer can be written as the product of primes in an essentially unique way. This theorem serves as the bedrock of number theory, ensuring that every integer greater than 1 has a unique prime factorization, up to the order of the factors. The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions, solving problems of divisibility and congruence without ever factoring the numbers involved. The absence of zero divisors in the integers ensures that if the product of two integers is zero, then at least one of the integers must be zero, a property that distinguishes the integers from many other algebraic structures.
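Both Euclidean division and the Euclidean algorithm fit in a short sketch. This version assumes the convention stated above, that the remainder satisfies 0 <= r < |b|; the function names are illustrative:

```python
def euclidean_division(a: int, b: int):
    """Return (q, r) with a == b*q + r and 0 <= r < |b|."""
    q, r = divmod(a, abs(b))           # Python's divmod gives 0 <= r < |b| here
    return (-q if b < 0 else q, r)     # flip q's sign when b is negative

def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: divide repeatedly until the remainder is zero."""
    while b:
        _, r = euclidean_division(a, b)
        a, b = b, r
    return abs(a)

q, r = euclidean_division(-7, 3)
assert -7 == 3 * q + r and 0 <= r < 3   # q == -3, r == 2
assert gcd(252, 105) == 21              # 252 = 2^2 * 3^2 * 7, 105 = 3 * 5 * 7
```

Notice that `gcd` never factors its arguments: each loop iteration performs one Euclidean division, and the last non-zero remainder is the greatest common divisor.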