In 1613, the English writer Richard Brathwait used the word computer to describe a person, not a machine, when he wrote of "the truest computer of times" in his book The Yong Mans Gleanings. For centuries, the term referred exclusively to human beings who performed calculations, often women hired to do tedious arithmetic for less pay than their male counterparts. By 1943, most human computers were women, working in offices and laboratories to solve mathematical problems too long and laborious for any one person to handle unaided. These individuals were the original processors of information, using abacuses, slide rules, and mechanical aids to carry out sequences of operations that would later be automated. The transition from human to machine began with the Industrial Revolution, when mechanical devices like the Jacquard loom used punched cards to automate patterns, foreshadowing the programmable nature of future computers. The concept of a machine that could be programmed to carry out sequences of arithmetic or logical operations was not a sudden invention but a gradual evolution from these early human efforts. The first digital calculating machines were developed during World War II, some electromechanical and some electronic, built from thermionic valves, marking the beginning of a new era in which the machine would replace the human calculator.
The Mechanical Mind
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer in the early 19th century, though his designs were never fully realized in his lifetime. In 1822, he announced his invention of the difference engine, a machine designed to compute mathematical tables, and by 1833 he had realized that a much more general design, the analytical engine, was possible. The analytical engine was to be a general-purpose computer that could be programmed via punched cards, a method then in use to direct mechanical looms. It would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. The machine, however, was about a century ahead of its time: every part had to be made by hand, a major problem for a device with thousands of parts. The project was eventually dissolved when the British government decided to cease funding, and Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties, as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit in 1888 and gave a successful demonstration of its use in computing tables in 1906. Babbage's work laid the foundation for the future of computing, proving that machines could be programmed to perform complex tasks.
During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications, leading to the development of the first electronic digital programmable computer. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women, but to crack the more sophisticated German Lorenz SZ 40/42 machine, Max Newman and his colleagues commissioned Tommy Flowers to build the Colossus. Flowers spent eleven months from early February 1943 designing and building the first Colossus, which was delivered to Bletchley Park on the 18th of January 1944 and attacked its first message on the 5th of February. Colossus was the world's first electronic digital programmable computer, using a large number of valves, or vacuum tubes, and paper-tape input. It could be configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built; the Mark II, with 2,400 valves, was both five times faster and simpler to operate than the Mark I, greatly speeding the decoding process. The ENIAC, or Electronic Numerical Integrator and Computer, was the first electronic programmable computer built in the United States, combining the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine, and was built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The programmers of the ENIAC were six women, often known collectively as the ENIAC girls, who had to set the program manually by resetting plugs and switches. These machines were the precursors to modern computers, demonstrating the potential of electronic digital computing.
The Stored Program
The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers, where he introduced the concept of a universal Turing machine. Turing proved that such a machine is capable of computing anything that is computable by executing instructions stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer, and his 1945 report Proposed Electronic Calculator was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945, which further developed the concept of the stored program. The Manchester Baby was the world's first stored-program computer, built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on the 21st of June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device, and although the computer was described as small and primitive by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1, which in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. The stored program architecture revolutionized computing by allowing programs to be stored in memory alongside the data they operate on, making computers more flexible and powerful.
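The flavor of Turing's abstraction can be conveyed with a short sketch. Below is a minimal Python simulation of a (decidedly non-universal) Turing machine that increments a binary number; the state names and rule table are invented for illustration, not taken from Turing's paper, but the read-a-rule, write, move cycle is the essential mechanism his 1936 paper formalized.

```python
# A tiny Turing machine: a rule table drives a head that reads and writes
# symbols on a tape and moves left or right. This one adds 1 to a binary
# number. The states and rules here are illustrative only.

# rules: (state, symbol_read) -> (symbol_to_write, head_move, next_state)
RULES = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus carry gives 0; carry moves left
    ("carry", "0"): ("1", -1, "done"),   # 0 plus carry gives 1; finished
    ("carry", " "): ("1", -1, "done"),   # ran off the left edge: new leading digit
}

tape = list(" 1011 ")  # blanks pad the binary number 1011 (decimal 11)
head = 4               # start on the rightmost digit
state = "carry"

while state != "done":
    write, move, state = RULES[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape).strip())  # prints 1100 (decimal 12)
```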
The Transistor Revolution
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925, and John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the second generation of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. The MOSFET, or metal-oxide-semiconductor field-effect transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits; it remains the most widely used transistor in computers and the fundamental building block of digital electronics. The development of the MOS integrated circuit led to the invention of the microprocessor and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima, and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip, paving the way for the microcomputer revolution.
The Digital Age
Since ENIAC in 1945, computers have advanced enormously, with modern System on a Chip, or SoC, devices being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. Mobile computers were once heavy and ran from mains power, but early laptops such as the Grid Compass removed this requirement by incorporating batteries, and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s, and these smartphones and tablets, powered by SoCs that are complete computers on a microchip, run on a variety of operating systems and have recently become the dominant computing devices on the market.

A general-purpose computer has four main components: the arithmetic logic unit, the control unit, the memory, and the input and output devices. These parts are interconnected by buses, often made of groups of wires, and inside each of them are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit of information: when the circuit is on it represents a 1, and when off it represents a 0. The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
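To make the gate picture concrete, here is a minimal sketch in Python of a half adder, the classic one-bit adding circuit built from two gates. The gate functions and the construction are textbook material rather than anything specific to this article, and real hardware implements them as transistor circuits rather than code.

```python
# Each value is a single bit (0 or 1); simple logic gates combine bits so that
# some circuits control the state of others, as described above.

def AND(a, b):  # output is 1 only when both inputs are 1
    return a & b

def XOR(a, b):  # output is 1 when exactly one input is 1
    return a ^ b

def half_adder(a, b):
    """Add two one-bit numbers, returning (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# All four input combinations: 0+0, 0+1, 1+0, 1+1
for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} = carry {carry}, sum {s}")
```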
Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area; there are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed, and since data is constantly being worked on, reducing the need to access main memory greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random-access memory, or RAM, and read-only memory, or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable; it is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary. In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input and output devices are the means by which a computer exchanges information with the outside world; devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives, and optical disc drives serve as both input and output devices. Computer networking is another form of input and output, and computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, and it led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA, now DARPA, and the computer network that resulted was called the ARPANET.

The ability to store and execute lists of instructions is the defining feature of modern computers which distinguishes them from all other machines: some type of instructions, the program, can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation.

Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. Errors in computer programs are called bugs. They may be benign and not affect the usefulness of the program, or have only subtle effects; in some cases, however, they may cause the program or the entire system to hang, becoming unresponsive to input such as mouse clicks or keystrokes, to fail completely, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer: since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design. Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term bugs in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. The concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture.
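As a minimal sketch of this stored-program idea, the following Python fragment keeps an invented three-instruction program and its data in one shared memory array and runs a fetch-decode-execute loop over it. The instruction format and opcode names are made up for illustration and bear no relation to any real machine's code.

```python
# Instructions and data share one memory; a loop fetches, decodes, and
# executes one instruction at a time.

# Program: add the numbers at addresses 8 and 9, store the result at 10, halt.
# Each instruction is a tuple (opcode, operand_a, operand_b, destination).
memory = [
    ("LOAD_ADD", 8, 9, 10),        # mem[10] = mem[8] + mem[9]
    ("PRINT", 10, None, None),     # print mem[10]
    ("HALT", None, None, None),
    None, None, None, None, None,  # unused cells (addresses 3-7)
    2, 3, 0,                       # data: mem[8]=2, mem[9]=3, mem[10] result
]

pc = 0  # program counter: address of the next instruction
while True:
    opcode, a, b, dest = memory[pc]  # fetch and decode
    pc += 1
    if opcode == "LOAD_ADD":         # execute: an ALU operation
        memory[dest] = memory[a] + memory[b]
    elif opcode == "PRINT":          # execute: an output operation
        print(memory[a])             # prints 5
    elif opcode == "HALT":
        break
```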
In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on; this is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers, and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember, a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language, and converting programs written in assembly language into something the computer can actually understand is usually done by a computer program called an assembler.
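A toy illustration of that translation step, in Python: a hypothetical assembler that maps each mnemonic to a numeric opcode and flattens a three-line program into machine code. The mnemonics, opcode values, and one-operand instruction format are all invented for the sketch; real assemblers also handle labels, addressing modes, and much more.

```python
# Translate mnemonics like ADD into the numeric opcodes a machine executes.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "SUB": 0x03, "STORE": 0x04, "JUMP": 0x05}

def assemble(source: str) -> list[int]:
    """Turn lines like 'ADD 9' into flat machine code: [opcode, operand, ...]."""
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        machine_code.append(OPCODES[mnemonic])   # look up the numeric opcode
        machine_code.append(int(operand))        # operand is a memory address
    return machine_code

program = """
LOAD 8
ADD 9
STORE 10
"""
print(assemble(program))  # [1, 8, 2, 9, 4, 10]
```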
A programming language is a notation system for writing the source code from which a computer program is produced, and programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise; they are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter, and sometimes programs are executed by a hybrid of the two techniques. There are thousands of programming languages, some intended for general-purpose programming, others useful for only highly specialized applications.

Programming in a high-level language is considerably easier than programming in machine language, and writing long programs in assembly language is often difficult and error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently, and thereby help reduce programmer error. High-level languages are usually compiled into machine language, or sometimes into assembly language and then into machine language, using another computer program called a compiler. High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer; this is part of the means by which software like video games may be made available for different computer architectures, such as personal computers and various video game consoles.

The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies, and the task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
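Python itself offers a convenient, if language-specific, illustration of these translation strategies: CPython first compiles source code into bytecode, a lower-level instruction list, and then an interpreter executes that bytecode, the hybrid approach mentioned above. The standard library's dis module prints that bytecode, making the gap between high-level source and machine-like instructions visible.

```python
# CPython compiles this function to bytecode before interpreting it; dis shows
# the machine-like instruction list (opcode names vary across Python versions).
import dis

def area(width, height):
    return width * height

dis.dis(area)
# Typical output (Python 3.11-style):
#   LOAD_FAST    width
#   LOAD_FAST    height
#   BINARY_OP    *
#   RETURN_VALUE
```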