— Ch. 1 · Foundations Of Intelligence —
Philosophy of artificial intelligence.
In 1956, a group of researchers gathered at Dartmouth College to launch the field of artificial intelligence. The proposal for their conference declared that every aspect of learning, or any other feature of intelligence, could be described so precisely that a machine could be made to simulate it. This bold assertion became the foundational premise for decades of research and philosophical debate.

Alan Turing had already proposed in 1950 that if a machine behaves as intelligently as a human being, then it is as intelligent as a human being. He appealed to what he called a polite convention: instead of arguing endlessly about whether other people think, we simply accept that they do. Turing extended this convention to machines, asking whether computers could qualify under the same standard. The question shifted from abstract metaphysics to practical behavior: can a machine solve problems that humans solve by thinking? This behavioral definition allowed AI researchers to sidestep deep questions about consciousness while still pursuing functional goals.

Stuart Russell and Peter Norvig later formalized intelligence as goal-directed behavior, defining an agent as something that perceives and acts in an environment so as to maximize its expected success based on past experience. They preferred the term rational to intelligent because it avoids testing for unintelligent human traits such as making typing mistakes. Yet such definitions fail to distinguish between things that think and things that merely act. A thermostat qualifies as a simple intelligent agent under these criteria, even though no one claims it possesses understanding. The basic position of most AI researchers remains tied to what machines can achieve rather than how they achieve it.
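To make the point about the thermostat concrete, here is a minimal sketch of the perceive-act loop behind the agent definition. All names in it (Thermostat, setpoint, act) are invented for illustration and are not drawn from Russell and Norvig's text.

```python
# A minimal sketch of a thermostat as a perceive-act "agent".
# Names and values here are illustrative, not from the source.

class Thermostat:
    """Trivial agent: perceives a temperature, acts toward a goal state."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # the goal the agent's actions serve

    def act(self, perceived_temp: float) -> str:
        # Goal-directed behavior: pick the action expected to move the
        # environment toward the setpoint. Nothing here resembles
        # understanding; behavior alone satisfies the definition.
        return "heat_on" if perceived_temp < self.setpoint else "heat_off"


agent = Thermostat(setpoint=20.0)
print(agent.act(18.5))  # heat_on
print(agent.act(21.0))  # heat_off
```

The toy makes the philosophical point above tangible: under a purely behavioral criterion, even this loop counts as a rudimentary agent.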
Symbol Systems And Minds
Allen Newell and Herbert A. Simon proposed in 1963 that symbol manipulation was the essence of both human and machine intelligence. Their physical symbol system hypothesis claimed that a physical symbol system has the necessary and sufficient means for general intelligent action. This assertion implies two strong conclusions: first, that human thinking is a kind of symbol manipulation; second, that machines can be intelligent if they manipulate symbols correctly.

Most AI programs written between 1956 and 1990 used word-like high-level symbols that corresponded directly with objects in the world, such as <dog> or <tail>, treating them as discrete units manipulated according to formal rules. Hubert Dreyfus later described this view as the psychological assumption: the idea that the mind operates on bits of information according to explicit procedures. Modern AI based on statistics and mathematical optimization, however, does not use this kind of high-level symbol processing, and the shift away from symbolic logic marked a turning point in how researchers approached cognition.

Turing had anticipated objections of this sort in his discussion of the argument from the informality of behaviour, the claim that no complete set of laws could govern conduct as complex as human action. He argued that just because we do not know the rules does not mean no such rules exist, and that scientific observation is the only way to find them. Russell and Norvig noted that, in the years since Dreyfus published his critique, progress had been made toward discovering the rules that govern unconscious reasoning. Computational intelligence paradigms such as neural nets and evolutionary algorithms now simulate unconscious learning rather than relying on predefined symbolic structures. The two sketches below contrast the two styles.
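First, a toy in the symbolic style. The facts, the single rule, and the forward-chaining loop are invented for this sketch; they are not Newell and Simon's actual programs, only an illustration of discrete, word-like symbols rewritten by an explicit rule.

```python
# Toy "physical symbol system": high-level symbols manipulated by an
# explicit formal rule. All facts and the rule are invented here.

facts = {("dog", "is-a", "mammal"), ("mammal", "has", "tail")}

def rule(fs):
    # If X is-a Y and Y has Z, conclude X has Z.
    return {(x, "has", z)
            for (x, r1, y1) in fs if r1 == "is-a"
            for (y2, r2, z) in fs if r2 == "has" and y1 == y2}

# Forward chaining: apply the rule until it yields no new facts.
new = rule(facts)
while not new <= facts:
    facts |= new
    new = rule(facts)

print(("dog", "has", "tail") in facts)  # True
```

Every step is an inspectable manipulation of symbols like <dog> and <tail>, exactly the kind of processing Dreyfus's psychological assumption describes.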
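For contrast, a sketch in the statistical style: a perceptron that learns the logical AND function by adjusting numeric weights. The data, learning rate, and epoch count are arbitrary choices for the illustration; the point is that no explicit rule is ever written down.

```python
# Statistical/optimization style: a perceptron learns AND from examples
# by nudging numeric weights. Data, learning rate, epochs are arbitrary.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a handful of passes suffices for this toy problem
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred
        # No symbolic rule for "AND" appears anywhere; the behavior
        # emerges from the numbers as the error is driven to zero.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

The learned weights encode the function implicitly, which is why paradigms like neural nets are described above as simulating unconscious learning rather than relying on predefined symbolic structures.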