On the surface, logic is like a game of chess, a closed system with unambiguous rules governing every square of the board (or universe). One and one is always two; a king can move only one square at a time; the square of five is always twenty-five; a chessboard consists of sixty-four squares. In Principia Mathematica, Whitehead and Russell attempted to fashion a universal system of logic, with a neat set of rules for every situation. It was a gallant and prodigious effort, embracing three thick volumes, and to an admirable degree it succeeded. The work became one of the foundations of modern logic. As time went by, however, logicians began to realize that Principia’s eternal verities didn’t perform very well in certain situations.
In an effort to clarify the situation, the German mathematician David Hilbert (1862-1943) posed three questions that framed the issues at stake. Was logic complete, in the sense that every statement, from 1 + 1 = 2 on up, could be proved or disproved? Was it consistent, in the sense that contradictory statements, such as 1 + 1 = 3, could never be proved? And was it decidable, in the sense that there was a definite method for demonstrating the truth or falsity of every statement? (This last question came to be known as the Entscheidungsproblem, the decision problem.) In other words, was there such a thing as an unsolvable problem, and was there a tried-and-true method of determining solvability?
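The three questions can be stated compactly in one standard modern formalization; the turnstile notation below, reading “⊢ S” as “S is provable in the system,” is ours, not Hilbert’s.

```latex
% Hilbert's three questions, in one standard modern formalization.
% The turnstile $\vdash S$ (``$S$ is provable in the system'') is
% modern notation, not Hilbert's own.
\begin{align*}
\textbf{Completeness:} \quad & \text{for every statement } S,\ \vdash S \ \text{or}\ \vdash \lnot S \\
\textbf{Consistency:}  \quad & \text{there is no } S \text{ with both } \vdash S \text{ and } \vdash \lnot S \\
\textbf{Decidability:} \quad & \text{some definite procedure decides, for every } S, \text{ whether } \vdash S
\end{align*}
```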
These questions go to the heart of logic. And it turned out that, despite the best-laid plans of Whitehead, Russell, and other logicians, certain logical statements were indeed unsolvable, for now and forever. The culprits were self-referential statements such as the old Greek paradox “I am lying.” At one and the same time, the speaker of “I am lying” is lying and telling the truth: if he is lying, then he is telling the truth, but if he is telling the truth, how can he be lying? A statement that can be neither true nor false without contradicting itself, “I am lying” doubles back on itself like a figure in an Escher print. (And so does “The following sentence is true. The preceding sentence is false.”) The Austrian mathematician Kurt Gödel provided the first convincing demonstration that a logical system like Principia’s could not be both complete and consistent, drawing up a statement, in logical notation, that was a close cousin of “I am lying” – in effect, “This statement cannot be proved.” If the system could prove that statement, it would be proving a falsehood and be inconsistent; if it couldn’t, the statement would be true but unprovable, and the system incomplete. Although Gödel’s work hinted that logic was also undecidable, he did not present a proof that the Entscheidungsproblem was unsolvable.
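The paradox can even be checked by brute force. The following Python fragment is an illustration of ours, not anything in Gödel or Principia: it tries both possible truth values for “I am lying” and finds that neither is consistent with what the sentence asserts.

```python
# "I am lying" asserts that its own truth value is False.
# If the sentence has truth value x, it claims x == (not x).
# Try both candidate truth values and see whether either one
# agrees with what the sentence says about itself.

for assumed in (True, False):
    consistent = (assumed == (not assumed))
    print(f'assume "I am lying" is {assumed}: self-consistent? {consistent}')

# Output:
# assume "I am lying" is True: self-consistent? False
# assume "I am lying" is False: self-consistent? False
# Neither assignment works: the sentence doubles back on itself.
```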
It was Turing who, in “Computable Numbers,” offered the most imaginative and, for our purposes, most influential demonstration of undecidability. (He was not the first, however; Alonzo Church, a mathematician at Princeton University, had published a proof of undecidability a few months earlier.) With its three-part operational repertoire, a Turing machine can perform any logical operation; yet, no matter what it does, it can’t judge the truth or falsity of certain paradoxical statements or determine in advance whether they are solvable. Even if the machine’s operational catalog were enhanced, it still couldn’t resolve these statements. In other words, Turing showed that there is no litmus test of decidability – no mechanical method that can tell in advance whether a statement is provable; the only way to find out, obviously enough, is to try to solve it. At the same time, and quite incidentally, the Turing machine provided a cogent theoretical demonstration of both the potential and the limitations of machines. On the one hand, no machine, not even a computer equipped with an infinite storehouse of information and instructions, can answer every problem; on the other hand, a machine with the slimmest of operational abilities can solve a phenomenal range of problems. This was Turing’s first great contribution to our understanding of computers.
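Turing’s argument is often retold in modern programming terms. The Python sketch below is such a retelling, not the notation of “Computable Numbers”; the names halts and contrarian are ours, and the whole point is that no real halts function can ever be written.

```python
# Sketch of Turing's diagonal argument in modern terms.
# Suppose, for contradiction, that a universal decision procedure existed:

def halts(program_source: str, argument: str) -> bool:
    """Hypothetical oracle: True iff the given program, run on the
    given argument, eventually halts. No such function can exist."""
    raise NotImplementedError("Turing proved this cannot be written")

def contrarian(program_source: str) -> None:
    """Do the opposite of whatever halts() predicts."""
    if halts(program_source, program_source):
        while True:   # predicted to halt -> loop forever instead
            pass
    # predicted to loop forever -> halt immediately

# Now feed contrarian its own source code. If halts() says it halts,
# it loops forever; if halts() says it loops, it halts at once. Either
# answer is wrong, so the assumed oracle cannot exist: there is no
# general, mechanical litmus test of decidability.
```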
A highly theoretical paper, “Computable Numbers” had no discernible impact on the very untheoretical development of ENIAC and its predecessors. None of the early computer pioneers – Zuse, Atanasoff, Stibitz, Aiken, Eckert, or Mauchly – read Turing’s paper. However, von Neumann, who had met Turing at Cambridge, where the young Englishman was a fellow at King’s College, was well aware of his work, and there’s an uncanny resemblance between some of the central ideas in “Report on the EDVAC” and “Computable Numbers.” For example, the infinite tape of the Turing machine is essentially a general-purpose memory for both data and programs, and the universal Turing machine itself is really an unlimited stored-program computer. Odds are, though, that these similarities were purely coincidental. The remarkable thing is that Turing had, by intuition and through the back door, discovered what Eckert and Mauchly had learned through hard practice.
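That family resemblance can be made concrete. Below is a minimal sketch of the stored-program idea in Python: a single flat memory holds both the instructions and the numbers they operate on. The three-instruction vocabulary is invented for illustration and corresponds to no particular historical machine.

```python
# Toy stored-program machine: one flat memory holds both the program
# and its data, as in the EDVAC design and the universal Turing machine.
# The instruction set (LOAD, ADD, STORE, HALT) is invented for illustration.

memory = [
    ("LOAD", 9),    # 0: acc = memory[9]
    ("ADD", 10),    # 1: acc += memory[10]
    ("STORE", 11),  # 2: memory[11] = acc
    ("HALT", 0),    # 3: stop
    0, 0, 0, 0, 0,  # 4-8: unused cells
    2,              # 9: data
    3,              # 10: data
    0,              # 11: result goes here
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]   # fetch the next instruction *from memory itself*
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[11])  # prints 5: the program computed 2 + 3
```

Because the program sits in ordinary memory, it can be replaced, or in principle rewritten by the machine itself, without touching the hardware; that interchangeability of program and data is the kinship between Turing’s tape and the EDVAC design.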