How a great mathematician solved a classic problem and laid the theoretical foundation for modern computers.
In March 1935, at the age of 22, Alan Turing was elected a fellow of King’s College at Cambridge University. In May of the next year, he submitted a paper with the opaque title of “On Computable Numbers, with an Application to the Entscheidungsproblem.”
It was shortly after Turing received his fellowship that he was introduced to the challenge of the Entscheidungsproblem. Posed by David Hilbert in 1928, the Entscheidungsproblem asks whether a definite procedure can be devised by which any mathematical assertion can be analyzed to determine whether that assertion can be proven.
Hilbert had set out a grand challenge: he intended to put mathematics on a solid footing, and to do so, he explained that you had to show three things. You had to show that mathematics was complete, meaning that all true statements could be proven and all other statements would be provably false; that it was consistent, meaning that no contradictions could be arrived at within the system; and that it was decidable, meaning that for any assertion there was a finite mechanical process that would decide its truth or falsity. He set these challenges for the mathematicians and logicians of his day, and the third of these challenges was the Entscheidungsproblem.
In English, “Entscheidungsproblem” translates to “decision problem,” and three quarters of a century later, every computer that has ever existed can trace its origins to the way that Alan Turing went about meeting the challenge that the decision problem presented.
A Lonely Genius
From early childhood, Alan Mathison Turing stood out from the crowd. He taught himself to read in three weeks but resisted writing almost until he entered university. He objected to the conformity of public school but didn’t rebel against it so much as withdraw from it. On his own, he performed chemical experiments and worked through mathematical problems. He became a long-distance runner of considerable ability. And when, against the judgment of his teachers, he sat for the examination to enter King’s College, he was accepted.
Turing became a fellow of King’s largely as the result of a dissertation in which he independently proved the central limit theorem of probability. He is most often thought of as a mathematician. But his interests were broader than this: in his early years at King’s College, he became acquainted with Einstein’s theories and Russell’s symbolic logic. Turing particularly liked the fact that Einstein was more concerned with the outcome of his theory of relativity than with the rigor of its proof.
Given his background, Alan Turing would have begun his assault on the Entscheidungsproblem by restating it in his own words and then reducing it to symbolic logic. In his own words, Turing might have said that he was looking for an algorithm that could decide whether any given mathematical expression is computable. A mathematical expression is computable if there is an algorithm that allows its values to be calculated. Turing conjectured that no algorithm existed that could prove the computability of every expression.
In order to reduce his conjecture to symbolic logic, he looked at how people solve problems. He reflected in great detail on the individual steps that a human takes when solving a problem. In his own words: what is someone thinking about as he or she multiplies the two numbers 4,231 and 77? Turing noticed that the human computer began with the units column and multiplied 7 and 1. The human computer must then change his or her state of mind (Turing’s expression) in order to write the product on a piece of paper. After storing the result on paper, the human computer proceeds to the tens column, repeating similar steps and similar changes in his or her state of mind. At the conclusion of the operation, the product—325,787—is known. Through this detailed examination of the calculating process, Turing determined that at any single instant in time, the human computer was concentrating on a single operation.
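The column-by-column routine Turing observed can be sketched in a few lines of Python. The function name and structure here are illustrative, not anything from Turing’s paper; each pass through the inner loop corresponds to one of the single operations the human computer concentrates on.

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply two numbers column by column, the way a human
    computer would: one digit at a time, carrying as needed."""
    total = 0
    for i, b_digit in enumerate(reversed(str(b))):      # units column first
        carry = 0
        partial = 0
        for j, a_digit in enumerate(reversed(str(a))):
            product = int(a_digit) * int(b_digit) + carry  # one "state of mind"
            carry, digit = divmod(product, 10)
            partial += digit * 10 ** j                     # write the digit on paper
        partial += carry * 10 ** len(str(a))               # final carry, if any
        total += partial * 10 ** i                         # shift for this column
    return total

print(long_multiply(4231, 77))  # 325787
```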
And it was here, at this moment in his deliberations, that Turing’s genius was manifest.
The Invention of Computing
While on a run in the English countryside, he thought about the human computers and their calculations. As one stride followed another, he dissected each logical step. If he could devise an algorithm for the steps, he thought, then the process of calculation could be mechanized. As the miles went by, the rhythm of the run gave Turing the inspiration that he needed. When he returned to his room at King’s College, the world had changed. For the first time in the history of the human race, one of its members had thought like a modern computer—Alan Turing had invented computing.
He reasoned that a machine, using an infinitely long strip of paper tape segmented in squares, could reproduce the steps of the human calculator. When the human wrote an interim result on paper, the machine would record a result on the paper tape. When the human changed his or her state of mind, the machine would, too. People have memory and use it in calculating; if the paper tape could move backward and forward, the machine could have some short-term memory, too. And, most remarkably, the machine would use symbols to identify the operation and the result. All of the steps from each completed computation, like the product of 77 and 4,231 or the solution to 2x = 3 + 5, would be kept, and could be rerun to duplicate the algorithm.
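A machine of this kind is simple enough to sketch directly. The Python below is a minimal, hypothetical rendering of the idea — a tape of symbols, a head that reads and writes as the tape moves, and a table of rules indexed by “state of mind.” The rule format and the binary-increment example are illustrative choices, not taken from Turing’s paper.

```python
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", steps=1000):
    """Run a tiny Turing machine: rules maps (state, symbol) to
    (symbol_to_write, move_direction, next_state); '_' is a blank square."""
    tape = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                    # print on the tape
        head += {"L": -1, "R": 1}[move]       # move along the tape
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

# Rules to increment a binary number, head starting on its leftmost digit:
rules = {
    ("start", "0"): ("0", "R", "start"),  # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "inc"),    # step back onto the last digit
    ("inc", "1"):   ("0", "L", "inc"),    # 1 + 1 = 10: write 0, carry left
    ("inc", "0"):   ("1", "L", "halt"),   # absorb the carry
    ("inc", "_"):   ("1", "L", "halt"),   # carry past the leftmost digit
}
print(run_turing_machine(rules, "1011"))  # 1100
```

Every modern instruction set still reduces to this loop: read, write, move, change state.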
Now came a great mental leap. Turing imagined another machine, a universal machine, that would take as input the rolls of paper tape from the first machine’s completed computations and detect if that specific algorithm had a solution.
In his paper “On Computable Numbers,” Turing described just such a machine. It would be able to execute instructions that would enable the machine to read, print, move the paper tape, and set the machine’s state of mind. After providing several examples of how his machines would work, Turing concluded: “We are now in a position to show that the Entscheidungsproblem cannot be solved.” With that, Turing the mathematician solved the decision problem and Turing the genius laid the theoretical foundation for modern computers.
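The heart of Turing’s negative answer is a diagonal argument: if a universal decision procedure existed, you could build a program that does the opposite of whatever the procedure predicts about it. The Python below is entirely illustrative — `halts` stands in for the supposed decider (here it simply answers True, but any fixed strategy fails the same way).

```python
def halts(program):
    """A supposed universal decider: does `program` ever finish?
    (This stand-in always answers True; the argument defeats any answer.)"""
    return True

def troublemaker():
    # Do the opposite of whatever the decider predicts about us.
    if halts(troublemaker):
        while True:        # predicted to halt, so loop forever
            pass
    # predicted to loop, so halt immediately

claim = halts(troublemaker)   # the decider's prediction
actually_halts = not claim    # troublemaker's actual behavior
assert claim != actually_halts  # the decider is always wrong here
```

Because no decider can be right about the program built to contradict it, no universal decision procedure can exist — this contradiction is the core of Turing’s proof that the Entscheidungsproblem cannot be solved.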
Alan Turing died in 1954 but his legend, like his machines, lives on.
Dan Wohlbruck has over 30 years of experience with computers, with over 25 years of business and project management experience in the life and health insurance industry. He has written articles for a variety of trade magazines and websites. He is currently hard at work on a book on the history of data processing.