Every electronic computer ever built, and every language that directs one, traces its roots to Alan Turing.

On June 23, 1912, Alan Mathison Turing was born in Paddington, England. He would have been 98 this month. When Turing was born, a computer was someone who made computations, usually long and tedious, the answer to which might predict the ebb and flow of the tides or the next date for an eclipse of the sun. In his lifetime, Turing changed the meaning of the word computer from the description of a person to that of a machine that could do what humans did.

Today, Alan Turing is best remembered for his test of artificial intelligence. He proposed that if a machine could make a human questioner believe that the machine was human, then that machine could be said to have intellect. His 1950 article, “Computing Machinery and Intelligence,” proposed how the test could be conducted and also suggested how a machine could be trained. The Turing Test has become an AI benchmark.

The Entscheidungsproblem

Turing’s true greatness—his genius—however, rests on another article, published in 1936 while he was a fellow at King’s College. The article is titled “On Computable Numbers, with an Application to the Entscheidungsproblem.” In 1928 the German mathematician David Hilbert posed three questions about the foundations of mathematics. His third question asks whether a mechanical procedure exists that can decide the provability of any formula of first-order logic. It is this question that is called, in German, the Entscheidungsproblem. By the time Turing became interested in Hilbert’s questions, Gödel had published his Incompleteness Theorems, which dealt with the first two. Turing took up the challenge of the third: does a general, mechanical process for deciding such questions exist at all?

In the first paragraph of his paper, Turing defines a computable number as a real number whose expression as a decimal is calculable by finite means. He concludes his definition by saying, “... a number is computable if its decimal can be written down by a machine.” Turing goes on to provide this basic description of the machine that would do the computing:

“The machine is supplied with a ‘tape,’ running through it, and (the tape) is divided into sections (called ‘squares’) each capable of bearing a ‘symbol.’ At any moment there is just one square ... which is ‘in the machine.’ We may call this square the ‘scanned square.’ The symbol on the scanned square may be called the ‘scanned symbol.’”

Having described the architecture of his machine, Turing describes its operation as proceeding step by step, with each step carrying the machine from one configuration to the next. A configuration is described by the machine’s internal state, the position of the read-and-write head, and the symbol on the scanned square; together these determine the action the machine takes and the configuration that follows.

And that is what Turing called an a-machine.

The principles of operation for the a-machine indicate that each program step has a command (read, move the tape, print, or erase) and a configuration setting. As presented by Turing, an a-machine instruction would be in two parts: first the command which is informed by the configuration, followed by a setting of the configuration.
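
To make the two-part instruction concrete, here is a minimal sketch of an a-machine in modern Python. The dictionary stands in for Turing’s table of instructions: for each pair of configuration and scanned symbol it gives the symbol to print (if any), the direction to move, and the next configuration. The configuration names, the dictionary representation, and the `run` helper are inventions for this article; the example machine, which prints 0 and 1 on alternating squares forever, is only modeled on the first example in Turing’s paper.

```python
# A minimal a-machine sketch. The table maps (configuration, scanned symbol)
# to (symbol to print or None, direction to move, next configuration).
table = {
    ("b", None): ("0", "R", "c"),   # on a blank square in configuration b: print 0, move right
    ("c", None): (None, "R", "e"),  # leave the square blank, move right
    ("e", None): ("1", "R", "f"),   # print 1, move right
    ("f", None): (None, "R", "b"),  # leave blank, move right, and start over
}

def run(table, start, steps):
    """Step an a-machine for a fixed number of moves and return the tape."""
    tape, head, config = {}, 0, start
    for _ in range(steps):
        scanned = tape.get(head)                      # the "scanned symbol"
        symbol, move, config = table[(config, scanned)]
        if symbol is not None:
            tape[head] = symbol                       # print on the scanned square
        head += 1 if move == "R" else -1              # move to the next square
    return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))

print(run(table, "b", 20))   # -> 0_1_0_1_0_1_0_1_0_1
```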

Turing then makes this amazing statement, “It is my contention that these operations include all those which are used in the computation of a number.”

Turing gives this a-machine, as he gives to all of his a-machines, a standard description and, from it, a description number. The description number (we can think of it as an encoding of the program) is like a serial number in that it identifies the specific machine and, with it, the computation that machine carries out. The standard description lists, configuration by configuration, the symbols read, the actions taken, and the configuration that follows as the machine works through its solution.
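
As a rough illustration of how a machine’s table can be flattened into a single description, the sketch below (reusing the table from the sketch above) turns each instruction into a string of letters and then replaces the letters with digits to get one number. The letter scheme follows the spirit of Turing’s encoding, with configurations written as D followed by A’s and symbols as D followed by C’s, but it is a simplification written for this article, not a transcription of the paper’s rules.

```python
def standard_description(table, configs, symbols):
    """Flatten an instruction table into one string, one instruction per ';'."""
    parts = []
    for (config, scanned), (printed, move, nxt) in table.items():
        parts.append(
            "D" + "A" * (configs.index(config) + 1)   # current configuration
            + "D" + "C" * symbols.index(scanned)      # scanned symbol (blank is a bare D)
            + "D" + "C" * symbols.index(printed)      # symbol to print
            + move                                    # L, R, or N
            + "D" + "A" * (configs.index(nxt) + 1)    # next configuration
        )
    return ";".join(parts) + ";"

def description_number(sd):
    """Replace each letter of the standard description with a digit."""
    digits = {"A": "1", "C": "2", "D": "3", "L": "4", "R": "5", "N": "6", ";": "7"}
    return int("".join(digits[ch] for ch in sd))

# Applied to the table sketched above, this yields one long integer that
# completely identifies the machine.
sd = standard_description(table, ["b", "c", "e", "f"], [None, "0", "1"])
print(description_number(sd))
```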

The Concept of Computability

Returning to Turing’s paper, he then posed the question at the core of the computability problem itself. Could a machine be built, to be called the Universal Machine, which would take as input another Turing machine’s standard description number, use the same architecture as the basic a-machine, and carry out that machine’s computation? If so, machines themselves, and the questions they compute, could be treated as data. Turing then set about to describe just such a Universal Machine.

The Universal Machine is a programmable computer. When the Universal Machine is executing a program that is the coded standard description of an a-machine, it makes itself behave as if it were that a-machine. Given the same input as the other machine, the Universal Machine will produce the same output.
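
The idea that the Universal Machine “makes itself behave as if it were that a-machine” can be completed from the earlier sketches: a program that receives another machine only as a description number, recovers its table, and then steps through it with the same `run` helper. The decoding details are again inventions for this article rather than Turing’s construction; the point is only that the simulated machine arrives purely as data.

```python
import re

def decode(number, configs, symbols):
    """Recover an instruction table from a description number (inverse of the sketch above)."""
    letters = {"1": "A", "2": "C", "3": "D", "4": "L", "5": "R", "6": "N", "7": ";"}
    sd = "".join(letters[d] for d in str(number))
    table = {}
    for entry in sd.rstrip(";").split(";"):
        m = re.fullmatch(r"D(A*)D(C*)D(C*)([LRN])D(A*)", entry)
        table[(configs[len(m.group(1)) - 1], symbols[len(m.group(2))])] = (
            symbols[len(m.group(3))],        # symbol to print (blank prints nothing new)
            m.group(4),                      # direction to move
            configs[len(m.group(5)) - 1],    # next configuration
        )
    return table

# The Universal Machine in miniature: the machine to be simulated arrives only
# as a number, is decoded back into a table, and is stepped through with the
# same `run` helper as before.
recovered = decode(description_number(sd), ["b", "c", "e", "f"], [None, "0", "1"])
print(run(recovered, "b", 20))   # same tape as the original machine
```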

Turing divided his a-machines into two kinds: circular machines, which eventually get stuck and stop producing new digits (think of a computation that never delivers its answer), and circle-free machines, which go on printing the digits of their number without end; only a circle-free machine computes a number in his sense. He then asked whether any machine, given the standard description of an a-machine, could print a 0 or a 1 to signify whether that machine was circular or circle-free. After six more pages of rigorous proof, Turing concluded:

“We are now in a position to show that the Entscheidungsproblem cannot be solved.”

Turing’s argument was straightforward in outline. His machines can carry out any computation that proceeds by definite, finite steps, yet there are well-posed questions, the Entscheidungsproblem among them, that no such computation can settle. In the 75 years since Turing published his paper, his extraordinary claim has held up: no one has exhibited an effective procedure that a basic Turing machine cannot carry out, and the computable numbers include, among much else, all of the algebraic numbers.
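
The flavor of the impossibility argument can be conveyed in a few lines. The sketch below is the modern halting-problem form of the diagonal argument, not Turing’s own circular-versus-circle-free construction, and the function names are invented for illustration; the whole point is that a working `halts` cannot exist.

```python
def halts(program, data):
    """Hypothetical decider: report whether program(data) would eventually stop.
    Turing's result is that no such general decider can be written."""
    raise NotImplementedError

def contrary(program):
    """Do the opposite of whatever the decider predicts about a program run on itself."""
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    return "halted"          # predicted to loop, so halt at once

# Feeding `contrary` to itself leads to contradiction either way:
# if halts(contrary, contrary) were True, contrary(contrary) would loop forever;
# if it were False, contrary(contrary) would halt. So `halts` cannot exist,
# and with it falls any mechanical test for Hilbert's decision problem.
```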

Defining the Computer

Although the mathematical result that was the essence of Turing’s article was of great theoretical importance, a side effect of its publication was arguably of far greater practical importance. Here, as part of Turing’s proof, was a rigorous specification of a mechanical device that included input, output, memory, and the ability to process algorithms expressed as commands. A computer.

Of course, Turing’s Universal Machine was theoretical. It wasn’t until a number of later developments, including John von Neumann’s description of what would come to be known as the von Neumann architecture for computers with internally stored programs, that an actual Universal Machine became a reality. Every electronic computer ever built is such a Universal Machine, and every such machine, along with the languages that direct it, can trace its roots to Turing.

Alan Turing lived to see only the beginnings of his ideas becoming reality. After a distinguished career as a code-breaker during World War II, Turing spent a few years at one academic appointment and a few more years writing programs for the University of Manchester. Then, on June 7, 1954, he took his own life by poisoning.

Alan Turing died fifty-six years ago, but his genius lives on. It all started in June of 1912.

Dan Wohlbruck has over 30 years of experience with computers, with over 25 years of business and project management experience in the life and health insurance industry. He has written articles for a variety of trade magazines and websites. He is currently hard at work on a book on the history of data processing.
