Ted Fussell – Commended Entry in the 2021 Teddy Rocks Maths Essay Competition
This is the story of David Hilbert and his grand attempt to formalise all of mathematics into a single, unified system. It is also the story of Alan Turing who, in attempting to disprove Hilbert’s scheme, came up with the idea of an abstract machine that unintentionally became the basis of all computer science. Despite being created to solve a completely different problem, this Turing machine has taken on a life of its own and is now the fundamental blueprint for modern computers.
As a mathematician working at the beginning of the 20th century, David Hilbert was not a happy man. The maths of the day was becoming more and more abstract, and thinkers were beginning to question, and as a result disagree about, what maths fundamentally was. Some followed Plato’s theory of forms, holding that shapes, numbers and other mathematical objects were idealisations of real-world objects. Others believed in intuitionism (founded by L. E. J. Brouwer, drawing on Kant), which holds that mathematics is simply a creation of the human mind, or in Bertrand Russell’s logicism, which sees mathematics as an extension of logic, with all mathematical truths formally derivable within a logical system.
To resolve these discrepancies, Hilbert formulated a grand programme, seeking to place all existing mathematical ideas and theories on a finite, complete set of axioms (fundamental truths from which other statements can be proven), from which the entirety of mathematics could be derived free of paradoxes and inconsistencies. This was similar to Euclid’s ‘Elements’, except on a much grander scale. In 1928, Hilbert set out the three main goals for his system: consistency (a proof that no contradiction can be obtained in the system), completeness (a proof that all true mathematical statements within the system can be proven by the system) and decidability (an ‘effective procedure’ for deciding whether any mathematical statement is true or false). This final criterion was also known as the unpronounceable ‘Entscheidungsproblem’, or ‘decision problem’.
Sadly for Hilbert, in 1931 the logician Kurt Gödel released his two ‘incompleteness theorems’ upon the world, shattering Hilbert’s dream of a unified and consistent mathematical framework. Gödel’s first theorem proved that in any consistent formal system powerful enough to contain addition and multiplication, there would exist true mathematical statements that could not be proved. His second proved that any such system cannot prove its own consistency. However, the Entscheidungsproblem still remained open.
Enter Alan Turing, who was studying at Cambridge in the early 1930s when he heard about Hilbert’s programme and the decision problem. But to go about solving the problem, he first needed to formalise what an ‘effective procedure’ meant. Turing took it to mean a set of instructions that one could follow to complete a task without any actual thought or reasoning. So, Turing came up with an incredibly simple theoretical setup for running a given ‘effective procedure’. This became known as the Turing machine, and whilst it was never a physical creation, its invention was instrumental in answering the decision problem in the negative, as well as accidentally birthing the entire field of computer science.
A Turing machine consists of a tape, a read/write head, and a set of instructions. The tape is a strip of paper divided into cells, each containing either a 1 or a 0, which can always be extended if the machine runs off the end. The read/write head can read the contents of a cell, change a 0 to a 1 and vice versa, and move back and forth along the tape. Finally, the machine has a set of instructions that tell it what to do depending on what symbol the head reads.
These instructions are divided into an arbitrary finite number of ‘states’, groups of instructions that the machine can switch between. For example, state 1 might say ‘if you see a 0, leave it be, move 1 to the left and go to state 3. If you see a 1, change it to a 0, stay in the same place and go to state 5’. The state system allows the machine to take in incredibly complex programs whilst only using two symbols on the tape, as the symbols result in different actions depending on the state. Also, the Turing machine could have a ‘halt’ command, which would tell it to stop running the program when certain conditions are met to prevent the machine from looping forever. This setup is very simple, and yet it is all that’s needed to do any kind of computation.
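The setup above is simple enough to sketch in a few lines of code. Below is a minimal, hypothetical simulator: rules map a (state, symbol) pair to (symbol to write, head movement, next state), exactly as in the example instructions, and a blank marker ‘_’ stands for cells the input never filled (a common convention, not mentioned above). The rule table shown is an invented example machine that flips every bit and then halts.

```python
def run(tape, rules, state="start", max_steps=10_000):
    tape = dict(enumerate(tape))   # cell index -> symbol
    head = 0
    for _ in range(max_steps):     # step cap so a badly written machine can't loop forever
        if state == "halt":
            break
        symbol = tape.get(head, "_")               # "_" marks an unwritten cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape) if tape[i] != "_"]

# An example rule table: flip every bit, moving right, and halt past the input.
flip = {
    ("start", 0): (1, +1, "start"),    # see a 0: change it to a 1, move right
    ("start", 1): (0, +1, "start"),    # see a 1: change it to a 0, move right
    ("start", "_"): ("_", 0, "halt"),  # past the input: stop
}

run([1, 0, 1, 1], flip)  # -> [0, 1, 0, 0]
```

Note that `run` itself plays the role of a universal machine here: it takes any rule table as data and carries out that machine’s behaviour.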
This theoretical machine allowed Turing to formalise computation and the following of a procedure. A Turing machine would be programmed with various states to fulfil a task (for example, adding together two numbers), the tape would be loaded with an input (in this case the two numbers you wanted to add), and the machine would run and then halt, having produced an output on the tape. As long as you could figure out how to program the machine, it could do practically anything. Then Turing went one step further: he proved the existence of a ‘universal Turing machine’, one which would receive the program of any other Turing machine as part of its input and perform that machine’s function. The input of this machine would include both the instructions the machine was to follow and the numbers it was to perform them on, enabling it to suit any purpose. Having just casually invented the computer (in theory at least), Turing was now ready to tackle the decision problem.
First, Turing converted the decision problem into something equivalent but more manageable. For example, to investigate a statement such as the famous Goldbach conjecture (that every even number greater than two is the sum of two primes), you could tell the universal Turing machine to look at every even number starting from four and have it check every way that number can be written as a sum of two smaller numbers. If it finds a pair of primes, it moves on to the next even number; otherwise the machine halts. If the machine were ever to halt, the Goldbach conjecture would have been disproven, and if it provably never halts, the conjecture is true. Therefore, if you can find a way to determine whether the Goldbach-crunching Turing machine will halt, you’ve settled the conjecture. This allowed Turing to rephrase the decision problem into what became known as the halting problem: is there a procedure by which you can determine whether any input to the universal Turing machine will cause it to halt?
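The Goldbach-crunching machine described above can be sketched directly (the function names here are my own, and the trial-division primality test is just the simplest choice):

```python
from itertools import count

def is_prime(n):
    """Trial division: n is prime if no d in 2..sqrt(n) divides it."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    """Can the even number n be written as a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample():
    """Scan even numbers 4, 6, 8, ... and halt only on a counterexample.

    This is the essay's point: the loop halts if and only if the Goldbach
    conjecture is false, so deciding whether it halts settles the conjecture.
    """
    for n in count(4, 2):
        if not goldbach_holds(n):
            return n  # halting here would disprove the conjecture
```

Running `search_for_counterexample()` for real would (as far as anyone knows) simply never return; the interest lies entirely in whether it halts.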
To prove this, Turing used everyone’s favourite technique: proof by contradiction. The proof goes roughly as follows. Say there exists a procedure (in the form of a Turing machine we’ll call ‘Halts’) that will determine whether a given input will halt or loop forever (it doesn’t matter how the procedure works, just assume it does). We then take this theoretical machine and augment it with a module that either halts or goes into an infinite loop depending on the output of ‘Halts’. If ‘Halts’ outputs a yes (i.e. the input will halt), the new machine goes into an infinite loop. Conversely, if ‘Halts’ outputs a no (i.e. the input will loop forever), the new machine halts. Turing then takes the instructions for the entire machine (‘Halts’ plus the additional module) and feeds them into the machine itself as an input. ‘Halts’ then asks, ‘will this input cause the machine to halt?’. If ‘Halts’ answers yes, the new machine loops forever, so the correct answer was actually no; but if ‘Halts’ answers no, the new machine halts, so the correct answer was actually yes. Either way ‘Halts’ gives the wrong answer, a paradox, meaning that the original premise (the existence of ‘Halts’) must be false. Hence the answer to the halting problem (and by proxy the decision problem) is a resounding no.
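The core of the contradiction can be captured in a tiny sketch. We can’t implement ‘Halts’ (that’s the whole point), but we can model the augmented machine’s rule, do the opposite of whatever ‘Halts’ predicts, and check that every possible verdict comes out wrong (the function name is my own invention):

```python
def troublemaker_behaviour(halts_verdict):
    """What the augmented machine actually does, given the verdict of 'Halts'.

    The module does the opposite of the prediction: if 'Halts' says the
    machine halts, it loops forever; if 'Halts' says it loops, it halts.
    """
    return "loops" if halts_verdict == "halts" else "halts"

# Feed the machine its own description: whatever 'Halts' predicts,
# the machine does the opposite, so the prediction is always wrong.
for verdict in ("halts", "loops"):
    assert troublemaker_behaviour(verdict) != verdict
```

Since no possible output of ‘Halts’ can be correct about this self-referential input, no such machine can exist.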
Turing published this miraculous result in his 1936 paper ‘On computable numbers, with an application to the Entscheidungsproblem’, which has been described as ‘easily the most influential math paper in history’; the American logician Alonzo Church (who would later supervise Turing’s PhD) independently reached the same conclusion using a different, but equivalent, method called lambda calculus. This paper was the killing blow to Hilbert and his grand programme, ending his plan to formalise all of maths (luckily this isn’t too much of an issue, as it’s still possible to formalise pretty much all the maths anyone actually cares about, so present-day mathematicians aren’t running into paradoxes all the time). But at the same time, it also single-handedly gave birth to a whole new discipline: computer science. In what was effectively a brief aside, Turing also argued that his universal machine could perform any mathematical process that can be represented by an algorithm; a Turing machine can compute anything computable, a statement known today as the Church-Turing thesis. The consequence of this thesis is that the Turing machine is as powerful as any model of computation that can be constructed in the real world. This meant that not only had Turing come up with the concept of a modern programmable computer (a general-purpose machine that takes some instructions and some numbers as input and produces an output using algorithms), but he had, in a sense, also come up with its final form.
Even though today’s laptops and iPhones seem infinitely more complex than Turing’s tape and head, at a fundamental level they’re identical. Anything an iPhone can do, a Turing machine could theoretically also do, albeit with an absurdly greater amount of time and resources required. Even quantum computers, with their seemingly mystical abilities and mysterious qubits, can’t do anything extra compared to a Turing machine; for certain problems they are simply far faster. Being able to do anything a Turing machine can do is a property known as ‘Turing completeness’, and it is the highest level of computational ability a system can have. As one might expect, the programming languages that run phones and computers, such as C++, Pascal and Python, are all proven to be Turing complete. However, many sillier systems are also Turing complete, often accidentally, including Conway’s Game of Life, Microsoft PowerPoint, Minecraft, and the trading card game Magic: The Gathering. Indeed, it has been mathematically proven that anything your phone can do could also be achieved by an incredibly large (but decidedly finite) arrangement of logic gates made of dominoes.
Finally, these discoveries have some interesting philosophical implications. The human brain is often compared to a computer following algorithms and, although this is a very contentious topic, is considered by some to be Turing complete. If that is indeed the case, then it follows that, just as with the halting problem, there exist certain problems that the human brain is fundamentally incapable of solving, not due to a lack of time or for biological reasons, but because of the limits placed on computation by the decision problem. Conversely, if the brain is not a Turing machine, then the way we think must be entirely independent of computation, and possibly beyond it, enabling us to potentially solve problems that an algorithm never could.
A. M. Turing, ‘On Computable Numbers, with an Application to the Entscheidungsproblem’
T. Wildenhain, ‘On the Turing Completeness of MS PowerPoint’
D. Darling and A. Banerjee, ‘Weird Maths: At the Edge of Infinity and Beyond’
M. Parker, ‘Things to Make and Do in the Fourth Dimension’
Tom Scott, Up and Atom, Computerphile and Tom Wildenhain on YouTube