Rise of the Machines II
In my last entry, we introduced a formal definition of a Turing machine (TM). In this article we will look more closely at this mental device and see how it works. To begin with, we can examine what a physical TM might look like. I have included a picture from Wikipedia.
Turing Machine Illustration, Wikipedia (drawing after Minsky, 1967, p. 121)
A TM is made of a tape of infinite length, consisting of a chain of cells. Each cell holds a symbol, which the machine can read or write over, one cell at a time, using the machine's head. At any given time, the machine is in one of a finite number of states. It can perform basic operations: move right one cell, move left one cell, read, print, erase, and change state. By changing from one state to another, the machine can remember previously attained states. The action the machine takes depends on the symbol at the head's current position and the machine's state at that moment. The possible state and symbol combinations, with their associated actions, can be listed in a transition table, which effectively describes an algorithm for solving the problem.
Let us start by building a TM, call it M, which accepts strings of the form ωω^R, where ω^R is the reverse of the string ω. This language can be written as:
L = { ωω^R | ω ∈ {0, 1}* and ω^R is the reverse string of ω }
For example, consider the input string 0, 1, 1, 1, 1, 0. M will accept it, since 1, 1, 0 is the reverse of 0, 1, 1. Because M accepts every string in L and rejects every string not in L, M is said to decide the language L. Next, let us consider how machine M might accomplish this.
Observe that the first symbol of the string must equal the last, the second must equal the second from last, and so forth. M begins in the initial state q1 with the head positioned at the leftmost symbol of the input string. It reads the first symbol, stores it in memory, erases it, and enters state q2. It then moves one cell to the right at a time; as long as it reads a '0' or a '1', it stays in q2 and continues right. When it encounters a blank cell, it enters state q3 and moves one cell to the left, so that the head is positioned at the last symbol of the input string. It reads that symbol, and if it matches the stored one, it erases it, enters state q4, and moves left until it reads a blank, at which point it re-enters q1, moves right, and repeats the procedure on the shortened string. If at any time M encounters a mismatch, it enters state qreject and halts. If q1 reads a blank, every symbol has been matched and erased, so M enters qaccept and halts. If, immediately after changing direction at a blank, M reads another blank, then a lone middle symbol was just erased; the input length was odd, so the string cannot be of the form ωω^R, and M enters qreject and halts.

This process is a finite sequence of instructions that decides L on any input. It might take longer for larger (finite) inputs, but it uses the same rules and eventually halts. The power of this scheme is that when the machine comes to a condition it has encountered before, that is, some combination of qi ∈ Q and σk ∈ Σ, it executes the same instructions from that point in the transition table; essentially, it repeats itself.
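Before pinning this strategy down as a transition table, it may help to see it in ordinary code. The following is a minimal Python sketch, not a TM itself: the list stands in for the tape, and popping from both ends stands in for the match-and-erase round trips.

```python
def accepts(s: str) -> bool:
    """Decide L = { w + reverse(w) } the way M does: repeatedly
    match and erase the outermost pair of symbols."""
    tape = list(s)
    while tape:
        first = tape.pop(0)      # q1: read, store, and erase the leftmost symbol
        if not tape:             # a lone middle symbol: odd length, so reject
            return False
        if tape.pop() != first:  # q3: compare and erase the rightmost symbol
            return False         # mismatch: q_reject
    return True                  # everything matched and erased: q_accept

print(accepts("011110"))  # True
print(accepts("10"))      # False
```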
A possible transition table for M is shown in Table 1 below. Note that each row of the table follows the form of the transition function:
δ : Q × Γ → Q × Γ × {L, R, S}
where Q = {q1, q2, q3, q4, qreject, qaccept} is the set of states, Σ = {0, 1} is the input alphabet, Γ = {0, 1, b} is the set of tape symbols, and {L, R, S} are the head directions (left, right, stay). The b symbol in the write-symbol column represents the blank cell left after an erasure, not an input, since b ∉ Σ. In M above, we introduced a memory, written [i] for i ∈ Γ, in order to store the first symbol of the input string for comparison with the last; the memory holds b initially and is reset to b after each completed round trip. Such alterations are permissible because they do not require the machine to work with any inputs other than its state and tape symbols: storing i ∈ Γ inside the state merely multiplies the number of states by |Γ|, which remains finite. Where a row carries the suffix [.], the contents of the memory do not matter for determining the instruction. Our machine starts in state q1[b] with its head at the first symbol of the input string ωω^R; if the input is empty, M accepts immediately. At each step the head can read, write, or erase a symbol.
State | Read Symbol | Next State | Write Symbol | Move
q1[b] | 0 | q2[0] | b | R
q1[b] | 1 | q2[1] | b | R
q1[b] | b | qaccept | b | S
q2[.] | 0 | q2[.] | 0 | R
q2[.] | 1 | q2[.] | 1 | R
q2[.] | b | q3[.] | b | L
q3[0] | 0 | q4[0] | b | L
q3[1] | 0 | qreject | 0 | S
q3[0] | 1 | qreject | 1 | S
q3[1] | 1 | q4[1] | b | L
q3[.] | b | qreject | b | S
q4[.] | 0 | q4[.] | 0 | L
q4[.] | 1 | q4[.] | 1 | L
q4[.] | b | q1[b] | b | R
Table 1: A Transition Table for Turing Machine M
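To check the table, here is a small Python simulation of M. The tape is a dictionary that reads as blank ('b') wherever nothing has been written; the encoding of the memory and of the [.] wildcard is my own bookkeeping, not part of the formal definition.

```python
# Table 1, keyed by (state, memory, symbol read); '.' as memory
# means "whatever is currently stored" (the [.] rows).
DELTA = {
    ('q1', 'b', '0'): ('q2', '0', 'b', 'R'),
    ('q1', 'b', '1'): ('q2', '1', 'b', 'R'),
    ('q1', 'b', 'b'): ('accept', 'b', 'b', 'S'),
    ('q2', '.', '0'): ('q2', '.', '0', 'R'),
    ('q2', '.', '1'): ('q2', '.', '1', 'R'),
    ('q2', '.', 'b'): ('q3', '.', 'b', 'L'),
    ('q3', '0', '0'): ('q4', '0', 'b', 'L'),
    ('q3', '1', '0'): ('reject', '1', '0', 'S'),
    ('q3', '0', '1'): ('reject', '0', '1', 'S'),
    ('q3', '1', '1'): ('q4', '1', 'b', 'L'),
    ('q3', '.', 'b'): ('reject', '.', 'b', 'S'),
    ('q4', '.', '0'): ('q4', '.', '0', 'L'),
    ('q4', '.', '1'): ('q4', '.', '1', 'L'),
    ('q4', '.', 'b'): ('q1', 'b', 'b', 'R'),
}

def run_M(s: str) -> bool:
    """Simulate M on input s; unwritten cells read as blank."""
    tape = {i: c for i, c in enumerate(s)}
    state, mem, head = 'q1', 'b', 0
    while state not in ('accept', 'reject'):
        sym = tape.get(head, 'b')
        # Prefer an exact-memory rule, else fall back to a [.] rule.
        rule = DELTA.get((state, mem, sym)) or DELTA[(state, '.', sym)]
        state, new_mem, write, move = rule
        mem = mem if new_mem == '.' else new_mem
        tape[head] = write
        head += {'R': 1, 'L': -1, 'S': 0}[move]
    return state == 'accept'

for w in ['1', '10', '101', '011110']:
    print(w, run_M(w))  # only 011110 is accepted
```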
Try running M on input 1, on input 1, 0, on input 1, 0, 1, and on the input stated before, 0, 1, 1, 1, 1, 0; only the last is accepted. This is by no means the only way to construct M, and you can probably build a better one. Try it yourself. For further familiarity, try constructing a decider for the following language POW:
POW = { a^k b^k | a, b ∈ {0, 1}, a ≠ b, and k ≥ 0 }
We have used a single-tape TM; we could instead have used a multi-tape TM with a separate head for each tape. This makes more complicated problems easier to solve, since the extra tapes can serve as memory. But it is of no real consequence, because for every k-tape deterministic TM Mk, k ≥ 2, there exists a single-tape deterministic TM M1 that simulates Mk efficiently. The term deterministic assures us that our TM does not use probability at any stage, so its behaviour on every input is completely determined and it always produces the same output. The term efficiently assures us that the simulation incurs only polynomial overhead, so it does not take so long as to make the transformation unviable.
In the above example, we solved a decision problem, since the answer is a member of the set {0, 1}: accept or reject. TMs can also solve function problems; to start us off, however, we are interested in decision problems. This takes nothing away from us, because any function can be expressed as a decision given additional parameters. For example, suppose we want to find the total of two numbers x and y; how do we express that in decision form? We can include a parameter k and state the problem as "is the total of two numbers x and y at most k?" Answering this question for well-chosen values of k recovers the total exactly, as sketched below. We do this for many reasons, the most important of which is mathematical simplicity.
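Here is a short Python sketch of that recovery (the function names are my own illustration): a binary search over k turns the decision version of addition back into the function version.

```python
def decide_at_most(x: int, y: int, k: int) -> bool:
    # Decision version of addition: "is x + y at most k?"
    return x + y <= k

def add_via_decisions(x: int, y: int) -> int:
    """Recover x + y (for non-negative x, y) by asking the decision
    question for successive values of k: the answer is the smallest
    k for which the decider says yes."""
    lo, hi = 0, 1
    while not decide_at_most(x, y, hi):   # grow hi until it bounds the sum
        hi *= 2
    while lo < hi:                         # binary search for the smallest yes
        mid = (lo + hi) // 2
        if decide_at_most(x, y, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

assert add_via_decisions(19, 23) == 42
```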
The last nuance that needs to be discussed is the Universal Turing Machine (UTM). It came about because, initially, it was envisaged that a different TM would be constructed for each different problem. The UTM instead reads a description of another machine, its transition table, from its tape, much as the table-driven simulation of M above temporarily stores a programme. This programmable aspect meant that all one needed to do was provide the UTM with the appropriate transition table, and it would solve the requisite problem. Today this seems trivial because we are accustomed to the notion, but back in 1936 it was considered groundbreaking.
From Machines to Magic
Now that we have developed a strong intuition for how TMs work, we can better understand why they are important to us. With a universal machine that can, in principle, encode all problems, we can use it to evaluate the strength of any arbitrary computation. The strongest computer program that we can design does exactly what the TM does; we say it is Turing complete. We do not foresee a computation that can do anything beyond what a TM can do. One might consider quantum computers, which take advantage of quantum-mechanical effects such as superposition; but the general consensus is that such machines will not be able to do what TMs cannot do, only that they will be able to do some things more efficiently. Given this, we can be confident that our TM is a good model to use for measuring any computation, regardless of the underlying computational model used.
Measuring Time Complexity
In computer science, we use algorithms to solve problems, and any given problem can have any number of algorithms that ultimately solve it. How do we conclude that an algorithm Ai is better than another algorithm Aj among, say, k ≥ 2 candidate algorithms? We compare the time and memory required to get a solution. More commonly, we measure complexity by counting the elementary steps required for a Turing machine (our algorithm) to produce a solution and halt; in the transition table above, each application of a row can be considered an elementary operation. Consider the number of steps required to decide the language POW = {0^k 1^k | k ≥ 0}, our earlier POW with a = 0 and b = 1. On an input of length n (a step-counting sketch in code follows the list):
1. Scan across the tape and reject if the string is not of the form 0^i 1^j: ≈ n steps.
2. While any 0's and 1's remain, repeat: scan across the tape, crossing out a single 1 for every single 0: ≈ n² steps in total.
3. If 0's remain after all 1's have been crossed out, or vice versa, reject; else, accept: ≈ n steps.
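The following Python sketch mirrors this procedure and counts one unit per cell visited, a rough stand-in for counting a TM's elementary steps (the charging scheme is my own approximation):

```python
def decide_pow(s: str) -> tuple:
    """Decide POW = { 0^k 1^k : k >= 0 } by crossing out pairs.
    Returns (accepted, steps), counting one step per cell visited."""
    tape = list(s)
    steps = 0
    # Stage 1: one scan to reject anything not of the form 0^i 1^j.
    seen_one = False
    for c in tape:
        steps += 1
        if c == '1':
            seen_one = True
        elif seen_one:                    # a 0 appearing after a 1
            return False, steps
    # Stage 2: each pass crosses out one 0 and one 1.
    while '0' in tape and '1' in tape:
        tape[tape.index('0')] = 'x'       # first remaining 0
        tape[len(tape) - 1 - tape[::-1].index('1')] = 'x'  # last remaining 1
        steps += len(tape)                # charge a full sweep of the tape
    # Stage 3: accept iff only crossed-out cells remain.
    steps += len(tape)
    return not ('0' in tape or '1' in tape), steps

print(decide_pow("0011"))    # (True, 16)
print(decide_pow("0110"))    # (False, 4)
```

Running decide_pow on 0^k 1^k for growing k shows the step count growing roughly with n².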
Let M be a TM that halts on all inputs (a decider):
Definition 1. The running time or time complexity of M is the function f : N → N such that
f(n) = the maximum, over all inputs of length n, of the number of steps taken by M.
Definition 2. TIME(t(n)) = { L′ | there is a TM M with time complexity O(t(n)) such that L′ = L(M) } = { L′ | L′ is a language decided by a TM with O(t(n)) running time }
For POW above,
POW = { 0^k 1^k | k ≥ 0 } ∈ TIME(n²)
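To see why, sum the step counts of the three stages: the format check, about n/2 crossing-out passes of at most ≈ n steps each, and the final check:

n + (n/2) · n + n = O(n²)

so the bound required by Definition 2 is met with t(n) = n².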
The rate of growth of an algorithm's running time in terms of the input size n is expressed in asymptotic notation. The "big-oh" notation, O(t(n)), describes an upper bound on a given algorithm's running time. It is the most widely used asymptotic notation because it captures the worst-case scenario: we are guaranteed that our TM will not perform any worse than O(t(n)), and that assurance makes it a definitive construct to utilize. Other measures, such as best-case and average-case time, can be more complicated to work with; they are nevertheless useful for analysis, so we will discuss them and show how they are all related.
In my next post I will look closer at O(t(n)) and other asymptotic notations, and introduce complexity classes.