Thursday 2/7

Computability, Complexity, and the Church-Turing Thesis

Synopsis

Fair warning: As I mentioned in class, this is a long synopsis. It summarizes not just what we did in class, but the key steps in the argument we've been constructing since the start of the semester--the positive case, that is, that we've been making for our capacity to understand the mind in light of Dretske's Dictum. It takes some time, though, to lay out the terrain we've covered. I hope it helps you better grasp our work thus far if you are struggling to understand the broad sweep of the argument (the forest for the trees, as it were). If you already understand it, then it might be useful to at least have confirmation that you've correctly grasped the key steps.

We began today by discussing some of the common problems from the first problem set I handed back, as well as the importance of being responsible for tracking your own grades as we move through the semester.

Now, recall that Turing used his mathematical model of an effective procedure--the Turing Machine--to give a negative answer to Hilbert's challenge. Turing proved undecidability, in other words.

In proving undecidability, however, Turing also proved that there exists a Turing Machine which can compute any function computable by any Turing Machine. Turing's Theorem proves the existence of the Universal Turing Machine, and his discovery should send shivers down your spine. First, it demonstrates the possibility of the modern programmable computer, thus laying the foundation for Computer Science and the so-called Information Revolution.

Second, it raises an intriguing possibility: Perhaps insofar as we are meat machines, we are indeed nothing more than biological Universal Turing Machines.

To be sure, we don't have little boxcars stuttering across the cells of indefinitely long tapes in our heads. Instead, Turing's theory of computation gives us a model of effective or mechanical procedures, and every other model we've devised turns out to compute exactly the same class of functions computable by Turing Machines.

Taken as a proposition, this is the Church-Turing Thesis:

Any function computable by some effective procedure is computable by some Turing Machine.

As Boolos and Jeffrey explain, if the Church-Turing Thesis is false, we can prove it false merely by devising some effective or mechanical procedure which computes functions not otherwise Turing Machine computable. We cannot, however, prove the Church-Turing Thesis true because we have no way of anticipating the variety of effective procedures some creative genius might devise.

We strongly suspect the Church-Turing Thesis is true, however, because every effective procedure we've so far been able to devise computes exactly the same class of functions as the Universal Turing Machine. That is, the class of Turing Machine computable functions is, provably, the exact same class as the class of

  • λ-calculable functions
  • gödelian general recursive functions
  • abacus computable functions
  • Post's Machine computable functions

The upshot of the twin propositions,

  • There exists a Turing Machine that can compute any function computable by any Turing Machine; and,
  • Any function computable by some effective procedure is computable by some Turing Machine

(one provable, Turing's Theorem; the other not provable but, to the best of our knowledge, true) is this: if we are basically meat machines, then even if we employ radically different kinds of mechanical procedures to compute the cognitive functions machine functionalism presumes we must compute to be able to do the kinds of things we can do, those same functions can be implemented or modeled by Turing Machines.

This is an important thought--a stunning thought, indeed. Let us pause for a moment to put all this in perspective.

Now, we've covered a great deal of material in a very short amount of time, and the arguments only become more complicated as we work our way through the material. Given the many perplexed looks I saw today, this is perhaps as good a time as any to catch our breath and look back at what we've accomplished so far. I'll put it in terms of a question-and-answer dialogue.

What are we doing this semester?

We're studying the Mind by adopting Dretske's Dictum: You don't understand it if you don't know how to build it. Applied to the Mind itself, this engineering obbligato has a number of important implications.

  1. It grounds our inquiries by forcing us to abstain from flights of fancy or mere speculation: We have to make good on any claims we might make.
  2. It requires that the Mind be buildable, in the sense that it presumably emerges from a natural environment and is made up, ultimately, only of things found in the natural environment: Nothing spooky, supernatural, or mysterious will be allowed without sacrificing our understanding of the Mind.
  3. It presupposes that understanding the Mind (or anything to which the Dictum might be applied) is not possible without grasping exactly how the mind is able to do the amazing things it can.

Dretske's Dictum does not, all the talk of Artificial Intelligence notwithstanding, commit us to actually having to build a Mind. It may be that some technique or material is simply not available to us. That's okay. We just need to know how to build it; we don't actually need to build it.

So what is the Mind?

We began the semester by noting that the Mind is very unlike most other things we find in nature. Those things have location, mass, extension in space, color, texture, odor, etc. None of those properties, it seems, are had by the Mind except metaphorically. So, yes, what is the Mind? It is extraordinary, whatever it is!

To help us come to grips with this question, we conducted an altogether too brief survey of the history of the Philosophy of Mind. There were two reasons for this. First, the history is important for understanding where the Philosophy of Mind now stands in terms of the kinds of assumptions being debated by philosophers and cognitive scientists today. Thus we traced an important division between the soul and body from Plato to Descartes against what might be called the proto-functionalism of Aristotle and Hume, which sets the stage for contemporary functionalist-physicalism. Second, we saw articulated in the history of the Philosophy of Mind various theses about peculiarly cognitive faculties and capacities. Understanding these capacities from an engineering standpoint--i.e., how would you go about building something that could do that?--is the challenge for the project of Artificial Intelligence.

In light of all the arguments we considered, we adopted Machine Functionalism as an excellent starting point for further inquiry. In particular, we provisionally accepted the viewpoint that we ourselves are meat machines: Roughly, our minds are the 'software' being run by our 'hardware', or bodies. Given Putnam's Multiple Realizability Argument, taking ourselves to be meat machines fits neatly with Dretske's Dictum: We don't especially need to use the same stuff in figuring out how to build a mind, since it should be possible for the same mental function to be implemented using different raw materials.

Having described Machine Functionalism and having adopted it as our operating assumption for the semester, we next considered the history of Artificial Intelligence from the construction of marvelous automata to the hobbesian thesis that thinking just is computation, where computation is understood as the rule-governed manipulation of strings of symbols. Following Haugeland, we saw how Galileo's extension of geometrical techniques to represent and prove results about quantities other than distance or location, together with Descartes' important mathematical insight that those geometrical techniques could be represented by algebraic manipulation of certain equations, suggests the possibility that human reasoning and other cognitive capacities are fundamentally a matter of manipulating internal symbols according to 'laws of thought'. If this is true, then it seems at least possible that we could construct an Artificial Intelligence. Yet all of this ignores an extremely important question:

What is intelligence?

Alternatively,

How will we know if we've successfully figured out how to build a Mind?

Alan Turing thought the first question had no answer. Intelligence is simply too complicated, vague, and abstract a notion to be answered in any satisfactory way. Instead, he argued, we should adopt a behavioral test for machine intelligence. His proposal, appropriately known as the Turing Test, holds that a machine which is indistinguishable from a person in ordinary conversation and after careful interrogation is, so far as anyone could possibly care, intelligent.

So, at this point in Minds and Machines we have sketched some of the views philosophers have taken on the mind, explained some of the capacities which seem distinctive of things with minds, examined how thinking itself might be, at root, something machines could do, and given a possible answer to the question of when something should be considered intelligent.

These are all fairly large dots. We are in the process of sketching these dots even as we connect them. Yet another dot remains.

What is a machine?

What, generally speaking, is mechanism?

Alan Turing again comes to our rescue. That is, in trying to solve Hilbert's decidability problem of whether there exists an effective procedure for deciding whether certain mathematical problems are solvable or not, Turing had to make the notion of an effective procedure precise. Turing took as his model the computers of his day (people, usually women, who performed calculations on lists of numbers using pencil and paper). Turing recognized that what these 'computers' did all day long could be done by carefully described automata, now called "Turing Machines", if we got rid of all the things the 'computer' does, like drinking coffee or doodling, which were irrelevant to the task of computation. (Chapter 7 of Martin Davis' entertaining and accessible The Universal Computer: The Road from Leibniz to Turing describes how Turing might have gone about this.)

The Stanford Encyclopedia of Philosophy has an excellent overview of Turing Machines, and there are even online Turing Machine simulators.
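To make the model concrete, here is a minimal sketch of such a simulator: a deterministic single-tape Turing Machine driven by a transition table. The names (`run_tm`, `SUCCESSOR`) and the sparse-tape representation are my own illustrative choices, not from any particular simulator.

```python
# A minimal deterministic single-tape Turing Machine simulator (sketch).
# The transition table maps (state, symbol) -> (new_symbol, move, new_state).

def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """Run a TM until it enters the state 'halt'; return the written tape."""
    tape = dict(enumerate(tape))   # sparse tape: cell index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Successor in unary notation: scan right over the 1s, append one more 1, halt.
SUCCESSOR = {
    ("q0", "1"): ("1", "R", "q0"),    # skip over the input
    ("q0", "_"): ("1", "R", "halt"),  # write a 1 at the first blank, then halt
}

print(run_tm(SUCCESSOR, "111"))  # "1111": the unary successor of 3 is 4
```

Despite its simplicity--a head, a tape, and a finite table of rules--this is, per the Church-Turing Thesis, all the machinery effective computation ever requires.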

Although it may seem that we are getting too bogged-down in technical details, remember that we are trying to understand the capacities and limitations of machines. Nor are the kinds of machines we seek to understand chosen at random. What Turing showed in the process of proving undecidability is the existence of a Universal Turing Machine. That is, Turing showed that there exists a machine which can compute any function which is computable by any Turing Machine whatsoever. We call this "Turing's Theorem".

Further, these Universal Turing Machines are what we today call computers.

In any case, it is crucial for us to understand what machines are if we are to make any headway in understanding the proposition that we are meat machines.

What kinds of (Turing) machines are there?

We can distinguish between kinds of Turing Machines. In particular, we can describe the following:

  • Deterministic Single-Tape Turing Machines (what we have been learning)
  • Deterministic Multi-Tape Turing Machines
  • Indeterministic Turing Machines
  • Non-deterministic Turing Machines

We note that the class of functions which are computable by a deterministic single-tape Turing Machine is infinite but countable (or denumerable), while the class of all functions is non-denumerably infinite. Thus the class of Turing Machine computable functions is a proper subset--indeed, quite a small subset--of the class of all functions. It turns out, however, that the class of Turing Machine computable functions is also a very special class.
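The countability claim above rests on a simple observation: every Turing Machine is specified by a finite transition table, and every finite table can be written out as a finite string over a fixed alphabet. Finite strings over a fixed alphabet can be listed one by one, so the machines (and hence the functions they compute) are at most countably infinite. A sketch of such a listing, with my own illustrative names:

```python
# Why TM-computable functions are countable: finite strings over a fixed
# alphabet can be enumerated shortest-first, and every TM description is
# one such string. (Illustrative sketch; names are my own.)

from itertools import count, product

def enumerate_strings(alphabet):
    """Yield every finite string over `alphabet`, shortest first."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = enumerate_strings("01")
first_seven = [next(gen) for _ in range(7)]
print(first_seven)  # ['', '0', '1', '00', '01', '10', '11']
```

Every possible machine description appears somewhere in this list, which is exactly what it means for the class to be denumerable; the class of all functions, by contrast, cannot be listed this way.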

Why is the class of Turing Machine computable functions so special?

Recall that Turing sought a way to compute any of the functions any person with pencil and paper could, given enough paper and enough time, compute. Turing's version of the Church-Turing thesis is very simply that any function computable by a person with pencil and paper is computable by some Turing Machine. If we understand that the person with pencil and paper is implementing an effective procedure for computing a function, then we have the following characterization of the Church-Turing Thesis:

The Church-Turing Thesis: Any function computable by some effective procedure is computable by some Turing Machine.

The Church-Turing Thesis, if false, is provably false. Just find an effective procedure which computes some function which cannot be computed by any Turing Machine. On the other hand, if the Church-Turing Thesis is true, it can never be proven true. There is no way to know in advance the kinds of effective procedures that might be developed or, equivalently, the kinds of tricks a person with pencil and paper might develop to compute a function. Yet whenever we find some effective procedure for computing a function, we also find that the procedure computes the same class of functions Turing Machines compute.

So what does the class of Turing Machine computable functions have to do with Dretske's Dictum?

If we characterize the mind as composed of various distinctively cognitive functions as Machine Functionalism proposes (however they may be defined), then we are now able to ask the question Artificial Intelligence is asking:

Is the class of cognitive functions Turing Machine computable?

Note that the class of cognitive functions may not be Turing Machine computable and we may still be biological machines: The Church-Turing Thesis does not state that any function computable by any machine is Turing Machine computable, although such machines would not, if the Church-Turing Thesis is true, employ an effective procedure to compute the functions they do. It may be that humans hypercompute in the sense that we compute cognitive functions which are not Turing Machine computable or, given the Church-Turing Thesis, computable by any effective procedure whatsoever. Or it may be that what we call cognitive functions under the assumption of machine functionalism are not actually functions at all, so the thought that we might compute them is simply absurd from the get-go.

It would be a disaster if that were the case, however, since we would not then be able to understand the mind vis-a-vis Dretske's Dictum: You don't understand it if you don't know how to build it. Maybe we would just have to give up Dretske's Dictum and hope for some understanding of the mind, however limited.

In light of Dretske's Dictum, Machine Functionalism (The Meat Machine Thesis), and the Church-Turing Thesis, we find ourselves alongside the vast majority of cognitive scientists, neurobiologists, and computer scientists anticipating that cognitive functions are Turing Machine computable. The possibility of a Universal Turing Machine one day passing the Turing Test presupposes that the class of relevant cognitive functions (relevant, that is, to passing the Turing Test) is Turing Machine computable.

Yet we have no proof that cognitive functions are Turing Machine computable, only hope.

Is the Turing Machine computability of cognitive functions the only thing standing in the way of meeting the requirements of Dretske's Dictum?

No, it turns out there is a further complication. It is not enough that the cognitive functions be Turing Machine computable. They must be computable in a reasonable amount of time using a reasonable number of resources. A function that takes nearly the age of the Universe to compute could hardly satisfy the requirements of Artificial Intelligence, since, to pass the Turing Test, the AI must respond quickly and efficiently.

Put another way, intelligence has evolved in environments where survival often hinges on how quickly one can react and adapt. Speed is of the essence in computing cognitive functions.

We therefore distinguish between the spatial and temporal complexity of a Turing Machine. The spatial complexity of a Turing Machine is a function from the length n of its input to the maximum number of tape cells the machine uses in computing any input of that length. The temporal complexity of a Turing Machine is a function from the length n of its input to the maximum number of steps the machine takes in computing any input of that length. To say that cognitive functions must be efficiently computable by Turing Machine is just to say that the functions describing their spatial and temporal complexity are bounded by some suitably limiting function. In these terms, the question for Artificial Intelligence becomes,

Is the class of cognitive functions efficiently Turing Machine computable?

As we will see in the coming weeks, there are lots of reasons why one might answer "No".
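To see what temporal complexity means in practice, consider a toy machine that erases a unary string one symbol at a time, and count its steps as a function of input length n. This is my own illustrative sketch, not a standard tool:

```python
# Temporal complexity illustrated: count the steps a simple machine takes
# on inputs of length n. This toy machine blanks out n consecutive 1s.
# (Illustrative sketch; the names here are my own.)

def steps_to_erase(n):
    """Steps a left-to-right erasing machine needs on a unary input of n 1s."""
    tape, head, steps = ["1"] * n, 0, 0
    while head < len(tape) and tape[head] == "1":
        tape[head] = "_"       # one transition: write a blank, move right
        head += 1
        steps += 1
    return steps + 1           # final transition that sees a blank and halts

for n in (0, 1, 5, 10):
    print(n, steps_to_erase(n))  # steps grow as n + 1
```

Here the temporal complexity is t(n) = n + 1, which is bounded by a linear function, so this machine is efficient in the relevant sense; the worry for Artificial Intelligence is that some cognitive functions may only be computable by machines whose step counts grow explosively (e.g., exponentially) with input length.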

It is interesting to note that if we are meat machines, we should expect evolution to have equipped us with adaptively advantageous computational mechanisms. That is, if we are meat machines, we ourselves should be efficient computers of cognitive functions. We need to be at least fast enough in computing cognitive functions to evade saber-toothed tigers, which I understand were really, really fast and had huge, sharp teeth.

In an important sense this is a testable hypothesis. That is, as we investigate our own neurophysiology, we should find the kinds of time and space saving features for computing cognitive functions one would expect to find in a meat machine evolved under circumstances where speed of computation is usually the difference between survival and death. We closed today by discussing two such examples:

  1. It is not true to say that you jerked your hand away from the hot stove; rather, your arm jerked your hand away from the hot stove. Given the slow propagation of signals along neural pathways, there is a short-circuit of sorts in your arm which keeps your brain from slowing you down.
  2. It is not true to say that you extend your legs as you run; rather, your leg extends itself in response to running. Your brain again has to be kept as much out of the action as possible simply because it would take too long to make running possible otherwise.

There are many other examples. Suffice it to say that cognitive scientists, including neurobiologists, psychologists, and others, have good reasons to think that we really are meat machines in light of what that assumption predicts.

Nevertheless, there are deep and fundamental features of the human mind which challenge the idea that we are meat machines--important cognitive functions we cannot imagine any machine computing--and we turn to those features next.

Where are we now?

In an important sense, we have traced the history of our understanding of minds and machines to arrive at an optimistic conclusion: We should one day be able to satisfy the requirements of Dretske's Dictum because given

  1. Machine Functionalism (The Meat Machine Hypothesis);
  2. The Church-Turing Thesis; and,
  3. Turing's Theorem,

we should someday be able to (or at least know how to) build something that efficiently computes cognitive functions and thereby build something that has a mind--a brain-child, if you will. That is, putting the three points together, we get computationalism, or the notion that minds are multiply realizable because the totality of cognitive functions which make up a mind are in turn TM-computable. Computationalism, we note, is the fundamental assumption of modern cognitive neuroscience.

We have made the optimistic case on behalf of our ultimately being able to understand the mind. Yet that optimistic case confronts enormous challenges. Next we turn to the pessimists who argue that there are some cognitive functions which cannot be efficiently computed, or even computed at all.

What if I still don't understand?

Well, first and foremost, rest assured you are not stupid or dimwitted. Even the most clever among us find these issues profoundly challenging.

So if at the end of reading this and talking about it with your classmates you still don't grasp the thread of the argument I am making this semester, then we will need to spend more time talking about it to be sure you're up to speed. I would in that case propose we insert a review class in our schedule during which I would do little of the talking except to offer the occasional guidance on the discussion.

If enough people ask, that is what we shall do.