Tuesday 2/18

Robot Intentionality I: The Chinese Room

Readings

Texts

Notes

Synopsis

Let us briefly revisit the optimistic case we have made for meeting the engineering obligation posed by Dretske's Dictum--a case which also, by the way, underwrites the assumptions made by Cognitive Science.

Turing proved that there exists a Universal Turing Machine, a Turing Machine which computes any function computable by any Turing Machine. This result, which is sometimes called “Turing's Theorem”, demonstrates the possibility of generalized computing machines. The modern digital computer is one manifestation of the practical applicability of Turing's Theorem developed along the lines specified by the brilliant John von Neumann, among others.
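
To make Turing's result a little more concrete, here is a minimal sketch in Python of a Turing Machine simulator: the machine itself (its transition table, start state, and halting state) is just data handed to one fixed interpreter, and that single interpreter can run any machine encoded this way, which is the heart of universality. The bit-flipping machine and all the names below are illustrative assumptions, not anything drawn from Turing's paper.

    # A minimal Turing Machine simulator: one fixed interpreter, run(),
    # executes any machine handed to it as data.

    def run(transitions, start_state, halt_state, tape, blank="_", max_steps=10000):
        """transitions maps (state, symbol) to (new_state, new_symbol, move),
        where move is "L" or "R". Returns the final tape contents."""
        cells = dict(enumerate(tape))          # sparse tape: position -> symbol
        state, head = start_state, 0
        for _ in range(max_steps):
            if state == halt_state:
                break
            symbol = cells.get(head, blank)
            state, new_symbol, move = transitions[(state, symbol)]
            cells[head] = new_symbol
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # An illustrative machine (my example, not Turing's): flip every bit of a
    # binary string, halting at the first blank cell.
    flipper = {
        ("scan", "0"): ("scan", "1", "R"),
        ("scan", "1"): ("scan", "0", "R"),
        ("scan", "_"): ("halt", "_", "R"),
    }

    print(run(flipper, "scan", "halt", "1011"))   # prints 0100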

It also turns out that every effective procedure we have conjured up for computing functions computes only functions that Turing Machines can also compute, which strongly suggests that Turing Machines can be used in place of other effective procedures. It is, of course, a strong suggestion only. We have no way of knowing whether some clever person will one day invent an effective procedure which computes a function that cannot be computed by any Turing Machine. Nevertheless, the Church-Turing Thesis, as it is called, gives us hope that any procedure which can be boiled down to basic, finite, determinate, and repeatable operations can also be implemented on a Universal Turing Machine.
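
For a familiar example of an effective procedure of just this sort (my illustration, not part of the reading), consider Euclid's algorithm for the greatest common divisor: a finite loop of basic, determinate, repeatable steps, and hence, if the Church-Turing Thesis is right, something some Turing Machine also computes.

    # Euclid's algorithm: a paradigm effective procedure built entirely from
    # basic, finite, determinate, repeatable operations. By the Church-Turing
    # Thesis, the function it computes is also Turing Machine computable.

    def gcd(a, b):
        while b != 0:
            a, b = b, a % b   # one basic, repeatable step
        return a

    print(gcd(48, 36))   # prints 12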

If the operations of the human brain can be similarly boiled down, then it ought to be possible to implement those operations on a Universal Turing Machine. After all, it is well-documented that neurons are complicated kinds of switches performing various specialized tasks. Prima facie, there is no reason Turing Machines could not accomplish the same tasks, perhaps even better than their biological counterparts.
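
The idealization behind "neurons as switches" can be sketched with a McCulloch-Pitts-style threshold unit, shown below in Python. The weights and threshold are illustrative assumptions; real neurons are vastly more complicated, but the idealized operation is plainly the sort of thing a Turing Machine can compute.

    # A McCulloch-Pitts-style threshold unit: the textbook idealization of a
    # neuron as a switch. It fires (outputs 1) exactly when the weighted sum
    # of its inputs reaches the threshold.

    def neuron(inputs, weights, threshold):
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    # With these (illustrative) weights and threshold, the unit behaves as an
    # AND gate over two binary inputs.
    for x in (0, 1):
        for y in (0, 1):
            print(x, y, neuron((x, y), (1, 1), 2))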

What are these tasks, though? We have yet to specify some way to characterize the class of cognitive functions, except insofar as we said that they are whatever functions a machine would need to compute in order to pass the Turing Test. This assumes that, as a test of human-level intelligence, the Turing Test is neither too strong, as many computer scientists object, nor too weak, as many philosophers object. Whether the Turing Test is too weak, too strong, or just right, however, the fundamental question of AI remains: Are cognitive functions efficiently Turing Machine computable?

Yet that question plainly raises a further question: What are cognitive functions?

Enter the philosopher of psychology Franz Brentano (1838-1917). In his 1874 book Psychology from an Empirical Standpoint, Brentano sought to distinguish mental phenomena from physical phenomena:

Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction toward an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not do so in the same way. In presentation, something is presented, in judgment something is affirmed or denied, in love loved, in hate hated, in desire desired and so on.

This intentional inexistence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves.

From Brentano we get the thesis and the slogan, “Intentionality is the mark of the mental.”

These passages are admittedly confusing. To sort out intentionality, first note that the term “intentionality” is a philosophical term-of-art. It is not to be confused with intending in the sense of “She intended to hurt his feelings” or “It was her intention to hurt his feelings.”

Second, contrast the statement

The cat is in the cat-tree.

with the belief-ascription,

Ashley believes that the cat is in the cat-tree.

and

The state-of-affairs of the cat's being in the cat-tree.

The statement and the belief are about, or represent, the state-of-affairs of the cat's being in the cat-tree. That is, statements and beliefs are intentional insofar as they are about, reach out to, or are directed upon, objects and the relations into which they (the objects) enter--the aforementioned states-of-affairs. Absent someone who understands the statement, “the cat is in the cat-tree”, of course, the statement is just a meaningless string of symbols or marks. Thus written and spoken sentences (utterances, generally) have at most derived intentionality. Beliefs and other mental states, however, have original intentionality: nothing further--no interpreter, in particular--is required for them to be intentional; they are intentional simply in virtue of being the mental states they are.

Brentano's thesis recast in our terms, then, is the claim that a function is cognitive iff it exhibits original intentionality, where original intentionality is understood to be the relationship the function has in representing the objects in the world and their states-of-affairs. Intentionality, however, is an extremely puzzling property. Borrowing from--and supplementing, just a bit--Michael Thau's excellent Consciousness and Cognition, we can identify four paradoxes of intentionality. Although the same paradoxes trouble other cognitive functions if Brentano is right, let us cast them as problems for belief specifically.

  1. Beliefs can represent in absentia. How do we account for my believing that extraterrestrial aliens are responsible for UFO sightings if there are no extraterrestrial aliens? In that case I am mistaken, of course. But even mistaken, if intentionality is a relation between a belief and the content of the belief, or the state-of-affairs the belief represents, then how can my belief have any content if there are no extraterrestrial aliens? Beliefs are representational, even when there is no thing being represented. Yet if representation is a relation, there must be something to stand in the relation. This is what Brentano is talking about when he uses the apparently absurd phrase, the 'inexistence of an object'.
  2. Beliefs can represent indeterminately. Contrast my belief that the cat is in the cat-tree with my belief that some cat or other is in the cat-tree. In the first case, my belief is about a specific, and specifiable, animal. In the second case, my belief is about some specific yet un-specifiable or indeterminate animal. Or consider another cognitive function, my desire to own a cat. The content of my desire is not even a specific cat, but it is some cat. Again, if representation is a relation, there must be some specific, determinate thing that stands in the relation.
  3. Beliefs can represent differentially. My belief that Frank has a degree in psychology is perfectly consistent with my belief that the manager of B&J's Pizza does not have a degree, despite the fact that Frank is the manager of B&J's Pizza. Thus my beliefs represent the same object, Frank, in different--in this case, even incompatible--ways. (A toy model of this feature is sketched just after this list.)
  4. Beliefs can represent mistakenly. My belief that there is a horse in the horse-trailer represents the state-of-affairs of horse's being in the horse-trailer even if the furry brown ear I glimpsed through the dusty window and which caused me to believe that there is a horse in the horse-trailer is attached to a cow. My belief, in short, represents a horse, but the object so represented is not a horse. It is a cow.
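
One way to see what is at stake in (3) is to model beliefs computationally, as in the Python sketch below. Under a purely illustrative assumption, namely that beliefs are stored as relations to descriptions (modes of presentation) rather than to the objects themselves, substituting co-referring descriptions is simply not licensed, which is just what representing differentially requires.

    # An illustrative model (an assumption of mine, not Brentano's or Thau's):
    # beliefs stored as relations to descriptions rather than to the objects
    # those descriptions pick out. Because the store is keyed by description,
    # co-referring descriptions need not receive the same verdict.

    beliefs = {
        ("Frank", "has a degree in psychology"): True,
        ("the manager of B&J's Pizza", "has a degree in psychology"): False,
    }

    refers_to = {"Frank": "frank", "the manager of B&J's Pizza": "frank"}

    # Both descriptions pick out the same individual...
    print(refers_to["Frank"] == refers_to["the manager of B&J's Pizza"])  # True
    # ...yet the belief store treats them differently, without incoherence.
    print(beliefs[("Frank", "has a degree in psychology")])                       # True
    print(beliefs[("the manager of B&J's Pizza", "has a degree in psychology")])  # False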

A theory of intentionality has the difficult task of making sense of these apparent paradoxes. Nevertheless, Brentano's thesis boils down to the claim that cognitive functions exhibit original intentionality. That is, they are representational without requiring interpretation, as a sentence, a drawing, or even a picture would.

The genius of Searle's Chinese Room Thought Experiment is to make vivid a point we can well appreciate from our work constructing Turing Machines: No function computable by a Turing Machine could be cognitive, since Turing Machines operate by the rule-governed manipulation of strings of symbols. But nothing like that, so the argument goes, could exhibit original intentionality.

Searle's Chinese Room Thought Experiment presents us with a fairly serious challenge: How can intentionality emerge from the rule-governed manipulation of strings of symbols? The thought experiment puts Searle himself in the position of a rule-governed manipulator of strings of (Chinese) symbols, none of which Searle understands.
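
To make the purely syntactic character of this manipulation vivid, here is a minimal Python sketch of a "rule book" of the relevant sort. The particular rules are invented stand-ins, not anything from Searle's article; the point is only that matching and copying strings of symbols requires no grasp of what the symbols mean.

    # A toy rule book: a purely syntactic lookup from input strings of Chinese
    # symbols to output strings. The rules are invented stand-ins; the point is
    # that matching and copying symbols requires no grasp of what they mean.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "你会说中文吗？": "当然会。",
    }

    def room(input_symbols):
        # Match the incoming squiggles against the book and copy out the
        # corresponding squiggles; no step involves knowing what they are about.
        return RULE_BOOK.get(input_symbols, "请再说一遍。")

    print(room("你好吗？"))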

The fact that neither Searle-in-the-Chinese-Room nor any combination of Searle with the Chinese Room understands Chinese even while passing the Turing Test in Chinese is a serious matter indeed. For if Brentano is correct and intentionality is the mark of the mental in such a way that cognitive functions are intentional, yet no intentional function is Turing Machine Computable, then no cognitive function is Turing Machine Computable, either. Since there are only countably infinitely many Turing Machine Computable functions compared to the uncountably infinite totality of functions, it is entirely possible that cognitive functions escape Turing Machine Computability. The problem is not just that the Turing Test is too weak--other, less complicated arguments can be given for rejecting the thesis that the perfect imitation of intelligence is intelligence. Rather, the Chinese Room Thought Experiment seems to show that cognitive functions, distinguishable from other functions by their intentionality, are not Turing Machine Computable at all.

In light of the Church-Turing Thesis and Dretske's Dictum, the most we can say is that if we are meat machines, we cannot understand our own minds. We cannot build minds unless we duplicate exactly what we already have using exactly the same materials and structures that evolved in us. Such duplication is not entirely without its merits, but it gets us no nearer to understanding minds than we were when we began. Moreover, it suggests that there is something crucial about the particular substance and structure of us that makes it impossible to replicate cognitive functionality in, say, silicon.
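
The counting point in the previous paragraph can be made precise. A standard sketch of the argument (my supplement, not Searle's), in LaTeX notation:

    % Every Turing Machine has a finite description over a finite alphabet,
    % so there are at most countably many Turing Machine computable functions:
    |\{ f : f \text{ is Turing Machine computable} \}| \le |\Sigma^{*}| = \aleph_{0}.
    % By Cantor's diagonal argument, there are uncountably many functions from
    % the natural numbers to \{0,1\}:
    |\{ f : \mathbb{N} \to \{0,1\} \}| = 2^{\aleph_{0}} > \aleph_{0}.
    % So "most" functions are not Turing Machine computable; the open question
    % is whether the cognitive functions fall inside the countable part.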

Searle neatly summarizes and clarifies his argument as follows:

By way of concluding I want to try to state some of the general philosophical points implicit in the argument. For clarity I will try to do it in a question-and-answer fashion, and I begin with that old chestnut of a question:

"Could a machine think?"

The answer is, obviously, yes. We are precisely such machines.

"Yes, but could an artifact, a man-made machine, think?"

Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.

"OK, but could a digital computer think?"

If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.

"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"

This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.

"Why not?"

Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese.

Today we sketched the Chinese Room Thought Experiment. We begin next time by considering the responses Searle canvasses in his seminal article.

If you feel hazy about its substance, this 60-second video explanation of the Chinese Room Thought Experiment may help.