Searle's Argument
Searle's skepticism that the rule-governed manipulation of strings of symbols could ever result in original intentionality has a counterpart in his skepticism that the brain itself is a rule-governed manipulator of strings of symbols, a view he calls 'Cognitivism' or sometimes 'Computationalism'. As he puts it in describing his project in “Is the Brain a Digital Computer?”,
This paper is about Cognitivism, and I had better say at the beginning what motivates it. If you read books about the brain (say Shepherd (1983) or Kuffler and Nicholls (1976)) you get a certain picture of what is going on in the brain. If you then turn to books about computation (say Boolos and Jeffrey, 1989) you get a picture of the logical structure of the theory of computation. If you then turn to books about cognitive science, (say Pylyshyn, 1985) they tell you that what the brain books describe is really the same as what the computability books were describing. Philosophically speaking, this does not smell right to me and I have learned, at least at the beginning of an investigation, to follow my sense of smell.
Searle proposes four skeptical challenges to computationalism. I will cast them as four separate arguments to the conclusion that computationalism is not true, even though Searle himself lumps them all together at the end of “Is the Brain a Digital Computer?”.
The Physics of Syntax
Specifically, Searle contends that there is no physics of syntax, or at least none that is not observer relative.
Searle's Argument from the Physics of Syntax | |||
1 | Multiple Realizability implies Universal Realizability. | ||
2 | If (1), then anything can be viewed as a rule-governed manipulator of strings of symbols. | ||
∴ | 3 | Anything can be viewed as a rule-governed manipulator of strings of symbols. | 1&2 |
4 | If (3), then syntax is not intrinsic to physics. | ||
∴ | 5 | Syntax is not intrinsic to physics. | 3&4 |
6 | If (5), then the brain is a rule-governed manipulator of strings of symbols only when viewed as such by some observer. | ||
∴ | 7 | The brain is a rule-governed manipulator of strings of symbols only when viewed as such by some observer. | 5&6 |
8 | If Computationalism is true, then it is not the case that the brain is a rule-governed manipulator of strings of symbols only when viewed as such by some observer. | ||
∴ | 9 | Computationalism is not true. | 7&8 |
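The chain of modus ponens steps above can be checked mechanically. Here is a minimal sketch in Lean 4, with schematic propositional atoms of my own choosing (`mr`, `anyth`, `syn`, `obs`, `comp`) standing in for the claims; the labels are placeholders and carry none of the philosophical content:

```lean
-- Schematic atoms (my labels, not Searle's):
--   mr    : Multiple Realizability implies Universal Realizability (premise 1)
--   anyth : anything can be viewed as a rule-governed symbol manipulator
--   syn   : syntax is intrinsic to physics
--   obs   : the brain is a symbol manipulator only relative to an observer
--   comp  : Computationalism is true
example (mr anyth syn obs comp : Prop)
    (p1 : mr)                -- premise 1
    (p2 : mr → anyth)        -- premise 2
    (p4 : anyth → ¬syn)      -- premise 4
    (p6 : ¬syn → obs)        -- premise 6
    (p8 : comp → ¬obs) :     -- final conditional premise
    ¬comp :=
  -- assume computationalism; derive obs via the chain, contradicting ¬obs
  fun hc => (p8 hc) (p6 (p4 (p2 p1)))
```

The nested applications correspond to the intermediate conclusions (3), (5), and (7), with the final step a reductio on the assumption that computationalism is true. The formalization shows only that the argument form is valid; the substantive work lies in the premises themselves.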
Searle neatly contrasts this argument with the Chinese Room Thought Experiment:
This is a different argument from the Chinese Room Argument and I should have seen it ten years ago but I did not. The Chinese Room Argument showed that semantics is not intrinsic to syntax. I am now making the separate and different point that syntax is not intrinsic to physics. For the purposes of the original argument I was simply assuming that the syntactical characterization of the computer was unproblematic. But that is a mistake. There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system.
The Homunculus Fallacy
Searle's Argument from the Homunculus Fallacy | |||
1 | Computationalism commits the Homunculus Fallacy. | ||
2 | If Computationalism commits the Homunculus Fallacy, then computationalism necessarily presupposes intentional entities in its explanations. | ||
3 | If computationalism necessarily presupposes intentional entities in its explanations, then computationalism is not true. | ||
∴ | 4 | Computationalism is not true. | 1,2&3 |
Searle summarizes the argument by drawing a pointed analogy:
For real computers of the kind you buy in the store, there is no homunculus problem, each user is the homunculus in question. But if we are to suppose that the brain is a digital computer, we are still faced with the question "And who is the user?" Typical homunculus questions in cognitive science are such as the following: "How does the visual system compute shape from shading; how does it compute object distance from size of retinal image?" A parallel question would be, "How do nails compute the distance they are to travel in the board from the impact of the hammer and the density of the wood?" And the answer is the same in both sorts of case: If we are talking about how the system works intrinsically neither nails nor visual systems compute anything. We as outside homunculi might describe them computationally, and it is often useful to do so. But you do not understand hammering by supposing that nails are somehow intrinsically implementing hammering algorithms and you do not understand vision by supposing the system is implementing, e.g., the shape from shading algorithm.
The Causal Powers of Syntax
As with physics, and quite unsurprisingly given that argument, Searle contends that syntax has no causal power: the mere rule-governed manipulation of strings of symbols cannot account for behavior apart from the possibly complex set of neurophysiological events an organism undergoes in interacting with its environment.
Searle's Argument from the Causal Powers of Syntax | |||
1 | The attribution of syntax to neurophysiology adds nothing to the causal role already played by neurophysiology. | ||
2 | If (1), then computationalism is not true. | ||
∴ | 3 | Computationalism is not true. | 1&2 |
(2) is a bit quick, perhaps. The point is that if computationalism were true, then the brain would have to be viewed as a rule-governed manipulator of strings of symbols because only under that description would it have the necessary causal powers. But since syntax is causally impotent, it's not the case that the brain has to be viewed as a rule-governed manipulator of strings of symbols. Indeed, it is quite deceptive to view it that way. As Searle puts it,
To explore this puzzle let us try to make the case for Cognitivism by extending the Primal Story to show how the Cognitivist investigative procedures work in actual research practice. The idea, typically, is to program a commercial computer so that it simulates some cognitive capacity, such as vision or language. Then, if we get a good simulation, one that gives us at least Turing equivalence, we hypothesize that the brain computer is running the same program as the commercial computer, and to test the hypothesis we look for indirect psychological evidence, such as reaction times. So it seems that we can causally explain the behavior of the brain computer by citing the program in exactly the same sense in which we can explain the behavior of the commercial computer. Now what is wrong with that? Doesn't it sound like a perfectly legitimate scientific research program? We know that the commercial computer's conversion of input to output is explained by a program, and in the brain we discover the same program, hence we have a causal explanation.
Two things ought to worry us immediately about this project. First, we would never accept this mode of explanation for any function of the brain where we actually understood how it worked at the neurobiological level. Second, we would not accept it for other sorts of system that we can simulate computationally.
The Brain as Information Processor
Here is what Searle says about the argument:
The [computationalist's] mistake is to suppose that in the sense in which computers are used to process information, brains also process information. To see that that is a mistake contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics.
But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately. To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, "There is a car coming toward me". But that is not what happens in the actual biology. In the biology a concrete and specific series of electro-chemical reactions are set up by the assault of the photons on the photo receptor cells of my retina, and this entire process eventually results in a concrete visual experience. The biological reality is not that of a bunch of words or symbols being produced by the visual system, rather it is a matter of a concrete specific conscious visual event; this very visual experience. Now that concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, do an information processing model of that event or of its production, as we can do an information model of the weather, digestion or any other phenomenon, but the phenomena themselves are not thereby information processing systems.
Although this argument is somewhat less clear than the others, here is one way to cast it.
Searle's Argument from the Brain as Information Processor | |||
1 | If X is assigned the role of information processor by an agent, then X has causal properties which are antecedent to, and independent of, X's role as an information processor. | ||
2 | If X has causal properties which are antecedent to, and independent of, X's role as an information processor, then X is not intrinsically an information processor. | ||
3 | If computationalism is true, then the brain is intrinsically an information processor. | ||
4 | The brain is assigned the role of information processor by the computationalist. | ||
∴ | 5 | Computationalism is not true. | 1,2,3&4 |
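Cast this way, the argument is again propositionally valid, as a Lean 4 sketch confirms. The atoms are again schematic labels of my own, with the universally quantified premises (1) and (2) instantiated to the brain:

```lean
-- Schematic atoms (my labels, instantiating X to the brain):
--   assigned   : the brain is assigned the information-processor role by an agent
--   antecedent : the brain has causal properties antecedent to, and
--                independent of, that role
--   intrinsic  : the brain is intrinsically an information processor
--   comp       : computationalism is true
example (assigned antecedent intrinsic comp : Prop)
    (p1 : assigned → antecedent)   -- premise 1
    (p2 : antecedent → ¬intrinsic) -- premise 2
    (p3 : comp → intrinsic)        -- premise 3
    (p4 : assigned) :              -- premise 4
    ¬comp :=
  -- assume computationalism; p3 yields intrinsic, while p4, p1, p2
  -- yield ¬intrinsic, a contradiction
  fun hc => (p2 (p1 p4)) (p3 hc)
```

As before, the formalization establishes only validity; whether the premises, especially (1) and (2), are true is precisely what is at issue between Searle and the computationalist.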