Tuesday 2/25

Robot Intentionality III: Dretske's Response

Synopsis

Recall that Boden's criticisms of Searle begin by noting Searle's apparently peculiar insistence that while the brain with all its masses of interconnected neurons provides original intentionality (and, thus, cognition itself), nothing but the brain--not even something very like the brain, recalling the Brain Simulator Reply Searle discusses--does. Searle's position can thus be seen as unduly anthropocentric, in something of an echo of the Multiple Realizability Argument.

Searle, naturally, wants to meet the charge of excessive anthropocentrism. To do this he carefully clarifies and defends his position (see our notes for more on his argument). In broad outline: syntax is not intrinsic to physics; it is only read into physics by observers, who confer at most derived intentionality in virtue of their own original intentionality. Yet if syntax is not intrinsic to physics, syntax enjoys no causal powers beyond those conferred by observers. As a result, the rule-governed manipulation of strings of symbols, however it is physically realized, cannot bear the causal relationships it must bear to exhibit original intentionality. Viewing the brain computationally, then, is possible only if one commits the homunculus fallacy--in effect begging the question by hypothesizing intentionality-rich sub-minds. Further, while we can view the brain as a computer in the same sense in which nearly any physical process can be harnessed to suit computational needs, it is not intrinsically an information processor, which it would have to be to cash the check computationalism (the view, assumed by virtually the entire Cognitive Science community, that the brain is a natural kind of computer) has written. In this way, Searle's response to Boden can be taken as a criticism of computationalism itself, and an important one: if something is a computer only insofar as it is viewed as such, then computers (and, with them, computational states) are not natural kinds--they are not part of nature at all.
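To see what Searle means by saying that syntax is not intrinsic to physics, here is a toy sketch (ours, not Searle's; the voltages and encodings are invented for illustration) of how the very same physical process can be read as two different symbol-manipulations depending on the observer's encoding:

    # A toy illustration of the claim that syntax is observer-relative:
    # the same physical trace supports different "computations" depending
    # on how an observer maps physical states onto symbols.

    # A "physical" trace: voltage levels sampled from some device.
    trace = [0.9, 0.1, 0.8, 0.2]

    def encoding_a(v):
        # Observer A reads high voltage as the symbol '1'.
        return '1' if v > 0.5 else '0'

    def encoding_b(v):
        # Observer B reads the very same high voltage as the symbol '0'.
        return '0' if v > 0.5 else '1'

    string_a = ''.join(encoding_a(v) for v in trace)  # '1010'
    string_b = ''.join(encoding_b(v) for v in trace)  # '0101'

    # Same physics, two different symbol strings: the syntax lives in the
    # observers' mappings, not in the voltages themselves.
    print(string_a, string_b)

On Searle's view, nothing about the voltages privileges one mapping over the other; whatever syntax (and hence computation) is present is conferred by the observer.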

Thus far Searle's approach and his defense of it. Today we also contrasted Boden's and Block's responses to the Chinese Room Thought Experiment with Dretske's response and got a start on setting out Dretske's response.

It is important to recognize that Boden's criticism--that the Chinese Room necessarily evinces some intentionality, however rudimentary--and Block's criticism--that, properly conceived, the Systems Reply is correct insofar as Searle-in-the-Chinese-Room is an English-understanding system implementing a Chinese-understanding system without being aware of that fact--are both critical responses to the Chinese Room Thought Experiment. That is, they both argue that the thought experiment does not show what Searle takes it to show, namely that intentionality cannot arise from the rule-governed manipulation of strings of symbols or, equivalently, that a machine can appear to understand without actually understanding what it is doing. They do this by arguing that intentionality can so arise (Boden) or that understanding and awareness of understanding are two different things (Block), contra Searle.
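To make "rule-governed manipulation of strings of symbols" concrete, here is a deliberately crude sketch of our own (the phrasebook entries are invented placeholders for Searle's rulebook) of the kind of lookup Searle imagines performing in the room:

    # A deliberately crude "Chinese Room": responses are produced by pure
    # string lookup, with no grasp of what any of the symbols mean.

    rulebook = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "会一点。",    # "Do you speak Chinese?" -> "A little."
    }

    def room(input_symbols):
        # Match the input string against the rulebook and return the
        # paired output string; the matching is purely syntactic.
        return rulebook.get(input_symbols, "请再说一遍。")  # "Please say that again."

    print(room("你好吗？"))  # fluent-looking output, zero understanding

Searle's claim is that no amount of elaborating the rulebook changes the situation: the lookup is syntax all the way down.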

Another approach is to just grant the Chinese Room Thought Experiment in its entirety. Accept Searle's claims about what the Chinese Room Thought Experiment is supposed to show, but take it as a challenge to be met. If intentionality cannot arise from the rule-governed manipulation of strings of symbols, what must we add to computation to secure intentionality? How, in other words, might we go about grounding the symbols used in those rule-governed manipulations that constitute computation? This is Dretske's approach, which has made him a popular read in roboticist circles and which we take up next time after concluding our discussion of Searle.

The task of laying out the groundwork for understanding Dretske's answer to the Chinese Room Thought Experiment is somewhat non-trivial. [As we shall see, we must distinguish between 'intention'--as in, acting with a goal or purpose--'intentionality'--as in, the aboutness or directedness sentences (derivatively) and mental states (originally) enjoy--and 'intensionality', which refers to the failure of substitutivity of co-referring terms or co-extensional predicates salva veritate in linguistic contexts (contexts sometimes called 'opaque' when they exhibit this failure). Granted, intensionality-with-an-s is a technical notion.]
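For instance (the classic Superman case, rendered in a toy sketch of our own; the names and dictionaries are invented for illustration): 'Lois believes Superman can fly' may be true while 'Lois believes Clark Kent can fly' is false, even though 'Superman' and 'Clark Kent' co-refer.

    # Toy illustration of intensionality: substituting co-referring names
    # preserves truth in extensional contexts but can fail in belief
    # ("opaque") contexts.

    # Two names, one referent.
    reference = {"Superman": "kal-el", "Clark Kent": "kal-el"}

    # An extensional fact about the referent.
    can_fly = {"kal-el"}

    def flies(name):
        # Extensional context: truth depends only on the referent, so
        # co-referring names are interchangeable salva veritate.
        return reference[name] in can_fly

    # Lois's beliefs are keyed to names, not referents.
    lois_believes_flies = {"Superman"}

    def lois_believes_can_fly(name):
        # Opaque context: substituting a co-referring name can flip the
        # truth value.
        return name in lois_believes_flies

    print(flies("Superman"), flies("Clark Kent"))  # True True
    print(lois_believes_can_fly("Superman"),
          lois_believes_can_fly("Clark Kent"))     # True False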

To frame Dretske's response, however, let us briefly revisit Dretske's Dictum and our discussion of it:

You don't understand it if you don't know how to build it.

Now, perhaps, is as good a time as any to review and clarify this assertion. Here is what Dretske himself says in explaining it (from "If You Can't Make One, You Don't Know How It Works"):

There are things I believe that I cannot say - at least not in such a way that they come out true. The title of this essay is a case in point. I really do believe that, in the relevant sense of all the relevant words, if you can't make one, you don't know how it works. The trouble is I do not know how to specify the relevant sense of all the relevant words.

I know, for instance, that you can understand how something works and, for a variety of reasons, still not be able to build one. The raw materials are not available. You cannot afford them. You are too clumsy or not strong enough. The police will not let you.

I also know that you may be able to make one and still not know how it works. You do not know how the parts work. I can solder a snaggle to a radzak, and this is all it takes to make a gizmo, but if I do not know what snaggles and radzaks are, or how they work, making one is not going to tell me much about what a gizmo is. My son once assembled a television set from a kit by carefully following the instruction manual. Understanding next to nothing about electricity, though, assembling one gave him no idea of how television worked.

I am not, however, suggesting that being able to build one is sufficient for knowing how it works. Only necessary. And I do not much care about whether you can actually put one together. It is enough if you know how one is put together. But, as I said, I do not know how to make all the right qualifications. So I will not try. All I mean to suggest by my provocative title is something about the spirit of philosophical naturalism. It is motivated by a constructivist's model of understanding. It embodies something like an engineer's ideal, a designer's vision, of what it takes to really know how something works. You need a blueprint, a recipe, an instruction manual, a program. This goes for the mind as well as any other contraption. If you want to know what intelligence is, or what it takes to have a thought, you need a recipe for creating intelligence or assembling a thought (or a thinker of thoughts) out of parts you already understand.

In speaking of parts one already understands, I mean, of course, parts that do not already possess the capacity or feature one follows the recipe to create. One cannot have a recipe for cake that lists a cake, not even a small cake, as an ingredient. One can, I suppose, make a big cake out of small cakes, but recipes of this sort will not help one understand what a cake is (although it might help one understand what a big cake is). As a boy, I once tried to make fudge by melting fudge in a frying pan. All I succeeded in doing was ruining the pan. Don't ask me what I was trying to do - change the shape of the candy, I suppose. There are perfectly respectable recipes for cookies that list candy (e.g., gumdrops) as an ingredient, but one cannot have a recipe for candy that lists candy as an ingredient. At least it will not be a recipe that tells you how to make candy or helps you understand what candy is. The same is true of minds. That is why a recipe for thought cannot have interpretive attitudes or explanatory stances among the eligible ingredients - not even the attitudes and stances of others. That is like making candy out of candy - in this case, one person's candy out of another person's candy. You can do it, but you still will not know how to make candy or what candy is.

In comparing a mind to candy and television sets I do not mean to suggest that minds are the sort of thing that can be assembled in your basement or in the kitchen. There are things, including things one fully understands, things one knows how to make, that cannot be assembled that way. Try making Rembrandts or $100 bills in your basement. What you produce may look genuine, it may pass as authentic, but it will not be the real thing. You have to be the right person, occupy the right office, or possess the appropriate legal authority in order to make certain things. There are recipes for making money and Rembrandts, and knowing these recipes is part of understanding what money and Rembrandts are, but these are not recipes you and I can use. Some recipes require a special cook.

This is one (but only one) of the reasons it is wrong to say, as I did in the title, that if you cannot make one, you do not know how it works. It would be better to say, as I did earlier, that if you do not know how to make one, or know how one is made, you do not really understand how it works.

Perhaps a simpler way to make the same point: just as you can drive a car without understanding how a car works, you can use your mind without understanding it. To understand the mind, as with the car, you have to understand how it's built--how, that is, all the parts and pieces fit together and function in such a way as to get you down the road (both car and mind!). You don't actually need to be able to build one, of course, but you do need to know how it's built to truly understand it.

This, as we have said, is something of an engineering constraint, one which has largely been ignored by psychology and psychiatry. Its value is recognized, however, by cognitive psychologists and cognitive neuroscientists. We may see them as friends, even if they are often quick to sweep complicated problems under the rug while claiming, with an unappealing hubris, to have solved them.

Now, it is important to emphasize (again) that Dretske's strategy is very different from Boden's or Block's. Where Boden and Block put their efforts into showing how the Chinese Room Thought Experiment does not show what Searle thinks it shows--in each case by arguing that we do find intentionality in the Chinese Room, even if we might have to look very hard for it--Dretske's response grants Searle's basic point: we won't have created artificial intelligence if we go about it along anything like the lines of the Chinese Room construction.

Rather, we have to take seriously the challenge of figuring out how to efficiently compute cognitive (intentional) functions. (This is why I say that Dretske has something of an engineering approach to philosophy, and it is one of many reasons why he tends to be popular among roboticists.) Dretske grants, that is, that the Chinese Room poses a problem, but a problem he thinks we can solve: his strategy is to concede Searle's point in the Chinese Room Thought Experiment while arguing that original intentionality is neither rare nor special.

First, we need to approach this problem as an engineer might and figure out how to build a mind out of simpler elements we already understand. Since original intentionality is a distinctive property of minds--not to mention a uniquely problematic feature for those wishing to build minds--it would be helpful if the elements of our design already exhibited original intentionality, giving us something of a leg-up in our project.

Notice that building a mind out of elements having original intentionality is not itself a problem for understanding the mind, since our goal is understanding the mind, not understanding intentionality. (Recall Dretske's recipe constraint: the ingredients must not already possess the capacity the recipe is meant to create--here, mindedness--but they may well possess intentionality.) Along the way, of course, we will come to a better understanding of intentionality if all goes as it should.

Yet is it especially easy to find basic elements that exhibit original intentionality?

Dretske thinks so; it is as easy as going to the local hardware store to buy a compass. Since Dretske's argument draws on oft-confusing distinctions in the Philosophy of Language, we will attempt to spell them out next time using lots of examples.
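As a preview (our gloss on Dretske, not his text): the compass needle's orientation lawfully covaries with the direction of the local magnetic field, so the needle's state carries information about, or indicates, magnetic north whether or not anyone ever reads it. A toy simulation of that lawful covariation:

    # A toy model of Dretske-style indication: the needle's state lawfully
    # covaries with the state of the world, so it carries information
    # about the world whether or not any observer interprets it.

    import random

    def field_direction():
        # The world: direction of magnetic north in degrees.
        return random.uniform(0.0, 360.0)

    def needle_reading(north):
        # The compass: the needle settles toward the field direction,
        # plus a small, lawful mechanical deviation.
        return (north + random.gauss(0.0, 1.0)) % 360.0

    north = field_direction()
    reading = needle_reading(north)

    # The reading indicates north in virtue of the lawful dependence
    # above, not because anyone assigns it that meaning.
    print(f"north={north:.1f} degrees, needle={reading:.1f} degrees")

The lawful dependence, not any interpreter, is what does the representational work; that, in brief, is the leg-up Dretske hopes the compass provides.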