Thursday 2/20

Robot Intentionality II: The Chinese Room Replies, Boden's Response, and Searle's Argument

Synopsis

To frame this synopsis, begin with this YouTube news report.

On the second day, Watson went on to trounce its human competitors. More recently, IBM has announced that it is licensing Watson as a medical assistant, which raises interesting questions about what, precisely, it will mean to be a physician. This article in the NY Times neatly summarizes the IBM project and where IBM hopes to take it next.

Does Watson understand the questions it is answering, however? This is the challenge Searle presents with the justly famous Chinese Room Thought Experiment.

I'm grateful to a former student of Minds and Machines for finding and recommending the following (3+ minutes) segment from the BBC: The Chinese Room Experiment - The Hunt for AI - BBC. (YouTube)

To be sure, the very simplicity of Searle's Chinese Room Thought Experiment lends it considerable force against the proposition that any effective procedure relying solely on the rule-governed manipulation of strings of symbols could exhibit intentionality. Nevertheless, in proposing the Chinese Room Thought Experiment, Searle met not a few counter-arguments. Considering these counter-arguments as we did today, along with Searle's responses to them, goes a long way towards fully fleshing out Searle's criticism.
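It may help to see just how little is required to implement the sort of procedure Searle targets. Here is a minimal sketch in Python (the rule book and phrases are invented for illustration): the program pairs input strings with output strings by lookup alone, and no step anywhere consults what the symbols mean.

    # A toy "Chinese Room": a purely syntactic rule book pairing input
    # strings with output strings. The rules and phrases are invented for
    # illustration; nothing in the program depends on what they mean.

    RULE_BOOK = {
        "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
        "今天天气好吗？": "天气很好。",  # "Is the weather good today?" -> "The weather is fine."
    }

    def chinese_room(symbols: str) -> str:
        """Return whatever string the rule book pairs with the input.

        The lookup is symbol-in, symbol-out: meaning plays no role.
        """
        return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."

    print(chinese_room("你好吗？"))  # prints 我很好。 -- with no understanding anywhere

However large the rule book grows, every step remains the rule-governed manipulation of uninterpreted strings; Searle's claim is that no amount of such manipulation adds up to understanding.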

Without rehearsing our entire discussion, we considered:

  • The Systems Reply
  • The Robot Reply
  • The Brain Simulator Reply
  • The Combination Reply
  • The Other Minds Reply
  • The Many Mansions Reply

Focusing, first, on the ideas that motivated each reply to the Chinese Room Thought Experiment and made it prima facie plausible and, second, on the reasons Searle gave for rejecting each reply upon careful consideration goes a considerable distance toward grasping Searle's point in constructing the thought experiment.

Now, a curious feature of Searle's challenge to traditional artificial intelligence is his claim that intentionality is fundamentally a biological phenomenon. That is, our brains have the capacity to underwrite intentional states because they have specific causal features which depend, ultimately, on the particular stuff out of which the brain is made.

One might imagine resurrecting Putnam's Multiple Realizability Argument against Type-Physicalism. That is, one might argue that Searle's emphasis on the particular causal features of our neurobiology makes him into something of a Type-Physicalist, so that nothing which lacks our special neurobiology can have mental states, since mental states are intentional yet intentionality, on this reading, is biological in a uniquely human sense.

This criticism would not be quite right, however, nor is it part of Boden's Reply. In particular, Searle can and does admit that mental states are multiply realizable. Crucially for Searle, though, the other stuff that realizes mental states must have mostly the same causal features as the biological stuff which underwrites our mental states. So intelligence can be realized in other substances, but those substances must be similar in specific causal respects to our own. It is not the case that intelligence can be constructed out of any old thing, but neither is it the case that we are the only things that can be intelligent. A puzzle for Searle is just how similar the other stuff must be to ours to have enough of the same causal features to secure intentionality.

A better question, and the one Boden asks, is why the causal capacities Searle thinks underwrite intentionality are features of mere substance as opposed to features of a substance having a specific structure. After all, it seems that the causal capacities a thing has are more a matter of its structure than the material out of which it is made. Try playing billiards with cubes instead of spheres. Even if the cubes are made of the same stuff as the spheres, you won't have much of a game.
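A minimal sketch may make the structural point vivid. The toy machine below is invented for illustration: one and the same transition structure is realized twice, once over string-valued states and once over integer-valued states, and the behavior is identical. On this way of looking at things, it is the shared structure, not the "stuff" of the states, that does the causal work.

    # Illustrative only: one transition structure, two different "substrates".
    # The structure (which state follows which, given an input) does the
    # causal work; the material of the states (strings vs. integers) does not.

    def run(transitions, state, inputs):
        """Drive any substrate through the same abstract transition structure."""
        for symbol in inputs:
            state = transitions[(state, symbol)]
        return state

    # Substrate 1: states are strings (a parity machine).
    string_machine = {("even", 1): "odd", ("odd", 1): "even",
                      ("even", 0): "even", ("odd", 0): "odd"}

    # Substrate 2: states are integers, with an isomorphic structure.
    int_machine = {(0, 1): 1, (1, 1): 0, (0, 0): 0, (1, 0): 1}

    inputs = [1, 0, 1, 1]
    print(run(string_machine, "even", inputs))  # -> odd
    print(run(int_machine, 0, inputs))          # -> 1 (the isomorphic state)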

In short, it's fine for Searle to say

The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. (John Searle, "Is the Brain a Digital Computer?")

Yet even if we agree that the rule-governed manipulation of strings of symbols will never yield intentionality, we still need an explanation of why processes other than neurobiological ones cannot yield it either.

Today we took up Boden's reply and began considering Block's effort to resurrect the Systems Reply.

Before turning to their arguments, though, I should like briefly to revisit the puzzling philosophical notion of intentionality from last time. Here is a passage from the first chapter (pp. 5-8) of John Haugeland's "Mind Design II: Philosophy, Psychology, Artificial Intelligence" (Cambridge, Mass.: MIT Press, 1997), which might help:

"Intentionality", said Franz Brentano (1874/1973), "is the mark of the mental". By this he meant that everything mental has intentionality, and nothing else does (except in a derivative or second-hand way), and, finally, that this fact is the definition of the mental. 'Intentional' is used here in a medieval sense that harks back to the original Latin meaning of "stretching toward" something; it is not limited to things like plans and purposes, but applies to all kinds of mental acts. More specifically, intentionality is the character of one thing being "of" or "about" something else, for instance by representing it, describing it, referring to it, aiming at it, and so on. Thus, intending in the narrower modern sense (planning) is also intentional in Brentano's broader and older sense but much else is as well, such as believing, wanting, remembering, imagining, fearing, and the like.

Intentionality is peculiar and perplexing. It looks on the face of it to be a relation between two things. My belief that Cairo is hot is intentional because it is about Cairo (and/or its being hot). That which an intentional act or state is about (Cairo or its being hot, say) is called its intentional object. (It is this intentional object that the intentional state stretches toward.) Likewise, my desire for a certain shirt, my imagining a party on a certain date, my fear of dogs in general, would be "about"--that is, have as their intentional objects--that shirt, a party on that date, and dogs in general. Indeed, having an object in this way is another way of explaining intentionality; and such having seems to be a relation, namely between the state and its object.

But, if it's a relation, it's a relation like no other. Being-inside-of is a typical relation. Now notice this: if it is a fact about one thing that it is inside of another, then not only that first thing but also the second has to exist: X cannot be inside of Y, or indeed be related to Y in any other way, if Y does not exist. This is true of relations quite generally; but it is not true of intentionality. I can perfectly well imagine a party on a certain date, and also have beliefs, desires and fears about it, even though there is (was, will be) no such party. Of course, those beliefs would be false, and those hopes and fears unfulfilled; but they would be intentional--be about, or "have", those objects--all the same.

It is this puzzling ability to have something as an object, whether or not that something actually exists, that caught Brentano's attention. Brentano was no materialist: he thought that mental phenomena were one kind of entity, and material or physical phenomena were a completely different kind. And he could not see how any merely material or physical thing could be in fact related to another, if the latter didn't exist; yet every mental state (belief, desire, and so on) has this possibility. So intentionality is the definitive mark of the mental...

Many material things that aren't intentional systems are nevertheless about other things - including, sometimes, things that don't exist. Written sentences and stories, for instance, are in some sense material; yet they are often about fictional characters and events. Even pictures and maps can represent nonexistent scenes and places. Of course, Brentano knew this... But [he] can say that this sort of intentionality is only derivative. Here's the idea: sentence inscriptions--ink marks on a page, say--are only about anything because we (or other intelligent users) mean them that way. Their intentionality is second-hand, borrowed or derived from the intentionality that those users already have.

So, a sentence like "Santa lives at the North Pole", or a picture of him or a map of his travels, can be about Santa (who, alas, doesn't exist), but only because we can think that he lives there, and imagine what he looks like and where he goes. It's really our intentionality that these artifacts have, second-hand, because we use them to express it. Our intentionality itself, on the other hand, cannot likewise be derivative: it must be original. (Original, here, just means not derivative, not borrowed from somewhere else. If there is any intentionality at all, at least some of it must be original; it can't all be derivative.)

The problem for mind design is that artificial intelligence systems, like sentences and pictures, are also artifacts. So it can seem that their intentionality too must always be derivative--borrowed from their designers or users, presumably--and never original. Yet, if the project of designing and building a system with a mind of its own is ever really to succeed, then it must be possible for an artificial system to have genuine original intentionality, just as we do. Is that possible?

Think again about people and sentences with their original and derivative intentionality, respectively. What's the reason for that difference? Is it really that sentences are artifacts, whereas people are not, or might it be something else? Here's another candidate. Sentences don't do anything with what they mean: they never pursue goals, draw conclusions, make plans, answer questions, let alone care whether they are right or wrong about the world; they just sit there, utterly inert and heedless. A person, by contrast, relies on what he or she believes and wants in order to make sensible choices and act efficiently; and this entails, in turn, an ongoing concern about whether those beliefs are really true, those goals really beneficial, and so on. In other words, real beliefs and desires are integrally involved in a rational, active existence, intelligently engaged with its environment. Maybe this active, rational engagement is more pertinent to whether the intentionality is original or not than is any question of natural or artificial origin.

I find Haugeland's discussion of intentionality illuminating, with the necessary caveat that intentionality is such a fundamental notion that virtually any discussion of it will involve a certain amount of handwaving. It is, to be sure, a slippery concept.

Let us return now to Boden's reply to Searle's Chinese Room Thought Experiment so as to set the stage next time for Searle's subsequent analysis of the brain. Even if we do not endorse Boden's argument that the Chinese Room qua Robot exhibits some minimal intentionality, it is much harder to dismiss her challenge to Searle's position that the brain is an organ which has evolved to realize intentional states. Thus, for Searle, biochemical processes underwrite intentionality in a way that the rule-governed manipulation of strings of symbols cannot.

Boden's challenge to this view is that what is important about the brain is not its biochemical processes per se but what those biochemical processes do in transmitting information. In a manner of speaking, Searle recapitulates the Type-Physicalist's error (at least, according to Putnam's Multiple Realizability Argument). If intentionality is the mark of the mental, as Brentano reminds us, then restricting intentional states to our peculiar biochemical processes restricts minds to only those things that enjoy similar composition.

To be sure, that is not all of Boden's response to Searle. She argues that, properly understood, the Chinese Room as Searle conceives it necessarily includes some, albeit minimal, original intentionality.

Today we did not get to Searle's argument in response, largely and unregrettably because of the terrific discussion we had over these arguments. We'll begin there next time before moving on to our discussion of Block's Systems Reply and subsequently Dretske's analysis.