Thursday 3/5

Robot Intentionality VI: Theories of Meaning

Synopsis

We are running a bit behind: today we were supposed to take up the Frame Problem. The problem is fascinating, at least in part because, as Dennett (correctly, I submit) points out, here we have a problem wholly emerging from considerations in Artificial Intelligence (just how do we get a robot to do that?) which turns into a novel epistemic puzzle when we think about our own human intelligence (how do we do that in the first place?).

I was eager, however, to get on to theories of meaning, as they become foundationally important for some of the arguments we present later, so I elided the discussion altogether. Now, as I sit down to post this synopsis, I find I rue that decision, not least because it wrong-foots those who had determined to take up the Frame Problem in their term-paper prospectuses. Accordingly, I'll begin the synopsis with an overview of what we should (in hindsight) have discussed in class.

Now, the Frame Problem is at once a formal problem in computer science and, as Dennett articulates it, an epistemic problem. In his characteristically clear way of explaining matters, Dennett casts the Frame Problem in terms of a concrete example:

Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room, and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT (Wagon, Room, t) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 knew that the bomb was on the wagon in the room, but didn't realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.

Back to the drawing board. 'The solution is obvious,' said the designers. 'Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side-effects, by deducing these implications from the descriptions it uses in formulating its plans.' They called their next model, the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT (Wagon, Room, t) it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the colour of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon - when the bomb exploded.

Back to the drawing board. 'We must teach it the difference between relevant implications and irrelevant implications,' said the designers, 'and teach it to ignore the irrelevant ones.' So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in their next model, the robot-relevant-deducer, or R2D1 for short. When they subjected R2D1 to the test that had so unequivocally selected its ancestors for extinction, they were surprised to see it sitting, Hamlet-like, outside the room containing the ticking bomb, the native hue of its resolution sicklied o'er with the pale cast of thought, as Shakespeare (and more recently Fodor) has aptly put it. 'Do something!' they yelled at it. 'I am,' it retorted. 'I'm busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and...' the bomb went off.
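The deducer's predicament is easy to make vivid computationally. Here is a toy sketch (my own, not Dennett's, and deliberately naive) of R1D1's strategy of checking every implication of a planned action against every combination of facts in its world model before acting; since the candidate combinations grow as 2^n in the size of the model, the bomb's timer wins:

```python
import itertools
import time

# Toy world model: the relevant facts plus a modest stock of background facts.
facts = [("on", "battery", "wagon"), ("on", "bomb", "wagon"),
         ("in", "wagon", "room"), ("colour", "walls", "white")]
facts += [("background-fact", i) for i in range(40)]

def implications(action, kb):
    # Naive deduction: every subset of the knowledge base is a candidate
    # interaction with the action, and the deducer examines them all.
    for r in range(len(kb) + 1):
        for combo in itertools.combinations(kb, r):
            yield (action, combo)

deadline = time.monotonic() + 0.1          # the bomb's fuse, in seconds
checked = 0
for _ in implications("PULLOUT(wagon, room, t)", facts):
    checked += 1
    if time.monotonic() >= deadline:
        print(f"BOOM -- examined only {checked:,} of 2**{len(facts)} candidates")
        break
```

Nothing hangs on the details; the point is just that "consider all the implications first" is not a strategy any finite agent on a deadline can follow.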

What is the general philosophical problem posed by R2D1, and why haven't philosophers focused on it heretofore? With no little snarkiness, Dennett goes on to explain what it is and why philosophers have done such a good job ignoring it:

One utterly central - if not defining - feature of an intelligent being is that it can 'look before it leaps'. Better, it can think before it leaps. Intelligence is (at least partly) a matter of using well what you know - but for what? For improving the fidelity of your expectations about what is going to happen next, for planning, for considering courses of action, for framing further hypotheses with the aim of increasing the knowledge you will use in the future, so that you can preserve yourself, by letting your hypotheses die in your stead (as Sir Karl Popper once put it). The stupid - as opposed to ignorant - being is the one who lights the match to peer into the fuel tank, who saws off the limb he is sitting on, who locks his keys in his car and then spends the next hour wondering how on earth to get his family out of the car.

But when we think before we leap, how do we do it? The answer seems obvious: an intelligent being learns from experience, and then uses what it has learned to guide expectation in the future. Hume explained this in terms of habits of expectation, in effect. But how do the habits work? Hume had a hand-waving answer - associationism - to the effect that certain transition paths between ideas grew more likely-to-be-followed as they became well worn, but since it was not Hume's job, surely, to explain in more detail the mechanics of these links, problems about how such paths could be put to good use - and not just turned into an impenetrable maze of untraversable alternatives - were not discovered.

Hume, like virtually all other philosophers and 'mentalistic' psychologists, was unable to see the frame problem because he operated at what I call a purely semantic level, or a phenomenological level. At the phenomenological level, all the items in view are individuated by their meanings. Their meanings are, if you like, 'given' - but this just means that the theorist helps himself to all the meanings he wants. In this way the semantic relation between one item and the next is typically plain to see, and one just assumes that the items behave as items with those meanings ought to behave. We can bring this out by concocting a Humean account of a bit of learning.

Suppose that there are two children, both of whom initially tend to grab cookies from the jar without asking. One child is allowed to do this unmolested but the other is spanked each time she tries. What is the result? The second child learns not to go for the cookies. Why? Because she has had experience of cookie-reaching followed swiftly by spanking. What good does that do? Well, the idea of cookie-reaching becomes connected by a habit path to the idea of spanking, which in turn is connected to the idea of pain... so of course the child refrains. Why? Well, that's just the effect of that idea on that sort of circumstance. But why? Well, what else ought the idea of pain to do on such an occasion? Well, it might cause the child to pirouette on her left foot, or recite poetry, or blink, or recall her fifth birthday. But given what the idea of pain means, any of those effects would be absurd. True; now how can ideas be designed so that their effects are what they ought to be, given what they mean? Designing some internal things - an idea, let's call it - so that it behaves vis-a-vis its brethren as if it meant cookie or pain is the only way of endowing that thing with that meaning; it couldn't mean a thing if it didn't have those internal behavioural dispositions.

That is the mechanical question the philosophers left to some dimly imagined future researcher. Such a division of labour might have been all right, but it is turning out that most of the truly difficult and deep puzzles of learning and intelligence get kicked downstairs by this move. It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half trick, they explain that it is really quite obvious: the magician doesn't really saw her in half; he simply makes it appear that he does. 'But how does he do that?' we ask. 'Not our department', say the philosophers - and some of them add, sonorously: 'Explanation has to stop somewhere.'
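Dennett's complaint about the hand-waving can itself be made concrete. Below is a toy rendering (mine, not Hume's or Dennett's) of associationist "habit paths": co-occurrence strengthens a link between ideas, so cookie-reaching comes to call up spanking, and spanking pain. What the sketch conspicuously fails to explain is exactly what Dennett highlights: how the pain idea produces the appropriate effect rather than a pirouette:

```python
from collections import defaultdict

strength = defaultdict(float)          # habit-path strengths between ideas

def experience(*ideas):
    """Each co-occurrence wears the path between successive ideas deeper."""
    for a, b in zip(ideas, ideas[1:]):
        strength[(a, b)] += 1.0

for _ in range(5):                      # five spankings
    experience("cookie-reaching", "spanking", "pain")

def most_associated(idea):
    links = {b: w for (a, b), w in strength.items() if a == idea}
    return max(links, key=links.get) if links else None

print(most_associated("cookie-reaching"))   # -> 'spanking'
print(most_associated("spanking"))          # -> 'pain'
# But why does 'pain' make the child refrain, rather than pirouette or
# recite poetry? The associationist story is silent -- Dennett's point.
```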

Snark aside, you now see how the Frame Problem is fundamentally a problem of intentionality--how, that is, our mental states can be not just about states of affairs but about the right or relevant states of affairs.

For more on the formal problem and its relationship to Dennett's Frame Problem, see the handout, which is just the Stanford Encyclopedia of Philosophy entry on the Frame Problem and is quite good at describing the formal problem in lay terms.
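To give just a flavor of the formal problem (my gloss here, not the handout's): in the situation calculus, effect axioms say what an action changes but are silent about what it leaves alone, so the planner must be supplied a separate "frame axiom" for, roughly, every action-fluent pair that does not interact:

```latex
% Effect axiom: pulling the wagon out empties the room of the wagon.
\forall s \,.\; In(wagon, room, s) \rightarrow \neg In(wagon, room, Result(pullout, s))

% Frame axioms: one for each fluent the action does NOT change, e.g.
\forall s \,.\; Colour(walls, white, s) \rightarrow Colour(walls, white, Result(pullout, s))
\forall s \,.\; On(bomb, wagon, s) \rightarrow On(bomb, wagon, Result(pullout, s))
% ... and so on, on the order of |Actions| x |Fluents| axioms in all.
```

Notice that the second frame axiom is precisely the "obvious implication" poor R1 missed. The multiplication of such axioms is the formal problem; Dennett's point is that even if the logical bookkeeping were solved, the epistemic question of relevance would remain.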

Notice that the Frame Problem is not only a serious problem for the project of building minds. It is also, like the Problem of Intentionality itself, a quite general problem that anyone who thinks they can understand the human mind without being able to build minds--anyone, in other words, who would reject Dretske's Dictum--must face. Somehow the human mind has evolved in such a way that it is able to attend to all and only those features of its environment relevant to successful action. How it does that is, frankly, a mystery.

We can, being good Cognitivists in Searle's somewhat derogatory sense, posit the existence of an Attention Mechanism or Homunculus that directs belief and desire formation with an eye to relevance, but that only highlights the problem: How does this Homunculus know what to count as relevant to action and what not?

Thus far the Frame Problem.

Today we took up a problem that has been lurking for some time. Our focus since encountering the skeptical challenge posed by the Chinese Room Thought Experiment has been to understand how original intentionality is possible. Recall that one way of describing it is as the symbol-grounding problem: How is it that the symbols subject to rule-governed manipulation by Turing Machines mean anything? For that is what the Chinese Room Thought Experiment seems to show: the squiggles and squoggles that are the objects of Searle-in-the-Chinese-Room's obsessive transcriptions mean nothing to Searle apart from their association via the rule-book he so assiduously follows. The lurking problem is this: Quite apart from squiggles and squoggles, how do the words of natural language come to have the meanings they do in the first place? Put another way, how is linguistic communication possible?
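A minimal sketch (my illustration, not Searle's own text) of what such meaning-free, rule-governed manipulation looks like: the rule-book pairs input shapes with output shapes, and following it requires no grasp of what, if anything, the shapes mean:

```python
# The 'rule book': purely shape-to-shape, with no semantics anywhere.
rule_book = {
    "squiggle-squiggle": "squoggle",
    "squiggle-squoggle": "squiggle-squiggle-squoggle",
}

def searle_in_the_room(symbols_passed_in: str) -> str:
    """Look the incoming shapes up and copy out the answer; that is all."""
    return rule_book.get(symbols_passed_in, "squoggle")

# Competent-looking output, yet nothing here is *about* anything.
print(searle_in_the_room("squiggle-squiggle"))   # -> 'squoggle'
```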

On what I am calling the Naive View, the token of a piece of linguistic communication (scribbles on paper, modulated sound-waves, or what have you) expresses an idea or thought--quite literally a psychological state in the speaker--and arouses, if you will, the same idea or thought in the listener's mind. Thus an expression may be about a state of affairs in the world, but the connection between expression and world is not direct: Mental states mediate.

To be sure, we have no idea (pun intended) whether the listener's mediating idea is the same as the speaker's. If what the expression is about depends on the idea it expresses, then we seem to be in a very real quandary: I have no more access to your ideas than you do to mine, since psychological states are private. The public continuity linguistic communication presupposes thus fails. Ideas are not publicly accessible, and (for example, and put very crudely) pressure variations in the air, caused by my flapping my meat so as to set your meat flapping, are hardly plausible candidates to secure my communicating (expressing? squirting?) my idea that the cat is in the cat-tree in such a way that you come to have the idea of the cat's being in the cat-tree.

Enter Frege. In order to solve the Frege Puzzle (how can "Hesperus is Phosphorus" be informative when "Hesperus is Hesperus" is trivial, given that both names refer to Venus?), he insisted that reference be mediated. Yet he recognized that psychological states--peculiar as they presumably are to the individual--won't secure linguistic communication. As an alternative, he posited what we now call the descriptivist theory of meaning. According to descriptivism, names express senses which in turn pick out a reference, where senses are understood (loosely) as sets of definite descriptions--hence, descriptivism. Moreover, Frege was a realist about senses: they exist and are publicly accessible in such a way that we can secure communication by their use. The speaker's use of a term expresses its sense, and the listener uses that sense in turn to pick out the term's reference. We can extend descriptivism to entire sentences by supposing that sense and reference are compositional for sentences just as for any complex expression: the sense of a complex expression is a function of the senses of its parts, the senses of its parts determine their references, and the reference of the whole is likewise determined by its entire sense--which, in the case of a complete sentence, Frege deceptively calls a thought.
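As a toy model only (mine, not Frege's own formalism), the descriptivist machinery can be put in a few lines: a name's sense is a bundle of definite descriptions, and its reference is whatever unique object in the domain satisfies them all:

```python
# The domain of objects, each with the properties it actually has.
domain = [
    {"id": "o1", "taught_alexander": True,  "wrote_the_metaphysics": True},
    {"id": "o2", "taught_alexander": False, "wrote_the_metaphysics": False},
]

# The sense of a name: a (loose) set of definite descriptions.
senses = {"Aristotle": {"taught_alexander", "wrote_the_metaphysics"}}

def reference(term):
    """Sense determines reference: the unique satisfier of all descriptions."""
    satisfiers = [o for o in domain if all(o.get(d) for d in senses[term])]
    return satisfiers[0]["id"] if len(satisfiers) == 1 else None

print(reference("Aristotle"))   # -> 'o1'
```

Crucially, for Frege the senses table is not in anyone's head: it is a public, abstract structure that every competent speaker can consult.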

I say "deceptively" because Frege's use of such apparently psychological terms as 'concept' and 'thought' are nothing of the sort. As Frege understands them, concepts and thoughts are not ideas in the head but logical abstractions which are nonetheless real and, crucially, at the heart of the possibility of linguistic communication. For suppose, as we discussed, that the thought expressed by a statement genuinely were an idea in the mind of the speaker: Since minds are impenetrably closed to any but the speaker himself, no one else could have access to the thought expressed, from which it follows that no one would be able to understand (grasp the meaning) of what anyone said. So whatever concepts and thoughts are, they must be publicly accessible if they are going to do the hard work of underwriting (making possible, that is) linguistic communication. They can't, then, be ideas or psychological items if they are to do this work because of the essentially inaccessible nature of mental states.

For example, in uttering "the cat is in the cat-tree", I grasp a thought (the sense of the expression), which is the very same sense you must grasp if I am to succeed in communicating to you that the cat is in the cat-tree. My idea of the cat's being in the cat-tree (perhaps a mental picture of it) won't do, since you have no way of accessing my mental picture. So the thought expressed by "the cat is in the cat-tree" is graspable only if it is public, not private, yet all mental states are wholly private. If the thought is publicly graspable, as it must be if communication is possible at all--that is, if sense is to determine reference for speaker and listener alike--then it cannot be an idea in my head or a mental picture of any sort.

It follows on the Fregean Descriptive Theory of Reference that you grasp what I mean by my utterances when you grasp the very same sense of them that I grasp in making them, and it is from their sense that their reference--how they connect with the world--is determined. This presupposes that senses be non-psychological, real (albeit abstract) items in the world which can be apprehended or grasped by language users regardless of their attitudes towards, or psychological states about, those senses.

To be sure, senses are metaphysically peculiar, which suggests that we dispense with mediated reference altogether by rejecting them. The relationship between a term and what it means is, in this case, direct inasmuch as there are no intervening ideas or senses. This is sometimes called the theory of direct reference, or the causal/historical theory of reference, since the way in which terms come to designate is specified by their causal history in the linguistic community insofar as (in clear cases) it can be traced back to a baptismal or christening 'tagging' event whereby the Titanic, say, is christened "Titanic", or Aristotle is baptized, anachronistically enough, "Aristotle".
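Again as a toy model only (my illustration; the classes and chain are made up for the purpose), the contrast with descriptivism is that reference is fixed by the initial tagging event and then inherited link by link through the community, with no descriptive content consulted anywhere along the chain:

```python
class Baptism:
    """The initial 'tagging' event: the name is attached to the thing itself."""
    def __init__(self, name, referent):
        self.name, self.referent = name, referent

class Use:
    """A later use of the name, deferring to whoever it was learned from."""
    def __init__(self, learned_from):
        self.learned_from = learned_from
    def referent(self):
        link = self.learned_from
        while isinstance(link, Use):       # trace the causal chain back...
            link = link.learned_from
        return link.referent               # ...to the baptism itself

ship = object()                            # the ship itself, not a description
christening = Baptism("Titanic", ship)
my_use = Use(Use(Use(christening)))        # three links downstream in the community
print(my_use.referent() is ship)           # -> True; no descriptions consulted
```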

Next time we take up a fascinating and powerful argument for direct reference and consider its implications for two further issues bearing on intentionality: Externalism and Extended Minds.