Tuesday 2/27

Robot Intentionality IV: Dennett's Response & the Frame Problem


Synopsis

Dretske, recall, thinks original intentionality is woven into the fabric of the universe in virtue of the causal relation. The lowly compass, indicating as it does the magnetic north in virtue of its causal relationship and thereby being about the magnetic north (in an admittedly attenuated sense), exhibits original intentionality. To be sure, this is not the full-blown, tough-nut-to-crack intentionality our cognitive functions enjoy, yet it is enough, Dretske thinks, to respond to Searle's skeptical challenge.

The particulars of Dretske's argument that the compass exhibits original intentionality can be found in the handout on Dretske's Argument. Suffice it to say that Dretske has an intriguing argument which bears further scrutiny.

If Dretske's Argument is successful, then the problem of intentionality dissolves. That is, if the lowly $1.99 compass exhibits original intentionality, then our problem in building minds is not that of building something that exhibits original intentionality. We can already do that, and quite well. Instead, our problem is that the original intentionality exhibited by the compass is substantially less 'feature-rich', if you will, than it would need to be to be useful in building minds. Let me explain.

The original intentionality exhibited by the compass is a function of its causal relation to the Earth's magnetic field. Yet this causal relation is constant: In the absence of stronger magnetic fields, the compass will always and consistently point to the magnetic north. Indeed, in the 'presence' of stronger magnetic fields, it will point to their 'magnetic north', if you will. Thus the compass is veridical. It always points to the magnetic north, which is of course why the compass is such a useful instrument for navigation.

Beliefs and other intentional states of mind are not, however, veridical. Unlike compasses, beliefs can be mistaken. Late in the evening, the cow in the field might cause me to form the belief that there is a horse in the field. Worse, why does the state-of-affairs of the cow's being in the field cause me to believe that there is a cow in the field (if indeed it does), rather than causing me to believe that there is either a cow or a horse in the field? In short, a belief can misrepresent a state-of-affairs, and it is not at all clear that a belief which misrepresents a state-of-affairs thereby represents that state-of-affairs at all. To further complicate matters, consider that desires are never directed at existing states-of-affairs. Rather, the desire that there be a horse in the field represents the non-existent state-of-affairs of a horse's being in the field, but it does not thereby misrepresent, even though it is strictly speaking false that there is a horse in the field. Mental representations can be in error, but being in error is not always the same thing as misrepresenting.

Intentionality as it applies to mental states goes far beyond the simple causal relation that secures, Dretske argues, the compass' original intentionality. Dretske thinks he can secure a richer intentional basis to allow for misrepresentation by introducing the notion of a natural function--an indicator, that is, which indicates the presence of an F, quite apart from our reading it as indicating the presence of an F, and even if it is caused to indicate the presence of an F by the presence of a G.

However, do Dretske's natural functions provide the right sort of intentionality Brentano insisted was characteristic of all mental states?

Dretske provides an especially clear way of understanding the challenge intentionality--also brought out by Searle's Chinese Room Thought Experiment--poses for artificial intelligence and, ultimately, understanding the mind. Under the assumption of computationalism, consider the analogy between a person and a vending machine:

| A Person | A Vending Machine |
| --- | --- |
| Functions according to neurophysiological states determined by the formation of beliefs and desires. | Functions according to electromechanical states determined by buttons pressed and coins inserted. |
| Beliefs have intrinsic properties and extrinsic properties. | Coins have intrinsic properties and extrinsic properties. |
| A belief's intrinsic (biochemical) properties include the person's neurophysiological state in having that belief. | A coin's intrinsic (material) properties include its size, shape, weight, material, and electrical characteristics. |
| A belief's extrinsic (intentional) properties include the state-of-affairs it is about. | A coin's extrinsic (economic) properties include its value. |

We assume that--and would like to be able to explain just how--a belief's extrinsic properties play a causal role in the person's behavior. My belief that the cat is on the mat, for example, causes me to step elsewhere.

Yet it appears that what is relevant to, and all that is relevant to, the production of behavior are the belief's intrinsic, biochemical properties. That is, following the analogy through, we recognize that the coin's value, which indeed fluctuates day-to-day, has nothing to do with the behavior of the vending machine. All that matters for the behavior of the vending machine are the coin's material properties. Anything with the material properties the vending machine checks for will be counted as a coin by the vending machine--hence the possibility of 'slugs', or cheats, for vending machines.
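To make the analogy vivid, here is a minimal sketch of a coin validator; the names, numbers, and tolerances are invented for illustration, not drawn from any real machine. The point is only that the machine's behavior is driven entirely by intrinsic, measurable properties; the coin's value never enters the computation, which is exactly why slugs work.

```python
# A toy coin validator. It accepts or rejects an object purely on the basis
# of intrinsic, measurable properties. The coin's economic value (an
# extrinsic property) figures nowhere in the computation, which is why a
# 'slug' with the right physical profile fools it. All numbers are made up.

from dataclasses import dataclass

@dataclass
class Disc:
    diameter_mm: float
    mass_g: float
    conductivity: float  # stand-in for an electrical signature

# The intrinsic profile the machine checks for (a hypothetical 'quarter').
QUARTER_PROFILE = Disc(diameter_mm=24.26, mass_g=5.67, conductivity=0.85)
TOLERANCE = 0.05  # 5% slack on each measurement

def accepts(obj: Disc) -> bool:
    """Return True iff the object's intrinsic properties match the profile."""
    def close(measured: float, target: float) -> bool:
        return abs(measured - target) <= TOLERANCE * target
    return (close(obj.diameter_mm, QUARTER_PROFILE.diameter_mm)
            and close(obj.mass_g, QUARTER_PROFILE.mass_g)
            and close(obj.conductivity, QUARTER_PROFILE.conductivity))

genuine_quarter = Disc(24.26, 5.67, 0.85)  # worth 25 cents, an extrinsic fact
steel_slug = Disc(24.30, 5.70, 0.86)       # worth nothing, same intrinsic profile

print(accepts(genuine_quarter))  # True
print(accepts(steel_slug))       # True: value plays no causal role
```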

Extrinsic properties like intentional or representative relationships are irrelevant to the production of behavior in persons and vending machines alike, or so it seems. If so, then the fact that my belief that the cat is on the mat is about the cat's being on the mat has no bearing on my behavior, contrary to almost everyone's pre-theoretic intuition.

This is a further challenge to which Dretske must respond, and the remainder of his article is devoted to explaining how the intentional properties of a belief can bear on its causal relations in the mind.

Next today we examined Dennett's argument that rather than meet Searle's Chinese Room Thought Experiment head-on, as it were, as Boden, Block, and Dretske do in their various ways, we can simply sidestep it.

Characterized only somewhat sarcastically, if Dretske thinks intentionality is cheaply had, Dennett thinks intentionality can be had on the cheap.

The point is that for Dennett there is no problem of original intentionality, because intentionality is in the eye of the beholder. Well, not entirely, but almost, which is why Dennett likes to call himself a quasi-realist with respect to intentionality.

In what sense could intentionality be in the eye of the beholder? Consider the example of a car and the different explanations that could be given for its failure to start:

  1. Taking the Physical Stance, we say the car failed to start because the rapid chemical process involved in hydrocarbon combustion failed to occur.
  2. Taking the Design Stance, we say the car failed to start because the ignition system failed to deliver spark to the cylinders.
  3. Taking the Intentional Stance, we say that the car hates its owner.

Of course, so described the Intentional Stance seems patently absurd, despite the fact that alarmingly many people do in fact take the Intentional Stance with respect to complicated machines like cars and computers.

Yet suppose, Dennett asks, that you are in fact able to explain and predict something's behavior by attributing intentional states (beliefs, desires, etc.) to it. Suppose furthermore that that is the only way to provide explanations and predictions, because for whatever reason the Physical and Design Stances aren't available to take. Then because we must take the Intentional Stance and because we can successfully take the Intentional Stance, the thing has intentionality, so far as any of us should care. Witness how we explain and predict each other's behavior! Consider further how we explain and predict our very own behavior!
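As a toy illustration of the Intentional Stance treated as a predictive strategy, consider the following sketch; the decision rule and the example are mine, not Dennett's, and are only meant to show how prediction can proceed from attributed beliefs and desires plus an assumption of rationality, without any peek at the system's innards.

```python
# A toy rendering of the Intentional Stance as a prediction strategy:
# attribute beliefs and desires to a system, assume it acts rationally,
# and predict that it will do what it believes will satisfy its desires.
# Nothing about the system's physical makeup enters the prediction.

def predict_action(beliefs: dict, desires: set, options: list) -> str:
    """Predict the option the agent believes will bring about a desired outcome.

    beliefs maps an available action to the outcome the agent (is taken to)
    believe it produces; desires is the set of outcomes the agent (is taken
    to) want. The rationality assumption does all the work: pick a believed
    means to a desired end, if there is one.
    """
    for action in options:
        if beliefs.get(action) in desires:
            return action
    return "no prediction"  # the stance yields nothing useful here

# Predicting a chess program's move without opening the hood:
attributed_beliefs = {"advance the queen": "checkmate",
                      "advance the pawn": "stalemate"}
attributed_desires = {"checkmate"}
print(predict_action(attributed_beliefs, attributed_desires,
                     ["advance the pawn", "advance the queen"]))
# -> 'advance the queen'
```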

Next today, busy as we were, we took up The Frame Problem. The Frame Problem is at once a formal problem in computer science and an epistemic problem as Dennett articulates it. In his characteristically clear way of explaining matters, Dennett casts the Frame Problem in terms of a concrete example:

Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room, and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT (Wagon, Room, t) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 knew that the bomb was on the wagon in the room, but didn't realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.

Back to the drawing board. 'The solution is obvious,' said the designers. 'Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side-effects, by deducing these implications from the descriptions it uses in formulating its plans.' They called their next model, the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT (Wagon, Room, t) it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the colour of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon - when the bomb exploded.

Back to the drawing board. 'We must teach it the difference between relevant implications and irrelevant implications,' said the designers, 'and teach it to ignore the irrelevant ones.' So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in their next model, the robot-relevant-deducer, or R2D1 for short. When they subjected R2D1 to the test that had so unequivocally selected its ancestors for extinction, they were surprised to see it sitting, Hamlet-like, outside the room containing the ticking bomb, the native hue of its resolution sicklied o'er with the pale cast of thought, as Shakespeare (and more recently Fodor) has aptly put it. 'Do something!' they yelled at it. 'I am,' it retorted. 'I'm busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and...' the bomb went off.

What is the general philosophical problem posed by R2D1, and why haven't philosophers focused on it heretofore?  With no little snarkiness, Dennett goes on to explain what it is and why philosophers have done such a good job ignoring it:

One utterly central - if not defining - feature of an intelligent being is that it can 'look before it leaps'. Better, it can think before it leaps. Intelligence is (at least partly) a matter of using well what you know - but for what? For improving the fidelity of your expectations about what is going to happen next, for planning, for considering courses of action, for framing further hypotheses with the aim of increasing the knowledge you will use in the future, so that you can preserve yourself, by letting your hypotheses die in your stead (as Sir Karl Popper once put it). The stupid - as opposed to ignorant - being is the one who lights the match to peer into the fuel tank, who saws off the limb he is sitting on, who locks his keys in his car and then spends the next hour wondering how on earth to get his family out of the car.

But when we think before we leap, how do we do it? The answer seems obvious: an intelligent being learns from experience, and then uses what it has learned to guide expectation in the future. Hume explained this in terms of habits of expectation, in effect. But how do the habits work? Hume had a hand-waving answer - associationism - to the effect that certain transition paths between ideas grew more likely-to-be-followed as they became well worn, but since it was not Hume's job, surely, to explain in more detail the mechanics of these links, problems about how such paths could be put to good use - and not just turned into an impenetrable maze of untraversable alternatives - were not discovered.

Hume, like virtually all other philosophers and 'mentalistic' psychologists, was unable to see the frame problem because he operated at what I call a purely semantic level, or a phenomenological level. At the phenomenological level, all the items in view are individuated by their meanings. Their meanings are, if you like, 'given' - but this just means that the theorist helps himself to all the meanings he wants. In this way the semantic relation between one item and the next is typically plain to see, and one just assumes that the items behave as items with those meanings ought to behave. We can bring this out by concocting a Humean account of a bit of learning.

Suppose that there are two children, both of whom initially tend to grab cookies from the jar without asking. One child is allowed to do this unmolested but the other is spanked each time she tries. What is the result? The second child learns not to go for the cookies. Why? Because she has had experience of cookie-reaching followed swiftly by spanking. What good does that do? Well, the idea of cookie-reaching becomes connected by a habit path to the idea of spanking, which in turn is connected to the idea of pain... so of course the child refrains. Why? Well, that's just the effect of that idea on that sort of circumstance. But why? Well, what else ought the idea of pain to do on such an occasion? Well, it might cause the child to pirouette on her left foot, or recite poetry, or blink, or recall her fifth birthday. But given what the idea of pain means, any of those effects would be absurd. True; now how can ideas be designed so that their effects are what they ought to be, given what they mean? Designing some internal things - an idea, let's call it - so that it behaves vis-a-vis its brethren as if it meant cookie or pain is the only way of endowing that thing with that meaning; it couldn't mean a thing if it didn't have those internal behavioural dispositions.

That is the mechanical question the philosophers left to some dimly imagined future researcher. Such a division of labour might have been all right, but it is turning out that most of the truly difficult and deep puzzles of learning and intelligence get kicked downstairs by this move. It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half trick, they explain that it is really quite obvious: the magician doesn't really saw her in half; he simply makes it appear that he does. 'But how does he do that?' we ask. 'Not our department', say the philosophers - and some of them add, sonorously: 'Explanation has to stop somewhere.'

Snark aside, you now see how the Frame Problem is fundamentally a problem of intentionality--how, that is, our mental states can be not just about states of affairs but about the right or relevant states of affairs.

For more on the formal problem and its relationship to Dennett's Frame Problem, see the handout, which is just the Stanford Encyclopedia of Philosophy entry on the Frame Problem and is quite good at describing the formal problem in lay terms.
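For the flavor of the formal problem, here is a minimal sketch in the spirit of the logic-based planning representations the handout discusses; the fluents and the action's effect axiom are invented for the example. What it shows is the bookkeeping burden: the effects of PULLOUT are stated once, but everything the action leaves unchanged must be separately established, fluent by fluent, which is precisely the sort of deduction that buried R1D1.

```python
# A minimal sketch of the formal frame problem, with invented fluents.
# In a naive logical representation, an action's effect axioms say what
# changes; a separate frame axiom is then needed for every fluent the
# action does NOT change. The successor state can only be computed by
# working through all of them.

# A tiny fragment of what the robot knows about the world.
fluents = {
    "battery_in_room": True,
    "bomb_in_room": True,       # the bomb sits on the same wagon
    "wall_colour": "beige",
    "wagon_wheel_count": 4,
    # ... and thousands more facts ...
}

# Effect axiom for PULLOUT(Wagon, Room, t): only this is stated explicitly.
# (That the bomb also leaves the room, because it rides on the wagon, is a
# side effect this naive representation misses entirely: R1's downfall.)
effects_of_pullout = {"battery_in_room": False}

def successor_state(state: dict) -> dict:
    """Naively deduce the state after PULLOUT, fluent by fluent."""
    new_state = {}
    for fluent, value in state.items():
        if fluent in effects_of_pullout:
            new_state[fluent] = effects_of_pullout[fluent]  # effect axiom
        else:
            # Frame axiom: PULLOUT leaves this fluent untouched. One such
            # deduction is required for every unaffected fluent, however
            # obviously irrelevant ('pulling the wagon does not change the
            # colour of the walls...'). This is R1D1's plight.
            new_state[fluent] = value
    return new_state

print(successor_state(fluents)["wall_colour"])  # 'beige', laboriously deduced
```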

Notice that the Frame Problem is not only a serious problem for the project of building minds. It is, like the Problem of Intentionality itself, a quite general problem that anyone who thinks they can understand the human mind without being able to build minds--anyone, in other words, who would reject Dretske's Dictum--must face. Somehow the human mind has evolved in such a way that it is able to attend to all and only those features of its environment relevant to successful action. How it does that is, frankly, a mystery.

We can, being good Cognitivists in Searle's somewhat derogatory sense, posit the existence of an Attention Mechanism or Homunculus that directs belief and desire formation for relevance, but that only highlights the problem: How does this Homunculus know what to count as relevant for action and what not?