Robot Intentionality V: The Frame Problem
- Fred Dretske, "Minds, Machines, and Money: What Really Explains Behavior" (from last time)
- Dan Dennett, "True Believers (Introduction)" (from last time)
- Dan Dennett, "True Believers: The Intentional Strategy and Why It Works" (from last time)
- Dan Dennett, "True Believers (Postscript)" (from last time)
- Dan Dennett, "Cognitive Wheels: The Frame Problem of AI"
- The Frame Problem (SEP)
Question: Do Dretske's natural functions which make original intentionality off-the-shelf cheap provide the right sort of intentionality Brentano insisted was characteristic of all mental states?
Dretske provides an especially clear way of understanding the challenge intentionality--also brought out by Searle's Chinese Room Thought Experiment--poses for artificial intelligence and, ultimately, understanding the mind. Under the assumption of computationalism, consider the analogy between a person and a vending machine:
| A Person | A Vending Machine |
| --- | --- |
| Functions according to neurophysiological states determined by the formation of beliefs and desires. | Functions according to electromechanical states determined by buttons pressed and coins inserted. |
| Beliefs have intrinsic properties and extrinsic properties. | Coins have intrinsic properties and extrinsic properties. |
| A belief's intrinsic (biochemical) properties include the person's neurophysiological state in having that belief. | A coin's intrinsic (material) properties include its size, shape, weight, material, and electrical characteristics. |
| A belief's extrinsic (intentional) properties include the state-of-affairs it is about. | A coin's extrinsic (economic) properties include its value. |
We assume that--and would like to be able to explain just how--a belief's extrinsic properties play a causal role in the person's behavior. My belief that the cat is on the mat, for example, causes me to step elsewhere.
Yet it appears that what is relevant to, and all that is relevant to, the production of behavior are the belief's intrinsic, biochemical properties. That is, following the analogy through, we recognize that the coin's value, which indeed fluctuates day-to-day, has nothing to do with the behavior of the vending machine. All that matters for the behavior of the vending machine are the coin's material properties. Anything with the material properties the vending machine checks for will be counted as a coin by the machine--hence the possibility of 'slugs', or cheats, for vending machines.
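The point can be made vivid with a minimal sketch (all tolerances and measurements here are hypothetical, chosen only for illustration): the machine's acceptance test is a function of intrinsic, measurable properties alone, so a worthless slug with the same measurements is indistinguishable from a genuine coin.

```python
from dataclasses import dataclass

@dataclass
class Disc:
    """Any metal disc, coin or slug: only intrinsic properties are recorded."""
    diameter_mm: float
    mass_g: float
    conductive: bool

def accepts(disc: Disc) -> bool:
    """The machine's check: True iff the disc's intrinsic properties fall
    within (hypothetical) tolerances. The disc's value appears nowhere."""
    return (abs(disc.diameter_mm - 24.26) < 0.1
            and abs(disc.mass_g - 5.67) < 0.05
            and disc.conductive)

quarter = Disc(diameter_mm=24.26, mass_g=5.67, conductive=True)  # worth $0.25
slug    = Disc(diameter_mm=24.26, mass_g=5.67, conductive=True)  # worth nothing

# The machine cannot tell them apart: value is an extrinsic property,
# causally idle with respect to the mechanism's behavior.
assert accepts(quarter) and accepts(slug)
```

The analogy then invites the worry that a belief's intentional content is, like the coin's value, causally idle with respect to the neurophysiological mechanism.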
Extrinsic properties like intentional or representative relationships are irrelevant to the production of behavior in persons and vending machines alike, or so it seems. If so, then the fact that my belief that the cat is on the mat is about the cat's being on the mat has no bearing on my behavior, contrary to almost everyone's pre-theoretic intuition.
This is a further challenge to which Dretske must respond, and the remainder of his article is devoted to explaining how the intentional properties of a belief can bear on its causal relations in the mind.
Next we examined Dennett's argument that, rather than meet Searle's Chinese Room Thought Experiment head-on, as it were, as Boden, Block, and Dretske do in their various ways, we can simply sidestep it.
Characterized only somewhat sarcastically, if Dretske thinks intentionality is cheaply had, Dennett thinks intentionality can be had on the cheap.
The point is that for Dennett there is no problem of original intentionality, because intentionality is in the eye of the beholder. Well, not entirely, but almost, which is why Dennett likes to call himself a quasi-realist with respect to intentionality.
In what sense could intentionality be in the eye of the beholder? Following Dennett, distinguish three 'stances' we might take (or strategies we might employ) in explaining the behavior of a complex system (an organism, say, or a mechanism, if that distinction still makes any sense).
- Taking the Physical Stance, we say the car failed to start because the rapid chemical process involved in hydrocarbon combustion failed to occur.
- Taking the Design Stance, we say the car failed to start because the ignition system failed to deliver spark to the cylinders.
- Taking the Intentional Stance, we say that the car hates its owner.
Of course, so described the Intentional Stance seems patently absurd, despite the fact that alarmingly many people do in fact take the Intentional Stance with respect to complicated machines like cars and computers.
Yet suppose, Dennett asks, that you are in fact able to explain and predict something's behavior by attributing intentional states (beliefs, desires, etc.) to it. Suppose furthermore that this is the only way to provide explanations and predictions, because for whatever reason the Physical and Design Stances aren't available to take. Then since we must take the Intentional Stance, and since we can successfully take it, the thing has, so far as any of us should care, intentionality. Witness how we explain and predict each other's behavior! Consider further how we explain and predict our very own behavior!
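A toy illustration of the strategy, using a thermostat of the sort Dennett himself discusses (the classes and setpoints here are hypothetical): we predict the device's behavior by attributing a 'desire' (its setpoint) and a 'belief' (its sensed temperature), without modeling its physics at all, and the predictions come out right.

```python
class Thermostat:
    """A simple device; imagine its internals are electromechanical and opaque."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, sensed_temp: float) -> str:
        return "heat_on" if sensed_temp < self.setpoint else "heat_off"

def intentional_stance_prediction(desired_temp: float, believed_temp: float) -> str:
    """Predict via attributed states: 'it wants the room at desired_temp and
    believes it is at believed_temp, so it will...'"""
    return "heat_on" if believed_temp < desired_temp else "heat_off"

t = Thermostat(setpoint=20.0)

# The Intentional Stance predictions match the device's actual behavior.
assert t.act(18.0) == intentional_stance_prediction(20.0, 18.0)  # heat_on
assert t.act(22.0) == intentional_stance_prediction(20.0, 22.0)  # heat_off
```

On Dennett's view, if such belief-desire attribution reliably predicts a system's behavior, that predictive success is all the intentionality anyone should demand.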