Malcolm's Argument
Malcolm begins by explaining what he means by “mechanism”.
By mechanism I am going to understand a special application of physical determinism, namely, to all organisms with neurological systems, including human beings. The version of mechanism I wish to study assumes a neurophysiological theory which is adequate to explain and predict all movements of human bodies except those caused by outside forces. The human body is assumed to be as complete a causal system as is a gasoline engine. Neurological states and processes are conceived to be correlated by general laws with the mechanisms that produce movements. Chemical and electrical changes in the nervous tissue of the body are assumed to cause muscle contractions, which in turn cause movements such as blinking, breathing, and puckering of the lips, as well as movements of fingers, limbs, and head. Such movements are sometimes produced by forces (pushes and pulls) applied externally to the body. If someone forced my arm up over my head, the theory could not explain that movement of my arm. But it could explain any movement not due to an external push or pull. It could explain, and predict, the movements that occur when a person signals a taxi, plays chess, writes an essay, or walks to the store.
Mechanism is a species of physical determinism: mechanism is physical determinism as it applies to organisms with neurological systems. Such an organism is mechanistic if there exists a neurophysiological theory that both explains and predicts all movements of the system's body, except those due to external forces, on the basis of information about neurological states and processes and the general laws correlating states to physical movements. Presumably, these general laws are causal laws, and the neurophysiological system is a complicated causal system.
Next, Malcolm distinguishes between Neurophysiological Explanations (NE’s) and Purposive Explanations (PE’s). Using Malcolm’s example, suppose we want an explanation for why a man climbed a ladder up to a roof-top. As it turns out, the man’s hat was blown up onto the roof by a gust of wind, and he wanted it back. A PE of the man’s behavior might be:
If a man wants to retrieve his hat and believes this requires him to climb a ladder, he will do so provided there are no countervailing factors.
This man wanted to retrieve his hat and believed that this required him to climb a ladder, and there were no countervailing factors.
Therefore, he climbed a ladder.
In general, PE’s differ in form from NE’s. NE’s are of the form,
Whenever an organism of structure S is in neurophysiological state q it will emit movement m.
Organism O of structure S was in neurophysiological state q.
Therefore, O emitted m.
while PE’s have the form,
Whenever an organism O has goal G and believes that behaviour B is required to bring about G, O will emit B.
O had G and believed B was required to bring about G.
Therefore, O emitted B.
The most important difference between PE’s and NE’s lies in their first premise. The first premise of the NE expresses a contingent correlation between neurological processes and behavior, whereas the first premise of the PE expresses an a priori connection between intentions and behavior. As Malcolm puts it,
Premisses of the one sort express contingent correlations between neurological processes and behaviour. Premisses of the other sort express a priori connections between intentions (purposes, desires, goals) and behaviour.
This difference is of the utmost importance. Some students of behaviour have believed that purposive explanations of behaviour will be found to be less basic than the explanations that will arise from a future neurophysiological theory. They think that the principles of purposive explanation will turn out to be dependent on the neurophysiological laws. On this view our ordinary explanations of behaviour will often be true: but the neural explanations will also be true and they will be more fundamental. Thus we could, theoretically, by-pass explanations of behaviour in terms of purpose, and the day might come when they simply fall into disuse.
Contrary to these “students of behavior”, Malcolm argues that neurophysiological theory will not be found to be more basic than purposive explanation. His argument is given in a single, somewhat dense paragraph:
I wish to show that neurophysiological laws could not be more basic than purposive principles. I shall understand the statement that a law L2 is more basic than a law L1 to mean that L1 is dependent on L2 but L2 is not dependent on L1. To give an example, let us suppose there is a uniform connection between food abstinence and hunger: that is, going without food for n hours always results in hunger. This is L1. Another law L2 is discovered, namely, a uniform connection between a certain chemical condition of body tissue (called cell-starvation) and hunger. Whenever cell-starvation occurs, hunger results. It is also discovered that L2 is more basic than L1. This would amount to the following fact: food abstinence for n hours will not result in hunger unless cell-starvation occurs; and if the latter occurs, hunger will result regardless of whether food abstinence occurs. Thus the L1 regularity is contingently dependent on the L2 regularity, and the converse is not true. Our knowledge of this dependency would reveal to us the conditions under which the L1 regularity would no longer hold.
Malcolm adds,
Our comparison of the differing logical natures of purposive principles and neurophysiological laws enables us to see that the former cannot be dependent on the latter. The a priori connection between intention or purpose and behaviour cannot fail to hold. It cannot be contingently dependent on any contingent regularity. The neurophysiological explanations of behaviour could not, in the sense explained, turn out to be more basic than our everyday purposive explanations.
Unpacking Malcolm’s argument requires a brief foray into modal analysis. Malcolm claims that if a purposive law (PL) is true, then it is necessarily true. The PL
If a man wants to retrieve his hat and believes this requires him to climb a ladder, he will do so provided there are no countervailing factors.
is true a priori. That is to say, the a priori connection between intention or purpose and behaviour “cannot fail to hold,” which is just to say that the PL is necessarily true if it is true at all.
But if a neurophysiological law (NL) is true, then it is contingently true. This may seem odd, since physical laws are usually taken to involve some sort of necessity. The point of claiming that instances of
Whenever an organism of structure S is in neurophysiological state q it will emit movement m,
if true, are contingently true is just to assert that it is possible for an organism of structure S to be in neurophysiological state q and not emit movement m.
The usual (semantic) analysis of modal claims like the one above is done in terms of possible worlds. A proposition is necessarily true if it is true at every possible world, while a proposition is possibly true if it is true at some possible world. Physical laws, like those embodied by instances of the NL schema above, are necessary in the sense that they are true at the actual world and they are true at every world which has the same underlying physics as the actual world. Since there are possible worlds which differ in their physics from the actual world, instances of the NL schema are not necessarily true. Natural laws are physically necessary, or true at every possible world in a proper subset of all possible worlds, but they aren’t logically necessary – i.e., true at every possible world.
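The distinction between physical and logical necessity can be made concrete with a toy model. The sketch below is illustrative only: the worlds are stipulated, and the names `WORLDS` and `PHYSICS_LIKE_OURS` are mine, not Malcolm's. It simply evaluates "true at every world" and "true at some world" over a set of worlds, restricted or not to the worlds that share our physics.

```python
# Toy possible-worlds semantics. Each "world" is a dict assigning
# truth values to two propositions:
#   PL: the purposive law holds
#   NL: the neurophysiological law holds
WORLDS = [
    {"PL": True, "NL": True},   # the actual world
    {"PL": True, "NL": True},   # another world sharing our physics
    {"PL": True, "NL": False},  # a world with different physics
]

# The worlds sharing the actual world's physics: a proper subset.
PHYSICS_LIKE_OURS = WORLDS[:2]

def necessarily(prop, worlds):
    """Necessity: prop holds at every world in the given set."""
    return all(w[prop] for w in worlds)

def possibly(prop, worlds):
    """Possibility: prop holds at some world in the given set."""
    return any(w[prop] for w in worlds)

# NL is physically necessary: true wherever our physics holds...
print(necessarily("NL", PHYSICS_LIKE_OURS))  # True
# ...but not logically necessary: some possible world falsifies it.
print(necessarily("NL", WORLDS))             # False
# PL, being a priori, holds at every possible world.
print(necessarily("PL", WORLDS))             # True
```

The last two lines exhibit exactly the asymmetry Malcolm needs: there is a world at which PL is true and NL is not, but no world at which PL fails.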
According to Malcolm, a true NL could not be more basic than a true PL since there are possible worlds at which the PL is true but the NL is not. Spelling the argument out, we have,
1. If NL can be more basic than PL, then PL is dependent on NL and NL is not dependent on PL.
2. If PL is dependent on NL and NL is not dependent on PL, then it is not possible for PL to be true and NL to not be true, but it is possible for NL to be true and PL to not be true.
3. If it is not possible for PL to be true and NL to not be true, then NL is necessarily true if PL is necessarily true.
4. NL is not necessarily true and PL is necessarily true.
5. Therefore, NL could not be more basic than PL. (1, 2, 3 & 4)
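The argument can also be compressed into modal notation. The rendering below is a reconstruction, not Malcolm's own symbolism: B abbreviates "NL is more basic than PL", D abbreviates "PL is dependent on NL and NL is not dependent on PL", and $\Box$ and $\Diamond$ are necessity and possibility.

```latex
\begin{align*}
1.\quad & B \rightarrow D \\
2.\quad & D \rightarrow \bigl(\neg\Diamond(PL \wedge \neg NL)
          \wedge \Diamond(NL \wedge \neg PL)\bigr) \\
3.\quad & \neg\Diamond(PL \wedge \neg NL)
          \rightarrow (\Box PL \rightarrow \Box NL) \\
4.\quad & \Box PL \wedge \neg\Box NL \\
5.\quad & \therefore\ \neg B \qquad \text{(from 1--4)}
\end{align*}
```

The derivation runs by contraposition: 4 falsifies the consequent of 3, so $\Diamond(PL \wedge \neg NL)$; that falsifies the first conjunct in 2's consequent, giving $\neg D$; and $\neg D$ with 1 yields $\neg B$.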
What does all this have to do with different explanations of behavior? Suppose there is an NE of the man’s ladder-climbing behavior. If this argument is sound, it follows that the ladder-climbing man could be climbing the ladder without having any intention to do so, provided that he was in the neurophysiological state indicated in the NE. Indeed, if the NE of his behavior is true and NL’s are not more basic than PL’s, then his ladder-climbing behavior occurs regardless of his intentions and of what any PE might say. In this sense, his goals, intentions, beliefs, and desires are irrelevant to his subsequent ladder-climbing behaviors. But a PE assumes exactly the opposite; a PE is based on the relevance of the agent’s beliefs, desires, and intentions to his, her, or its behaviors. Hence a (true) NE for a given behavior excludes the possibility of a PE, given that NL’s could not be more basic than PL’s. As Malcolm puts it, “a mechanistic explanation of behaviour rules out any explanation of it in terms of the agent's intentions. If a comprehensive neurophysiological theory is true, then people's intentions never are causal factors in behaviour.” The argument is more easily followed once it has been set out:
1. If it is not possible for NL’s to be more basic than PL’s, then it is not possible for an NE and a PE to be true of the same behavior.
2. It is not possible for NL’s to be more basic than PL’s.
3. Therefore, it is not possible for an NE and a PE to be true of the same behavior. (1 & 2)
What has this argument got to do with machine autonomy? Naively, autonomous behavior is self-directed behavior. A true explanation of autonomous behavior, then, necessarily involves reference to the agent’s goals, intentions, beliefs, and desires, among other things. At least in the case of autonomous behavior, the agent’s goals, intentions, beliefs, and desires could never be irrelevant. NE’s are thus incompatible with autonomous behavior in the sense that a behavior is autonomous only if there exists a PE of it, and NE’s are incompatible with PE’s if the above argument is sound.
The implication for machine autonomy is obvious, particularly when one notes that the equivalent of an NE can be given for every machine behavior. But since behavior cannot be autonomous if it is possible to give the equivalent of an NE for it, machines cannot act autonomously. The argument can be set out as follows.
1. If X is a machine behavior, then the equivalent of an NE can be given for X.
2. If the equivalent of an NE can be given for X, then X cannot be an autonomous action.
3. Therefore, if X is a machine behavior, then X cannot be an autonomous action. (1 & 2)