Machine Autonomy II: Against Machine Autonomy
- Lynne Rudder Baker, "Why Computers Can't Act" (pdf)
- Susan Wolf, "The Importance of Free Will" (pdf)
- Mark Fisher, "A Note on Free Will and Artificial Intelligence"
- Norman Malcolm, "The Conceivability of Mechanism" (pdf)
- Harry G. Frankfurt, "Freedom of the Will and the Concept of a Person"
- Baker's Argument
- Wolf's Argument and the Problem of Original Agency
- Fisher's Argument
- Malcolm's Argument
- Frankfurt's Theory
We will use Zoom for class discussion, meeting at the same time as class, but online. Please download the client here for your computer, laptop, or smartphone. You do not need an account, nor do you need to pay for the service. I will text a meeting ID and meeting password to the class via our class GroupMe and via email about 15 minutes prior to class. We will also use the same Zoom room for the dedicated office hour immediately following class.
Recall that, having spent the first half of the semester carefully making out the positive case for the possibility of Artificial Intelligence and, given Dretske's Dictum, the possibility of understanding our own minds, we are now examining skeptical arguments. What are these arguments?
First, they all conclude that there is some feature of our cognitive capacities which dooms any attempt to build something that has a mind. That is, we might build something that passes the (unrestricted) Turing Test, yet it would be a false positive: we would have no reason to think it has some important or pervasive feature of our cognition which it must have to be truly or genuinely intelligent. It would be a fake or a fraud. Because it passed the Turing Test, there would be those who argue it is intelligent, on the grounds that the perfect imitation of intelligence just is intelligence. We would know better, because it lacked cognitive capacities--unrevealed by the Turing Test--that we take to be essential to intelligence. Thus if intentionality is the mark of the mental, yet computational states lack intentionality, then artificial intelligence via computation is impossible. Searle-in-the-Chinese-Room does not understand Chinese, after all.
Second, none of them answers the question: how do we do it? That is, we recognize the essential cognitive capacities in ourselves, but like the driver who has no clue how the car works, we have no idea how we ourselves come to have these capacities. Presumably, if we were able to somehow unravel some of these mysterious capacities in terms of their neurological bases, we would gain some insight into how an artificial intelligence might then be designed to have the same cognitive capacities. Yet some capacities we find essential to genuine intelligence, like the capacity to have subjective experience, are so challenging that we can't imagine what more information we might glean about our own neurophysiology that would show how we have them. With regard to subjective experience specifically, we find a capacity for phenomenal consciousness which appears at once computationally, neurologically, and even physically intractable.
The upshot is that we clearly have capacities like understanding and phenomenal consciousness which foil our best attempts at modeling them computationally.
Yet we also seem to have capacities which are similarly challenging, to the point where we honestly wonder whether we ourselves can be said to have them. In cases like this, our attempt to understand them by building an artificial intelligence which has the capacity is really an attempt to prove that they are possible even for us. Such is the Traditional Problem of Freedom of Will.
Recall that we can frame the problem as an argument:
The Traditional Problem of Freedom of Will
1. Either determinism is true or determinism is not true.
2. If determinism is true, then freedom of will is impossible.
3. If determinism is not true, then freedom of will is impossible.
∴ 4. Freedom of Will is impossible. (from 1, 2 & 3)
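Whatever we make of its premises, the argument's form--a constructive dilemma--is truth-functionally valid, and that can be checked mechanically. Here is a small sketch (my illustration, not from the readings) that enumerates every truth-value assignment, writing D for "determinism is true" and F for "freedom of will is possible":

```python
# Truth-table check that the Traditional Problem has a valid form:
# if premises 1-3 are all true, the conclusion (not F) must be true.
from itertools import product

def argument_is_valid():
    for D, F in product([True, False], repeat=2):
        p1 = D or (not D)          # 1: determinism is true or it is not
        p2 = (not D) or (not F)    # 2: if D, then freedom is impossible
        p3 = D or (not F)          # 3: if not D, then freedom is impossible
        conclusion = not F         # 4: freedom of will is impossible
        if p1 and p2 and p3 and not conclusion:
            return False           # counterexample: true premises, false conclusion
    return True

print(argument_is_valid())  # -> True
```

Since no assignment makes the premises true and the conclusion false, anyone who resists the conclusion must reject a premise--which is exactly how the responses below divide up.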
Last time, we discussed the argument and the various responses that have been proposed to it. Specifically,
- The Hard Determinist concludes that the Universe is deterministic, and so much the worse for Freedom of Will, which must be rejected as an absurdity or mere superstition in light of the ironclad laws of nature governing the totality of behavior, animate and inanimate.
- The Soft Determinist or Compatibilist argues instead that even if the Universe were deterministic, there is still a sense of Freedom of Will which is compatible with any laws of nature we might discover and which suffices to ensure our ordinary social practices of holding people blameworthy or praiseworthy are still justifiable. Thus the Compatibilist rejects the second premise of the argument.
- The Libertarian argues that even if the Universe is not deterministic, so events can and do happen at random, this is no threat to Freedom of Will, since there is a special, undetermined yet determining, kind of causation we enjoy--usually called 'agent causation'--whereby we cause our actions but are not ourselves in turn caused to do so. The Libertarian thus rejects the argument's third premise.
- Simply because the logic requires it, there is also what might be called Hard Indeterminism, which holds that the Universe is not deterministic, and so much the worse for Freedom of Will, since what we do is then ultimately a matter of chance.
Setting aside the above solution space, consider that we clearly do find locally deterministic or mechanistic processes in the Universe, regardless of whether the Universe itself is fundamentally or globally deterministic--and the deceptively named Quantum Mechanics concludes it is not.
A Turing Machine, for example, shows how a purely mechanistic process is undertaken, which makes sense when one recalls that Turing's original problem was to describe an effective procedure--that is, a mechanical method.
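To see just how mechanical such a procedure is, here is a minimal sketch of a Turing machine simulator (my own illustration; the machine and its names are hypothetical, not drawn from the readings). Every step is fixed entirely by the current state and the symbol under the head; nowhere in the process is there room for choice:

```python
# A minimal Turing machine simulator. The program is a lookup table
# mapping (state, symbol) -> (symbol_to_write, head_move, next_state),
# where head_move is -1 (left) or +1 (right). The machine halts when
# no rule applies to the current (state, symbol) pair.

def run(program, tape_input, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape_input))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in program:
            break                        # no applicable rule: halt
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine that flips every bit on the tape, then halts at the blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}

print(run(flip, "10110"))  # -> 01001
```

The point of the sketch is not the particular machine but the shape of the computation: given the table and the tape, every successive configuration is determined, which is precisely what makes the process a locally deterministic one.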
Let us further abandon the term 'freedom of will', since it begs many questions having to do with the proper notion of freedom and what counts as the will in the first place. Instead, let us replace it with 'autonomous agency', since the real question seems to be whether an agent--that is, something capable of acting--can do so of its own accord. The apparent absurdity of even asking,
Can a robot be programmed to act of its own accord?
drives the point home. For why should we think any robot following a program acts of its own accord? Likewise, why should we pretend we ourselves are not following a sort of biological program--a program written in us by evolution and our environment? If the sum of our behavioral repertoire is simply the result of complex interactions between neurological mechanisms, then we may seem to ourselves and others to be making choices and forming intentions accordingly, but it is a conceit or fantasy on our part.
There are, however, two questions that must be unpacked if we are to make any headway on this problem. The first is whether robot agency, autonomous or otherwise, is even possible in the first place. The second is whether autonomous agency is possible, setting aside the question of mere agency. Today I argued that we can sharpen the first question by drawing a distinction between derived agency and original agency. Further, in view of a robot's fundamentally mechanistic nature (a counterpart, if you will, to our own neuro-mechanical nature), we are forced to take up the mantle of compatibilism if we are going to be able to secure autonomy. To that end, we discussed Frankfurt's approach to see if it might hold any promise as a way into the problem.