Thursday 4/5

Machine Autonomy II: Against Machine Agency

Assignments

Readings

Texts

Notes

Lecture Quiz Questions

Note that the questions below refer to our readings and discussion from Tuesday, 4/3.

  • What is the traditional problem of freedom of will, and why does it matter?
  • What is the compatibilist response to the traditional problem of freedom of will?
  • What is the libertarian response to the traditional problem of freedom of will?

Synopsis

Recall that, having spent the first half of the semester carefully making out the positive case for the possibility of Artificial Intelligence and, given Dretske's Dictum, the possibility of understanding our own minds, we are now examining skeptical arguments. What are these arguments?

First, they all conclude that there is some feature of our cognitive capacities which dooms any attempt to build something that has a mind. That is, we might build something that passes the (unrestricted) Turing Test, yet it would be a false positive: We would have no reason to think it has some important or pervasive feature of our cognition which it must have to be truly or genuinely intelligent. It would be a fake or a fraud. Because it passed the Turing Test, there would be those who argue that it is intelligent, since the perfect imitation of intelligence just is intelligence. We would know better, because it lacked cognitive capacities--unrevealed by the Turing Test--which we conclude are essential to intelligence. Thus if intentionality is the mark of the mental, yet computational states lack intentionality, then artificial intelligence via computation is impossible. Searle-in-the-Chinese-Room does not understand Chinese, after all.

Second, none of them answers the question: how do we do it? That is, we recognize the essential cognitive capacities in ourselves, but like the driver of a car who has no clue how the car works, we have no idea how we ourselves come to have these capacities. Presumably, if we were somehow able to unravel some of these mysterious capacities in terms of their neurological bases, we would gain some insight into how an artificial intelligence might then be designed to have the same cognitive capacities. Yet some capacities we find essential to genuine intelligence, like the capacity to have subjective experience, are so challenging that we can't imagine what further information we might glean about our own neurophysiology that would show how we have them. With regard to subjective experience specifically, we find a capacity to have phenomenal consciousness which appears at once computationally, neurologically, and even physically intractable.

The upshot is that we clearly have capacities like understanding and phenomenal consciousness which foil our best attempts at modeling them computationally.

Yet we also seem to have capacities which are similarly challenging, to the point where we wonder whether we ourselves can be said to have them. In cases like this, our attempt to understand such a capacity by building an artificial intelligence which has it is really an attempt to prove that it is possible even for us. Such is the Traditional Problem of Freedom of Will.

Recall that we can frame the problem as an argument:

The Traditional Problem of Freedom of Will
 
  1 Either determinism is true or determinism is not true.  
  2 If determinism is true, then freedom of will is impossible.  
  3 If determinism is not true, then freedom of will is impossible.  
  4 Freedom of will is impossible. 1, 2 & 3
 

Last time, we discussed the argument and the various responses that have been proposed to it.

Consider, however, that we find locally deterministic or mechanistic processes in the Universe regardless of whether the Universe itself is fundamentally or globally deterministic--as the deceptively-named Quantum Mechanics concludes it is not.

A Turing Machine, for example, shows how a purely mechanistic process is carried out, which makes sense when one recalls that Turing's original problem was to describe an effective procedure--that is, a mechanical method.
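To make the mechanistic character of such a procedure vivid, here is a minimal sketch of a Turing Machine simulator (the particular machine, which merely flips each bit of a binary string, is a hypothetical example chosen for illustration, not anything from our readings). Every step is fixed entirely by the transition table: given the current state and the symbol under the head, the machine has no choice about what to write, where to move, or which state to enter next.

```python
def run_turing_machine(table, tape, state="start", blank="_"):
    """Run until no transition applies; return the final tape contents.

    table maps (state, read_symbol) -> (next_state, write_symbol, move),
    where move is "R" or "L". Each step is fully determined by the table:
    a purely mechanical method, with no room for the machine to 'decide'.
    """
    tape = list(tape)
    head = 0
    while (state, tape[head]) in table:
        state, tape[head], move = table[(state, tape[head])]
        head += 1 if move == "R" else -1
        if head == len(tape):          # extend the tape with blanks
            tape.append(blank)
        elif head < 0:
            tape.insert(0, blank)
            head = 0
    return "".join(tape).strip(blank)

# Example machine (an assumption for illustration): flip every bit,
# moving right, and halt on the first blank square.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine(FLIP, "10110"))  # -> 01001
```

Tracing any run by hand shows there is never a point at which the outcome is open: the same table and the same tape always yield the same result, which is exactly what makes it tempting to ask whether a system built entirely of such steps could ever act of its own accord.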

Let us further abandon the term 'freedom of will', since it begs many questions having to do with the proper notion of freedom and what counts as the will in the first place. Instead, let us replace it with 'autonomous agency', since the real question seems to be whether an agent--that is, something capable of acting--can do so of its own accord. The apparent absurdity of even asking,

Can a robot be programmed to act of its own accord?

drives the point home. For why should we think any robot following a program acts of its own accord? Likewise, why should we pretend we ourselves are not following a sort of biological program--a program written in us by evolution and our environment? If the sum of our behavioral repertoire is simply the result of complex interactions between neurological mechanisms, then we may seem to ourselves and others to be making choices and forming intentions accordingly, but this is a conceit or fantasy on our part.

There are, however, two questions that must be unpacked if we are to make any headway on this problem. The first, and the one we took up today, is whether robot agency, autonomous or otherwise, is even possible in the first place. To see why it might not be, we considered Baker's argument and Wolf's argument. Next time we consider whether autonomous agency is possible, setting aside the question of mere agency.