Baker's Argument
In brief, Baker argues that machines cannot act since actions require intentions, intentions require a first-person perspective, and no amount of third-person information supplied to a machine can enable the machine to jump the gap to a first-person perspective. Extracted, her argument runs as follows (call it Argument A):
1. In order to be an agent, an entity must be able to formulate intentions.
2. In order to formulate intentions, an entity must have an irreducible first-person perspective.
3. Machines lack an irreducible first-person perspective.
∴ 4. Machines are not agents. (from 1, 2, and 3)
Baker has not, however, stated her argument correctly. It is not just that machines are not agents, or do not presently happen to be agents, since that leaves open the possibility that machines may at some point in the future be agents – or at least that machines can in principle be agents. Baker's conclusion is actually much stronger. As she outlines her own project, "[w]ithout denying that artificial models of intelligence may be useful for suggesting hypotheses to psychologists and neurophysiologists, I shall argue that there is a radical limitation to applying such models to human intelligence. And this limitation is exactly the reason why computers can't act."
Note that 'computers can't act' is substantially stronger than 'machines are not agents'. Apparently Baker wants to argue that it is impossible for machines to act, which is presumably more difficult than arguing that we do not at present happen to have the technical sophistication to create machine agents. Revising Baker's extracted argument to bring it in line with her proposed conclusion, however, requires a corresponding strengthening of premise (3). Call the result Argument B:
1. In order to be an agent, an entity must be able to formulate intentions.
2. In order to formulate intentions, an entity must have an irreducible first-person perspective.
3. Machines necessarily lack an irreducible first-person perspective.
∴ 4. Machines cannot be agents. (from 1, 2, and 3)
What support does Baker give for premises (1), (2), and (3)?
(1) is true, Baker thinks, because agency implies intentionality. She takes this to be virtually self-evident: the hallmark of agency is the ability to form intentions, where intentions are understood, on Castaneda's model, as a "dispositional mental state of endorsingly thinking such thoughts as 'I shall do A'." (2) and (3), on the other hand, require an account of the first-person perspective such that
- The first-person perspective is necessary for the ability to form intentions, and
- Machines necessarily lack it.
As Baker construes it, the first-person perspective (FPP) has at least two essential properties. First, the FPP is irreducible, where the irreducibility in this case is due to a linguistic property of the words used to refer to persons. In particular, first-person pronouns cannot be replaced with descriptions salva veritate. "First-person indicators are not simply substitutes for names or descriptions of ourselves." Thus Oedipus can, without absurdity, demand that the killer of Laius be found, even though he himself is the killer of Laius: 'I' and 'the killer of Laius' pick out the same man, yet substituting the description for the pronoun plainly changes the thought expressed.
According to Baker, the second essential property of the FPP is that it is necessary for the ability to "conceive of one's thoughts as one's own." Baker calls this 'second-order consciousness'. For example, "if X cannot make first-person reference, then X may be conscious of the contents of his own thoughts, but not conscious that they are his own." In such a case, X fails to have second-order consciousness. It follows that "an entity which can think of propositions at all enjoys self-consciousness if and only if he can make irreducible first-person reference." Since the ability to form intentions is understood on Castaneda's model as the ability to endorsingly think propositions such as "I shall do A", and since such propositions essentially involve first-person reference, it is clear why the first-person perspective is necessary for the ability to form intentions. So we have some reason to think that (2) is true, but why should we think that machines necessarily lack the first-person perspective?
Baker's justification for premise (3) is summed up by her claim that "[c]omputers cannot make the same kind of reference to themselves that self-conscious beings make, and this difference points to a fundamental difference between humans and computers--namely, that humans, but not computers, have an irreducible first-person perspective."
To make the case that computers are necessarily handicapped in that they cannot refer to themselves in the same way that self-conscious entities do, she invites us to consider what would have to be the case for a first-person perspective to be programmable:
a) The FPP can be the result of information processing.
b) First-person episodes can be the result of transformations on discrete input via specifiable rules.
Since information processing, as it is usually conceived, is nothing more than the transformation of discrete input via specifiable rules, (a) and (b) are distinct conditions only if the first-person perspective is distinct from first-person episodes. A case can be made that the FPP is nothing more than serial first-person episodes, but the issue hinges on what Baker means by "first-person episodes", and unfortunately she never explains the term. Leaving the twin puzzles of what first-person episodes are and how they relate to the FPP unsolved, we shall treat (a) and (b) as distinct conditions on the programmability of the FPP.
On Baker's view, machines necessarily lack an irreducible first-person perspective because both (a) and (b) are false. (b) is straightforwardly false, since "the world we dwell in cannot be represented as some number of independent facts ordered by formalizable rules." Worse, (a) is false since it presupposes that the FPP can be generated by a rule-governed process, yet the FPP "is not the result of any rule-governed process." That is to say, "no amount of third-person information about oneself ever compels a shift to first person knowledge."
Although Baker does not explain what she means by "third-person information" and "first person knowledge," the point, presumably, is that there is an unbridgeable gap between the third-person statements and the first-person statements presupposed by the FPP. Yet since the possibility of an FPP being the result of information processing depends on bridging this gap, it follows that the FPP cannot be the result of information processing. Hence it is impossible for machines, having only the resource of information processing as they do, to have an irreducible first-person perspective. Setting the argument out (call it Argument C), we have:
1. If machines were able to have an FPP, then the FPP can be the result of transformations on discrete input via specifiable rules.
2. If the FPP can be the result of transformations on discrete input via specifiable rules, then there is some amount of third-person information which compels a shift to first-person knowledge.
3. No amount of third-person information compels a shift to first-person knowledge.
∴ 4. The FPP cannot be the result of transformations on discrete input via specifiable rules. (from 2 and 3)
∴ 5. Machines necessarily lack an irreducible first-person perspective. (from 1 and 4)
Baker's central argument clearly stands or falls with this one. The problem is that Argument C assumes an excessively narrow conception of machines and programming.
Here is a simple way of thinking about machines and programming as this argument would have it. There was at one time – for all I know, there may still be – a child's toy which was essentially a wind-up car. The car came with a series of small plastic disks, with notches around the circumference, which could be fitted over a rotating spindle in the middle of the car. The disks acted as a cam, actuating a lever which turned the wheels when the lever hit a notch in the side of the disk. Each disk had a distinct pattern of notches and resulted in a distinct route. Thus, placing a particular disk on the car's spindle 'programs' the car to follow a particular route.
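A minimal sketch may make the disk-as-program idea concrete. The representation below is my own illustration, not Baker's: a disk is modeled as a cyclic list of notch positions, and the car turns whenever the lever falls into a notch. The point is only that the disk fixes the route entirely in advance.

```python
# A toy-car "program": a cam disk modeled as a cyclic sequence of notches.
# Each tick the spindle advances one position; when the lever falls into a
# notch, the wheels turn. Swapping disks is the only way to "reprogram" the car.

def run_car(disk, ticks):
    """Drive the car for `ticks` steps; return its route as compass headings."""
    heading = 0  # 0 = N, 1 = E, 2 = S, 3 = W
    route = []
    for t in range(ticks):
        if disk[t % len(disk)]:      # lever hits a notch: turn right
            heading = (heading + 1) % 4
        route.append("NESW"[heading])
    return route

# Two disks, two fixed routes; nothing the car encounters can alter them.
print(run_car([False, False, True], 9))  # long straights with occasional turns
print(run_car([True] * 4, 4))            # a tight clockwise loop
```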
Insofar as it requires that programming be restricted to transformations on discrete input via specifiable rules, Argument C treats all machines as analogous to the toy car, and programming as analogous to carving new notches into one of its disks. Certainly the argument allows for machines which are much more complicated than the toy car, but the basic relationship between program and machine behavior is the same throughout: the program determines the machine's behavior, while the program itself is in turn determined by the programmer. The point of premise (2) is that, if an irreducible FPP were programmable, it would have to be because the third-person information which can be coded by the programmer suffices for a first-person perspective, since all the machine has access to is what can be coded by a programmer.
Why should we think that a machine's only source of information is what the programmer codes? Here are a few reasons to think that machines are not so restricted:
- Given appropriate sensory modalities and appropriate recognition routines, machines are able to gain information about their environment without that information having been programmed in advance. It would be as if the toy car had an echo-locator on the front and a controlling disk which notched itself in reaction to obstacles so as to maneuver around them.
- Machines can be so constructed as to 'learn' by a variety of techniques; even classical conditioning techniques have been used. The point is merely that, suitably constructed, a machine can put together information about its environment and itself which is not coded in advance by the programmer and which is not available other than by, for example, trial and error. It would be as if the toy car had a navigation goal and could adjust the notches in its disk according to whether it is closer to or farther from its goal.
- Machines can evolve. Programs evolve through a process of mutation and extinction: code in the form of so-called genetic algorithms is replicated and mutated; unsuccessful mutations are culled, while successful algorithms are used as the basis for the next generation. Using this method one can develop a program for performing a particular task without having any knowledge of how the program goes about performing the task. Strictly speaking, there is no programmer for such programs. Here the analogy with the toy car breaks down somewhat. It is as if the toy car started out with a series of disks of differing notch configurations and could either throw a disk out or use it as a template for further disks, depending on whether that disk results in the car being stuck against an obstacle, for instance (see the first sketch following this list).
- Programs can be written which write their own programs. A program can spawn an indefinite number of programs, including an exact copy of itself. The programmer need not be able to predict what future code will be generated, since that code may be partially the result of information the machine gathers, via sensory modalities, from its environment. So, again, in a real sense there is no programmer for these programs. The toy car in this case starts out with a disk which itself generates disks, and these disks may incorporate information about obstacles or what have you (see the second sketch following this list).
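Here is a minimal sketch of the evolutionary case, under deliberately simple assumptions of my own (a fixed disk length, a single navigation goal, mutation by flipping notches, selection by culling the worse half of each generation). Nothing in it is Baker's; it only illustrates that the winning disk is selected rather than authored.

```python
import random

DISK_LEN, POP, GENERATIONS = 12, 20, 40

def random_disk():
    """A disk is a list of booleans: True where a notch has been cut."""
    return [random.random() < 0.3 for _ in range(DISK_LEN)]

def mutate(disk):
    """Copy a disk, flipping each notch with small probability."""
    return [(not n) if random.random() < 0.1 else n for n in disk]

def fitness(disk, goal=(3, 4)):
    """Drive the route the disk encodes; higher scores end nearer the goal."""
    x = y = heading = 0
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    for notch in disk:
        if notch:                                # notch: turn right
            heading = (heading + 1) % 4
        dx, dy = moves[heading]
        x, y = x + dx, y + dy
    return -(abs(x - goal[0]) + abs(y - goal[1]))

population = [random_disk() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]            # cull unsuccessful disks
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

print("best final score:", fitness(max(population, key=fitness)))
```

No one writes the best disk; it emerges from replication, mutation, and culling, which is precisely the sense in which such a program has, strictly speaking, no programmer.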
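And a minimal sketch of the last case: a program that emits and installs new controller code built from information gathered at run time. Every name here (generate_controller, the sensed obstacle set) is a hypothetical illustration; the point is only that the generated code is written by the running program, not line by line by a programmer.

```python
def generate_controller(sensed_obstacles):
    """Emit source code for a new controller that avoids what was sensed."""
    src = "def controller(position):\n"
    src += f"    obstacles = {sorted(sensed_obstacles)!r}\n"
    src += "    return 'turn' if position in obstacles else 'straight'\n"
    return src

sensed = {(1, 2), (3, 3)}      # gathered at run time, unknown to any programmer
code = generate_controller(sensed)

namespace = {}
exec(code, namespace)          # the machine installs its own new program
print(namespace["controller"]((1, 2)))  # -> 'turn'
print(namespace["controller"]((0, 0)))  # -> 'straight'
```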
Examples like these show that machines can have access to information, and can utilize it, in ways which are completely beyond the purview of the programmer. So while it may not be the case that a programmer can code an irreducible FPP directly, as Argument C would have it, it remains an open question whether some other method of gaining information and self-programming might result in a machine with an irreducible FPP.
The criticism comes down to this: the fundamental puzzle is not one of figuring out whether an FPP can be programmed in one way or another, as Argument C supposes. Rather, the fundamental problem with which Baker is struggling is the question of whether or not an irreducible FPP is computable; whether, that is, an irreducible FPP is something which can be broken down into a series of simple steps or procedures. Notice that this is a very different question from the one Baker asks. Baker asks: is it possible for a programmer to author an FPP? But the important question is: is the FPP such that it can issue from small, precise procedures?
In defense of Baker's argument, neither I nor anyone else has the slightest clue how to answer the latter question. Not only do we not have any idea whether an FPP might be computable, we do not even have an idea – what really comes to the same thing – of what an FPP is in the first place. So even though it might be helpful to learn that her argument asks the wrong question, it is still open to Baker to replace the argument with the following (Argument D):
1. If a machine were able to have an FPP, then the FPP is computable.
2. The FPP is not computable.
∴ 3. Machines necessarily lack an irreducible first-person perspective. (from 1 and 2)
Argument D is unsatisfying until we get conclusive arguments for premise (2). Still, for all we now know, (2) might be true. Given what we know about machines and programming, and what little we happen to know about FPPs, there is at least a presumption in favor of (2), inasmuch as the computability of the FPP is part of the apparently intractable problem of machine consciousness.