Term Paper Topics

Instructions

On this page I've provided a menu of reasonably specific problems I've spelled out as best I could. To be sure, there are many more problems besides these, but you must select one of the problems from this list as the starting point for your term paper.

Read through each of these problems carefully. Be sure that you understand each problem and why it is a problem. You might also do some prefatory research: I strongly recommend visiting the Stanford Encyclopedia of Philosophy. Try to find a problem you find genuinely fascinating. It should be a problem which spurs you to wonder and to think about possible solutions. It should also be a problem you think important, something you want to continue studying regardless of the extent to which we cover it in class. This is extremely important. Being excited to explore a topic will help make the term paper project--involved, lengthy, and challenging though it is--a labor of love, something you enjoy doing, and not simply labor. Bored, uninspired, and unimaginative writers write boring, uninspiring, and shallow prose. The more keen you are to grapple seriously with a puzzle, the more eager you will be to share your enthusiasm and convince readers of its importance.

1. Inverted Qualia

Let us start with the platitude that mental states are different if they have different functions. A belief is different from a desire because they have distinct functional characteristics. For example, it would make no sense to desire a car you believe you have. Desires bear on how the world ought to be; beliefs bear on how the world is. Thus desires and beliefs are functionally distinct. Mental states that had no difference in function could not be considered distinct mental states.

Consider, however, the following story.

Sidney is just like you and me, except that Sidney's phenomenal experience is, and always has been, just the opposite. For example, when you and I are having an experience of redness, Sidney is having an experience of greenness. We see the ripe tomato as a bright red; Sidney sees the ripe tomato as a bright green.

Nevertheless, Sidney grew up just like this. What Sidney sees as green she has learned to call “red”. Sidney's linguistic and non-linguistic behavior is just exactly like ours, except that Sidney's actual phenomenal experience is just the opposite of ours.

The upshot is that what Sidney experiences is radically unlike what we experience, yet her mental states are functionally exactly like ours. So what it is like to be me, what it is like to be you, and what it is like to be Sidney, cannot be a matter of the functional features of our mental states.

The possibility of qualia inversion is problematic because it appears to show that mental states can differ without differing in function. It appears (pun intended) there are only two possibilities:

  1. If mental states can differ without differing in function, then it follows that Machine Functionalism is false. Contrary to current theory and findings in philosophy, psychology, and neurobiology, we are not meat machines. Worse, given Dretske's Dictum and Nagel's Argument that no subjective fact can be determined from any collection of objective facts--viz., if there is something it is like to be a bat, there is nothing we can learn about what it is like to be a bat by dissecting bats, studying bat behavior, etc.--it follows that we can never understand the mind.
  2. Alternatively, we are meat machines, but qualia are irrelevant to the mental states that have them. Qualia are inessential or accidental features of mental states. Qualia may differ, invert, or even be entirely absent--but see the problem of the Philosophical Zombie below--while their associated mental states are functionally identical. Yet why, in that case, do we have phenomenal consciousness at all?

2. Original Intentionality

Words on a page have at most derived intentionality, since prior to being viewed by someone to whom they convey meaning, the words are simply marks on a page. They are about nothing until we take them to be about something or mean something. Unlike the marks on the page, we enjoy original intentionality. We have mental states that are essentially and originally intentional. That is, mental states by their very nature are intentional, and nothing else is needed for them to be intentional. No viewer or reader is needed; they themselves suffice for their own intentionality.

Without rehashing the whole of Searle's Chinese Room Thought Experiment, we have the following argument.

 

  1. Cognitive functions exhibit original intentionality. (Brentano's Thesis)
  2. The rule-governed manipulation of strings of symbols exhibits at most derived intentionality. (Chinese Room Thought Experiment)
  3. Turing Machine Computability is the rule-governed manipulation of strings of symbols. (Premise)
  4. Turing Machine Computability exhibits at most derived intentionality. (from 2 & 3)
  5. If (1) cognitive functions exhibit original intentionality and (4) Turing Machine Computability exhibits at most derived intentionality, then cognitive functions are not Turing Machine Computable. (Premise)
  6. Cognitive functions are not Turing Machine Computable. (from 1, 4 & 5)

Yet if any function calculable by some effective procedure is Turing Machine Computable (the Church-Turing Thesis) and we cannot understand the mind if we cannot build it (Dretske's Dictum), then it seems that the original intentionality necessarily exhibited by cognitive functions precludes our understanding minds.

3. Hypercomputability and the Quantum Computer

It may be that we are meat machines even though not all cognitive functions are Turing Machine Computable. How could this be?

Consider that there are at most countably many Turing Machine Computable functions but uncountably many functions. Thus the totality of functions is much, much larger than the relatively small but infinite number of Turing Machine Computable functions.
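The counting argument can be sketched as follows (a standard sketch; the notation is mine, not part of the text above): every Turing Machine can be encoded as a finite string over a finite alphabet, and there are only countably many finite strings, while Cantor's diagonal argument shows that there are uncountably many functions from the natural numbers to {0, 1}. In symbols,

    |\{f : f \text{ is Turing Machine Computable}\}| \le |\Sigma^{*}| = \aleph_0
    \qquad\text{while}\qquad
    |\{f \mid f : \mathbb{N} \to \{0,1\}\}| = 2^{\aleph_0} > \aleph_0 .

So all but countably many functions fail to be Turing Machine Computable.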

Let us say that a procedure hypercomputes iff it computes some function which is not Turing Machine Computable.

Perhaps, then, we are meat machines, but we hypercompute: We are hypercomputers, since our cognitive capacities depend on hypercomputation. That is, cognitive functions can only be calculated by physical systems whose operations somehow exceed what any person with pencil, paper, and a (finite) rulebook could calculate--or, given the Church-Turing Thesis, what any digital computer, which is just a physical system implementing a Universal Turing Machine, could calculate.

Hypercomputation is an exciting possibility, as it gives us some reason to think that the terribly difficult skeptical challenges we have been facing can in principle be met without jettisoning the assumption of Machine Functionalism. We are meat machines, just much more extraordinary kinds of machines than we first thought!

There is still Dretske's Dictum to consider: You don't understand it if you don't know how to build it. The problem with the proposition that humans hypercompute is that we then have no understanding of the mind because we don't know how to build hypercomputers--viz., we only know how to build good old fashioned digital computers that implement the Universal Turing Machine in a physical system.

If humans hypercompute, which is the last best hope for machine functionalism if the skeptical arguments we've been considering succeed, then our question necessarily becomes whether there are any other physical systems that might implement not merely a Universal Turing Machine but a hypercomputer.

One of the most promising new technologies which might in principle be used to implement a hypercomputer takes advantage of quantum effects. Instead of the cells or bits over which both probabilistic and deterministic Turing Machines hover, Quantum Turing Machines utilize qubits, each of which holds a superposition of '1' and '0'. The head of the Quantum Turing Machine also exists in quantum states, so Quantum Turing Machines have the capacity to take many inputs simultaneously and compute their outputs, also simultaneously.
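To make the notion of superposition a bit more concrete, here is a minimal sketch in Python (my own toy illustration, not part of the original discussion): a qubit is represented by a normalized vector of complex amplitudes, and measurement yields each classical value with probability equal to the squared magnitude of its amplitude.

    import numpy as np

    # A single qubit: a normalized vector of complex amplitudes over |0> and |1>.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # An equal superposition of '0' and '1' -- neither value until measured.
    qubit = (ket0 + ket1) / np.sqrt(2)

    # Born rule: measurement yields 0 or 1 with probability |amplitude|^2.
    print(np.abs(qubit) ** 2)  # [0.5 0.5]

    # A register of n qubits has 2**n amplitudes, which is the sense in which
    # a quantum machine can hold exponentially many classical inputs at once.
    n = 3
    register = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    print(np.abs(register) ** 2)  # eight equal probabilities of 0.125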

The mundane hope is that the resulting quantum computer would overcome at least some of the (temporal) complexity limitations which constrain the efficient computability of Turing Machine computable functions, which would represent a vast improvement in the speed of ordinary digital computers.

A more sophisticated hope is that a suitably designed quantum computer--some physical system implementing a Universal Quantum Turing Machine--would also have the capacity to hypercompute, that is, to compute functions which cannot be computed by any physical system implementing a Universal Turing Machine, regardless of spatial or temporal constraints.

To be sure, this is exotic technology. Nevertheless, its implications for our efforts are important. Yet there is a deeper philosophical puzzle quite apart from the question of whether such technology is even possible: Since quantum processes are themselves closed to observation under Heisenberg's Uncertainty Principle, would the ability to build a quantum hypercomputer at all satisfy Dretske's Dictum with respect to understanding minds? Moreover, why should we think the human brain actually is a hypercomputer, which it must be if cognitive functions presuppose hypercomputation?

4. Belief and the Disjunction Problem

Recall Brentano's characterization of mental states:

Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction toward an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not do so in the same way. In presentation, something is presented, in judgment something is affirmed or denied, in love loved, in hate hated, in desire desired and so on.

This intentional inexistence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves.

Thus we have Brentano's Thesis: Intentionality is the mark of the mental. Now revisit a portion of our earlier discussion of Brentano's Thesis.

Recast in our terms, Brentano's Thesis is the claim that a function is cognitive iff it exhibits original intentionality, where original intentionality is understood to be the relationship the function has in representing the objects in the world and their states-of-affairs. Intentionality, however, is an extremely puzzling property. Borrowing from--and supplementing, just a bit--Michael Thau's excellent Consciousness and Cognition, four paradoxes emerge from intentionality. Although the same paradoxes trouble other cognitive functions if Brentano is right, let us cast them as problems for belief specifically.

  1. Beliefs can represent in absentia. How do we account for my believing that extraterrestrial aliens are responsible for UFO sightings if there are no extraterrestrial aliens? In that case I am mistaken, of course. But even mistaken, if intentionality is a relation between a belief and the content of the belief, or the state-of-affairs the belief represents, then how can my belief have any content if there are no extraterrestrial aliens? Beliefs are representational, even when there is no thing being represented. Yet if representation is a relation, there must be something to stand in the relation. This is what Brentano is talking about when he uses the apparently absurd phrase, the 'inexistence of an object'.
  2. Beliefs can represent indeterminately. Contrast my belief that the cat is in the cat-tree with my belief that some cat or other is in the cat-tree. In the first case, my belief is about a specific, and specifiable, animal. In the second case, my belief is about some specific yet un-specifiable or indeterminate animal. Or consider another cognitive function, my desire to own a cat. The content of my desire is not even a specific cat, but it is some cat. Again, if representation is a relation, there must be some specific, determinate thing that stands in the relation.
  3. Beliefs can represent differentially. My belief that Frank has a degree in psychology is perfectly consistent with my belief that the manager of B&J's Pizza does not have a degree, despite the fact that Frank is the manager of B&J's Pizza. Thus my beliefs represent the same object, Frank, in different--in this case, even incompatible--ways.
  4. Beliefs can represent mistakenly. My belief that there is a horse in the horse-trailer represents the state-of-affairs of horse's being in the horse-trailer even if the furry brown ear I glimpsed through the dusty window and which caused me to believe that there is a horse in the horse-trailer is attached to a cow. My belief, in short, represents a horse, but the object so represented is not a horse. It is a cow.

Focus for a moment on the problem that beliefs can represent mistakenly. Suppose instead of the case as given that I, taking my morning constitutional early one foggy morning, stride by a farmer's field and see a brown shape off in the distance which causes me to form the belief that there is a horse in the field. Thus

Don believes that there is a horse in the field.

That is, the content of my belief is of a certain state of affairs, namely the state of affairs of the horse's being in the field. My belief has this content insofar as it represents the horse as being in the field. Further, I bear a relationship to this representation of the horse as being in the field: I believe it, which is just to say that I hold the content of my representation obtains.

Nevertheless, that which caused me to form the belief that the horse is in the field is compatible with there being a cow in the field. The content of my belief that the horse is in the field is strictly compatible with the disjunctive content,

the horse is in the field or the cow is in the field.

Yet this is puzzling, because it seems that I don't hold that the state of affairs of the horse's being in the field obtains. Rather, I hold that the disjunctive state of affairs either the horse is in the field or the cow is in the field obtains.

In general, any mental representation caused by some state of affairs represents indefinitely many disjunctive states of affairs just as well. If so, then the contents of our beliefs are about everything and, hence, nothing. Our beliefs are promiscuous and indiscriminate, because they do not allow us to distinguish the very states of affairs they are supposed to be about.

This is one way of characterizing the Disjunction Problem. Any theory of mental representation must solve the problem, yet it is altogether unclear just how to proceed.
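A toy illustration may help fix the point (the example, names, and features are mine, not drawn from the text above): if a belief's content is fixed by whatever reliably causes its tokening, then a "horse detector" whose tokenings are equally triggered by cows glimpsed in the fog thereby has the disjunctive content horse-or-cow, not the content horse.

    # Toy "horse belief" tokening: triggered by any large brown four-legged shape.
    def tokens_horse_belief(percept):
        return (percept["size"] == "large"
                and percept["color"] == "brown"
                and percept["legs"] == 4)

    horse_in_field = {"size": "large", "color": "brown", "legs": 4, "kind": "horse"}
    cow_in_fog     = {"size": "large", "color": "brown", "legs": 4, "kind": "cow"}

    # Both states of affairs cause the same tokening, so a purely causal story
    # assigns the belief the disjunctive content "horse or cow", not "horse".
    print(tokens_horse_belief(horse_in_field))  # True
    print(tokens_horse_belief(cow_in_fog))      # True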

5. The Knowledge Argument

Following Jackson, let us suppose that in some far-future time, Mary has become a neuroscientist without peer. Impressively, she has a true and complete theory of the human brain. She can account for every fact about the brain. Moreover, the physics of light and color is also a completed science. She knows all the facts there are to know about physics.

There is one curiosity about Mary, however. She has lived her whole life in a room devoid of the color red. She has never seen the color red. She does not even know it exists as a distinct color.

One day Mary is let out of her room, whereupon she for the first time sees the color red in the form of a red apple.

It seems that Mary, despite her complete and true theory of the human brain, has learned something new.

She has learned what it is like to see the color red.

 

  1. Mary knows all the physical facts. (Premise)
  2. If Mary learns something new by learning what it is like to see the color red, then Mary does not know all the facts. (Premise)
  3. Mary learns something new by learning what it is like to see the color red. (Premise)
  4. Mary does not know all the facts. (from 2 & 3)
  5. If Mary knows all the physical facts and Mary does not know all the facts, then not all facts are physical facts. (Premise)
  6. Not all facts are physical facts. (from 1, 4 & 5)

In particular, facts about phenomenal consciousness are not physical facts.

That Mary has learned something new despite having a complete and true theory of the human brain apparently implies that phenomenal consciousness escapes physicalism. That is, physicalism (of whatever kind, machine functionalist or not) is false given phenomenal consciousness.

We cannot, then, be meat machines in light of phenomenal consciousness because we are not entirely meat.

6. Akrasia

Consider the following story:

I like chocolate ice-cream. In fact, I like chocolate ice-cream very, very much. But I know that it is not good for me to eat chocolate ice-cream, particularly when I scoop it onto fudge brownies and douse the whole affair with chocolate syrup.

One evening I return home from campus and consider fixing just such a chocolate ice-cream sundae. I weigh all the reasons for it (they are delicious and I am hungry) and all the reasons against it (it's really, really fattening, and I am already too fat). I decide that, all things considered, it would be better for me to abstain from having the chocolate ice-cream sundae.

I then calmly, deliberately, and intentionally go to the kitchen, fix, and eat a large chocolate ice-cream sundae.

This is the problem of Akrasia, sometimes also called the problem of Weakness of the Will. Where to have autonomy is to enjoy self-control, akrasia is a curious failure of self-control--curious, that is, because although it might be tempting to say that I was overcome by desire, I wasn't. I intentionally made and ate the sundae; there was nothing frantic or wanton in my action. Yet I acted so after consciously deliberating about what would be best to do and deciding that it would be best to not have the sundae. If asked, “But didn't you just conclude that you shouldn't have the sundae?”, I would respond, between mouthfuls of sundae, “Yes, that is quite correct. But here we are anyway. I'm as surprised about it as you.”

Thus it seems that my actions contradict my reasons for them. Yet I am not raving or mad.

Intentionally performing an action the agent judges worse than an available and incompatible alternative suggests self-deception and perhaps irrationality. If actions are evidence of dispositions and dispositions reliably indicate beliefs, then the agent apparently holds the contradictory belief that P and not P: “I shall refrain from acting yet I shall so act.” This is the problem of weakness of will or akrasia:

What in Anglo-Saxon philosophical circles is called the problem of weakness of will concerns what worried Socrates: the problem of how an agent can choose to take what they believe to be the worse course, overcome by passion. The English expression would not, or at least not primarily, bring this sort of case to mind, but rather such examples as dilatoriness, procrastination, lack of moral courage and failure to push plans through. The Greek word 'akrasia', on the other hand, means 'lack of control', and that certainly suggests the Socratic sort of example. [Gosling (1990), p. 97]

The phrase 'lack of control' should be taken literally. Akrasia is problematic because the weak of will or incontinent somehow fail to act as they themselves think they should--it is as if they are not the authors of their own actions, which makes the intentionality with which they act all the more puzzling.

The tension between best judgment and intentional action the akrates presents, a tension wholly lacking in those who enjoy abundant kratos or power of self-control (enkrateia) [cf Mele (1987), p.4], is so great Socrates concluded it was simply absurd. Apparent cases are impossible, since “no one who either knows or believes that there is another possible course of action, better than the one he is following, will ever continue on his present course when he might choose the better.” [Protagoras 358c]. Yet Aristotle famously dismisses Socrates' conclusion, since it “contradicts the plain phenomena.” [Nicomachean Ethics 1145b27]

Let us assume Aristotle is correct: Conceptual difficulties notwithstanding, akrasia is a puzzling yet common feature of human agency. Solving the puzzle requires explaining how the akrates' intentional action can deviate so remarkably from her best judgment. How, that is, does it happen that the akrates lacks self-control for actions she herself presumably controls? Is the akrates fundamentally irrational? Self-deceived? Temporarily insane?

Let us set the problem a bit more precisely. Following Davidson, “How is Weakness of the Will Possible?”, we shall define akrasia as follows:

D: An agent A performs an action X akratically iff a) A does X intentionally, b) A believes that there is an alternative action Y open to A, and c) A judges that, all things considered, it would be better to do Y than X.

Now consider the following three principles:

P1: If A wants to do X more than she wants to do Y and she believes herself free to do either X or Y, then A will intentionally do X if she does either X or Y intentionally.

P2: If A judges that it would be better to do X than to do Y, then she wants to do X more than she wants to do Y.

P3: There are akratic actions.

The problem of Akrasia can be restated thus: P1, P2, and P3 each seem true, yet they appear to be jointly inconsistent. For suppose, per D and P3, that an agent intentionally does X while believing an alternative Y is open to her and judging that, all things considered, it would be better to do Y. By P2 she wants to do Y more than she wants to do X; by P1, since she does one of the two intentionally and believes herself free to do either, she will intentionally do Y--yet what she intentionally does is X. Hence the three principles cannot all be true, and at least one must be given up.

adapted from Berkich, D. "A Puzzle about Akrasia" (Teorema Vol. 26 No. 3, Fall 2007)

7. Super Blindsight

The phenomenon of Blindsight has been extensively studied by psychologists. It turns out that some people have blind areas in their field of vision, often due to damage in the occipital cortex. They sincerely report that their blindspot is blank to them. That is, they have no phenomenal experience of perception in that area.

Curiously, though, people with blindsight have the ability to make accurate "guesses" of what is happening in the visual field of their blindspot, despite having no conscious experience of it. Indeed, the eye focuses on objects in the blindspot just as if it were consciously directed to do so, and guesses about the position, orientation, presence, and sometimes even color of objects in the blindspot are much, much better than mere chance, provided that the person is presented with alternatives from which to choose. That is, the person with blindsight cannot just report on what is occurring in the field of their blindspot. They must be presented with alternatives. Yet their choices among alternatives, despite their protestations to the contrary, are not mere guesses.

Following Michael Tye, let us suppose that Marti develops super blindsight. Marti is just like a person with blindsight -- she's had blindsight her entire life -- except that she has trained herself to report on events in her blindspot without having been presented with alternatives. Moreover, she is so accurate that she comes to believe what she reports with just as much confidence as a normally sighted person would. In other words, when you or I utter "I believe that that is a red rose", we attach a certain strength or degree to the belief. When Marti asserts "I believe that that is a red rose", she has the same degree of belief. Experience has taught her that her reports are veridical.

The problem of Super Blindsight may be given as a question:

What is the difference between Marti's belief that that is a red rose and our belief that that is a red rose, given that Marti has absolutely no conscious, phenomenal experience of a red rose?

8. Philosophical Zombies

For each of us, there is something it is like to be us. We know what it is like to see the rose and to smell the rose. We know what it is like to smell the wine and taste the wine. We have rich inner mental lives without which there would be nothing it would be like to be us.

There is, however, a discomforting possibility. It is conceivable that there be a microphysical duplicate of me, a being just like me in every respect, that functions exactly as I do but which has no phenomenal experience whatsoever. He has no experience of what it is like to see the rose or taste the wine.

He is a microphysical and functional duplicate of me, but having no inner mental life, he is a zombie. There is nothing it is like to be my zombie. Yet my zombie is functionally indistinguishable from me. No one could ever tell the difference between me and my zombie.

The possibility of zombies is troubling, however, because it tells us that facts about phenomenal consciousness do not depend on the microphysical facts of my composition or even my functional characteristics. Worse, the possibility of my having a zombie duplicate implies that the rich inner mental life I so value and fervently believe has everything to do with my intentions, goals, and resulting behaviors has in fact no bearing whatsoever on what I do or why I do it.

9. Personal Identity

The unit of interest in Minds and Machines is not really the mind per se, but the person. The classic artificial intelligences of science fiction fame like Robby the Robot, HAL, or Commander Data are of particular speculative interest because the totality of their functional capacities combines to yield not mere mentality but personhood. Yet persons are deeply puzzling, as a few simple thought experiments demonstrate.

Suppose I step into a transporter that works as follows. First, the transporter records the state of every particle in my body, call this my “template”. The template is then sent to another transporter at a distant location which uses it to construct an exact, molecule for molecule, particle for particle, duplicate of me. At the same time, the transporter into which I stepped obliterates my body by reducing it to all the molecules from which I was formerly composed.

Question: Is the person who steps off the distant transporter the same person as, or a different person from, the one who stepped onto the first transporter?

Let us change the story a bit. Suppose that everything proceeds as above except that the first transporter fails to obliterate me.

Question: Are there two persons, one standing on the first transporter, the second stepping off the distant transporter? Or is there just one person, me, in two locations?

Consider an altogether different story. Suppose that my brain is carefully removed from my skull and placed in a special vat. Each nerve to and from my brain is hooked up to an extremely complicated radio transceiver. Inside my now-empty skull the mate to the radio transceiver is installed in such a way that where once there was a continuous neural path there is now a continuous radio path. That is, each severed nerve is re-linked, via the radio, precisely as it was originally linked when my brain occupied my skull. When the switches are thrown on the radios (the one with my brain in the vat and the one in the skull of my body), I see just as I used to see, feel just as I used to feel, and otherwise can't tell that my brain has been removed and is now floating in a vat. I even go to the vat and look at my brain, trying to imagine that I am that brain in the vat.

There is a curious virtue to being a brain in a vat. If anything were to happen to my body, all that would be required would be to hook up another body equipped with a similar radio.

Question: If I do get in an accident and am killed, and if another body is obtained, is it still me in the new body?

Suppose it all happens as above but add the following wrinkle. What if I (my body?) were to rob a bank and get caught.

Question: Should my brain be put in prison, leaving my body out to enjoy the good life? Or should my body be put in prison, leaving my brain free to do as it wishes?

Suppose my brain is split in a lab accident and, in an emergency operation, each half is hooked-up to a different body.

Question: Am I now two persons? Or one person in two different places?

It is not clear how one should answer these thought experiments. They are, of course, stock-in-trade science fiction examples. Nevertheless, such puzzles raise an important question.

It seems that there are only three possibilities for personal identity:

Either

1) Person A = Person B because they have the same body (bodily continuity suffices for personal continuity);

or

2) Person A = Person B because they have the same psychology (psychological continuity suffices for personal continuity);

or

3) Person A = Person B because they have the same non-physical and psychologically vacant thing, usually called a soul.

Even if (3) were a live possibility--and under the assumption of physicalism it is not--it is not clear how having the same soul would ensure personal identity, since it is not clear what souls are in the first place. We only know what they are not.

Yet neither (1) nor (2) appears to work in light of the above thought experiments. So the problem of personal identity is serious: We have no idea what persons are.

10. Mechanism and Autonomy

Human persons are generally convinced that they are autonomous. That is, human persons are convinced that, when not restricted from doing so, they act of their own accord.

Notice what is at stake if we are mistaken. Many, if not most, of our social institutions and practices presume that one's actions are self-initiated and self-directed. For example, it would make no sense to appreciate someone's holding the door for you if they did not do it of their own accord. Likewise, it would make no sense to blame someone for cheating if they could not have done otherwise. Blameworthiness and praiseworthiness depend on an agent being the author of her own actions.

The ancestor to the problem of Mechanism and Autonomy is the problem of Free Will and Determinism. Loosely speaking, let us say that Determinism is the thesis that every event is causally determined. Then we have the following dilemma:

Either Determinism is true or Determinism is not true.

If Determinism is true, then Free Will is impossible. (Since our actions would not be the result of our reasons for them. Rather, they would be determined by antecedent causes.)

If Determinism is not true, then Free Will is impossible. (Since our actions would not be determined by causes; they would, instead, be determined by nothing at all beyond mere chance. Yet free will seems as incompatible with mere chance as with causal determinism.)

Therefore,

Free Will is impossible.

Nevertheless,

Free Will exists. (Unless we are all deluding ourselves.)

The problem of Free Will and Determinism, however, is not as pressing as the problem of Mechanism and Autonomy. To the best of our knowledge, Determinism is not true, but some local processes may be purely mechanistic in an otherwise stochastic universe. Just as gears turn in a precisely regular way in a watch, perhaps the human mind is the result of purely mechanistic processes.

Machine Functionalism, the view that we are meat machines, underscores this problem. For if all the mental events that make up our lives--our beliefs and desires, for example--are the result of underlying mechanistic processes, then we are not the authors of our actions. The mechanistic processes are. Thus Mechanism is incompatible with Autonomy, yet Autonomy is presupposed by many of our most important social practices (or so it seems).

The problem Autonomy presents is made especially clear with a short story.

Suppose that Ted the Survivalist, in a fit of deepening paranoia, resolves to keep his gun trained on the door of his shack so as to kill anyone who might try to enter. After twenty hours of this Ted frightens himself by startling awake upon nodding off: for a few seconds at least he was vulnerable! As clever as he is paranoid, Ted fashions a simple system of strings and pulleys such that the gun fires dead-center into the doorway when the door is opened. Ted is free to sleep and go about his usual business, secure in the knowledge that anyone trying to enter his shack will be killed.

Ted’s contraption acts so as to kill any would-be attackers. It has this potential only insofar as Ted built it thus and so. The agency of the contraption, should it ever fire, is thus wholly derived from Ted’s original, albeit deranged, agency. In that sense it would be far better to put Ted in a secure psychiatric hospital than the contraption if, say, the postman were killed.

Let us press the example further. Suppose that Ted isn't any ordinary paranoid survivalist. Ted is also a brilliant roboticist with considerable economic resources. To protect himself, Ted constructs a mobile adaptive live-fire robot, anticipating somewhat recent DARPA advances. The robot, which Ted affectionately dubs 'R2D3', looks like a wheeled trash-can. It has three arms stuck out the sides that articulate in four locations and terminate in large, fully-automatic guns. On top of R2D3 stands a thin, retractable pole, and on the top of the pole is its 'head'. The 'head' is just a pair of side-by-side cameras which can swivel in nearly every direction. Ringing R2D3's base are sonar sensors which allow it to navigate from room to room and around the yard.

In operation, R2D3's head continuously bobs up and down and swivels back and forth as it scans its vicinity. R2D3's head orients on any movement and, using cues such as bi-lateral symmetry, zeroes in on any faces. It then compares key features of the face to an on-board database of such features. If there is, within a certain narrow tolerance which Ted keeps notching up as his paranoia deepens, a match in the database, then the object the robot is tracking is a “friend”. If it fails to find a match, the object is a “foe”, at least for a few milliseconds.

Fortunately R2D3 is adaptive in the sense that it updates its “friends” database whenever Ted himself opens the door and speaks in normal tones with a person. Thus Ted congratulates himself on having protected the postman--until, that is, the postman falls ill and his substitute walks into the yard.
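A minimal sketch of the friend-or-foe logic just described might look like the following (the function names, the similarity measure, and the tolerance value are all invented for illustration; nothing here is part of the original story). The point of the sketch is only that every behavior of R2D3 is fixed by parameters Ted chose, which is why its agency seems wholly derived.

    # Hypothetical sketch of R2D3's targeting logic as described above.
    FRIENDS_DB = []      # stored face-feature vectors
    TOLERANCE = 0.05     # Ted keeps narrowing this as his paranoia deepens

    def similarity(features_a, features_b):
        # Toy similarity: 1.0 means identical feature vectors.
        return 1.0 - sum(abs(a - b) for a, b in zip(features_a, features_b)) / len(features_a)

    def classify(face_features):
        # "Friend" only if some stored face matches within the narrow tolerance;
        # otherwise "foe", at least until the next scan.
        for stored in FRIENDS_DB:
            if similarity(face_features, stored) >= 1.0 - TOLERANCE:
                return "friend"
        return "foe"

    def adaptive_update(face_features, ted_present_and_speaking_normally):
        # R2D3 adapts only when Ted himself opens the door and speaks with the person.
        if ted_present_and_speaking_normally:
            FRIENDS_DB.append(face_features)

    # The substitute postman has never been enrolled, so:
    print(classify([0.2, 0.7, 0.4]))  # "foe"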

Suppose R2D3, suffering no malfunction whatsoever, kills the substitute postman. Did it kill the substitute postman autonomously in such a way that we should blame R2D3 and not Ted? Surely not. Put another way, does it make any more sense to put R2D3 in the secure psychiatric hospital than Ted’s original string-and-gun contraption? Again, surely not. Of course R2D3 ought to be disabled, but not as a punitive measure. We disable the robot for precisely the same reason that we disable the string contraption: to avoid any more 'accidents'. The scrap heap is the appropriate end for R2D3. Ted, the lethal robot’s designer and programmer, is the one who gets to go to the secure psychiatric hospital.

Robots are designed and programmed: They can never be autonomous agents. Yet if autonomy is incompatible with the mechanisms underlying their behavior, how can autonomy be compatible with the mechanisms underlying our behavior? If Machine Functionalism is true, then how can we be any more autonomous than R2D3?

11. The Frame Problem

Let us revisit Dennett's description of the frame problem:

Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room, and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT (Wagon, Room, t) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 knew that the bomb was on the wagon in the room, but didn't realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.

Back to the drawing board. `The solution is obvious,' said the designers. `Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side-effects, by deducing these implications from the descriptions it uses in formulating its plans.' They called their next model, the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT (Wagon, Room, t) it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the colour of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon - when the bomb exploded.

Back to the drawing board. `We must teach it the difference between relevant implications and irrelevant implications,' said the designers, `and teach it to ignore the irrelevant ones.' So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in their next model, the robot-relevant-deducer, or R2D1 for short. When they subjected R2D1 to the test that had so unequivocally selected its ancestors for extinction, they were surprised to see it sitting, Hamlet-like, outside the room containing the ticking bomb, the native hue of its resolution sicklied o'er with the pale cast of thought, as Shakespeare (and more recently Fodor) has aptly put it. `Do something!' they yelled at it. 'I am,' it retorted. `I'm busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and...' the bomb went off.

All these robots suffer from the frame problem. If there is ever to be a robot with the fabled perspicacity and real-time adroitness of R2D2, robot-designers must solve the frame problem. It appears at first to be at best an annoying technical embarrassment in robotics, or merely a curious puzzle for the bemusement of people working in Artificial Intelligence (AI). I think, on the contrary, that it is a new, deep epistemological problem - accessible in principle but unnoticed by generations of philosophers - brought to light by the novel methods of AI, and still far from being solved. Many people in AI have come to have a similarly high regard for the seriousness of the frame problem. As one researcher has quipped, `We have given up the goal of designing an intelligent robot, and turned to the task of designing a gun that will destroy any intelligent robot that anyone else designs!'

The Frame Problem, as Dennett suggests, is not simply a problem for the goal of Artificial Intelligence. It is a mystery for us as well. As Dennett puts it,

I will try here to present an elementary, non-technical, philosophical introduction to the frame problem, and show why it is so interesting. I have no solution to offer, or even any original suggestions for where a solution might lie. It is hard enough, I have discovered, just to say clearly what the frame problem is - and is not. In fact, there is less than perfect agreement in usage within the AI research community. McCarthy and Hayes, who coined the term, use it to refer to a particular, narrowly conceived problem about representation that arises only for certain strategies for dealing with a broader problem about real-time planning systems. Others call this broader problem the frame problem-'the whole pudding,' as Hayes has called it (personal correspondence) - and this may not be mere terminological sloppiness. If 'solutions' to the narrowly conceived problem have the effect of driving a (deeper) difficulty into some other quarter of the broad problem, we might better reserve the title for this hard-to-corner difficulty. With apologies to McCarthy and Hayes for joining those who would appropriate their term, I am going to attempt an introduction to the whole pudding, calling it the frame problem. I will try in due course to describe the narrower version of the problem, 'the frame problem proper' if you like, and show something of its relation to the broader problem.

Since the frame problem, whatever it is, is certainly not solved yet (and may be, in its current guises, insoluble), the ideological foes of AI such as Hubert Dreyfus and John Searle are tempted to compose obituaries for the field, citing the frame problem as the cause of death. In What Computers Can't do (Dreyfus 1972), Dreyfus sought to show that AI was a fundamentally mistaken method for studying the mind, and in fact many of his somewhat impressionistic complaints about AI models and many of his declared insights into their intrinsic limitations can be seen to hover quite systematically in the neighbourhood of the frame problem. Dreyfus never explicitly mentions the frame problem, but is it perhaps the smoking pistol he was looking for but didn't quite know how to describe? Yes, I think AI can be seen to be holding a smoking pistol, but at least in its `whole pudding' guise it is everyone's problem, not just a problem for AI, which, like the good guy in many a mystery story, should be credited with a discovery, not accused of a crime.

Conceived broadly, the Frame Problem amounts to the epistemic problem of understanding just how plans are made and intentions realized. To be sure, we do sometimes make mistakes. Yet even in failure we are mysteriously able to associate relevant beliefs with our desires. Moreover, these desire-relevant beliefs are really relevant, in the sense that they accurately characterize enough of the actual world for us to successfully satisfy our desires (or for the word 'failure' to make sense.) We know what to attend to and what to ignore in our environment, a capacity we bring with terrible ease to each new situation and goal. How, though, is this possible? Why aren't we always blowing ourselves up like hapless robots?
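For concreteness, here is a toy rendering of the planning predicament Dennett describes (the representation and the names are my own, not Dennett's or McCarthy and Hayes'): an action's intended effect is easy to state, but the planner must also settle which facts change as side-effects and which of the indefinitely many other facts stay the same.

    # Toy world state for Dennett's R1 scenario.
    state = {
        "wagon_in_room": True,
        "battery_on_wagon": True,
        "bomb_on_wagon": True,
        "battery_in_room": True,
        "bomb_in_room": True,
        "wall_colour": "grey",   # one of indefinitely many irrelevant facts
    }

    def pullout_wagon(state):
        """PULLOUT(Wagon, Room, t): intended effect plus the side-effects R1 missed."""
        new_state = dict(state)
        new_state["wagon_in_room"] = False
        # Whatever rides on the wagon leaves the room with it.
        if state["battery_on_wagon"]:
            new_state["battery_in_room"] = False
        if state["bomb_on_wagon"]:
            new_state["bomb_in_room"] = False
        # Everything else -- wall colour, wheel revolutions, and so on -- is simply
        # carried over unchanged. Deciding *not* to reason about all of that,
        # without first enumerating it, is the frame problem.
        return new_state

    print(pullout_wagon(state)["bomb_in_room"])  # False: the bomb came along too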

12. Externalism

The traditional model of linguistic communication that has us using linguistic behavior (speaking, writing, etc.) to convey our ideas runs afoul of the Twin Earth Thought Experiment. As Putnam explains it, those seeking to understand communication have unreflectively and uncritically made two assumptions. Quoting Putnam,

(I) That knowing the meaning of a term is just a matter of being in a certain psychological state (in the sense of "psychological state," in which states of memory and psychological dispositions are "psychological states"; no one thought that knowing the meaning of a word was a continuous state of consciousness, of course).

(II) That the meaning of a term (in the sense of "intension") determines its extension (in the sense that sameness of intension entails sameness of extension).

which Putnam clarifies by adding,

Let A and B be any two terms which differ in extension. By assumption (II) they must differ in meaning (in the sense of "intension"). By assumption (I), knowing the meaning of A and knowing the meaning of B are psychological states in the narrow sense-for this is how we shall construe assumption (I). But these psychological states must determine the extension of the terms A and B just as much as the meanings ("intensions") do.

That is,

...if S is the sort of psychological state we have been discussing--a psychological state of the form knowing that I is the meaning of A, where I is an "intension" and A is a term-then the same necessary and sufficient condition for falling into the extension of A "works" in every logically possible world in which the speaker is in the psychological state S. For the state S determines the intension I, and by assumption (II) the intension amounts to a necessary and sufficient condition for membership in the extension.

The argument Putnam gives to show that "these two assumptions are not jointly satisfied by any notion, let alone any notion of meaning" is simple, elegant, and devastating:

That psychological state does not determine extension will now be shown with the aid of a little science-fiction. For the purpose of the following science-fiction examples, we shall suppose that somewhere in the galaxy there is a planet we shall call Twin Earth. Twin Earth is very much like Earth; in fact, people on Twin Earth even speak English. In fact, apart from the differences we shall specify in our science-fiction examples, the reader may suppose that Twin Earth is exactly like Earth. He may even suppose that he has a Doppelgänger-an identical copy-on Twin Earth, if he wishes, although my stories will not depend on this.

Although some of the people on Twin Earth (say, the ones who call themselves "Americans" and the ones who call themselves "Canadians" and the ones who call themselves "Englishmen," etc.) speak English, there are, not surprisingly, a few tiny differences which we will now describe between the dialects of English spoken on Twin Earth and Standard English. These differences themselves depend on some of the peculiarities of Twin Earth.

One of the peculiarities of Twin Earth is that the liquid called "water" is not H2O but a different liquid whose chemical formula is very long and complicated. I shall abbreviate this chemical formula simply as XYZ. I shall suppose that XYZ is indistinguishable from water at normal temperatures and pressures. In particular, it tastes like water and it quenches thirst like water. Also, I shall suppose that the oceans and lakes and seas of Twin Earth contain XYZ and not water, that it rains XYZ on Twin Earth and not water, etc.

If a spaceship from Earth ever visits Twin Earth, then the supposition at first will be that "water" has the same meaning on Earth and on Twin Earth. This supposition will be corrected when it is discovered that "water" on Twin Earth is XYZ, and the Earthian spaceship will report somewhat as follows:

"On Twin Earth the word 'water' means XYZ."

(It is this sort of use of the word "means" which accounts for the doctrine that extension is one sense of "meaning," by the way. But note that although "means" does mean something like has as extension in this example, one would not say

"On Twin Earth the meaning of the word 'water' is XYZ."

unless, possibly, the fact that "water is XYZ" was known to every adult speaker of English on Twin Earth. We can account for this in terms of the theory of meaning we develop below; for the moment we just remark that although the verb "means" sometimes means "has as extension," the nominalization "meaning" never means "extension.")

Symmetrically, if a spaceship from Twin Earth ever visits Earth, then the supposition at first will be that the word "water" has the same meaning on Twin Earth and on Earth. This supposition will be corrected when it is discovered that "water" on Earth is H2O, and the Twin Earthian spaceship will report

"On Earth[6] the word 'water' means H20."

>Note that there is no problem about the extension of the term "water." The word simply has two different meanings (as we say) in the sense in which it is used on Twin Earth, the sense of waterTE, what we call "water" simply isn't water; while in the sense in which it is used on Earth, the sense of waterE, what the Twin Earthians call "water" simply isn't water. The extension of "water" in the sense of waterE is the set of all wholes consisting of H2O molecules, or something like that; the extension of water in the sense of waterTE is the set of all wholes consisting of XYZ molecules, or something like that.

Thus psychological state does not determine extension because the extension of the words I and my Twin Earth counterpart use can differ even though we are in identical psychological states. "Meanings", as Putnam memorably put it, "just ain't in the head!"

The implications of this for the philosophy of mind are unsettling. As Putnam explains,

Of course, denying that meanings are in the head must have consequences for the philosophy of mind, but at the time I wrote those words I was unsure as to just what those consequences were. After all, such accomplishments as knowing the meaning of words and using words meaningfully are paradigmatic "mental abilities"; yet, I was not sure, when I wrote "The Meaning of 'Meaning,'" whether the moral of that essay should be that we shouldn't think of the meanings of words as lying in the mind at all, or whether (like John Dewey and William James) we should stop thinking of the mind as something "in the head" and think of it rather as a system of environment-involving capacities and interactions. In the end, I equivocated between these views. I said, on the one hand, that "meanings just ain't in the head," and, on the other hand, that the notion of the mind is ambiguous, and that, in one sense of "mental state" (I called mental states, in this supposed sense, 'narrow mental states'), our mental states are entirely in our heads, and in another sense (I called mental states in this supposed second sense "broad mental states"), a sense which includes such states as knowing the meaning of a word, our mental states are individuated by our relations to our environment and to other speakers and not simply by what goes on in our brains. Subsequently, under the influence of Tyler Burge and more recently of John McDowell as well, I have come to think that this conceded too much to the idea that the mind can be thought of as a private theater (situated inside the head).

However it might be fleshed-out, the conclusion that mind is not a private theater situated inside the head is counterintuitive. Among other things, it seems to undermine first-person authority. I presumably enjoy privileged access to my psychological states: I know my beliefs, desires, and intentions long before anyone else does, because the only way they can find out is if I tell them. If, however, my mind depends on broader sociolinguistic facts well beyond what happens inside my head, then it is not clear what, if any, first-person authority I enjoy. What are the implications of externalism for the philosophy of mind, and should we, can we, resist them?

13. A Pseudoscience?

Evolutionary psychology is a relatively new field of inquiry within psychology which purports to provide explanations of social and behavioral phenomena in terms of the adaptive advantage the phenomena presumably conferred on our ancestors. Three examples come to mind. First, college women are reported to wear more attractive clothing when they are ovulating because, as evolutionary psychology has it, doing so is in response to unconscious drives to improve the opportunity for successful procreation. Second, women are supposed to prefer wealthy so-called 'alpha' males because they need protection and support before, during, and for some time after childbirth. Third, men tend to stray from monogamous relationships because it is to their selective advantage to have as many children as possible without incurring the costs of care and upbringing, while women seek stable monogamous relationships because it is to their selective advantage to ensure that the few children they can have, given lengthy gestation, will thrive.

The organizing idea seems to be that our minds are machines that were made (evolved) to suit certain purposes. If we understand the making of the machine, if you will, then we can understand the machine itself and why it behaves the way it does.

Skeptics like Jerry Fodor find the explanations of evolutionary psychology ad hoc and peculiarly suited to the desires of well-to-do male evolutionary psychologists. Thus in his review of Steven Pinker's “How the Mind Works” (London Review of Books, Vol. 20, No. 2, 22 January 1998), Fodor, referring to evolutionary psychology as “Psychological Darwinism” and writing with considerable sarcasm, observes that

A lot of the fun of Pinker’s book is his attempt to deduce human psychology from the assumption that our minds are adaptations for transmitting our genes. His last chapters are devoted to this and they range very broadly; including, so help me, one on the meaning of life. Pinker would like to convince us that the predictions that the selfish-gene theory makes about how our minds must be organised are independently plausible. But this project doesn’t fare well. Prima facie, the picture of the mind, indeed of human nature in general, that psychological Darwinism suggest is preposterous; a sort of jumped up, down-market version of original sin. Psychological Darwinism is a kind of conspiracy theory; that is, it explains behaviour by imputing an interest (viz in the proliferation of the genome) that the agent of the behaviour does not acknowledge. When literal conspiracies are alleged, duplicity is generally part of the charge: ‘He wasn’t making confetti; he was shredding the evidence. He did X in aid of Y, and then he lied about his motive.’ But in the kind of conspiracy theories psychologists like best, the motive is supposed to be inaccessible even to the agent, who is thus perfectly sincere in denying the imputation. In the extreme case, it’s hardly even the agent to whom the motive is attributed. Freudian explanations provide a familiar example: What seemed to be merely Jones’s slip of the tongue was the unconscious expression of a libidinous impulse. But not Jones’s libidinous impulse, really; one that his Id had on his behalf. Likewise, for the psychological Darwinist: what seemed to be your, after all, unsurprising interest in your child’s well-being turns out to be your genes’ conspiracy to propagate themselves. Not your conspiracy, notice, but theirs.

The literature of Psychological Darwinism is full of what appear to be fallacies of rationalisation: arguments where the evidence offered that an interest in Y is the motive for a creature’s behaviour is primarily that an interest in Y would rationalise the behaviour if it were the creature’s motive. Pinker’s book provides so many examples that one hardly knows where to start. Here he is on friendship:

Once you have made yourself valuable to someone, the person becomes valuable to you. You value him or her because if you were ever in trouble, they would have a stake – albeit a selfish stake – in getting you out. But now that you value the person, they should value you even more . . . because of your stake in rescuing him or her from hard times . . . This runaway process is what we call friendship.’

And here he is on why we like to read fiction: ‘Fictional narratives supply us with a mental catalogue of the fatal conundrums we might face someday and the outcomes of strategies we could deploy in them. What are the options if I were to suspect that my uncle killed my father, took his position, and married my mother?’ Good question. Or what if it turns out that, having just used the ring that I got by kidnapping a dwarf to pay off the giants who built me my new castle, I should discover that it is the very ring that I need in order to continue to be immortal and rule the world? It’s important to think out the options betimes, because a thing like that could happen to anyone and you can never have too much insurance. At one point Pinker quotes H.L. Mencken’s wisecrack that ‘the most common of all follies is to believe passionately in the palpably not true.’ Quite so. I suppose it could turn out that one’s interest in having friends, or in reading fictions, or in Wagner’s operas, is really at heart prudential. But the claim affronts a robust, and I should think salubrious, intuition that there are lots and lots of things that we care about simply for themselves. Reductionism about this plurality of goals, when not Philistine or cheaply cynical, often sounds simply funny. Thus the joke about the lawyer who is offered sex by a beautiful girl. ‘Well, I guess so,’ he replies, ‘but what’s in it for me?’ Does wanting to have a beautiful woman – or, for that matter, a good read – really require a further motive to explain it? Pinker duly supplies the explanation that you wouldn’t have thought that you needed. ‘Both sexes want a spouse who has developed normally and is free of infection . . . We haven’t evolved stethoscopes or tongue-depressors, but an eye for beauty does some of the same things . . . Luxuriant hair is always pleasing, possibly because . . . long hair implies a long history of good health.’

Critics also object to the assumption that many human behaviors are both unconscious and largely determined by genetic heritage. Yet studies in evolutionary psychology are often reported in the press as important and insightful scientific results without, note, any question being raised about the status of the field as a science. Are the explanations of evolutionary psychology defensibly scientific, or are they merely self-interested pseudoscientific speculations?

14. The Problem for Psychology: Psychophysical Laws

It is a commonplace that Biology is reducible to Chemistry and Chemistry is reducible to Physics. That is, Biology is fundamentally Chemistry, and Chemistry is fundamentally Physics. Put another way, we can 'explain' biological facts and laws in terms of (more fundamental) chemical facts and laws, and we can explain chemical facts and laws in terms of (more fundamental) physical facts and laws.

Of course, the vocabulary of Biology is very different from the vocabulary of Chemistry, and the vocabulary of Chemistry is very different from the vocabulary of Physics. So we need Bridge-Laws which connect the vocabulary of one science to the vocabulary of another and explain how a law in one science can ultimately be translated into the laws of a more fundamental science.
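
To see the shape such bridge laws are supposed to take, here is a schematic sketch in the spirit of the standard (Nagelian) account of reduction; the predicate letters and law names are placeholders, not proposals:

    % A bridge law, schematically: P is a predicate of the reduced science
    % (say, a psychological predicate), B a predicate of the reducing science
    % (say, a biological one), linked by a lawlike biconditional.
    \forall x \, ( P(x) \leftrightarrow B(x) )

    % Reduction, schematically: a law L_P of the reduced science counts as
    % reduced when it is derivable from the laws L_B of the reducing science
    % together with the bridge laws.
    L_B \cup \{ \text{bridge laws} \} \vdash L_P

The two problems raised below can then be put in terms of this schema: we have no idea what could fill in for the biconditionals linking psychological to biological predicates, and we have no settled candidates for the psychological laws to be derived in the first place.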

All of this is not without philosophical controversy, but it is the standard view of science.

Consider Psychology as a science. Many, if not most, cognitive scientists hold that psychology is ultimately reducible to biology, much as any good physicalist would hold. There are, however, two problems:

  1. We cannot imagine what bridge laws between psychology and biology would even look like; and,
  2. There don't seem to be any psychological laws which might make psychology a science in the first place.

Thus there seems to be an explanatory gap such that psychological facts about, for example, phenomenal consciousness cannot be explained by reducing them to biological facts. For imagine they could be so reduced. Then it should be the case that we could tell from an organism's biological facts what it is like to be that organism, yet this seems precisely to be what we cannot learn.

In general, what we call the hard sciences seem to be reducible in principle one to another and, ultimately, to the physics that presumably underwrites all of them. Contrast the presumed reducibility via bridge laws found in the hard sciences with the apparent irreducibility of the so-called soft sciences: Psychology, Sociology, Anthropology, and so on. The absence of psychophysical laws thus seems, on the face of it, to create a rift between the hard sciences and the soft sciences. It is but one short step further to deny that the soft sciences are sciences at all. After all, science permits explanation and prediction based on the discovery of natural laws, yet there appear to be no laws when it comes to the psychological. Thus psychology and its associated sciences, lacking any laws, fail to be reducible to the actual sciences and, in the final analysis, fail to be sciences at all.

15. The Computational Turn

Incorporating ever-cheaper and vastly more powerful computational resources has at once streamlined scientific inquiry and opened up many horizons of investigation hitherto thought beyond reach.

On the side of observation, computational and robotic resources accumulate, organize, and disseminate observational data, itself obtained by remote and laboratory sensors of increasing sensitivity and sophistication. Tycho Brahe's passion for accurate astronomical observation and meticulous record-keeping, which raised the bar for serious scientific observation in his time and set it for several hundred years after, has become the automatic, daily standard for nearly every science on a scale and with a precision he could not have imagined possible.

On the side of theory, computational models move beyond static systems of equations at the core of mathematical models and physical laws to help predict the behavior of enormously complex physical systems, be they ecological, cellular, neurological, meteorological, or astronomical. Fine-tuning and combining the models, extracting statistical correlations among large data sets, and judiciously deploying deep learning techniques combine to permit increasingly reliable predictions where formerly we would have expected the equivalent of a scientific shrug and a guess.

Scarcely any domain of scientific observation has neglected to incorporate some or all of the many advantages the computational turn provides for the twin sakes of more robust empirical observation and more predictive theory construction, to say nothing of the advantages gained in collaboration, data sharing and curation, and model building and testing. Thus 'computational turn' understates the transformation in the practice of science that has been building for the past forty years or so. 'Computer revolution' is, however, cliché, passive, and overwrought, given the thoughtful, methodical, and painstaking work that has gone into making this turn.

Set aside the inevitable bugs, glitches, and errors. Current scientific observation and theory, built on sophisticated computational resources ranging from IP-enabled remote sensor packages to multi-cluster supercomputers, is as far beyond the Baconian scientific enterprise as GPS navigation is beyond sextant and soundings, and for strictly analogous reasons. Firmly and fastidiously dedicated to satisfying our curiosity about the natural world, computationally augmented scientific practice can only more ably, speedily, and completely explore our many intellectual terrae incognitae.

The journey is fraught, however, with epistemological hazards, not least among them the widespread deployment of computationally mediated observation creating a firehose of data to be computationally analyzed and modeled. On the one hand, computer science is thereby elevated to a privileged, unquestioned status like mathematics or logic even as the use of computational methods changes the very character of science itself. On the other hand, science, which was formerly thought to both explain and predict by discovering and testing theories about the natural phenomena in question, provides better and better predictive power at the cost of explanation. That is, identifying strongly correlated variables vastly improves our predictive powers without providing any insight into why those variables are correlated. Further, computational models are useful so long as they improve prediction, but there is no reason to think the resulting models better reflect the reality they model in doing so.
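
The prediction-without-explanation worry can be made concrete with a toy sketch. In the hypothetical Python snippet below (the variables, numbers, and library choice are mine, purely for illustration), two measured quantities are correlated only because both depend on a hidden common cause; a black-box model trained on one predicts the other quite well, yet nothing in the fitted model says why the two are related.

    # A toy illustration of prediction without explanation: x and y are
    # correlated only because both depend on a hidden common cause z,
    # which the model never sees.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.normal(size=n)                          # hidden common cause
    x = 2.0 * z + rng.normal(scale=0.3, size=n)     # observed "predictor"
    y = -1.5 * z + rng.normal(scale=0.3, size=n)    # observed "target"

    # Train a black-box regressor on half the data, test on the other half.
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(x[:n // 2].reshape(-1, 1), y[:n // 2])
    r2 = model.score(x[n // 2:].reshape(-1, 1), y[n // 2:])

    print(f"held-out predictive accuracy (R^2): {r2:.2f}")
    # The score is high, but the model cannot distinguish "x causes y"
    # from "z causes both": good prediction, no explanation.

The point does not turn on the particular algorithm; swap in a deep net and the situation is the same. The model earns its keep by predicting, while the question of why the correlation holds goes unanswered.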

The Computational Turn is thus as troubling as it is advantageous. It requires that computer science be certain in the way mathematics is usually certain--less a science and more an applied mathematics, as it were--while in practice splintering off explanation from prediction, promoting the latter while demoting the former. Is explanation any longer a proper goal of scientific inquiry, and what is computer science qua science?

16. The Biomimetic Revolution

John Haugeland usefully distinguishes between GOFAI (Good Old-Fashioned Artificial Intelligence) and NFAI (New-Fangled Artificial Intelligence). Where GOFAI is characterized by the straightforward computation of cognitive functions by means of the rule-governed manipulations of strings of symbols, NFAI takes seriously the path to cognition carved by evolutionary processes and attempts to mimic those processes, whether developmentally in the case of modeling cognition on the facts of human development, or even more directly by morphology (modeling evolutionary forms in robotics--think robot fish or cockroaches, for example) or neural nets (simulating the functionality of neurological systems--think about the Brain Simulator reply as Searle discusses it).
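
The contrast is easy to illustrate with a toy example. In the sketch below (entirely invented, standing in for no actual system), the same trivial task is handled first GOFAI-style, as an explicitly written rule over symbols, and then NFAI-style, as a single artificial 'neuron' whose weights are learned from examples rather than written down:

    # GOFAI-style: the rule is stated by the programmer and applied to symbols.
    def gofai_and(a: str, b: str) -> str:
        return "T" if (a == "T" and b == "T") else "F"

    # NFAI-style: a single perceptron learns the same behavior from examples.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, bias = 0.0, 0.0, 0.0        # weights begin knowing nothing

    def neuron(x1, x2):
        return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

    for _ in range(20):                  # classic perceptron learning rule
        for (x1, x2), target in examples:
            error = target - neuron(x1, x2)
            w1 += 0.1 * error * x1
            w2 += 0.1 * error * x2
            bias += 0.1 * error

    print(gofai_and("T", "T"))                               # T
    print([neuron(x1, x2) for (x1, x2), _ in examples])      # [0, 0, 0, 1]

Nobody ever tells the second system the rule; it settles into the right behavior through repeated correction, which is the (very small) grain of truth behind calling such systems biomimetic.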

Of considerable recent interest since Google DeepMind's AlphaGo beat the world champion at the game of Go using a system of three neural nets, so-called generative AI has seen the development of startlingly sophisticated neural nets which can generate functional programming code and even human-like text (ChatGPT or Google's Gemini, for example), images (Stable Diffusion or DALL-E), and video (Sora). These and neural net systems like them have sparked something of a gold rush in the investment world, with venture capitalists clamoring to fund the next great development in neural net technology.

Indeed, the CEO of NVIDIA, which is at the forefront of developing chip technology to more efficiently implement neural nets, has suggested to the technology sector that there is no longer any point in learning how to write computer code, since that is something generative AI will be able to manage itself. Even Tyler Perry, the Atlanta-based film producer and director, has reportedly suspended an $800 million expansion of his production facilities after having been given a demonstration of Sora's ability to render text instructions into videography.

These developments raise an intriguing question: in a capitalist society, will there be any jobs that cannot be better handled by properly trained neural networks? Even before the excitement over generative AI, various economists and think-tanks were estimating that up to 40% of all careers are vulnerable to elimination by automation. The possibilities posed by generative AI would only seem to increase those odds. And yet, because capitalism is so deeply entrenched, it is unclear whether any strategies or coping mechanisms might exist whereby the biomimetic revolution will not result in massive job loss. How, then, might we responsibly respond to the economic challenges posed by the continued success of the biomimetic revolution?

17. Blaming Robots

Here are a few more or less reasonable attributions of blame:

  1. Taking the trash out to the bin in the dim light before dawn, I trod painfully on a rock. Cursing, I blame myself for not bothering to turn on the driveway lights.
  2. Going to get a glass of water in the middle of the night, I trod painfully on a plastic dinosaur. Cursing, I blame myself for not having tidied up after my toddler had gone to bed.
  3. Going to the bathroom in the middle of the night, I trod most painfully on a lego. Cursing, I blame my five-year-old for having ignored my instructions to tidy up before going to bed and myself for failing to verify the instructions had been followed.
  4. While retrieving a box from the back of the garage, I trod painfully on a bolt. Cursing, I blame my wife for neglecting to properly police her bicycle repair station.

We can quibble here and there. Maybe I should also do more to teach my toddler that she needs to put her toys up when she's done playing with them. It could be that my wife was called away mid-repair to dole out hugs and calm tears---I do seem, after all, to keep treading on painful things. And I'm certain my five-year-old is capable of offering a staggering array of excuses.

On the other hand it would be weird, maybe even perverse or downright pathological, for me to seriously, literally, and sincerely blame the rock, the dinosaur, the lego, or the bolt. What would that even look like? "Attacked again by that wild rock! That tricky dinosaur haunts my every step! The lego's malice is unspeakable! I cannot abide bolts of its character, just lying around seeking to inflict as much pain as possible!" On this we can all agree.

Technology surely muddles matters. In the heat of the moment, I might curse the ATM for having digested my check without crediting my account. I might swear in frustration after the GPS blindly deposits me in the midst of highway construction. Or, having caught my robot vacuum blithely painting the carpet with my dog's latest accident, I might brutally hit its stop button.

Calmer reflection reveals the truth: resenting the technology for what seem like acts of malice is just as nonsensical, and in precisely the same way, as resenting the rock, the toy dinosaur, the lego, or the bolt for my pain. The ATM malfunctioned for want of proper maintenance or, possibly, adequate design. Blame the maintainers or the designers. The GPS had not been updated to reflect current road conditions. Blame those responsible for collecting and committing updates. The robot vacuum was merely mindlessly, meaninglessly, and heedlessly following its algorithm, which is otherwise outstandingly successful when unobstructed or unsoiled by pets. Blame the dog or, better, myself for not having walked the dog in time. At most, a newer, more capable model which can detect pet accidents is warranted.

But for the rock, these are all mere artifacts. Some do more than others, presupposing the complex and sophisticated mechanisms and controls characteristic of contemporary technological advances. Attributing evil intentions, malign beliefs, and malicious desires may come more easily when confronting such complexity, yet doing so is for all that just as weird and absurd as blaming, well, the rock for what I might very much want to describe at the time as the pain it inflicted on my foot. It gets the attribution of agency altogether backwards. The rock did nothing, intended nothing, believed nothing, and desired nothing. I stepped on it. I hurt myself on it. Similarly, I trusted the ATM because every time in the past my account was properly credited, I navigated by the GPS because every time in the past doing so got me to my destination, and I started the robot vacuum because I failed to consider that my dog might already have made a mess.

To be clear, there is a natural sense in which we blame technology which is (mostly) unproblematic. After all, the ATM did malfunction. The GPS did fail to identify an optimal route. The robot vacuum did make a mess for want of additional functional capacity. Technology does sometimes fail to serve the ends for which we use it. Foiled, we blame it in precisely the sense in which we blame a broken watch for making us late or a frozen lug nut for keeping us from changing a flat tire.

Blaming technology like the broken watch or the frozen lug nut is common, ordinary, and, for the most part, philosophically uninteresting. Otherwise useful artifacts don't always work in the way we design and engineer them to work or, indeed, have come to expect them to work. They are blameworthy in the innocuous, minimal sense in which they sometimes break or fail to function up to our expectations, hardly a confounding concept.

Yet we must ask, will blaming qua malfunction or inadequate design always be the most we can justifiably attribute to technological artifacts? In light of the startling sophistication of technological artifacts, is there a point in their development at which it no longer makes any sense to blame, say, maintenance, design, or engineered capacity, and we can and should properly and reasonably blame the artifact itself not by dint of any malfunction but by virtue of irresponsibility or moral failing?

Put another way, at what point in increasing sophistication do robots cease to be mere causal conduits, designed and programmed as such, and become themselves agents to be held worthy of resentment or gratitude, as the case may be?