Thursday 10/31

Is Artificial Intelligence Possible?


Synopsis

In discussing the Mind-Body problem and some of the solutions to it, we suggested the contemporary view that the mind is what the brain does. The brain, on this view, is a kind of neurological computer whose functioning results in what we identify as the mind. This explains why the properties we ascribe to minds and those we ascribe to bodies are so radically different while maintaining an underlying physicalism: the mental just is physical, like everything else, since mental events just are neural functions in play.

This is an important thought, for it suggests that Artificial Intelligence on a par with human intelligence (or, maybe, even better) is a very real possibility. After all, if mental events are understood in terms of functional states, whatever has those functional states (like, say, an ordinary computer) will likewise have the capacity for mind.

Naturally, though, if we're going to create an artificial intelligence, we have to grasp:

What is intelligence?

In his unusually accessible essay "Computing Machinery and Intelligence," Turing suggests that the question of whether a machine is intelligent is hopelessly ill-formed and perhaps unanswerable in that form. After all, there are many things we mean by 'intelligent', and it is not clear what all of them have to do with one another. Turing proposed that we replace the question of machine intelligence with an imitation game in which an interrogator questions a machine and a person by teletype (today: chatroom) to determine which is which. This is a strictly behavioral test. If the machine can fool the interrogator a better-than-average number of times, then the machine is behaviorally indistinguishable from a person insofar as 'verbal' behavior is concerned.
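The structure of the game is simple enough to sketch. Here is a minimal illustration in Python; the two reply functions and the judge are hypothetical placeholders, since nothing below implements a real conversational agent:

    import random

    # Hypothetical respondents -- canned placeholders, not real agents.
    def machine_reply(question):
        return "An interesting question; let me think."

    def human_reply(question):
        return "Hmm, I'd have to mull that one over."

    def imitation_game(questions, judge):
        """Run one round of the imitation game. The judge sees only
        anonymized transcripts -- a strictly behavioral test -- and
        names the channel ('A' or 'B') it believes carries the human.
        Returns True if the machine fools the judge."""
        channels = {"A": machine_reply, "B": human_reply}
        if random.random() < 0.5:  # hide which channel is which
            channels = {"A": human_reply, "B": machine_reply}
        transcripts = {label: [(q, f(q)) for q in questions]
                       for label, f in channels.items()}
        guess = judge(transcripts)  # the judge's pick for the human
        return channels[guess] is machine_reply

The point to notice is that the judge sees nothing but the transcripts: whatever the test detects, it detects in verbal behavior alone.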

Consider the above in light of the following quotes about Turing's Computing Machinery and Intelligence and the Turing Test:

In fact Turing believed, or at very least saw no reason not to believe, much more than this: that there would, eventually, turn out to be no essential difference between what could be achieved by a human intellect in, say, mathematics, and what could be achieved by a machine. The 1950 paper was intended not so much as a penetrating contribution to philosophy but as propaganda. Turing thought the time had come for philosophers and mathematicians and scientists to take seriously the fact that computers were not merely calculating engines but were capable of behaviour which must be accounted as intelligent; he sought to persuade people that this was so. He wrote this paper--unlike his mathematical papers--quickly and with enjoyment. I can remember him reading aloud to me some of the passages--always with a smile, sometimes with a giggle. Some of the discussions of the paper I have read load it with more significance than it was intended to bear. I shall discuss it no further.

Robin Gandy, "Human versus Machine Intelligence." In P. Millican and A. Clark, eds. 1996. Machines and Thought: The Legacy of Alan Turing, Volume 1. Oxford: Oxford University Press. p. 125.

Turing cast his test in terms of simulation or imitation: a non-human system will be deemed intelligent if it acts so like an ordinary person in certain respects that other ordinary people can't tell (from these actions alone) that it isn't one. But the imitation idea itself isn't the important part of Turing's proposal. What's important is rather the specific sort of behavior that Turing chose for his test: he specified verbal behavior. A system is surely intelligent, he said, if it can carry on an ordinary conversation like an ordinary person (via electronic means, to avoid any influence due to appearance, tone of voice, and so on).

This is a daring and radical simplification. There are many ways in which intelligence is manifested. Why single out talking for special emphasis? Remember: Turing didn't suggest that talking in this way is required to demonstrate intelligence, only that it's sufficient. So there's no worry about the test being too hard; the only question is whether it might be too lenient. We know, for instance, that there are systems that can regulate temperatures, generate intricate rhythms, or even fly airplanes without being, in any serious sense, intelligent. Why couldn't the ability to carry on ordinary conversations be like that?

Turing's answer is elegant and deep: talking is unique among intelligent abilities because it gathers within itself, at one remove, all others. One cannot generate rhythms or fly airplanes "about" talking, but one certainly can talk about rhythms and flying--not to mention poetry, sports, science, cooking, love, politics, and so on--and, if one doesn't know what one is talking about, it will soon become painfully obvious. Talking is not merely one intelligent ability among others, but also, and essentially, the ability to express intelligently a great many (maybe all) other intelligent abilities. And, without having those abilities in fact, at least to some degree, one cannot talk intelligently about them. That's why Turing's test is so compelling and powerful.

John Haugeland, 1997. "What is Mind Design?" In J. Haugeland, ed. Mind Design II: Philosophy, Psychology, and Artificial Intelligence. Cambridge, Mass.: MIT Press, A Bradford Book. pp. 3-4.

The Turing test in unadulterated, unrestricted from [sic], as Turing presented it, is plenty strong if well used. I am confident that no computer in the next twenty years is going to pass an unrestricted Turing test. They may well win the World Chess Championship or even a Nobel Prize in physics, but they won't pass the unrestricted Turing test. Nevertheless, it is not, I think, impossible in principle for a computer to pass the test, fair and square. I'm not running one of those A priori "computers can't think" arguments. I stand unabashedly ready, moreover, to declare that any computer that actually passes the unrestricted Turing test will be, in every theoretically interesting sense, a thinking thing.

Daniel Dennett, 1998. "Can Machines Think?" In D.C. Dennett, ed. Brainchildren: Essays on Designing Minds. Cambridge, Mass.: MIT Press, A Bradford Book. p. 20.

Now, Turing's key insight can be put quite simply as the proposition that

The perfect imitation of intelligence is intelligence.

(Or, at least, it is enough to satisfy any questions about 'intelligence' we could meaningfully ask.)

It bears noting that psychometrics, the study of the measurement of psychological phenomena, has thus far borne out his suspicion: we really have no idea what intelligence is.

Suppose, though, that we reject Turing's dismissal of the intelligence question. If we think that the question of intelligence is meaningful apart from the Turing Test, then we must ask about the relationship between a machine's passing the test and whether the machine is intelligent.

It would be a mistake to argue that passing the Turing Test is both necessary and sufficient for intelligence. That is, it would be a mistake to assert that

1. X passes the Turing Test if, and only if, X is intelligent.

(1) is clearly false. Consider that (1) expresses the conjunction of two propositions:

2. If X passes the Turing Test, then X is intelligent. (The Turing Test is a sufficient condition on, or suffices for, intelligence.)

3. If X is intelligent, then X passes the Turing Test. (The Turing Test is a necessary condition on, or necessary for, intelligence.)

(3), though, is false. If the question of intelligence is at all meaningful, one can be intelligent without passing the Turing Test. There are lots of reasons why an intelligent being might fail the Turing Test: inability to type would be one; speaking a language other than the interrogator's would be another.
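For those who like their logic explicit, the decomposition can be put in symbols, writing T(x) for 'x passes the Turing Test' and I(x) for 'x is intelligent':

    % (1) is the conjunction of (2) and (3)
    \forall x\,\bigl(T(x) \leftrightarrow I(x)\bigr)
      \;\equiv\;
      \underbrace{\forall x\,\bigl(T(x) \rightarrow I(x)\bigr)}_{(2)}
      \;\wedge\;
      \underbrace{\forall x\,\bigl(I(x) \rightarrow T(x)\bigr)}_{(3)}

A single intelligent non-typist a, with I(a) true and T(a) false, falsifies (3) and with it the biconditional (1), while leaving (2) untouched.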

The interesting question, from our standpoint, is whether passing the Turing Test is a sufficient condition on intelligence--i.e., is (2) true?

There are roughly two skeptical responses to the assertion (2) that passing the Turing Test suffices for intelligence: The Turing Test is too strong a condition on intelligence, or the Turing Test is too weak a condition on intelligence.

The Turing Test is Too Strong (aka The Problem of False Negatives).

First, one might argue, as researchers in computer science and robotics sometimes do, that the Turing Test is simply too strong a condition on intelligence because intelligence is not always expressed verbally. Witness the Mars rovers Spirit and Opportunity. The rovers would fail the Turing Test miserably, yet it can be argued that they have behavioral capacities on a par with insect-level intelligence. Intelligence evolved, and so too do our machines evolve. It may be a very long time before we can create machines that can reliably pass the Turing Test, yet we should not for that reason refuse to count the fantastically complicated and sophisticated behavioral repertoires of their predecessors as intelligent.

In short, this objection holds that the Turing Test is too strong in the sense that it fails to count as intelligent many things that should rightfully be considered intelligent. Perhaps dog- and dolphin-lovers will concur.

The Turing Test is Too Weak (aka The Problem of False Positives).

A much more common objection to the Turing Test is that it is simply too weak. According to this objection, the Turing Test could in principle yield false positives: things that pass the test but ought not to be considered intelligent. The most common source of such objections has been the philosophical community, and many of them bear careful scrutiny, as we shall see.

Whether the Turing Test is too strong, too weak, or, as an android Goldilocks might wish, just right, it is clear that Turing has done much to help us sharpen the debate over the possibility of artificial intelligence.


Now, in thinking about whether the Turing Test is too weak, consider this YouTube news report.

Watson went on the second day (2/15) to trounce its human competitors. More recently, IBM has announced that it is licensing Watson as a medical assistant, which raises interesting questions about what precisely it will mean to be a physician. This article in the NY Times neatly summarizes the IBM project and where IBM hopes to take it next.

But does Watson understand the questions it is answering? This is the challenge Searle presents with his justly famous Chinese Room Thought Experiment.

The genius of Searle's Chinese Room Thought Experiment is to make vivid a point we can well appreciate from our work constructing Turing Machines: Turing Machines operate by the rule-governed manipulation of strings of symbols, and nothing like that, so the argument goes, could exhibit genuine understanding. If so, no function computed by a Turing Machine could be cognitive.

To find out what a machine might understand, Searle puts himself in the machine's position and asks: what would I understand in this context?

Searle imagines himself in a locked room where he is given pages with Chinese writing on them. He does not know Chinese. He does not even recognize the writing as Chinese per se. To him, these are meaningless squiggles. But he also has a rule-book, written in English, which dictates just how he should group the Chinese pages he has with any additional Chinese pages he might be given. The rules in the rule-book are purely formal. They tell him that a page with squiggles of this sort should be grouped with a page with squiggles of that sort but not with squiggles of the other sort. The new groupings mean no more to Searle than the original ordering. It's all just symbol-play, so far as he is concerned. Still, the rule-book is very good. To the Chinese-speaker reading the Searle-processed pages outside the room, whatever is in the room is being posed questions in Chinese and is answering them quite satisfactorily, also in Chinese.
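What 'purely formal' comes to here can be captured in a few lines of Python. The rule-book below is an invented toy pairing input squiggles with output squiggles; notice that the procedure consults only the shapes of the symbols, never their meanings:

    # A toy rule-book: inputs are matched to outputs purely by shape.
    # The entries are invented placeholders for illustration.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢.",
        "你会说中文吗?": "当然会.",
    }

    def searle_in_the_room(squiggles):
        """Hand back whatever string the rule-book dictates for this
        input. The lookup is syntactic through and through; nothing
        in this function knows what any symbol means."""
        return RULE_BOOK.get(squiggles, "请再说一遍.")  # default: 'please repeat'

A Chinese speaker outside the room may find the replies perfectly apt; inside, it is all squiggle-matching.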

The analogy, of course, is that a machine is in exactly the same position as Searle. Compare, for instance, Searle to R2D2. The robot is good at matching key features of faces with features stored in its database. But the matching is purely formal in exactly the same way that Searle's matching of pages is purely formal. It could not be said that the robot recognizes, say, Ted any more than it could be said that Searle understands Chinese. Even if R2D2 is given a mouth and facial features such that it smiles when it recognizes a friend and frowns when it sees a foe, so that to all outward appearances the robot understands its context and what it is seeing, the robot is not seeing at all. It is merely performing an arithmetical operation--matching pixels in one array with pixels in another array according to purely formal rules--in almost exactly the same way that Searle is matching pages with pages. R2D2 does not understand what it is doing any more than Searle understands what he is doing following the rule-book.
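The 'arithmetical operation' described above is likewise easy to exhibit. The sketch below does a bare-bones comparison of two pixel arrays; the arrays and the threshold are invented for illustration:

    # 'Recognition' as pure arithmetic: compare two pixel arrays
    # value by value. Nothing here sees anything.
    def matches(image, template, threshold=10):
        """True just in case every pixel differs from the template by
        less than the threshold -- a formal rule applied to numbers."""
        return all(abs(p - q) < threshold
                   for row_i, row_t in zip(image, template)
                   for p, q in zip(row_i, row_t))

    ted_template = [[120, 118], [119, 121]]     # an invented 2x2 'face'
    camera_frame = [[122, 117], [118, 123]]
    print(matches(camera_frame, ted_template))  # True -- yet nothing was recognized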

Moreover, if Searle is correct, no amount of redesigning will ever result in a robot which understands what it is doing, since no matter how clever or complicated the rule-book, it is still just a rule-book. Yet if a machine cannot, in principle, understand what it is doing, then it cannot be intelligent.

Searle's Argument

1. If it is possible for machines to be intelligent, then machines must understand what it is that they are doing.
2. Nothing which operates only according to purely formal rules can understand what it is doing.
3. Necessarily, machines operate only according to purely formal rules.
4. Machines cannot understand what it is that they are doing. (from 2 and 3)
5. Machines cannot be intelligent. (from 1 and 4)
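The argument's validity is easy to check. Writing M for 'machines can be intelligent', U for 'machines can understand what they are doing', and F for 'machines operate only according to purely formal rules':

    (1)\; M \rightarrow U \qquad (2)\; F \rightarrow \neg U \qquad (3)\; F
    % (4) \neg U follows from (2) and (3) by modus ponens;
    % (5) \neg M then follows from (1) and (4) by modus tollens.
    (4)\; \neg U \qquad (5)\; \neg M

Since the argument is valid, anyone who resists the conclusion must deny one of the premises--which is just what the replies canvassed below attempt.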
 

The fact that neither Searle-in-the-Chinese-Room nor any combination of Searle with the Chinese Room understands Chinese even while passing the Turing Test in Chinese is a serious matter indeed. Since there are only countably infinitely many Turing Machine computable functions compared to the uncountably infinite totality of functions, it is entirely possible that cognitive functions escape Turing Machine computability. The problem is not just that the Turing Test is too weak--other, less complicated arguments can be given for rejecting the thesis that the perfect imitation of intelligence is intelligence. Rather, the Chinese Room Thought Experiment seems to show that cognitive functions are not Turing Machine computable at all. In light of the Church-Turing Thesis, the most we can say is that if we are meat machines, we cannot understand our own minds. We cannot build minds unless we duplicate exactly what we already have using exactly the same materials and structures that evolved in us. Such duplication is not entirely without its merits, but it gets us no nearer to artificial intelligence than we were when we began. Moreover, it suggests that there is something crucial about the particular substance and structure of us that makes it impossible to replicate cognitive functionality in, say, silicon.
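The counting claim, at least, rests on standard mathematics: every Turing Machine has a finite description over a finite alphabet, so there are only countably many machines, while Cantor's diagonal argument shows there are uncountably many functions on the natural numbers:

    |\{\,f : f \text{ is Turing Machine computable}\,\}| \;=\; \aleph_0
    \;<\; 2^{\aleph_0} \;=\; |\{\,f : \mathbb{N} \to \mathbb{N}\,\}|

On this way of counting, almost every function is beyond the reach of any Turing Machine; the open question is whether the cognitive functions are among them.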

Searle neatly summarizes and clarifies his argument as follows:

By way of concluding I want to try to state some of the general philosophical points implicit in the argument. For clarity I will try to do it in a question-and-answer fashion, and I begin with that old chestnut of a question:

"Could a machine think?"The answer is, obviously, yes. We are precisely such machines.

"Yes, but could an artifact, a man-made machine, think?"

Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.

"OK, but could a digital computer think?"

If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.

"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"

This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.

"Why not?"

Because the formal symbol manipulations by themselves don't have any [understanding]; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such [understanding] as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have [understanding] (a man), and we program him with the formal program, you can see that the formal program carries no additional [understanding]. It adds nothing, for example, to a man's ability to understand Chinese.

If you feel hazy about its substance, this 60-second video explanation of the Chinese Room Thought Experiment may help.

To be sure, the Chinese Room Thought Experiment did not itself go unchallenged when Searle presented it. Opposition was incisive and fierce, if not decisive. Searle himself discusses these responses and his answers to them in his Minds, Brains, and Programs. In brief,

The Systems Reply

Of course Searle-in-the-room doesn't understand Chinese, but the entire room including Searle-in-the-room does.

  • It is difficult to understand how adding a rule-book of symbol manipulations, bits of paper, a pencil, and a room to Searle-in-the-room results in an understanding of Chinese. What could the rest of the apparatus do for understanding Chinese?
  • Imagine Searle internalizes the Chinese Room by memorizing the rule-book and performing its manipulations in his head. So internalized, Searle is the system, yet Searle clearly does not understand Chinese.

The Robot Reply

It may be that neither Searle-in-the-room nor the entire room including Searle-in-the-room understands Chinese, but if we situate Searle and the Chinese Room in a robot and supplement his rule-book to enable the robot to navigate its environment, receive audio input and provide audio output, and activate manipulators and motors to move about, then the robot, interacting as it does with Chinese speakers, understands Chinese.

  • Even if genuine causal interaction with the environment were required for understanding, the interaction provided by Searle-in-the-robot nevertheless consists of nothing more than the rule-governed manipulation of strings of symbols. It's just that some of these strings of symbols represent data from the robot's camera-eyes, other strings represent data from the robot's microphone-ears, and the manipulated symbol-strings merely--and quite emptily--result in a coordinated series of motor activations.

The Brain Simulator Reply

If we develop circuits to mimic the functionality of human neurophysiology in such a way that we precisely replicate the patterns of nerve-firings we find in native Chinese speakers, then we will have a system that understands Chinese.

  • Wasn't the point of (Strong) Artificial Intelligence that we could build an artificial intelligence without necessarily knowing how the brain underwrites human intelligence? So supposing that we in fact need to precisely mimic the brain seems an odd response, to say the least.
  • Suppose Searle-in-the-room is instead given the task of regulating water-flow through very many pipes in such a way as to perfectly model the activity of nerve-synapses in the native Chinese speaker. Searle-in-the-plumbing has no more understanding of Chinese than Searle-in-the-room.
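To see why Searle thinks even a perfect simulation remains mere rule-following, consider what a single simulated 'nerve-firing' amounts to. The threshold unit below, in the style of McCulloch and Pitts, uses invented weights and inputs purely for illustration:

    # A McCulloch-Pitts-style threshold unit. The weights, inputs, and
    # threshold are invented placeholders, not measured neurophysiology.
    def neuron(inputs, weights, threshold):
        """Fire (1) just in case the weighted sum of inputs reaches the
        threshold; the 'nerve-firing' is nothing over and above this
        arithmetic."""
        return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

    # Chaining such units replicates firing patterns formally, layer by layer.
    inputs = [1.0, 0.0, 1.0]
    layer = [neuron(inputs, [0.5, 0.3, 0.2], 0.6),
             neuron(inputs, [0.1, 0.8, 0.1], 0.6)]
    print(layer)  # [1, 0] -- a pattern of 'firings'

However many such units we wire together, and however faithfully the resulting patterns match a native speaker's, each step is arithmetic performed on numbers: Searle-in-the-plumbing, rendered in silicon.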

Next time we will watch the movie Ex Machina, which neatly illustrates many of the puzzles we've been considering this past week.