The Chinese Room

The classic argument against the possibility of a machine understanding what it is doing is Searle's Chinese Room Thought Experiment.

To find out what a machine might understand, Searle puts himself in the machine's position and asks: what would I understand in this context?

The Chinese Room Thought Experiment

Searle imagines himself in a locked room where he is given pages with Chinese writing on them. He does not know Chinese; he does not even recognize the writing as Chinese per se. To him, these are meaningless squiggles. But he also has a rule-book, written in English, which dictates just how he should group the Chinese pages he has with any additional Chinese pages he might be given. The rules in the rule-book are purely formal: they tell him that a page with squiggles of this sort should be grouped with a page with squiggles of that sort, but not with squiggles of some other sort. The new groupings mean no more to Searle than the original pages did. It is all just symbol-play, so far as he is concerned. Still, the rule-book is very good. To the Chinese speaker outside the room reading the Searle-processed pages, whatever is in the room is being posed questions in Chinese and is answering them quite satisfactorily, also in Chinese.
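The rule-following Searle describes can be sketched in a few lines of code. The sketch below is illustrative only - the symbol names and the rules are invented for the example - but it makes vivid that the procedure consults shapes alone, never meanings.

```python
# A toy rule-book (hypothetical labels, not Searle's own example):
# each incoming "shape" is paired with an outgoing "shape" purely
# by form. Nothing in this table represents what any symbol means.
RULE_BOOK = {
    "squiggle-A": "squoggle-X",
    "squiggle-B": "squoggle-Y",
    "squiggle-C": "squoggle-Z",
}

def answer(incoming_symbols):
    """Apply the rule-book: match each incoming shape to an outgoing
    shape. The procedure never consults any symbol's meaning."""
    return [RULE_BOOK[s] for s in incoming_symbols]

print(answer(["squiggle-B", "squiggle-A"]))
# → ['squoggle-Y', 'squoggle-X']
```

To the reader outside the room the output is a satisfactory Chinese answer; to the rule-follower it is only shape-matching.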

The analogy, of course, is that a machine is in exactly the same position as Searle. Compare, for instance, Searle to R2D2. The robot is good at matching key features of faces with features stored in its database, but the matching is purely formal in exactly the same way that Searle's matching of pages is purely formal. It could no more be said that the robot recognizes, say, Ted than it could be said that Searle understands Chinese. Even if R2D2 is given a mouth and facial features such that it smiles when it recognizes a friend and frowns when it sees a foe, so that to all outward appearances the robot understands its context and what it is seeing, the robot is not seeing at all. It is merely performing an arithmetical operation - matching pixels in one array with pixels in another array according to purely formal rules - in almost exactly the same way that Searle is matching pages with pages. R2D2 does not understand what it is doing any more than Searle understands what he is doing in following the rule-book.
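This kind of "recognition" can itself be sketched as pure arithmetic. The toy function below (an invented illustration, not the robot's actual program) matches two pixel arrays by counting agreements against a threshold; nothing in it represents Ted, a face, or seeing.

```python
def matches(face, stored, threshold=0.9):
    """Return True when the fraction of agreeing pixels meets the
    threshold. Pure arithmetic over two arrays; no 'seeing' involved."""
    assert len(face) == len(stored)
    agree = sum(1 for a, b in zip(face, stored) if a == b)
    return agree / len(face) >= threshold

ted = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]    # stored template (made up)
seen = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # camera input (made up)
print(matches(seen, ted))
# → True
```

The function outputs "recognition" whenever the arithmetic comes out right, which is exactly Searle's point: formal success is no evidence of understanding.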

It is precisely because R2D2 has no capacity to understand what it is doing that the thought of putting the robot in a psychiatric hospital is absurd. Moreover, if Searle is correct, no amount of redesigning will ever result in a robot which understands what it is doing, since no matter how clever or complicated the rule-book, it is still just a rule-book. Yet if a machine cannot, in principle, understand what it is doing, then it cannot be intelligent.

Searle's Argument

1. If it is possible for machines to be intelligent, then machines must understand what it is that they are doing.
2. Nothing which operates only according to purely formal rules can understand what it is doing.
3. Necessarily, machines operate only according to purely formal rules.
4. Therefore, machines cannot understand what it is that they are doing. (from 2 and 3)
5. Therefore, machines cannot be intelligent. (from 1 and 4)
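The argument is formally valid, whatever one makes of its premises. A minimal sketch in Lean (the proposition names are ours, standing in for "machines can be intelligent", "machines understand what they do", and "machines operate only by formal rules") makes the two inference steps explicit:

```lean
-- Premises 1-3 as hypotheses; the conclusion (5) follows by
-- deriving (4) ¬Understands from p2 and p3, then contraposing p1.
theorem searle_argument
    (Intelligent Understands Formal : Prop)
    (p1 : Intelligent → Understands)   -- premise 1
    (p2 : Formal → ¬Understands)       -- premise 2
    (p3 : Formal)                      -- premise 3
    : ¬Intelligent :=                  -- conclusion 5
  fun h : Intelligent => p2 p3 (p1 h)
```

Since the form is valid, critics of Searle must reject a premise - typically premise 2 or premise 3 - rather than the inference.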

Of course, there have been many, many criticisms of Searle's thought experiment, and in the same article - "Minds, Brains, and Programs" (1980) - Searle presents replies to some of them. Suffice it to say that the Chinese Room Thought Experiment poses a serious challenge to the possibility of Artificial Intelligence.