The Chinese Room: Criticisms and Replies

The Systems Reply

Of course Searle-in-the-room doesn't understand Chinese, but the entire room including Searle-in-the-room does.

  • It is difficult to understand how adding a rule-book of symbol manipulations, bits of paper, a pencil, and a room to Searle-in-the-room results in an understanding of Chinese. What could the rest of the apparatus contribute to understanding Chinese?
  • Imagine Searle internalizes the Chinese Room by memorizing the rule-book and performing its manipulations in his head. So internalized, Searle is the system, yet Searle clearly does not understand Chinese.

The Robot Reply

It may be that neither Searle-in-the-room nor the entire room including Searle-in-the-room understands Chinese, but suppose we situate Searle and the Chinese Room inside a robot and supplement his rule-book so that the robot can navigate its environment, receive audio input and produce audio output, and activate manipulators and motors to move about. Then the robot, interacting as it does with Chinese speakers, understands Chinese.

  • While it is true that genuine causal interaction with the environment is required for understanding, the interaction carried out by Searle-in-the-robot nevertheless consists of nothing more than the rule-governed manipulation of strings of symbols. It is just that some of these strings represent data from the robot's camera-eyes, other strings represent data from the robot's microphone-ears, and the manipulated symbol-strings merely, and quite emptily, result in a coordinated series of motor activations.

The Brain Simulator Reply

If we develop circuits that mimic the functionality of human neurophysiology in such a way that we precisely replicate the patterns of nerve-firings found in native Chinese speakers, then we will have a system that understands Chinese.

  • Wasn't the point of (Strong) Artificial Intelligence that we could build an artificial intelligence without necessarily knowing how the brain underwrites human intelligence? That is, if cognitive functions are multiply realizable and Turing Machine Computable, then we shouldn't have to mimic the brain's capacity to 'compute' cognitive functions in order to compute cognitive functions.
  • Suppose Searle-in-the-room is instead given the task of regulating water-flow through very many pipes in such a way as to perfectly model the activity of nerve-synapses in a native Chinese speaker's brain. Searle-in-the-plumbing has no more understanding of Chinese than Searle-in-the-room.

The Combination Reply

Alternatively, consider the brain simulator installed in a robot in such a way that the behavior of the entire system is indistinguishable from the behavior of a native Chinese speaker. Surely the system can correctly be said to understand Chinese.

  • This amounts to giving up on the central claim of (Strong) Artificial Intelligence, namely that a Universal Turing Machine can compute cognitive functions. Yet Searle-in-the-plumbing-in-the-robot does not understand Chinese, and knowing that the system consists of Searle-in-the-plumbing-in-the-robot should disabuse us of the impression that the entire system understands Chinese.

The Other Minds Reply

Since the Chinese Room is behaviorally indistinguishable from a native Chinese speaker, and all one has to go on in attributing an understanding of Chinese to a native speaker is behavior, either the Chinese Room understands Chinese or no one does.

  • We take it as a fact to be explained that native Chinese speakers understand Chinese. The question is not about how we correctly attribute mental states, but what it is we are attributing when we do so. In particular, knowing the inner workings of the Chinese Room convinces us that Searle-in-the-room does not understand Chinese.

The Many Mansions Reply

Artificial Intelligence will eventually develop the technological resources necessary to solve the Chinese Room conundrum.

  • That may be all well and good, but it does nothing to help the project of (Strong) Artificial Intelligence.
  • Indeed, in light of the Church-Turing thesis, the technology anticipated would have to go far beyond anything we can currently conceive. That is, we may be able to develop the technology to hypercompute, but (Strong) Artificial Intelligence claims that we only need Turing Machine Computability to get the job done. (Searle does not quite put it this way, but the point serves to clarify his response.)