Argument against the Chinese Room Argument

A short essay arguing against John Searle's "Chinese room argument".

Author: Hajra Shannon
Reviewer: Paula M. Graham
Jul 19, 2010
In his Chinese room argument, John Searle attempts to prove that a symbol-manipulator cannot be capable of thought. However, his argument is flawed and in fact lends support to the symbol system hypothesis. I will show that even the symbol-manipulator in his extreme example is capable of thought, although such a system would still be inferior to the human brain.
In Searle's thought experiment, an operator sits in an isolated room with rule books, an input slot, and an output slot. He performs various binary operations, manipulating the symbols given to him, and delivers the result of his operations through the output slot. Searle states that the operator is unaware of the purpose of the program he is performing, and that the system cannot understand Chinese because the operator doesn't understand Chinese. Of course the operator cannot understand Chinese, and neither can the books of rules. However, it is my argument that a greater intelligence is formed, one of which the operator need not be aware, and that it is a mistake for Searle to assume that the system is limited to the operator. When the operator starts manipulating symbols as per the instructions in his manuals, he creates a larger system. An intangible intelligence is formed, one that does understand Chinese and can think as a human does.
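To make the point concrete, consider a minimal sketch of the room as a pure rule follower (the Python form and the rule-book entries are my own illustrative assumptions, not part of Searle's example). Nothing in the operate function understands Chinese; it only matches shapes against a table, yet the table-plus-operator system produces sensible Chinese replies:

RULE_BOOK = {
    # Hypothetical entries: to the operator these are only shapes,
    # not sentences with meanings.
    "你好吗": "我很好",
    "你叫什么名字": "我叫乔",
}

def operate(symbols: str) -> str:
    # Mechanically match the input against the rule book; which rule
    # applies depends only on the shape of the symbols.
    return RULE_BOOK.get(symbols, "")

print(operate("你好吗"))  # the room answers, though operate() understands nothing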
To illustrate the same point in another way, consider the human brain: it consists of many individual neurons, and with this biological collection of neurons we can think. Now imagine that every human in the world is given a small cubicle and mail-tubes for input and output. It is every human's responsibility to perform the job of a neuron: with reference to a manual (based on physics and chemistry), they take input and send output to other 'neurons' through their mail-tubes. This would mirror the chemical reactions that occur in neurons as they receive input and 'fire' output depending on the input received. In this way, the human population would act as a group mind. An individual would not necessarily know what the group mind is thinking, yet the group mind would be thinking. If we followed Searle's logic, we would have to conclude that since our neurons do not understand English, we cannot really understand English. This is false, and it points out the error in Searle's argument. In the Chinese room argument, Searle has simplified my example by using only one human, a more complicated manual, and lots of paper. However, even with one participant, the greater 'group mind' does exist outside its constituent parts.
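Each person's job in the cubicle version can be sketched as a single fixed rule (the weights and threshold below are illustrative stand-ins for the physics-and-chemistry manual, not anything from the thought experiment itself):

def cubicle_step(inputs, weights, threshold=1.0):
    # One human following the manual: read the mail-tube inputs,
    # apply the fixed rule, and decide whether to 'fire'.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# No single cubicle knows what the whole network is computing,
# just as no single neuron understands English.
print(cubicle_step([1, 0, 1], [0.6, 0.9, 0.5]))  # fires: 0.6 + 0.5 >= 1.0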
This contradicts John Searle's position that merely manipulating symbols will not enable the device to understand or think. He is incorrect because he assumes that the 'device' in question only encompasses the operator who performs the program. If one realizes that the 'device' is greater than the sum of its parts (even if it only has one operator), one can see that the device does actually have the potential for thought.
Although I believe that the system portrayed in John Searle's "Chinese room" is capable of understanding and thinking, I realize that it is severely limited compared to a human intelligence. It can think and understand, but there are several areas in which humans would obviously be superior to the "Chinese room" system. One of the advantages of our brain is that it can store information for later use. We might, say, remember that Frank hates donuts, and when we are told later that he is eating donuts, we could be surprised, or curious as to why he might have changed his feelings about donuts. In Searle's example, there is no mention of whether the operator, Joe Soap, keeps information from past input somewhere in his journals, so I assume that he does not. If the rules were changed to allow output to be computed from current and previous input, the system could solve this problem.
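Such a change is easy to picture. In the sketch below (my own extension of the example, with placeholder token strings standing in for Chinese sentences), the operator keeps a journal of past inputs, and the rules may consult it, so the Frank-and-donuts surprise becomes expressible:

journal = []  # the operator's running record of past inputs

def operate_with_memory(symbols):
    # Rules may now consult the journal as well as the current input.
    if symbols == "FRANK_EATS_DONUTS" and "FRANK_HATES_DONUTS" in journal:
        reply = "SURPRISE"  # past input changes how current input is handled
    else:
        reply = "NO_COMMENT"
    journal.append(symbols)  # record the input for future rules
    return reply

operate_with_memory("FRANK_HATES_DONUTS")        # noted, no reaction yet
print(operate_with_memory("FRANK_EATS_DONUTS"))  # -> SURPRISE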
Another advantage of the human brain is the ability to learn. Joe Soap's rule books are static, and therefore all incoming input would always be treated the same way. To overcome this problem, either the rule books would have to be editable, or the memory I discussed previously would have to store information about how to deal with input. To continue my neuron-cubicle illustration, I would keep the rule books static (as the physical laws are) but allow information to be stored in memory that lets different algorithms be implemented using the static library of rules in the manual (a kind of basic logic).
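A sketch of that arrangement (again my own illustration, not anything Searle describes): the rule library stays fixed, like physical law, while memory stores an editable policy saying which rule applies to which input, so the system's behavior can change even though its rules never do:

RULE_LIBRARY = {           # static, never edited
    "echo":   lambda s: s,
    "ignore": lambda s: "",
}

policy = {}  # learned and editable: input pattern -> rule name

def learn(pattern, rule_name):
    # Store in memory which static rule should handle a pattern.
    policy[pattern] = rule_name

def operate_with_learning(symbols):
    rule = RULE_LIBRARY[policy.get(symbols, "ignore")]
    return rule(symbols)

learn("GREETING", "echo")                 # the system's behavior changes...
print(operate_with_learning("GREETING"))  # ...though the rule library never did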
Clearly the system described by John Searle is capable of thought, but it is equally clear that alterations would have to be made before it could match the abilities of the human brain.