
Minds, Brains and Programs

According to Strong AI, the computer is not merely a tool in the study
of the mind. Rather, the appropriately programmed computer really is a
mind, in the sense that it can literally be said to understand and have
other cognitive states.
Searle here objects to the claims that an appropriately programmed
computer can have cognitive states and that such programs explain
human cognition, referring to Roger Schank's projects.
One can describe Schank's program as follows. The aim of the program
is to simulate the human ability to understand stories. It is characteristic
of human beings to answer questions about a story even when the
information that they give was never explicitly stated in the story.
Schank's machines can similarly answer questions in this fashion. To
do this they have a representation of the sort of information that human
beings have about restaurants, so that the machine will print out answers
of the sort that we would expect human beings to give if told similar stories.
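Schank's actual programs (built around "scripts" encoding default restaurant knowledge) were far more elaborate, but the basic idea can be caricatured in a few lines: stored defaults supply facts the story never states, and answers come from lookup rather than comprehension. Everything below — the rule names, the story, the matching — is invented for illustration only.

```python
# Toy caricature of script-based story "understanding".
# A restaurant "script" holds default expectations for any restaurant story;
# each rule derives an answer from surface cues, not from comprehension.
RESTAURANT_SCRIPT = {
    "did_he_eat": lambda story: "yes" if "ate" in story or "hamburger" in story else "unknown",
    "did_he_pay": lambda story: "no" if "stormed out" in story else "yes",
}

def answer(story: str, question_key: str) -> str:
    """Answer a question by consulting script defaults, never the story's meaning."""
    rule = RESTAURANT_SCRIPT.get(question_key)
    return rule(story) if rule else "unknown"

story = "A man ordered a hamburger; it arrived burned, and he stormed out angrily."
# The story never says whether he paid, yet the script yields an answer:
print(answer(story, "did_he_pay"))  # prints "no"
```

The point of the sketch is that answers which look like inference beyond the text fall out of canned defaults plus pattern matching.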

Partisans of Strong AI claim that the machine can literally be said
to understand the story, and that what the machine and its program
do explains the human ability to understand the story and answer
questions about it.
Both claims seem to Searle to be totally unsupported by Schank's
work. (That is not to say that Schank himself is committed to
these claims.)
One way to test any theory of the mind is to ask oneself what it would
be like if my mind actually worked on the principles that the theory
says all minds work on. Let us apply this test to the Schank program.
Suppose I am locked in a room and given a large batch of Chinese
writing. I know no Chinese. Now suppose that after this first batch
of Chinese writing I am given a second batch together with a set of
rules for correlating the second batch with the first batch. I can [now]
identify the symbols entirely by their shapes. Now suppose also that I
am given a third batch together with some instructions that enable me
to correlate elements of this third batch with the first two batches.
Unknown to me, the people who are giving me all these symbols call
the first batch a "script", the second batch a "story", and the third batch
"questions". They call the symbols I give them back in response to the
third batch "answers to the questions", and the set of rules in English
that they gave me they call "the program".
Nobody just looking at my answers can tell that I don't speak a word
of Chinese. As far as the Chinese is concerned, I simply behave like a
computer.
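The "program" in the thought experiment can be pictured as nothing more than a table pairing input shapes with output shapes. A minimal sketch (the symbols and pairings below are made up for illustration) makes vivid that executing such rules requires no grasp of what any symbol means:

```python
# The rulebook reduces to: match input squiggles, emit output squiggles.
# Whoever (or whatever) executes this lookup understands nothing of Chinese.
RULES = {
    "你好吗": "我很好",     # shapes are paired purely syntactically
    "他吃了吗": "他吃了",
}

def chinese_room(squiggles: str) -> str:
    """Return the symbol string the rulebook pairs with the input shapes."""
    return RULES.get(squiggles, "不知道")  # a default squiggle for unmatched input

print(chinese_room("你好吗"))  # output is indistinguishable from a speaker's answer
```

From outside, the input-output behaviour can look fluent; inside, there is only shape matching.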
In light of the claims above:
1. It is quite obvious in the example that I do not understand a word of
Chinese. I have inputs and outputs that are indistinguishable from those
of the native Chinese speaker, yet I still understand nothing. For the same
reasons, Schank's computer understands nothing of any stories.
2. The claim that the program explains human understanding is false,
because the computer and its program do not provide sufficient
conditions for understanding: the computer and the program are
functioning, and there is no understanding.

1. The Systems Reply: This reply is basically that, in the room
example, understanding is not being ascribed to the mere individual;
rather, it is being ascribed to the whole system of which he is a part.
MY RESPONSE to the systems theory is quite simple: let the
individual internalize all of these elements of the system. We can even
get rid of the room and suppose he works outdoors. All the same, he
understands nothing of the Chinese, and a fortiori neither does the
system, because there isn't anything in the system that isn't in him. If he
doesn't understand, then there is no way the system could understand,
because the system is just a part of him.
A counter to this could be that the man as a formal symbol manipulation
system really does understand Chinese. But the whole point of the original
example was to argue that such symbol manipulation by itself couldn't
be sufficient for understanding Chinese in any literal sense.
The only motivation for saying there must be a subsystem in me that
understands Chinese is that I have a program and I can pass the Turing
test; I can fool native Chinese speakers. But precisely one of the points
at issue is the adequacy of the Turing test. The example shows that
there could be two systems both of which pass the Turing test, but only
one of which understands.
Furthermore, the systems reply would appear to lead to consequences
that are independently absurd. It looks like all sorts of noncognitive
subsystems are going to turn out to be cognitive. If we accept the
systems reply, then it is hard to see how we avoid saying that stomach,
heart, liver, and so on are all understanding subsystems.

2. The Robot Reply (Yale): This argument goes: suppose we wrote a
different kind of program from Schank's program. Suppose we put a
computer inside a robot. This computer would not just take formal
symbols as input and give out formal symbols as output, but would
operate in such a way that the robot does something very much like
perceiving, walking, moving about, anything you like. All of this
would be controlled by its computer brain. Such a robot would,
unlike Schank's computer, have genuine understanding and other
mental states.

REPLY: The addition of such perceptual and motor capacities adds
nothing by way of understanding to Schank's original program. The
robot receives input via its perceptual apparatus, and instructions are
given to its motor apparatus, without the robot knowing any of these
facts. All the robot does is follow formal instructions about
manipulating formal symbols.

3. The Brain Simulator Reply (Berkeley and M.I.T.): This argument
goes: suppose we design a program that doesn't represent information
that we have about the world, but simulates the actual sequence of
neuron firings at the synapses of the brain of a native Chinese speaker
when he understands stories in Chinese and gives answers to the
questions. Surely in such a case we would have to say that the machine
understood the stories; and if we refuse to say that, wouldn't we also
have to deny that native Chinese speakers understood the stories?
REPLY: I thought the whole idea of strong AI is that we don't need to
know how the brain works to know how the mind works. On the
assumptions of strong AI, the mind is to the brain as the program is to
the hardware, and thus we can understand the mind without doing
neurophysiology. If we had to know how the brain worked to do AI, we
wouldn't do AI.
Even so, if we give the man Chinese input, his neurons fire, and
consequently he replies in Chinese, this doesn't mean he
understands. Again, instructions are merely being followed.

4. The Combination Reply (Berkeley and Stanford): This argument
goes: while each of the previous three replies might not be completely
convincing by itself, if you take all three together, they are collectively
more convincing. Imagine a robot with a brain-shaped computer lodged
in its cranial cavity, imagine the computer programmed with all the
synapses of a human brain, imagine the whole behaviour of the robot is
indistinguishable from human behaviour, and now think of the whole
thing as a unified system and not just as a computer with inputs and
outputs. Surely in such a case we would have to ascribe intentionality
to the system.
REPLY: I agree that in such a case we would find it rational to accept
that the robot had intentionality, as long as we knew nothing more
about it. If we knew independently how to account for its behaviour
without such assumptions, we would not attribute intentionality to it,
especially if we knew it had a formal program.
We would regard the robot as an ingenious mechanical dummy. The
hypothesis that the dummy has a mind would now be unwarranted and
unnecessary. The man inside doesn't see what comes into the robot's
eyes, he doesn't intend to move the robot's arm, and he doesn't
understand any of the remarks made to or by the robot.
Comment:
1. The brain is the hardware and the mind is the software. If something goes
wrong with the hardware, it is very difficult to correct it. But if something goes
wrong with the software, like depression, drug addiction, suicidal tendencies,
neurosis, etc., mental hospitals try changing the corrupted software, or may
finally even reboot it.
2. Brain = physical structures.
Mind = subjective impressions generated by processes in the physical structures.
You have a brain when you're dead, but you don't have a mind, since there are
no neural processes and no internally-generated subjective impressions.
3. The mind is most definitely not completely separate from the body. Whenever
something makes you angry, you are convinced you are angry, your body reacts,
and your blood begins to pump faster. You can calm your mood and blood pressure
down by letting out some anger. Therefore the mind is part of the body.