
A Functionalist Account of Meaning

(A Functional Response to the Chinese Room Argument)

Chris Harden

The goal in this paper is to show that Searle’s Chinese room argument is not enough to

defeat functionalism as a useful explanatory tool for mentality. I also want to answer the primary challenge that Searle’s argument poses to functionalism: the question of whether functionalism can provide an account of the meaning that he claims emerges

from, or is somehow connected to, mentality.

First, I will provide an overview of functionalism and its major claims. This will lead into

an analysis of Searle’s Chinese room argument where I will try to point out the problems with

the argument, as they apply to functionalist claims. I then want to point out a genuine and useful challenge to functionalism that emerges from Searle’s argument and offer some insight into how it might be met.

The brand of functionalism put forth by Hilary Putnam in the late sixties and throughout

the seventies was a direct response to the behavioristic and reductionistic tendencies that had

emerged within the social sciences since the early part of the century. Behaviorism had reduced

the realm of psychological meaning to what were overtly observable input stimuli and behavioral

outputs. The identity theorists further restricted the subject domain to observable psychoneural

activity in the brain. Putnam’s proposal was well suited to synthesize many of the positive

achievements of the behaviorists’ and the identity theorists’ years of empirical research and

observation. His proposal was bold in its claim to account for inner mental events. Both the

behaviorist and identity theorist ardently avoided discussions of inner mental states for more than

half a century. Putnam’s primary argument against reductionism was his view against narrowing
the scope of the domain of what it is to have mentality. This is his classic argument of the

multiple realizability of mental beings.

Putnam’s argument for the multiple realizability of mental properties holds that various biological organisms may experience the same or very similar mental states while being in very different physical states. Pain, for example, may be realized by C-fiber excitation in humans but by a quite different physical or neural state in a creature such as an

octopus. The claim, then, is that a full account of mentality must take into consideration these

variations or possible variations in internal states among all things that might be classified as

having mentality. The goal here is to widen the umbrella of mentality to abstract from biological

and physiological details so that a wider variety of diverse systems can be said to instantiate

similar psychological regularities. It is to ask, “What is it that all mental states share in common by virtue of which they all fall under a single psychological category?”

“I propose the hypothesis that pain, or the state of being in pain, is a functional state

of the whole organism.” (Putnam, The Nature of Mental States, Rosenthal, pg. 199)

According to functionalism, mentality is not just behavioral or neural but rather mentality

is functional. Mentality is here conceived in terms of functional concepts which are specified by

a job description. Pain, for example, is conceived of as a tissue-damage indicator. An organism

then has the capacity to be in a pain state when it is equipped with a mechanism that detects tissue damage and causally responds to that damage. To be sure, the functionalist claim is that mental kinds are causal-functional kinds of phenomena.
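
To make the idea of a functional job description concrete, here is a minimal sketch (in Python, with purely hypothetical names and thresholds of my own) of one role, tissue-damage indication, realized by two physically different mechanisms. What makes each system a pain-detector is the causal role it occupies, not its internal makeup.

    from abc import ABC, abstractmethod

    class TissueDamageIndicator(ABC):
        """The functional 'job description' for pain: detect tissue damage."""

        @abstractmethod
        def detect(self, stimulus: float) -> bool:
            """Return True when the stimulus counts as tissue damage."""

    class HumanRealizer(TissueDamageIndicator):
        # Stands in for C-fiber excitation; the threshold is illustrative.
        def detect(self, stimulus: float) -> bool:
            return stimulus > 0.7

    class OctopusRealizer(TissueDamageIndicator):
        # A physically different mechanism occupying the same causal role.
        def detect(self, stimulus: float) -> bool:
            return stimulus ** 2 > 0.3

    def respond(organism: TissueDamageIndicator, stimulus: float) -> str:
        # What matters functionally is the role: damage in, avoidance out.
        return "withdraw" if organism.detect(stimulus) else "carry on"

    print(respond(HumanRealizer(), 0.9))    # withdraw
    print(respond(OctopusRealizer(), 0.9))  # withdraw, via different innards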

Since functionalism, behaviorism, and mind-brain identity all speak of sensory input and

output or stimuli and responses, functionalism can be conceived as a broader, more generalized
form of these previous behavioristic trends with two important distinctions. Firstly, the

functionalist incorporates internal mental states as something meaningful. These mental states

are taken as real with the power to interact causally with other physiological and overt behavioral

states. Secondly, they differ in how input and output are construed. While the behaviorist can

only accept observable physical stimulus and observable behavioral responses, the functionalist

allows for mental states themselves to play a causal role in the instantiation of a given mental

state. The functionalist takes the position of mental realism in considering mental states as

having true ontological status. In this way functionalism claims to provide a more holistic view

of mentality.

Functionalism was originally formulated by Putnam in terms of Turing machines as

conceived by Alan M. Turing. Simply put, a Turing machine is a simple computational machine

designed to take in some input, process it according to the algorithm laid out in its machine table, and then output some result. The machine table is a complete set of rules governing the machine’s functions, and so it has been suggested that a particular Turing machine may be directly

identified with its machine table or algorithm design. The important thing for functionalism is

that the machine itself along with the various algorithms displayed by the machine table can be

designed in a great variety of ways such that it still produces the same output for a specific input.

It follows, then, that two physical systems that are input-output equivalent may not be

realizations of the same Turing machine. Internal states are conceived in terms of the various

machine designs and algorithms displayed on the machine table.
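
As a toy illustration (my own sketch, not Putnam’s formalism), a machine table can be written as a lookup from (state, symbol) pairs to (write, move, next-state) triples; the machine’s “internal states” are simply the state labels the table mentions.

    # A minimal Turing machine: the machine table maps (state, read_symbol)
    # to (write_symbol, head_move, next_state). This toy table flips a
    # string of 0s and 1s and then halts; "_" marks blank tape.
    TABLE = {
        ("scan", "0"): ("1", 1, "scan"),
        ("scan", "1"): ("0", 1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),
    }

    def run(table, tape_input):
        tape = list(tape_input) + ["_"]
        state, head = "scan", 0
        while state != "halt":
            write, move, state = table[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape).rstrip("_")

    print(run(TABLE, "0110"))  # -> "1001"

A very differently designed table could produce the same outputs for the same inputs, which is why input-output equivalence does not entail realization of the same machine.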

To distinguish an autonomous mentality from a simple deterministic one, Putnam

conceived of the machine table as specifying a set of probabilistic transitions among the machine’s internal states. This is to say that a functional mentality operates over a finite set of possible internal states, each carrying a probability assignment according to the appropriateness of the input/output conditions. This is a rough sketch of a functional conception of mental autonomy, and mental beings are thus conceived in terms of a functional probabilistic automaton.

“Although it goes without saying that the hypothesis is “mechanistic” in its

inspiration, it is a slightly remarkable fact that a system consisting of a body and a “soul”,

if such things there be, can perfectly well be a Probabilistic Automaton.” (Putnam, The

Nature of Mental States, Rosenthal, pg. 200)
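
A probabilistic automaton can be sketched the same way (again an illustrative toy of my own, not Putnam’s formal definition), with table entries giving a distribution over next states rather than a single successor.

    import random

    # Probabilistic machine table:
    # (state, input) -> [(next_state, output, probability), ...].
    # State labels and probabilities are purely illustrative.
    TABLE = {
        ("calm", "sting"): [("pain", "withdraw", 0.9), ("calm", "ignore", 0.1)],
        ("pain", "sting"): [("pain", "withdraw", 1.0)],
        ("pain", "rest"):  [("calm", "relax", 0.8), ("pain", "wince", 0.2)],
        ("calm", "rest"):  [("calm", "relax", 1.0)],
    }

    def step(state, stimulus):
        options = TABLE[(state, stimulus)]
        outcomes = [(nxt, out) for nxt, out, _ in options]
        weights = [p for _, _, p in options]
        return random.choices(outcomes, weights=weights)[0]

    state = "calm"
    for stimulus in ("sting", "sting", "rest"):
        state, output = step(state, stimulus)
        print(stimulus, "->", state, output)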

Machine functionalism, then, is the claim that we can think of the mind precisely in terms

of such Turing machines. The thesis is that an organism displaying mentality is one for which there

exists a Turing machine of appropriate complexity that gives a machine description of it, and its

mental states are to be identified with the internal states of the Turing machine. This says that the

given output of an organism for a given input depends on the internal state of the organism at the time the input is received. The explanatory role of social science depends on

accounting for such internal states. To say that a Turing machine M has a set of internal states (as

laid out by its machine table) <q1, q2, ..., qn>, and that S physically realizes the machine description of M, is to say that there are real physical states in S, <Q1, Q2, ..., Qn>, such that the Qi’s instantiated by S correspond to the internal states <q1, q2, ..., qn> of M, and that for every computational process generated by M an isomorphic causal process occurs in S. Against mind-brain identity theory, this says that what constitutes an organism’s mentality are its computational properties, not the biological properties of its brain.
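
Realization can also be given a toy rendering (my own construction, which simplifies causation down to bare state transitions): a system S realizes a machine M under a mapping phi from S’s states to M’s states when phi commutes with the dynamics.

    # Abstract machine M: states "q1", "q2"; transitions on inputs "a", "b".
    M = {("q1", "a"): "q2", ("q2", "a"): "q1",
         ("q1", "b"): "q1", ("q2", "b"): "q2"}

    # Concrete system S: "physical" states 0 and 1 with its own dynamics.
    S = {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}

    # Proposed realization mapping from S's states to M's states.
    phi = {0: "q1", 1: "q2"}

    def realizes(machine, system, mapping, inputs="ab"):
        """Check the mapping commutes with the dynamics: phi(S-step) == M-step(phi)."""
        return all(
            mapping[system[(s, i)]] == machine[(mapping[s], i)]
            for s in mapping
            for i in inputs
        )

    print(realizes(M, S, phi))  # -> True: S realizes M under phi

A second, physically quite different system could realize the same M under a different mapping, which is the multiple realizability point again.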

The first big problem that needs to be addressed by functionalism is in differentiating

what it means for two subjects to share the same mental state. For the machine functionalist if

two subjects are to instantiate the same mental state they must realize the same Turing machine. If two organisms realize the same Turing machine, then their total psychology must be identical, due to the isomorphic relation stated previously. This is highly counter-intuitive, since the mere fact that two people both believe that snow is white should not lead to the conclusion that they share an identical total psychology.

The first problem will be tackled more easily by looking now at machine functionalism’s

second big problem. This is the problem of how inputs and outputs are to be specified. If it fails

to address these problems, then machine functionalism will be left without the means of

differentiating something with true mentality from a mere computer simulation. In the case of the

former, inputs and outputs consisting merely of strings of symbols are not fitting for something with true mentality. There is thus a clear need to be able to specify what kinds of inputs and

outputs are suitable for an organism with mentality. The machine functionalist would reply that

for an organism that has realized a Turing machine to count as a psychological system, its

input/output specifications must be appropriate.

Back to the first problem, and with the second in view, we can form a weaker functionalist thesis: the input/output specifications of two organisms realizing the same Turing machine may differ, and the individual psychology of each may be sensitive to those specifications. This is to say that to realize the same Turing machine is not to possess an identical total psychology unless the input/output specifications of the two systems are identically appropriate. For a human and an octopus both to be in a pain state does not suggest that they share a totally identical psychology, but rather that there is some simpler Turing machine that serves as a description of both, in which pain is a shared internal machine state. What is necessary, then, is that humans and octopuses share a partial or abbreviated psychology that encompasses pain states. Systems could thus share various psychological states without sharing an identical total psychology; all that is necessary is to exhibit a common sub-Turing machine with a sub-group of isomorphic causal relations.
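
A crude sketch of the sub-machine idea (my own toy, not drawn from Putnam or Kim): two machine tables may disagree overall yet coincide on a shared “pain” sub-table, which is all the weaker thesis requires.

    # Two machine tables that differ overall but share a "pain" sub-machine.
    SHARED_PAIN = {
        ("ok", "damage"): "pain",
        ("pain", "damage"): "pain",
        ("pain", "relief"): "ok",
    }

    HUMAN = {**SHARED_PAIN, ("ok", "joke"): "amused", ("amused", "damage"): "pain"}
    OCTOPUS = {**SHARED_PAIN, ("ok", "ink_cue"): "camouflaged"}

    def shared_submachine(a, b):
        """Return the transitions on which two machine tables agree."""
        return {k: v for k, v in a.items() if b.get(k) == v}

    print(shared_submachine(HUMAN, OCTOPUS))  # -> exactly the pain sub-table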

“Turing’s proposal is to bypass these general theoretical questions about

appropriateness in favor of an “operational test” that looks at the performance capabilities

of computing machines vis-à-vis average humans who, as all sides would agree, are fully

mental.” (Kim, pg. 96)

So Turing devised various forms of what are called Turing tests. His thesis was that if two systems are input-output equivalent, then they are eligible for the same psychological status, which is not to say that they are psychologically identical. What follows from this revamped machine functionalism is that two systems for which one Turing machine provides a

correct machine description enjoy the same degree of mentality. Turing’s thesis, it seems to me,

assists only in blurring the distinction between real mentality and a simulation. John Searle

addressed this issue in terms of a thought experiment now known as his Chinese room argument.

The thought experiment runs as follows: First, picture a person who knows nothing about

the Chinese language and then place this individual into an isolated room with a book of

syntactical rules for combining characters in various ways. For simplicity, assume that the

person has mastered the rule book and knows all of the rules for combining characters. Then an

outside observer could feed sets of Chinese characters into the room and combinations of those

characters could be output by the individual inside the room. If the rule book, mastered by the

person in the isolated room, were the syntactical rules for forming correct grammatical

statements in Chinese, then it would appear to a speaker of Chinese that the person in the room was also a speaker of Chinese. In reality the person in the room has no understanding of the data that she is outputting. The person in the room is operating purely off syntax, whereas a speaker of the

language understands that language in terms of semantics. No meaning or understanding

emerges for the individual in the room and therefore meaning doesn’t emerge syntactically. To

be sure, the distinction then is that computational simulations are purely syntactic whereas

human mentality is semantic.

“But precisely one of the points at issue is the adequacy of the Turing test. The

example shows that there could be two “systems”, both of which pass the Turing test, but

only one of which understands; and it is no argument against this point to say that since

they both pass the Turing test they must both understand, since this claim fails to meet the

argument that the system in me that understands English has a great deal more than the

system that merely processes Chinese.” (Searle, Minds, Brains, and Programs, Rosenthal, pg

513)

“Because the formal symbol manipulations by themselves don’t have any

intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the

symbols don’t symbolize anything. In the linguistic jargon, they have only a syntax but no

semantics.” (Searle, in Rosenthal, pg. 517)
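
The purely syntactic character of the room can be caricatured in a few lines (a deliberately crude sketch of my own, not Searle’s example): the “rule book” pairs input strings with output strings, and nothing in the program refers to what any symbol means.

    # A caricature of the Chinese room: the rule book is a pure
    # string-to-string lookup. Syntax in, syntax out; no semantics anywhere.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # a greeting and its reply
        "今天天气好吗？": "今天天气很好。",  # small talk about the weather
    }

    def room(incoming: str) -> str:
        # The operator matches shapes and emits shapes; understanding plays no role.
        return RULE_BOOK.get(incoming, "请再说一遍。")  # default: "please say it again"

    print(room("你好吗？"))  # -> 我很好，谢谢。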

This argument beautifully shows a very important distinction between computational

simulations and human mentality and thought, but does it really challenge functionalism as a

whole? Clearly implicit within Searle’s argument are the two notions of language and meaning.

His conception of understanding is semantic and not syntactic like that of a computational

machine; he makes this clear within his thesis. For Searle, semantic understanding emerges from

language use and social interaction. Does this mean that only beings with linguistic capacities are

eligible to fall under the category of a being with mentality? If so, then this severely narrows the field of what functionalism could account for as having mentality. Functionalism claims to be able to account for the mentality of an octopus as well as a human, or even hypothetical entities such as Martians, though it is hard to imagine that a pain state for an octopus has the same meaning as it does for a human in a similar pain state. It then seems plausible that there is

a special sub-group of linguistically capable mental beings for whom meaning plays a role in

their psychology. The only known linguistically capable mentality is human mentality. If

functionalism is to take Searle’s challenge seriously then its first step will have to be the

restriction of its subject domain of mental beings strictly to that of human mentality. We could

postulate otherworldly linguistically capable beings and so forth, but for the sake of clarity I shall

restrict the domain of mentality to that of human mentality. The question then seems to be

whether functionalism can account for the emergence of meaning within human mentality.

Before I can address this primary challenge to functionalism I first need to clarify some

of the crucial relationships between language and meaning as Searle himself might understand

them. This is important background for his conception of understanding and meaning that

challenges the functional conception of mentality. First, it is clear in Searle’s work on speech act

theory that he himself would have been familiar with what has become almost an axiomatic

thesis in the modern philosophy of language: that meaning can only be articulated through

language or by some symbolic mediation. Some may want to cling to the notion that their pets express meanings. This may be true, but an easy comparison shows an important distinction. Compare a seven-year-old cat with a seven-year-old child, both conscious and both experiencing pain states. The child has the ability to articulate many things, from the nature of the accident to how she is feeling, whereas the cat is here at a loss. The

previously stated thesis is much weaker than most language philosophers would accept, in that it does not exclude the possibility of non-linguistic beings having meaningful experiences; it

only suggests that linguistically capable beings have the advantage of articulating these

experiences in a verifiable way. The thesis then, to be sure, is that all meaning is articulated

through some linguistic means.

Secondly, Searle would have been quite familiar with Wittgenstein’s private language

argument (P.I., pt. I, sect. 258). It is impossible to harbor one’s own private language. Language

itself presupposes a social world of intersubjective activity. Meaning emerges out of language

being used within a social group with real interpersonal relations and activities. Syntax and

lexicon are after-the-fact formalizations of the organic process of linguistic development. Searle accounts for such social interactions as a necessary part of his own theory of meaning: speech act theory (Martin, pg. 83). Searle’s challenge then presupposes a world of linguistically capable

beings situated within this shared medium and interacting socially with other linguistically

capable beings.

Searle’s challenge forces the functionalists to heavily restrict their subject domain while

at the same time pushing them to broaden the scope of their project. In terms of the former, it

turns the functionalists’ attention strictly towards human mentality or linguistically capable

mentalities. In terms of the latter, it proposes that functionalism should be able to provide more

than a psychology for this special class of mental beings; it fosters the necessity to address functional beings interacting in groups, and thus calls for a functionalist account of social interaction. Previously, functionalism was simply a psychological model for mentality, but

Searle seems to suggest that language and intersubjective relations are important for a full

account of human mentality. This then suggests that functionalism should be able to account for

these things if it is going to claim to provide a full account of human mentality.


It seems to me that at least a minimum level of linguistic presupposition went into

Putnam’s concept of the probabilistic automaton. It seems as though symbolic mediation is a

sort of meta-condition to even have a machine table that instructs the agent internally to do

anything. Language is also presupposed in the comparison by outside observers of one table to

the next and their respective input/output correlations. This seems to suggest that language is at least a necessary condition for the study of, and philosophizing about, mentality. An articulate language

is at least one major thing that separates human beings from other known forms of life. This is

perhaps why we don’t see bears in the wild conducting social studies on their species to learn

about their own bear psychology.

If we take language to be a meta-condition for all human experience then we can agree

that it is at the heart of all attempts to describe any aspect of human agency. For mentality,

language is a necessary condition to even ask questions about or even realize one’s own

mentality. Functional beings (with a linguistic capacity) can be conceived as always already being immersed in language. Language, it seems, is a capacity for biologically sophisticated organisms to articulate meaning, reflect, and therefore become aware of having mentality. All of this needs to be presupposed in any formulation of functionalism. So the more interesting question might be how different linguistic worldviews affect the internal states of agency, not how the internal states of agency can account for language.

As for the social presuppositions, functionalism need only say that these linguistically capable beings, because of their extended capacity to articulate meaning, tend towards a

very complex social existence. Each agent is a functional being interacting with other functional

beings and thus producing the world of social interaction. It is in this realm that the functionalist

can really lay claim to the explication of meaning, for this is the realm in which meaning
emerges. It is as though linguistically capable functional beings form a circle of development: they are taught rules, try them out socially, and thereby come to attach this or that sort of meaning to them, a meaning reinforced by the same usage throughout the community. This

can be taken as a rough conception of a functionalist account of social interaction.

In conclusion, the functionalist thesis can be adjusted as follows: mental beings with a

linguistic capacity also have a special sort of internal state that allows for the articulation of their

thought in terms of language. The meaning associated with such articulation emerges through the

interaction of linguistically capable functional beings with other functional beings possessing the same or a similar linguistic capacity.

On this account, the functionalist can also draw a distinction between a computational

machine and a mentality that “understands”. It shows that distinctions can be made, by the

functionalist, between syntactic engines and semantic engines. The question stands as to whether

the functionalist would want to investigate purely syntactic mentalities, or whether to even

describe a purely syntactic engine as having mentality. What is clear is that in the face of

Searle’s Chinese room argument the functionalist must draw some distinction between

computational machines as simple explanatory models and the complex mental capacities of linguistically capable beings such as human agents.

Bibliography

Fodor, J. A. “Searle on What Only Brains Can Do.” In The Nature of Mind, ed. David M. Rosenthal. Oxford, 1991.

Kim, Jaegwon. Philosophy of Mind. Westview Press, 1998.

Martin, Robert M. The Meaning of Language. MIT, 1987.

Putnam, Hilary. “Brains and Behavior.” In The Nature of Mind, ed. David M. Rosenthal. Oxford, 1991.

Putnam, Hilary. “The Nature of Mental States.” In The Nature of Mind, ed. David M. Rosenthal. Oxford, 1991.

Searle, John R. “Minds, Brains, and Programs.” In The Nature of Mind, ed. David M. Rosenthal. Oxford, 1991.

Wittgenstein, Ludwig. Philosophical Investigations. Oxford, 1958.
