
Name: Rositsa Dekova   Institution: Department of Modern Foreign Languages, English Section   Degree: Dr.Art.

Course code: KULT 8801   Semester: Spring 2003

Tittel:
Paradigm Shifts in the Linguistic Field in the Second Half of the 20th Century: From Grammar towards Lexicon
This essay presents some of the major paradigms in linguistics in relation to the paradigm shifts that occurred in the second half of the 20th century. The theoretical approach adopted takes as its starting point one of the current theories (Kuhn 1996) on the structure of scientific revolutions. Special attention is paid to the so-called syntax-semantics interface and to the increasingly central role of the lexicon in contemporary linguistic theories. In addition, cognitive and neural approaches to the lexicon (Bates & Goodman 1997, 1999, Elman 1998) are used to evaluate the observed shift from grammar towards the lexicon.

As described by Kuhn, paradigms are scientific works that serve for a time implicitly to define the legitimate problems and methods of a research field for succeeding generations of practitioners. All such works share two essential characteristics that enable them to act as paradigms. The achievement they describe was sufficiently unprecedented to attract an enduring group of followers away from competing modes of scientific activity, and at the same time sufficiently open-ended to leave all sorts of problems for the redefined group of practitioners to solve. These achievements are recounted today in elementary and advanced science textbooks.

Normal science and scientific revolution are two of the main terms introduced in connection with the notion of paradigm. The first applies to research based on one or more paradigms that supply a particular scientific community with the foundation for its further practice. Normal science is founded on the assumption that the scientific community knows what the world is like. Most of the time normal science suppresses fundamental novelties, as they are necessarily disruptive to its basic commitments. But the very nature of normal research ensures that a novelty will not be suppressed for very long. When a problem appears, usually a normal problem, that cannot be solved according to the known rules and procedures, an anomaly is revealed that cannot be aligned with professional expectations. Normal science then goes astray and a scientific revolution begins.

As Shapin (1996) correctly points out, there is no such thing as the Scientific Revolution. The idea of revolution as a radical and irreversible reordering developed together with unidirectional conceptions of time; before that, from antiquity through the early modern period, a revolution invoked the idea of a periodically recurring cycle. And it is true that there cannot be a single coherent story that could capture all the aspects of science or its changes throughout time. There was no singular and discrete event, localised in time and space, that can be pointed to as the Scientific Revolution.

Therefore it is more proper to talk about a scientific revolution or scientific revolutions within a particular research field. In his essay The Structure of Scientific Revolutions, Kuhn describes one such revolution as beginning with extraordinary investigations that at last lead the research field to a new set of commitments, i.e. a new basis for the practice of science. Those episodes in which the shift of professional commitments occurs are referred to as scientific revolutions. In these terms, the usual developmental pattern of normal mature science is the successive transition from one paradigm to another via scientific revolution. And what we can call the pattern of problem solving within a research field is that each of the corresponding schools, deriving strength from its relation to a particular paradigm, emphasises the particular cluster of phenomena that its own theory can explain best. Other observations can be dealt with by ad hoc elaborations or can remain as problems for future research.

Even though linguistics is considered relatively young compared to physics and chemistry, for example, it also fits the description of normal science. Linguistics is the science of language, and one of the frequently evoked definitions calls language a path running from sound to meaning and linguistics the exploration of that path. Dating back to ancient times and to Aristotle, many people cared about the object of linguistics, language, but no one was particularly concerned with defining it in a rigorous way. It was Saussure who first saw language as a social product and differentiated between parole (language in use) and langue (the system behind language use). More abstractly, these terms can be identified as behaviour and grammar: parole is verbal activity (speaking, writing, listening, reading), while langue is the background system that makes linguistic behaviour possible. This was the birth of structuralism, which charted the system underlying speech, not the speech itself.

Later it was Leonard Bloomfield who formulated the methods of the linguistic science into a clearly defined and coherent doctrine. His ideas defined the outlook of linguistics at the time: it was primarily a descriptive science, all the relevant psychological questions would have to be answered by behaviourism, and meaning was outside the scope of scientific inquiry. Yet structuralism was regarded as very successful, giving the fullest representation ever made of the upward-looking technique (starting with sounds and moving upward to morphemes and syntax). The textbooks consisted of chapters devoted to all three: phonology, morphology and syntax (usually called grammar). But owing to the Neo-grammarians, whose efforts were aimed at achieving the perfect description of the words in a language, phonological and morphological analyses were rich and detailed, while syntactic analyses were haphazard and lacked a methodology. This theoretical approach towards language can still be observed within the frameworks describing classical languages, such as Latin, Old Greek, and Old English. By the middle of the 20th century, however, the main gaps in this theory could not remain untouched anymore: meaning and the mind, and their relation to the production and comprehension of language. This is what later came to be known as the syntax-semantics interface.

"I am interested in meaning and I would like to find a way to get at it." That revolutionary statement by Noam Chomsky (cited in The Linguistics Wars, Harris 1993) initiates, in a way, the end of linguistic theories in which anything related to the mind had to be ruled out of science. Meaning, though, entered the science of linguistics very slowly. Chomsky's first transformational book, Syntactic Structures (1957), appeared somewhat as an extension of the existing linguistic theory, and Bloomfieldians easily saw his revisions as a methodological supplement to their concerns. Although the book forced its readers to look at familiar things from a completely different angle, transformational theory was primarily regarded as a very important advance in grammatical thinking, but not as a novelty.

In that first book Chomsky considers a language L to be an infinite set of sentences, each finite in length and constructed out of a finite set of elements. Linguistic analysis must aim at separating the grammatical sequences (which are sentences of L) from the ungrammatical sequences (which are not sentences of L) and at studying the structure of the grammatical sequences. A grammar of L, then, must be able to generate all the grammatical sequences of L and none of the ungrammatical ones. It will thus mirror the behaviour of the speaker who, on the basis of a finite experience with the language, can produce or understand an indefinite number of new sentences. The strongest requirement he puts on the relation between a theory of linguistic structure (in his view the main objective of linguistics) and particular grammars is that the theory must provide a practical and mechanical method for actually constructing the grammar, given a corpus of utterances.

Chomsky describes grammars as having a tripartite structure consisting of three sets (strings) of rules: phrase structure rules, transformational rules and morphophonemic rules. The first sequence of rules reconstructs the phrase structure strings which, via transformational rules, are carried into new strings to which morphophonemic rules can apply. Here are some of the examples of English rules Chomsky provides in his work:

Phrase structure:
  Sentence -> NP + VP
  VP -> Verb + NP
  NP -> NPsing, NPpl
  N -> man, ball, etc.
  Verb -> Aux + V
  V -> hit, take, walk, etc.

Transformational structure (Passive, optional):
  Structural analysis: NP - Aux - V - NP
  Structural change: X1 - X2 - X3 - X4  ->  X4 - X2 + be + en - X3 - by + X1

Morphophonemic structure:
  take -> /teyk/
  take + past -> /tuk/
  past -> /d/
  /D/ + past -> /D/ + /id/   (where D = /t/ or /d/)
  /Cunv/ + past -> /Cunv/ + /t/   (where Cunv is an unvoiced consonant)
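To make the generative idea concrete, here is a minimal sketch of how such a rule system can produce sentences mechanically. It is an illustration only: the Python representation, the simplified NP rule and the tiny lexicon are assumptions of this sketch, not Chomsky's formalism, and the transformational and morphophonemic components are left out.

    # A toy generator driven by phrase structure rules loosely modelled on the
    # examples above; every symbol not listed as a rule is treated as a word.
    import random

    rules = {
        "Sentence": [["NP", "VP"]],
        "VP": [["Verb", "NP"]],
        "NP": [["the", "N"]],              # simplified: no singular/plural split
        "Verb": [["Aux", "V"]],
        "Aux": [["will"], ["can"]],
        "N": [["man"], ["ball"]],
        "V": [["hit"], ["take"], ["walk"]],
    }

    def generate(symbol="Sentence"):
        """Rewrite a symbol by randomly choosing one of its expansions."""
        if symbol not in rules:            # terminal symbol: an actual word
            return [symbol]
        out = []
        for part in random.choice(rules[symbol]):
            out.extend(generate(part))
        return out

    for _ in range(3):
        print(" ".join(generate()))        # e.g. "the man will hit the ball"

Even this toy grammar generates strings it was never explicitly given, which is the point of Chomsky's requirement that a grammar mirror the speaker's ability to produce and understand new sentences.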

Although Chomsky had said he was interested in meaning, the theory outlined in this work was completely formal and non-semantic, and his main conclusion was that grammar is autonomous and independent of meaning. This very first book, however, easily fits the definition of a paradigm: it was sufficiently unprecedented and at the same time sufficiently open-ended to attract an enduring group of followers away from competing modes of the scientific activity of the time. It outlined the first of the four main grammatical models delineating Chomsky's theoretical development, known as the early transformational theory. And in only a few years, contemporary linguistics stopped being called Bloomfieldian and started being referred to as Chomskyan. A clear-cut paradigm shift had occurred.

Yet it was almost ten years later that another of Chomsky's works shook the world of linguistics and has since been placed among the biggest revolutions in the linguistic research community: Aspects of the Theory of Syntax (Chomsky 1965). This book laid down the basic ideas of a grammatical theory that was going to define the usual problems and methods of the research field for many generations of practitioners, and it went by the name of the standard theory. Relating to Saussure's langue-parole distinction, and even further back in time to the Humboldtian conception of underlying competence as a system of generative processes, Chomsky makes the fundamental distinction between competence (the native speaker's knowledge of his language) and performance (the actual use of language in concrete situations). The linguist's aim, then, is to determine from the data of performance the underlying system of rules mastered by the speaker and used in his performance. Thus a grammar of a language must be a description of the ideal speaker's competence. Such a perfectly explicit grammar (one that does not depend on the understanding of the reader, but provides an explicit analysis of his contribution) Chomsky calls a generative grammar.

Since the knowledge of a language involves the ability to understand and produce indefinitely many sentences, a generative grammar must be a set of rules that can generate an indefinitely large number of structures. Chomsky sees this system of rules as analysable into three major components: the syntactic, phonological and semantic components of a generative grammar. The syntactic component specifies an infinite set of abstract formal objects, each of which incorporates all information relevant to a single interpretation of a particular sentence. The syntactic component therefore contains the lexicon, and each lexical item is specified in the lexicon in terms of its intrinsic semantic features. The phonological component determines the phonetic form of a sentence generated by the syntactic rules, and the semantic component provides the semantic interpretation of the sentence. Consequently, the syntactic component of a grammar had to specify, for each sentence, a deep structure that determines its semantic interpretation and a surface structure that determines its phonetic interpretation, interpreted respectively by the semantic and the phonological components. As the theory evolved further, however, the level of deep structure failed to account for the semantic interpretation of the sentence, as intended, and was instead transmuted into a purely syntactic level. The central idea of transformational grammar, though, is that those two structures are distinct and that the surface structure is determined by repeated application of certain formal operations called grammatical transformations.

The main focus was on the grammar rules, and the lexicon component was presented merely as a list of lexical entries, each consisting of a distinctive feature matrix. Thus the relevant information about each item was broken down into combinations of syntactic and semantic distinctive features. For example:

  Noun -> [+N, +/-Common, +/-Animate, +/-Human, +/-Count, +/-Abstract]
    (boy: +Common, +Human, +Count, ...)
    (Peter: -Common, +Human, ...)

  Verb -> [+V, +/-Progressive, +/-Transitive, +/-Abstract-Subject, +/-Animate-Object]

where the last three features can be replaced by a complex symbol containing the feature just in case there is the environment to do so; that is,
these features must be introduced by rewriting rules that are restricted with respect to context.

For example:

  V -> [+V, +Transitive] / __ NP
  V -> [+V, -Transitive] / __ #

Giving the lexicon a place within the syntactic component and introducing the deep structure representation that accounts for the semantic interpretation of the sentence is definitely a step further on the path to meaning, yet only at the very beginning of this path. Though Chomsky's work has been consistently criticised for its non-semantic approach to grammar, it has been a rule model for many linguists for over 40 years. Not only in syntax has his seminal work established the fundamentals of the theory and shaped the outlines for further investigations; it has also given the background for advanced research and a rethinking of the underlying principles within the theories of phonetics and semantics.

In the mid-1970s a new view of phonological structure (Goldsmith 1976) was stated, namely that the unlimited set of phonological structures of a language is the product of an autonomous generative phonological component. This component is built not out of units derivable from syntactic structure by bracket erasure and readjustment, but consists of independent tiers (such as segmental and syllabic structure, metrical structure, and intonational structure) connected by association lines. This view of phonological structure became universally accepted by phonologists.

A student of Chomsky, Ray Jackendoff, wrote his Semantic Interpretation in Generative Grammar (1972) after Chomsky's influential Aspects. In search of a place for semantic interpretation within the already established standard theory of syntax, Jackendoff introduced the basics of the so-called extended standard theory. The main idea was that semantic rules, like syntactic rules, were transformations, but that semantic interpretation does not take place on a single level (the deep structure level proposed by Chomsky). He showed that surface structure played a much more important role in semantic interpretation than had originally been supposed, and that it was not the deep structure alone that determined meaning. It was also Jackendoff who laid the foundation of X-bar Theory, a universally applicable syntactic theory that defines the elements that can be combined to make up syntactic representations and the ways in which they can be combined, that is, the rules of combination. Jackendoff's appeal for a much more serious study of semantics, not one reduced to syntactic terms only, set the outlines for his further theoretical development and gave the basis for some of his future key works.

In the early 1980s Conceptual Semantics was developed (Jackendoff 1983), a new tradition of semantics as a combinatorial system of considerable complexity and subtlety. Thus semantics, too, was viewed as an autonomous generative component, not as derivative of syntactic generativity. Conceptual Semantics is concerned most directly with the form of the internal mental representations that constitute conceptual structure, and with the formal relations between this and other levels of representation. Conceptual structure is seen as the form in which speakers encode their construal of the world. Lexical concepts are the concepts expressed by the words in the sentence, and they are the basic units out of which a sentential concept is construed. Learning a lexical concept, then, is construing a composite expression within the grammar of
lexical concepts, associating it with phonological and syntactic structures, and storing them together in long-term memory as a usable unit for future access. The grammar of lexical concepts consists of a finite group of mental primitives and principles of mental combination that collectively determine the set of lexical concepts. The theory of Conceptual Semantics thus takes conceptual structure to be entirely parallel to the syntactic and phonological structures.

This theory is further developed in Jackendoff's book Semantic Structures (1990), where he pays special attention to the lexicon and its entries and explores the composite nature of conceptual structures. He introduces an organisation of the mental information structure involved in language that includes three autonomous levels of structure: phonological, syntactic and conceptual. Each of these levels has its own organisation into subcomponents, its own set of primitives and principles of combination, and is described by a set of formation rules that generates the well-formed structures of the level. There are also sets of correspondence rules that link the levels, and rules of inference for the conceptual structure domain. In addition, correspondence rules between the linguistic levels and non-linguistic domains are included. These rules determine the mapping from auditory input into phonological structure, as well as the mapping from phonological structure into motor output. On the level of conceptual structure, however, the correspondence rules determine the mapping between conceptual structure and other levels of mental representation that encode the input and output of vision, action, etc.

Alongside, several other frameworks developed (Lexical-Functional Grammar, Head-Driven Phrase Structure Grammar, and Construction Grammar), all of them partly based on Chomsky's theoretical approach and at the same time replacing it with a view they considered more appropriate. The scientific works outlining these frameworks shared the essential characteristics that allow those theories, too, to be defined as paradigms. They can also serve as good examples of the so-called invisibility of revolutions (Kuhn 1996). As described by Kuhn, most scientific revolutions have originally been viewed simply as additions to the existing scientific knowledge, or in other words, as better editions of an already established paradigm. Only at a later stage of their development have they been perceived as new paradigms and the occurrence of a paradigm shift been realised. The peculiarity, though, is the parallel existence of several almost equally influential paradigms in contemporary linguistics, each of them offering a different approach to syntax and the lexicon within the syntax-semantics interface. The linguistic paradigms have undergone a major change in focus. They do not have to be universally accepted to function fully as paradigms; thus the existence of several parallel paradigms is possible and even justified within the linguistic field. Although direct comparison is very difficult, a brief description of those paradigms will allow some of the general differences between them to be presented with respect to the paradigm shifts observed. The main distinction, however, concerns not only the differences in the formal description of language, but also the extent to which the lexicon is involved.
Lexical-Functional Grammar is described (Kaplan & Bresnan 1982, Bresnan 2001) as a theory of grammar with a flexible and mathematically well-defined formalism. It has a constraint-based, parallel correspondence architecture. Unlike transformational grammar, there are no serial derivations and no deep structures. Abstract relations are locally distributed as partial information across words and overt fragments of the structure, and may be monotonically synthesised in any order or in parallel. And the most radical difference, as
compared to other grammar formalisms, is that Lexical-Functional Grammar (hence LFG) is noncompositional, allowing the content of a constituent to vary depending on its context.

The formal model of LFG embodies three general design principles: variability, universality, and monotonicity. The first, variability, states that external structures vary across languages. The formal model of external structure in LFG is the c-structure (constituent structure or categorial structure). Fully inflected words are the terminal elements of the c-structures of sentences, and every word belongs to exactly one node. A commonly used representation of c-structure is the context-free phrase structure tree. An example is S -> NP VP (for English), where the rule may differ from language to language.

The second principle, universality, states that internal structures are largely invariant across languages. The internal structure of a language is where the grammatical relations are represented, that is, how syntactic functions are associated with semantic predicate-argument relations. The formal model of internal structure in LFG is the f-structure (functional structure). At this level appear the concepts of subject (SUBJ), object (OBJ), predicator (PRED), etc., because they abstract away from expressions in terms of external order and category, taking as equivalent all the expressions that behave alike under the mapping to argument structure. The external structure and the internal structure are different but parallel syntactic structures, and the units postulated for each do not generally converge. Since there are mismatches in category and configuration, the theoretical vocabulary labelling the units of the f-structure is disjoint from that of the external structure. In addition to c-structure and f-structure, LFG postulates an a-structure (argument structure) and other dimensions of grammatical structure, such as p-structure (prosodic structure) and σ-structure (semantic structure).

The third principle, monotonicity, regards the correspondence between c-structures and f-structures, which in LFG is a piecewise, monotonic function. The correspondence mapping does not preserve sameness of form. It is designed to preserve inclusion relations between the information expressed by the external structure and the content of the internal structure, which implies that the grammatical relations of parts are preserved in the whole. This allows partial information about abstract internal structure to be locally distributed in such a way that the global internal structure can be inferred from parts of the expression in any order. In general, LFG represents knowledge of language as localised, partial knowledge that is synthesised by constraint satisfaction. Due to its distinctive architectural properties and its explicit mathematical model, LFG has attracted interest among linguists and even beyond linguistics proper, within the fields of psycholinguistics and computational linguistics.

Head-Driven Phrase Structure Grammar (hence HPSG) is described (Pollard & Sag 1987, 1994) as an integrated theory of natural language syntax and semantics. There are a number of similarities between Government-Binding (GB) theory (the latest stage of Chomsky's research framework) and HPSG. In both of them structure is determined by the interaction between lexical entries and parameterised universal principles of grammatical well-formedness, and some of the key principles of GB have direct analogues in HPSG.
At the same time, however, there are many differences between the two theories regarding both global theory architecture and technical details. Unlike GB, HPSG is nonderivational and lacks the notion of transformation; in this respect it sides with LFG. The attributes of linguistic structure in HPSG are related not by movement but rather by structure sharing, that is, token identity between substructures of a given structure, in accordance with lexical specifications or grammatical principles (or complex interactions between the two).
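To make the contrast more tangible, the following sketch shows the kind of attribute-value objects the two theories work with: an LFG-style f-structure and an HPSG-style sign, both rendered as toy Python dictionaries. The attribute names follow the text, but the values, the example sentence and the way structure sharing is mimicked (by reusing the same Python object) are assumptions of this sketch, not the official notation of either theory.

    # An LFG-style f-structure for "Pat faxed Bill the letter": grammatical
    # functions abstract away from surface order and category.
    f_structure = {
        "PRED": "fax <SUBJ, OBJ, OBJ2>",
        "TENSE": "past",
        "SUBJ": {"PRED": "Pat"},
        "OBJ":  {"PRED": "Bill"},
        "OBJ2": {"PRED": "letter", "DEF": True},
    }

    # An HPSG-style sign for the verb. Structure sharing (token identity) is
    # mimicked by putting the *same* objects on the SUBCAT list that the
    # CONTENT value points to, rather than copies of them.
    pat    = {"INDEX": "i", "RESTR": "named Pat"}
    bill   = {"INDEX": "j", "RESTR": "named Bill"}
    letter = {"INDEX": "k", "RESTR": "a letter"}

    verb_sign = {
        "PHON": ["faxed"],
        "SYNSEM": {"LOCAL": {
            "CATEGORY": {"HEAD": "verb", "SUBCAT": [pat, bill, letter]},
            "CONTENT": {"RELN": "fax", "FAXER": pat, "RECIPIENT": bill,
                        "FAXED": letter},
        }},
    }

    # Token identity, not equality of copies:
    local = verb_sign["SYNSEM"]["LOCAL"]
    print(local["CATEGORY"]["SUBCAT"][0] is local["CONTENT"]["FAXER"])  # True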

The number and the nature of the structural levels posited also differ significantly between the two theories. In HPSG it is assumed that all signs (words, sentences, or subsentential phrases) have the attributes PHONOLOGY (PHON) and SYNTAX-SEMANTICS (SYNSEM), and that all phrasal signs have the attribute DAUGHTERS (DTRS) as well. The value of the PHON attribute is assumed to be some kind of feature representation of the sign's sound content that serves as the basis for phonological and phonetic interpretation. The nature of PHON, though, is not explored; PHON values are presented as lists of phoneme strings, or often just as lists of English orthographies. The SYNSEM attribute includes a complex of linguistic information that forms a natural class in the sense that it is precisely this information that has the potential of being subcategorised for by other signs.

Since the value of SYNSEM is another structured object, it has attributes of its own, called LOCAL (LOC) and NONLOCAL (NONLOC). NONLOC information figures centrally in the analysis of unbounded dependency phenomena (wh-questions, relative clauses, it-clefts, etc.). LOC information is in turn divided into the CATEGORY, CONTENT, and CONTEXT attributes. These three pieces of information are viewed as attributes of a single structure because it is assumed that only they are shared between a trace and its filler in an unbounded dependency. Furthermore, the CATEGORY value contains the two attributes HEAD and SUBCAT. The HEAD value of a sign is roughly its part-of-speech information and is always structure-shared with the value of its phrasal projections. The SUBCAT value of a sign is a specification of what other signs it may combine with in order to become saturated. Thus the SUBCAT value is a list of synsem objects, corresponding to the SYNSEM values of the other signs selected as complements (in a broader sense that also includes subjects) by the sign in question. The CONTENT value, together with CONTEXT, specifies the sign's contribution to semantic interpretation. While CONTENT values represent contributions to literal (truth-conditional) meaning, the CONTEXT value (or rather its single attribute BACKGROUND) represents conditions on anchors that correspond to presuppositions or conventional implicatures.

Within the general theory, signs fall into two disjoint subtypes, lexical signs and phrasal signs. In addition, HPSG posits principles of grammar which, when satisfied, specify the well-formed phrases of a given language. These principles can be universal (the Head Feature Principle, the Subcategorization Principle, etc.) or language-specific (principles of constituent ordering). Thus HPSG formulates universal grammar and grammars of particular languages as systems of constraints on linguistic entities, modelled as labelled graphs of feature structures. The lexicon is part of the particular grammar and is seen as a system of lexical entries (possibly interrelated by lexical rules).

Construction Grammar (Fillmore 1988, Goldberg 1995) has grown largely out of work on frame semantics (Fillmore 1975, 1985), and its basic tenet is that traditional constructions are the basic units of language. It is assumed that constructions (as form-meaning correspondences) exist independently of particular verbs.
That is, constructions themselves carry meaning, independently of the words in the sentence, and particular semantic structures together with their associated formal expression must be recognised as constructions independent of the lexical items which instantiate them. Construction Grammar is generative in the sense that it accounts for the fact that there is an infinite number of expressions that must be allowed by the grammar, while there is an infinite number of other expressions that must be ruled out or disallowed. Like LFG and HPSG, Construction Grammar is nontransformational, and no underlying syntactic and semantic forms are posited.

Within the theory of Construction Grammar it is assumed that the basic means of clausal expression in a language is provided by a special subclass of constructions called argument structure constructions. Examples of English argument structure constructions (Goldberg 1995) include the following:

- Ditransitive: X CAUSES Y TO RECEIVE Z; Subj V Obj Obj2 (Pat faxed Bill the letter.)
- Caused Motion: X CAUSES Y TO MOVE Z; Subj V Obj Obl (Pat sneezed the napkin off the table.)
- Conative: X DIRECTS ACTION AT Y; Subj V Obl-at (Sam kicked at Bill.)

Furthermore, the theory of Construction Grammar posits roles as semantically constrained relational slots in the dynamic scene associated with the construction. A distinction is made between participant roles (delimited by the verb's semantics) and argument roles (associated with the construction). This is intended to capture the fact that verbs are associated with frame-specific roles, whereas constructions are associated with more general roles, such as agent, patient, and goal. Participant roles are viewed as instances of the more general argument roles and are expected to capture specific selectional restrictions as well. If a verb is associated with a construction, the participant roles of the verb may be semantically fused with argument roles of the construction, where fusion is meant to capture the simultaneous semantic constraints on participant roles and argument roles. The possibility of roles fusing is therefore determined by the compatibility of their types, and the rules of fusion are governed by two principles:

1. The Semantic Coherence Principle: only roles which are semantically compatible can be fused.
2. The Correspondence Principle: each participant role that is lexically profiled and expressed must be fused with a profiled argument role.

Thus the representation of an argument structure construction is assumed to consist in a pairing between a semantic level and a syntactic level, as shown below (a plain-text rendering of Goldberg's diagram):

  Ditransitive Construction

  Sem:   CAUSE-RECEIVE   < agt    rec    pat  >
         | R (R: instance, means)
  PRED:                  <                    >
                            |      :      |
  Syn:   V                 SUBJ   OBJ    OBJ2

The semantics associated directly with the construction is CAUSE-RECEIVE <agt rec pat>. PRED is a variable that is filled when a particular verb is integrated into the construction. The roles indicated by solid lines are obligatorily fused with participant roles, whereas the roles indicated by a dashed (dotted) line are not obligatorily fused with roles of the verb, that is, they can be contributed by the construction. The type of relation R specifies the way in which
the verb is integrated into the construction. Sometimes a more specific relation such as means or instance can replace R. For example (taken from Goldberg 1995), the verb hand is associated with three profiled participants: hander, handee, handed (where the labels of these roles are of no theoretical significance and are only intended to identify the particular participants in the verb's frame semantics). There will therefore be a one-to-one correspondence between the profiled participants of hand and the profiled argument roles of the ditransitive construction:

  Composite Fused Structure: Ditransitive + hand

  Sem:    CAUSE-RECEIVE  < agt      rec      pat    >
          | R (R: instance, means)
  HAND:                  < hander   handee   handed >
                            |        |        |
  Syn:    V                SUBJ     OBJ      OBJ2
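A small sketch may help to see how the two principles constrain fusion. The dictionaries, role labels and compatibility table below are illustrative assumptions, not Goldberg's formal machinery; the sketch merely pairs each profiled participant role of hand with a compatible argument role of the ditransitive construction and fails if the Correspondence Principle cannot be satisfied.

    DITRANSITIVE = {
        "sem": "CAUSE-RECEIVE",
        # argument roles of the construction and their grammatical functions
        "arg_roles": [("agt", "SUBJ"), ("rec", "OBJ"), ("pat", "OBJ2")],
    }

    HAND = {
        "pred": "hand",
        "participant_roles": ["hander", "handee", "handed"],
        # assumed semantic compatibility of each participant role
        "compatible_with": {"hander": "agt", "handee": "rec", "handed": "pat"},
    }

    def fuse(construction, verb):
        """Fuse participant roles with argument roles.

        Semantic Coherence: only semantically compatible roles may fuse.
        Correspondence: every profiled participant role must be fused with
        a profiled argument role of the construction.
        """
        fused = []
        for participant in verb["participant_roles"]:
            target = verb["compatible_with"].get(participant)
            slot = next(((arg, gf) for arg, gf in construction["arg_roles"]
                         if arg == target), None)
            if slot is None:
                raise ValueError(f"{participant} cannot be fused")
            fused.append((participant, slot[0], slot[1]))
        return {"sem": construction["sem"], "pred": verb["pred"], "roles": fused}

    print(fuse(DITRANSITIVE, HAND))
    # hander fuses with agt/SUBJ, handee with rec/OBJ, handed with pat/OBJ2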

As we see, the main difference that sets Construction Grammar apart from the other two theories is that no strict division is assumed between lexicon and syntax. Both lexical and syntactic constructions are viewed as essentially the same type of declaratively represented data structure: both pair form and meaning.

So far we have observed an apparent trend toward lexicalism in recent linguistic theories. Within each of the three paradigms presented above (LFG, HPSG, and Construction Grammar), the richness and the diversity of linguistic forms within a particular language are captured almost entirely by the lexicon. Furthermore, the lexicon is seen as consisting of complex propositional structures and productive rules that govern the way in which the elements are combined. The basis for the paradigm shifts, however, largely concerned the different approaches to the lexicon as part of the syntax-semantics interface. Thus phenomena previously handled by a separate grammatical component have been moved into the lexicon (LFG, HPSG), and in Construction Grammar the distinction between grammar and lexicon has disappeared altogether.

It took several decades for syntax, phonology and semantics to be recognised and described as parallel autonomous components of language production and understanding. It was finally time to pay due attention to the mind and to the way those three components are organised within human cognitive capacities. Jackendoff explores this conception of mental organisation, the so-called Representational Modularity, in his book The Architecture of the Language Faculty (1997). The main idea of the modular view of language and cognitive capacities is that there is some finite number of distinct modules of the mind, and each of these modules is responsible for a different representational format, a language of the mind. Each of these languages is a formal system with its own set of primitives and principles of combination. Following the idea of Representational Modularity, Jackendoff proposes a tripartite parallel generative architecture where both phonology and semantics generate structures together with syntax. However, he
argues that only phonology and grammar are proper language systems, while concepts are part of the lexicon but not of language itself, since language is not necessary for the use of conceptual structure. There are possible situations where non-linguistic organisms (primates or babies, for example) use conceptual structures as part of the encoding of their understanding of the world. Jackendoff sees conceptual structure as much richer, including other types of thought as well.

He also posits a system of interface modules in addition to the representational modules. An interface between system A and system B should consist of three components:

- a set of representations in system A to which the interface has access;
- a set of representations in system B to which the interface has access;
- a set of A-to-B correspondence rules.

The correspondence rules, however, do not perform derivations in the standard sense of mapping a structure within a given format onto another structure within the same format. Crucially, they map one format of representation onto another, as, for example, phonetic representations onto motor instructions during speech production. Conceptual structure, as seen by Jackendoff, is a central cognitive level of representation, interacting richly with the other central cognitive capacities (auditory, visual, spatial, etc.).

In this view a rethinking of the term lexical item was necessary, and Jackendoff's paper "What's in the Lexicon?" (2001) addresses precisely these issues. A widespread stereotype and a popular conception of language is that the memorised units of language (and therefore the ones stored in the lexicon) are words. Thus the terms lexical item and word are used interchangeably, and the term lexicon is taken to stand for all the words the speaker knows, and therefore to contain only non-predictable features. As it turns out, however, these assumptions deviate from psychological reality. Jackendoff argues that lexical items may be smaller or bigger than grammatical words, that not all grammatical words are lexical items, and that there are lexical items that contain no phonological form. A word is then seen as a tripartite multiple interface rule, where the three components (phonology, syntax and semantics) can exist independently. This can be observed in the so-called defective words, where one of the components is missing. For example:

- interjections (ouch, hello): phonology and semantics, no syntax;
- do-support, expletives (it, there): phonology and syntax, no semantics;
- PRO (the subject of an infinitive, as in He tried (PRO to leave)): syntax and semantics, but no phonology.
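The idea of a word as a bundle of up to three independent components can be sketched as a simple data structure. The record type and the feature values below are assumptions for illustration; only the pattern of missing components follows the examples just given.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LexicalItem:
        phonology: Optional[str]    # None = no phonological form
        syntax: Optional[str]       # None = no syntactic behaviour
        semantics: Optional[str]    # None = no semantic contribution

    items = {
        "ouch":  LexicalItem("/aʊtʃ/", None, "exclamation of pain"),
        "there": LexicalItem("/ðɛr/", "expletive subject NP", None),
        "PRO":   LexicalItem(None, "null subject of infinitive",
                             "controlled argument"),
    }

    for name, item in items.items():
        missing = [c for c in ("phonology", "syntax", "semantics")
                   if getattr(item, c) is None]
        print(f"{name}: defective in {missing}")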

A possible solution is seen in a heterogeneous theory of the composition of the lexicon, distinguishing between productive (for example, derivation and inflection) and semiproductive (irregular verbs) processes, depending on whether morphology resides in the lexicon, in the grammar, or in both. Following this proposal, cat is both a word and a lexical item; -s is a lexical item but not a word; the regular form cats is a word but not a lexical item, being made out of the lexical items cat and -s; and the irregular men is both a word and a lexical item. How is this information mentally represented? It has earlier been speculated that it may be a single defining feature of the item, or sets of necessary and sufficient conditions. It is most likely that mental representations are lists of linguistic features, which in addition may include input from other modalities.
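A minimal sketch of such a heterogeneous lexicon might distinguish stored items from productively composed forms as follows. The entries and the single pluralisation rule are assumptions for illustration; the point is only that cats need not be listed while men is.

    # Lexical items are stored; regular plurals are composed by a productive rule.
    LEXICAL_ITEMS = {
        "cat": {"category": "N"},
        "-s":  {"category": "Affix", "meaning": "plural"},
        "man": {"category": "N"},
        "men": {"category": "N", "number": "plural", "stem": "man"},  # stored whole
    }

    def pluralize(noun: str) -> str:
        """Use a stored irregular plural if the lexicon lists one (semiproductive),
        otherwise compose the plural productively from the noun and the affix."""
        for word, entry in LEXICAL_ITEMS.items():
            if entry.get("stem") == noun and entry.get("number") == "plural":
                return word
        return noun + "s"

    print(pluralize("cat"), pluralize("man"))                 # cats men
    print("cats" in LEXICAL_ITEMS, "men" in LEXICAL_ITEMS)    # False True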

What would be the mental representation of the word cat, for example? Something along the following lines:

  Phonology:                /kæt/
  Syntax:                   Noun
  Conceptual structure:     Thing
  3-D model:                (a picture)
  Auditory representation:  (meow)
  Haptic representation:    (warm, soft fur)
  Etc.
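Rendered as a data structure, such an entry might look roughly as follows; all the field names and values are assumptions of this sketch, meant only to show a single stored unit linking linguistic and non-linguistic representations.

    cat_entry = {
        "phonology": "/kæt/",
        "syntax": {"category": "Noun", "count": True},
        "conceptual_structure": {"type": "Thing", "category": "animal"},
        # links to representations outside language proper
        "spatial_3d_model": "<stored image of a cat>",
        "auditory": "<meow>",
        "haptic": "<warm, soft fur>",
    }

    # Comprehension and production access different sides of the same entry:
    print(cat_entry["phonology"], "->", cat_entry["conceptual_structure"]["type"])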

The crucial points in this lexicalist view are that no modular dissociation between grammar and lexicon is assumed, and that the traditional distance between linguistic theory and psycholinguistic and developmental considerations is reduced. Precisely this sort of interrelationship between grammar and the lexicon has been reported (Bates and Goodman 1997), with evidence from child acquisition and early language development in atypical populations. It has long been a well-known fact that children undergo a rapid acceleration in the growth of their vocabulary, usually between 16 and 20 months of age. What Bates and Goodman discovered is that there is a tight relationship between this vocabulary burst and the emergence of grammar. The evidence showed that the emergence and elaboration of grammar are highly dependent upon vocabulary size throughout this period. In normal children a constant and lawful interchange between lexical and grammatical development is observed. The results for normal children were further compared with studies of early language development in several atypical populations (early talkers, children with focal brain injury, Williams Syndrome and Down Syndrome). The comparison showed that grammar and vocabulary do not dissociate: when one of these developmental landmarks was delayed or accelerated, so was the other. Thus, among the other possible factors playing a role in the lexicon-grammar interface, the interrelation between growth in the lexicon and the emergence of grammar takes an important place.

The lexicalist approach also has theoretical consequences for the problem of the learnability of language, which has traditionally been posed as the question of how children learn rules of grammar, given knowledge of words. This traditional view has to be reconsidered on account of data showing that learning words is often aided by knowledge of how words can be used in well-formed phrases, i.e. knowledge of grammar. Such data (Elman 1998) have been obtained from research on simple recurrent networks within the field of cognitive science. The network's learning goes through three stages. In early learning, the network closely obeys the observed facts and is conservative in its predictions. At the second stage, the network learns generalisations about classes of words, and is thus able to generalise to novel uses of familiar words, treating gaps as accidental. With additional training (stage three), the network learns to identify the gaps as such and to treat them as diagnostic of a systematic property of word usage. Thus an important relationship between growth in the lexicon and the emergence of grammar is observed, reminiscent of that reported by Bates and Goodman (1997) for children.
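For readers unfamiliar with this line of work, the following is a very small sketch of a simple recurrent network of the kind Elman used: the previous hidden state is fed back as context, and the network is trained to predict the next word. The toy vocabulary, corpus, layer sizes and the one-step training procedure are all assumptions of this sketch, not Elman's actual setup.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["boy", "girl", "dog", "sees", "chases", "sleeps", "."]
    V, H = len(vocab), 10                      # vocabulary size, hidden units
    idx = {w: i for i, w in enumerate(vocab)}
    corpus = [["boy", "sees", "dog", "."],
              ["girl", "chases", "dog", "."],
              ["dog", "sleeps", "."]]

    W_xh = rng.normal(0, 0.1, (H, V))          # input-to-hidden weights
    W_hh = rng.normal(0, 0.1, (H, H))          # context (previous hidden) weights
    W_hy = rng.normal(0, 0.1, (V, H))          # hidden-to-output weights

    def one_hot(i):
        v = np.zeros(V)
        v[i] = 1.0
        return v

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    lr = 0.1
    for _ in range(200):                       # training passes over the corpus
        for sent in corpus:
            h = np.zeros(H)                    # context units start empty
            for cur, nxt in zip(sent, sent[1:]):
                x = one_hot(idx[cur])
                h_new = np.tanh(W_xh @ x + W_hh @ h)
                y = softmax(W_hy @ h_new)
                err = y - one_hot(idx[nxt])    # gradient at the softmax output
                dh = (W_hy.T @ err) * (1 - h_new ** 2)
                W_hy -= lr * np.outer(err, h_new)   # one-step (truncated) updates
                W_xh -= lr * np.outer(dh, x)
                W_hh -= lr * np.outer(dh, h)
                h = h_new

    # After training: which words does the network expect to follow "boy"?
    h = np.tanh(W_xh @ one_hot(idx["boy"]) + W_hh @ np.zeros(H))
    probs = softmax(W_hy @ h)
    print(sorted(zip(probs, vocab), reverse=True)[:3])

With a larger vocabulary and corpus, such a network begins to treat unseen word-context combinations as either accidental or systematic gaps, which is the three-stage behaviour described above.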

For the network, to know a word means to know how to use it, i.e. to know its grammatical properties. This knowledge, however, does not require exhaustive experience with all the possible contexts in which the word can appear. When there is sufficient experience with other words which can collectively establish a category, the network will extrapolate to novel usage. Thus overall grammatical knowledge increases with corpus size (hence the increased number of extrapolations to novel uses), independently of direct experience with each individual word. In this respect, the role of vocabulary size in generalisation, and the role of generalisation in supporting grammar, provide an account of the vocabulary-grammar relationship.

Following chronologically the main paradigms defining the linguistic field in the second half of the 20th century, we have also accounted for the paradigm shifts that have occurred in linguistics with respect to the syntax-semantics interface. The crucial novelty, however, is seen in the increasingly central role of the lexicon, which has taken over the role earlier performed by grammar. As Kuhn points out, when paradigms change, this leads to a new perception of the world, as though the world itself had changed. Scientists adopt new instruments and look in new places. Within the paradigm shifts observed, there are also new instruments to use and new places to look. Following the trend toward the lexicon, linguists adopt a more holistic view of how language is placed within the larger domain of intelligent human behaviour and how it functions in synergy with other cognitive capacities. They use data from psycholinguistics, neurolinguistics and cognitive science to study various new aspects of the processes involved in language production and comprehension.

References:
Bates, E. & J. Goodman. 1997. On the inseparability of grammar and lexicon: evidence from acquisition, aphasia, and real-time processing. Language and Cognitive Processes 12(5/6).
Bates, E. & J. Goodman. 1999. On the emergence of grammar from the lexicon. In B. MacWhinney (ed.), The Emergence of Language. Mahwah, NJ: Lawrence Erlbaum.
Bloomfield, L. 1933. Language. New York: Holt, Rinehart & Winston.
Bresnan, J. 2001. Lexical-Functional Syntax. Blackwell Publishers.
Chomsky, N. 1957. Syntactic Structures. 's-Gravenhage: Mouton & Co.
Chomsky, N. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Elman, J. 1998. Generalization, simple recurrent networks, and the emergence of structure. In M. A. Gernsbacher & S. Derry (eds.), Proceedings of the 20th Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates.
Fillmore, C. 1975. An alternative to checklist theories of meaning. BLS 1, 123-131.
Fillmore, C. 1985. Frames and the semantics of understanding. Quaderni di Semantica 6(2), 222-253.
Fillmore, C. 1988. The mechanisms of Construction Grammar. BLS 14, 35-55.
Goldberg, A. 1995. Constructions: A Construction Grammar Approach to Argument Structure. Chicago: University of Chicago Press.
Goldsmith, J. 1976. Autosegmental Phonology. Doctoral dissertation, MIT.
Harris, R. 1993. The Linguistics Wars. Oxford: Oxford University Press.
Jackendoff, R. 1972. Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT Press.
Jackendoff, R. 1983. Semantics and Cognition. Cambridge, MA: MIT Press.
Jackendoff, R. 1992. What is a concept? In Frames, Fields, and Contrasts. Lawrence Erlbaum Associates.
Jackendoff, R. 1997. The Architecture of the Language Faculty. Cambridge, MA: MIT Press.
Jackendoff, R. 2001. What's in the lexicon? In S. Nooteboom et al. (eds.), Storage and Computation in the Language Faculty.
Kaplan, R. & J. Bresnan. 1982. Lexical-Functional Grammar: a formal system for grammatical representation. In The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press.
Kuhn, Th. 1996. The Structure of Scientific Revolutions. 3rd ed. Chicago: University of Chicago Press. (1st ed. 1962.)
Pollard, C. & I. Sag. 1994. Head-Driven Phrase Structure Grammar. Chicago: University of Chicago Press.
Shapin, S. 1996. The Scientific Revolution. Chicago: University of Chicago Press.
