Thinking Like a Linguist

Neal Goldfarb
www.LAWnLinguistics.com

Presentation at University of Pennsylvania Law School


October 11, 2016
Few lawyers would rely on high-school civics in arguing a point of constitutional law, or
on high-school biology in handling a case about a biotech patent. But when it comes to
interpreting legal texts such as statutes, lawyers and judges typically fall back on what they
learned in high school or junior high, with an assist from a dictionary or perhaps from
Strunk & White or The Chicago Manual of Style.
Those resources aren't really adequate to the task, because they don't provide the
analytical tools and vocabulary that would enable lawyers and judges to deal with
language-related issues with the same sophistication that they bring to other aspects of legal
analysis. Most lawyers (like most people in general) have never taken a course at higher than
a high-school level in how to analyze language. It is doubtful that such a course has ever
been offered by any law school, and most lawyers are probably unaware that there is
anything significant that such a course could teach.
But there is. Thanks to research in linguistics (and related fields such as lexicography), a
great deal has been learned about how language works: language in general and English in
particular. And that knowledge can be useful in analyzing issues of word meaning or
grammar that arise in legal interpretation. It can provide methods of analysis, as illustrated
by two of the readings. It can also give insight into the semantics of particular grammatical
constructions by making explicit the implicit knowledge that enables native speakers of
English to use and understand the language, as illustrated by the third reading. And beyond
those practical benefits, insights from recent work in lexicography and corpus linguistics
raise serious questions about the courts' near-total reliance on dictionaries to decide questions
of word meaning. (See Lexicographers on dictionaries and word meanings, pages 3-4, below.)
Readings
Brief for the Project on Government Oversight et al. as amici curiae, FCC v. AT&T, Inc., 131
S. Ct. 1177 (2011). Read: Question Presented, Introduction and Summary of Argument,
and Argument sections A and B. (Optional: For more information about the corpora
that are discussed in the brief, see Linguistic corpora, pages 4-5, below.)
Brief for Professors of Linguistics as amici curiae, Rodearmel v. Clinton, 666 F. Supp. 2d 123
(D.D.C. 2009) (3-judge court) (dismissed for lack of standing). The introduction/summary
on pages 1-2 will be more understandable if you read the Statement of the Case
first, so that you understand the issue that the brief deals with.
Syntactic Ambiguity (blog post on LAWnLinguistics).
Background on tree diagrams for the Syntactic Ambiguity post
A sentence (S) can consist of a noun phrase (NP) followed by a verb phrase (VP); a verb
phrase can consist of a verb followed by a noun phrase:

A verb phrase can be modified by an adverb (Adv) or a prepositional phrase (PP) or
both, in which case the verb phrase and its modifier(s) combine to make a larger verb
phrase:
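The tree diagrams themselves do not reproduce well in plain text, but the idea behind them can be sketched in code. The following is not from the blog post; it is a small illustration, with an invented example sentence, of how the phrase-structure rules above can assign two different trees to one and the same string of words, which is what syntactic ambiguity amounts to:

```python
# Two parses of the ambiguous sentence "Ann saw the man with the telescope",
# written as nested tuples of (label, children...). The rules mirror the ones
# above: S -> NP VP, VP -> V NP, and a VP or NP may be modified by a PP.

# Parse 1: the PP "with the telescope" modifies the VP (Ann used the telescope).
vp_attachment = (
    "S",
    ("NP", "Ann"),
    ("VP",
        ("VP", ("V", "saw"), ("NP", "the man")),
        ("PP", "with the telescope")),
)

# Parse 2: the PP modifies the NP "the man" (the man has the telescope).
np_attachment = (
    "S",
    ("NP", "Ann"),
    ("VP",
        ("V", "saw"),
        ("NP",
            ("NP", "the man"),
            ("PP", "with the telescope"))),
)

def words(tree):
    """Read the sentence back off a tree by collecting its leaf strings."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    return " ".join(words(child) for child in children)

# Both trees yield the identical word string; the ambiguity lives entirely
# in the structure, not in the words.
print(words(vp_attachment))  # Ann saw the man with the telescope
print(words(np_attachment))  # Ann saw the man with the telescope
```

The two structures are distinct objects even though they spell out the same sentence, which is why readers can disagree about what the sentence means.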


Lexicographers on dictionaries and word meanings¹

B.T. Sue Atkins & Michael Rundell, The Oxford Guide to Practical
Lexicography (2008). www.lexmasterclass.com/people/
[The convention of dividing dictionary entries into separate numbered senses] rests on two
(unarticulated) assumptions:
first, that there is a sort of Platonic inventory of senses out there (so if the dictionary
says word W has N senses, it can't possibly have N − 1 or N + 2 senses)
second, that each sense is mutually exclusive and has clear boundaries (so if a
specific occurrence of [a word] is assigned to sense 5, it cannot also belong to sense
6).
[N]either of [these] assumptions ... can be sustained in the face of the linguistic evidence.
And in response to these complicated realities, we find that different dictionaries divide up a
word's semantic space in widely differing ways.
The disjunction between lexicographic practice and linguistic reality has often been
commented on. Apresjan, for example, points out that "[d]ictionaries greatly exaggerate the
measure of discreteness of meanings, and are inclined to set clear-cut borders where a closer
examination ... reveals only a vague intermediate area of overlapping meanings" (Apresjan
1973: 9).
***
Most people would agree that words have meanings, sometimes multiple meanings. But
meanings and dictionary senses aren't the same thing at all. Meanings exist in infinite
numbers of discrete communicative events, while the senses in a dictionary represent
lexicographers' attempts to impose some order on this babel. We do this by making
generalizations (or abstractions) from the mass of available language data. These generalizations
aim to make explicit the meaning distinctions which, in normal communication,
humans deal with unconsciously and effortlessly. As such, the senses we describe do not
have (and do not claim) any special status as authoritative statements about language.

B.T. Sue Atkins, Practical Lexicography and its Relation to Dictionary Making,
Dictionaries: Journal of the Dictionary Society of North America (1992/93)
Faced with the overwhelming richness and subtlety of the language in a computerized
corpus, I no longer believe that it is possible to give a faithful, far less a true, account of the
meaning of a word within the constraints of the traditional [dictionary] entry structure.

1. Paragraphing has been altered in some places, and in the interest of readability, not all deletions
have been indicated.


Patrick Hanks, Lexical Analysis: Norms and Exploitations (2013)


www.patrickhanks.com
A modern dictionary, with its neat lists of numbered senses, offers the comforting
prospect of certainty to linguistic inquirers. It suggests, "Here is a menu of choices, a list of
all and only the words of the language, with all and only their true meanings. All you have to
do is to choose the right one, plug it into its linguistic context, and, hey presto!, you have
an interpretation, disambiguated from all other possible interpretations."
Other factors, too, encourage this traditional view of a dictionary entry as a statement of
criteria or conditions (necessary and sufficient conditions) for the correct use of a
word. ... The very word definition implies identifying boundaries: a tool for deciding between
correct and incorrect uses of a word. ... This is a traditional view of word meaning that goes
right back (via Leibniz) to Aristotle's doctrine of essences, essential properties that are
distinguished from accidental properties.
Contrasting with this account is a view of word meaning as an open-ended phenomenon
and of dictionary definitions as vague, impressionistic accounts of word meaning. In the
words of Bolinger, they are
a series of hints and associations ... a nosegay of faded metaphors. (Bolinger 1965)

Any attempt to write a completely analytical definition of any common word in a natural
language is absurd. Experience is far too diverse for that. What a good dictionary offers
instead is a typification: the dictionary definition summarizes what the lexicographer finds
to be the most typical common features, in his [or her] experience, of the use, context, and
collocations of the word. ... Necessary and sufficient conditions are fine for a great number
of purposes in the construction of scientific concepts, but they are defective as tools for the
description of natural language or human cognitive processes.


Linguistic corpora
A linguistic corpus (plural = corpora) is a computerized database of real-world texts that
enables users to research the real-life use of English. With a corpus that is sufficiently large,
it is possible to identify patterns of use that would otherwise be impossible to see.
Several large corpora hosted by Brigham Young University are available for public use
without charge. These include the Corpus of Contemporary American English (COCA)
and the Corpus of Historical American English (COHA). COCA consists of roughly 520
million words taken from more than 160,000 separate texts from the period 1990-2015.
These texts are equally divided between five genres: spoken language, fiction, popular
magazines, newspapers, and academic journals. COHA, in turn, consists of approximately
400 million words from the period 1810s-2000s (20 million words per decade).
Each word in these corpora is tagged with its part of speech (noun, verb, adjective, etc.),
and the interface makes it possible to perform different kinds of linguistically oriented
searches. For example, one can search for the collocates of any given word, i.e., the
words that occur together with the target word. This is useful because seeing a word's
collocates can provide insight into how the word is used and therefore what it means.
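The basic computation behind a collocate search is simple enough to sketch in a few lines. The following is not COCA's interface or method, just an illustration on an invented three-sentence "corpus"; a real corpus search would also use the part-of-speech tags to restrict the results to nouns:

```python
from collections import Counter

# A toy corpus standing in for COCA's texts (these sentences are invented
# for illustration; a real corpus holds hundreds of millions of words).
corpus = [
    "she kept her personal life separate from her personal business",
    "the personal computer changed his personal life",
    "a personal experience shaped her personal life",
]

def collocates_after(target, texts):
    """Count the words that occur immediately after `target`."""
    counts = Counter()
    for text in texts:
        tokens = text.split()
        for i, token in enumerate(tokens[:-1]):
            if token == target:
                counts[tokens[i + 1]] += 1
    return counts

# Rank the right-hand collocates of "personal" by frequency, the same
# ordering the COCA display uses.
for word, freq in collocates_after("personal", corpus).most_common():
    print(word, freq)  # "life" comes out on top, with 3 occurrences
```

Even on this tiny sample, the frequency ranking hints at how the word is used: "personal life" dominates, just as it does in the real COCA results described below.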
This is the search request in COCA that will produce a list of the nouns that are
modified by the word personal, listed in order of frequency:

This is the beginning of the search results:


This is part of the KWIC (Key Word in Context) display for personal life:

A similar display can be called up for each of the other collocates. And for each line in the
KWIC display, it is possible to call up a longer excerpt that provides more of the context.
Justice Thomas Lee of the Utah Supreme Court has used corpus research in concurring
opinions in several statutory-interpretation cases, most notably in State v. Rasabout, 356 P.3d
1258, 1275-90 (Utah 2015) (Lee, J., concurring in part and concurring in the judgment).
However, the rest of the court hasn't yet climbed onto the corpus-linguistics bandwagon.
Despite the Utah court's wariness, the Michigan Supreme Court recently became the
first appellate court in the country (and possibly in the world) to endorse the use of corpus
linguistics in statutory interpretation. People v. Harris, ___ N.W.2d ___, 499 Mich. 332, 2016
WL 3449466, at *5 & nn.29-34 (2016); see also id., 2016 WL 3449466, at *11 n.14 (Markman,
J., concurring in part and dissenting in part).

Further reading:
Stephen C. Mouritsen, The Dictionary Is Not A Fortress: Definitional Fallacies and A Corpus-
Based Approach to Plain Meaning, 2010 B.Y.U. L. Rev. 1915 (2010).
Stephen C. Mouritsen, Hard Cases and Hard Data: Assessing Corpus Linguistics As an
Empirical Path to Plain Meaning, 13 Colum. Sci. & Tech. L. Rev. 156 (2012).
Recent Case, Statutory Interpretation – Interpretative Tools – Utah Supreme Court Debates
Judicial Use of Corpus Linguistics – State v. Rasabout, 356 P.3d 1258 (Utah 2015), 129
Harv. L. Rev. 1468 (2016).
James C. Phillips, Daniel M. Ortner & Thomas R. Lee, Corpus Linguistics & Original Public
Meaning: A New Tool to Make Originalism More Empirical, 126 Yale L.J. Forum 20 (2016),
http://www.yalelawjournal.org/forum/corpus-linguistics-original-public-meaning.
Lawrence M. Solan, Can Corpus Linguistics Help Make Originalism Scientific?, 126 Yale L.J.
Forum 57 (2016), http://www.yalelawjournal.org/forum/can-corpus-linguistics-help-
make-originalism-scientific.
