
Ling 610: Semantics and Generative Grammar


Foundations and Basic Tools

UMass Amherst, Fall 2005

Christopher Potts

September 8, 2005
Contents

I Foundational issues

1 Phrases, meanings, utterances
  1.1 Conventions
    1.1.1 Syntax
    1.1.2 Semantics
    1.1.3 Pragmatics
  1.2 Contrasts
  1.3 Some common syntax–semantics terminology
  1.4 Questions

2 Truth conditions
  2.1 Basics
    2.1.1 Historical origins
    2.1.2 Flexibility on the topic of truth and falsity
    2.1.3 An illustration and commentary
    2.1.4 Object language and metalanguage
  2.2 Alternatives
    2.2.1 Syntax-alone
    2.2.2 Dynamics and use
    2.2.3 Beyond truth conditions
  2.3 Questions and discussion
    2.3.1 Metalanguages
    2.3.2 Markerese
    2.3.3 Tarski's hierarchy [philosophical question]

3 Compositionality
  3.1 Overview
  3.2 Kinds of compositionality
    3.2.1 Unconstrained compositionality
    3.2.2 Rule-to-rule compositionality
    3.2.3 Context-free semantics
  3.3 Questions and discussion topics
    3.3.1 Idioms
    3.3.2 Indexicals
    3.3.3 (Almost) half-empty or (almost) half-full?

4 Direct and indirect interpretation
  4.1 Overview
  4.2 Direct interpretation
    4.2.1 Schematic overview
    4.2.2 Lexical items, directly interpreted
  4.3 Indirect interpretation
    4.3.1 Lexical items, indirectly interpreted
  4.4 Discussion

II Tools

5 Basic set theory in linguistics
  5.1 Overview
  5.2 Sets
    5.2.1 Notation
    5.2.2 Set membership
    5.2.3 Central properties of sets
    5.2.4 Basic linguistic application
  5.3 Ordered n-tuples
  5.4 Relations
  5.5 Set-theoretic relations
    5.5.1 Intersection
    5.5.2 Union
    5.5.3 Complementation
    5.5.4 Entailment
    5.5.5 Synonymy
  5.6 Questions
    5.6.1 Ditransitive verbs, part 1
    5.6.2 Ditransitive verbs, part 2
    5.6.3 and and ∩
    5.6.4 Challenges for the and-as-∩ hypothesis
    5.6.5 Characterizing modifiers set-theoretically
    5.6.6 Russell's Paradox [philosophical]

6 Propositional logic
  6.1 Why PL?
  6.2 PL: The logic
    6.2.1 Propositional letters [lexicon]
    6.2.2 Well-formed formulae [syntax]
    6.2.3 Interpretation [semantics]
  6.3 Commentary
    6.3.1 Syntax
    6.3.2 Semantics
  6.4 Truth-tables
    6.4.1 Overview
    6.4.2 The tables
    6.4.3 Calculating equivalences
    6.4.4 Intensionality in PL
  6.5 Compositionality
  6.6 Questions
    6.6.1 PL and set theory
    6.6.2 Sheffer stroke
    6.6.3 Exclusive disjunction

7 Functions
  7.1 Technical specifications
  7.2 Functions and sets
    7.2.1 Characteristic sets
    7.2.2 Characteristic functions
  7.3 Why a theory of functions?
    7.3.1 Meanings for one and all
    7.3.2 Ontological flexibility
    7.3.3 Keenan's functional principle
  7.4 Questions
    7.4.1 Characteristic set
    7.4.2 Characteristic function
    7.4.3 Is it a function?

8 Lambda calculus
  8.1 A lambda calculus defined
    8.1.1 Types
    8.1.2 Expressions
    8.1.3 Domains
    8.1.4 Models
    8.1.5 Assignments
    8.1.6 Interpretation
  8.2 Commentary
    8.2.1 Well-formed expressions
    8.2.2 Functional application
    8.2.3 Lambda abstraction
    8.2.4 Assignments
    8.2.5 Substitution and assignment update
  8.3 Questions
    8.3.1 Typing
    8.3.2 Novel types
    8.3.3 All and only Lisa's properties
    8.3.4 Assignments
    8.3.5 Variable names
    8.3.6 Assignments and contexts
    8.3.7 What's wrong with these sentences?
    8.3.8 The job of semantics?
    8.3.9 Building a fragment

9 Semantic types
  9.1 Types: Your semantic workspace
    9.1.1 Syntax
    9.1.2 Semantics
  9.2 Natural language types can be quite high
  9.3 Questions
    9.3.1 A type or not?
    9.3.2 What's the difference?
    9.3.3 Types in PTQ
    9.3.4 Expressive types

10 PL to lambdas
  10.1 Propositional logic from a functional perspective
    10.1.1 Functions
  10.2 Syntax for PL
    10.2.1 Types for PL
    10.2.2 Terms for PL
  10.3 Semantics for PL
    10.3.1 Term interpretation
  10.4 Parsetree comparison
  10.5 Compositional parsetree interpretation

11 Expanding one's expressive power
  11.1 In search of a suitable "machine" (Kaplan 1989:541)
Part I

Foundational issues


Handout 1

Phrases, meanings, utterances


1.1 Conventions

1.1.1 Syntax
Syntactic objects are given in italics, or as trees or labeled bracketings. These are the empirical focus of syntactic (and morphological) theory.

  i. the rhino

  ii.       NP
           /  \
         AP    NP
         |     |
       grumpy  rhino

  iii. grumpy [NP rhino ]

1.1.2 Semantics
Meanings are underlined or simply described in prose.

  i. the proposition that pigs fly
  ii. Pigs fly is a proposition.
  iii. Pigs fly is the meaning of Pigs fly.

1.1.3 Pragmatics
Utterances are inside quotation marks.

  i. When Bart said "Pigs fly", he meant that rhinos dance.
  ii. "Pigs fly!"

1.2 Contrasts

• Utterances are located in space–time and have agents (speakers). Neither phrases nor meanings are located in space–time, nor do they have agents. They are abstract objects.

• Phrases are inherently linguistic. Utterances are events that involve linguistic objects (phrases), but they are not themselves linguistic. Meanings are not linguistic (though they are very easily specified with language).

• Summary

                 located in space–time?   linguistic?
    phrases      no                       yes
    meanings     no                       no
    utterances   yes                      no

1.3 Some common syntax–semantics terminology

    syntax                   semantics
    declarative sentence     proposition
    adjective (phrase)       predicate/property
    verb phrase              predicate/property
    transitive verb          two-place relation
    quantified DP/QP         generalized quantifier

When in doubt, stick 'meaning' onto the end of the syntactic term: 'adjective meaning' is as good as 'property'; 'declarative sentence meaning' is clunky but precise. And so forth.

1.4 Questions
We would like to establish that we need phrases, meanings, and utterances, i.e., that none of these concepts reduces to one or both of the others. An effective way to do this is to find examples of all of the following:

  i. A single phrase that expresses more than one meaning.
  ii. A single meaning that is expressible with more than one phrase.
  iii. A single utterance that expresses more than one meaning.
  iv. A single meaning that is expressible with more than one utterance.
  v. A single utterance that involves more than one phrase.
  vi. A single phrase involved in more than one utterance.


Handout 2

Truth conditions


2.1 Basics
Linguistic semantics tends to be about truth-conditions. The central goal is to obtain a systematic procedure for determining, for each sentence, what conditions would have to be like for that sentence to express a truth.

2.1.1 Historical origins
A. J. Ayer's 'principle of verification' is one version of the central tenet of the logical positivism of Bertrand Russell, the early Wittgenstein, and Gilbert Ryle (among others):

    We say that a sentence is factually significant to any given person if, and only if, he knows how to verify the proposition which it purports to express — that is, if he knows what observations would lead him, under certain conditions, to accept the proposition as being true, or reject it as being false. (Ayer 1936:48)

This sounds much like Tarski, the logician, philosopher, and mathematician who, in the 1920s, authored and explored the first definitions of truth of a sentence (Hodges 2001). Here is an extended excerpt from one of his foundational papers:

    The predicate 'true' is sometimes used to refer to psychological phenomena such as judgments or beliefs, sometimes to certain physical objects, namely, linguistic expressions and specifically sentences, and sometimes to certain ideal entities called 'propositions'. By 'sentence' we understand here what is usually meant in grammar by 'declarative sentence'; as regards the term 'proposition', its meaning is notoriously a subject of lengthy disputations by various philosophers and logicians, and it seems never to have been made quite clear and unambiguous. For several reasons it appears most convenient to apply the term 'true' to sentences, and we shall follow this course.
    Consequently, we must always relate the notion of truth, like that of a sentence, to a specific language; for it is obvious that the same expression which is a true sentence in one language can be false or meaningless in another.
    Of course, the fact that we are interested here primarily in the notion of truth for a sentence does not exclude the possibility of a subsequent extension of this notion to other kinds of objects. (Tarski 1944:537)

It is not far, intellectually or historically, from here to Montague Grammar (Montague 1974). For discussion of the foundations of that theory and its historical origins, I highly recommend Partee 1997.

2.1.2 Flexibility on the topic of truth and falsity
As linguists, we should be wary of direct claims about the nature of reality, truth, or falsity. We're in the business of studying language, not physics, metaphysics, etc. For this reason, we'll use the number 1 for truth and the number 0 for falsity. And we'll remain largely silent on whether truth conditions are determined relative to an external reality, or mental representations, or the mutual beliefs of the participants in a discourse. Our theory should be compatible with all these views. There might, after all, be a place for all of them.

2.1.3 An illustration and commentary
In truth-conditional semantics, we often arrive at equations like the following:

(2.1) The sentence Lisa is a linguist is interpreted as 1 if Lisa is a linguist, else it is interpreted as 0.

In the simple systems we'll study initially, there are just two values for sentences, 1 and 0. This is a simplification; we'll need more and different values before long. But, for now, it means that we can state truth conditions in a way that makes them more obviously like definitions (equality statements):

(2.2) The sentence Lisa is a linguist is interpreted as 1 if and only if (iff) Lisa is a linguist.

The phrase 'if and only if' has the same force as an equal sign. In logic and linguistics, it is often abbreviated to 'iff'. Some equivalent formulations are 'just in case' and 'exactly when'.

2.1.4 Object language and metalanguage
The above equations, (2.1) and (2.2), have an air of circularity about them. After all, the string 'Lisa is a linguist' appears on both sides of 'iff' (our equal sign). But the definition is not in fact circular. Let's look closely at it to see why.
On the left side of 'iff' we have a sentence of a natural language. By convention, in these notes, those are given in italic font. The sentence is one of our objects of study — something from our object language. On the right side of 'iff', we have a statement in the language that we are currently using to make theoretical statements and talk about the properties of our object language. This is our metalanguage. We don't inquire into its properties. We have to rely on our mutual understanding of its terms and structures.


We needn't use English as our metalanguage. And, in fact, we will very often resort to formal languages for this purpose. A disadvantage of using English is that its complexity is formidable — enough so that we can spend our working lives as linguists studying it! It is often easier to have a simpler, more obviously regulated language for stating truth conditions. Set theory (handout 5) is a natural choice; (2.3) is equivalent to (2.1) and (2.2).

(2.3) The sentence Lisa is a linguist is interpreted as 1 iff [Lisa] ∈ {x | x is a linguist}

Statements of set theory have a rigorously defined interpretation. We don't need to rely on mutual understanding to figure out what the metalanguage statement means. We can just look to the definitions. For this reason, it is extremely useful — assuming one's audience is also versed in set theory. But many audiences aren't, making English (or some other natural language) a better choice on many occasions. The bottom line should be this: what's going to best communicate your proposal?

2.2 Alternatives

2.2.1 Syntax-alone
Prior to and during the rise of Montague Grammar (Montague 1974), the generative semanticists were doing pioneering work in uncovering subtle semantic distinctions and generalizations. But, from our perspective, their approach had one strange property: it never got to the point of interpretation. Syntactic structures mapped to semantic structures — expressions of a language called Markerese. But the semantic structures were just highly abstract, feature-laden syntactic structures. They appeared not to be models of anything but themselves.
Here is the philosopher and linguist David Lewis on this point:

    Semantic markers are symbols: items in the vocabulary of an artificial language we may call Semantic Markerese. Semantic interpretation by means of them amounts merely to a translation algorithm from the object language to an auxiliary language Markerese. But we can know the Markerese translation of an English sentence without knowing the first thing about the meaning of the English sentence: namely, the conditions under which it would be true. Semantics with no treatment of truth conditions is not semantics. (Lewis 1976:1)

A specific example helps to highlight the importance of this point. Assume that I speak neither Italian nor Irish. Suppose that I learn that the Irish noun madra translates as cane in Italian. Have I learned the meaning of madra or cane? I have not. I've just learned a bit about the translation function that takes Irish to Italian. To learn the meaning of madra, I need to learn what conditions have to be like for a given object to count as having the property named by madra.
The same is true internal to a language. I might learn that the English words woodchuck and groundhog are synonymous. But if I don't know what it takes for d is a woodchuck or d is a groundhog to be true, then I don't know the meaning of woodchuck or groundhog. It's for this reason that dictionaries are not semantic theories. They provide (language-internal) translations, appealing always, at some point, to their readers' knowledge of semantics.

2.2.2 Dynamics and use
In dynamic semantics, we move beyond the sentence level and begin to study how discourses fit together, how new utterances affect the common ground (shared public beliefs) of the discourse participants. The basic shift is one that concerns the role of truth-conditions: rather than taking truth-conditions to be primary, we consider the context-change potential of a sentence to be its important semantic contribution. A sentence's context-change potential is its potential to affect an information state. The notion is generally derivative of the more basic notion of truth, but the shift is more than one of emphasis. When we take a theory dynamic, we change just about everything about how it works, from lexical entries to modes of semantic composition to general facts and theorems about its design and limitations.
Here's a useful slogan from an influential early work in dynamic semantics:

    the meaning of a sentence does not lie in its truth conditions, but rather in the way that it changes (the representation of) the information of the interpreter. The utterance of a sentence brings us from a certain state of information to another one. The meaning of a sentence lies in the way that it brings about such a transition. (Groenendijk and Stokhof 1991:43)

2.2.3 Beyond truth conditions
Truth-conditional semantics is a partial theory of meaning, in the sense that it has nothing to say about many of the meanings that we perceive. This is widely acknowledged, and linguists have worked hard to reach beyond its boundaries. Theories of pragmatics help us to predict what meanings a given sentence can give rise to relative to a specific context of utterance. Theories of framing or connotations help us to understand how specific words make salient a complex web of additional meanings. And so forth. These are important developments. In these notes, I concentrate on truth-conditions for a handful of reasons. One is historical: modern linguistic semantics is founded on truth-conditionality. But I believe that this historical explanation has an intellectual justification as well. Truth-conditional semantics is an excellent foundation on which to build more complex, more all-encompassing theories of linguistic meaning.

2.3 Questions and discussion

2.3.1 Metalanguages
Provide truth conditions for the English sentences in (2.4) using anything but English as your metalanguage.


(2.4) a. Petersburg is smaller than Moscow.


b. Chris is a linguist.

2.3.2 Markerese
Proponents of markerese (and related approaches) often point out that there are meaning
distinctions that are not truth-conditional. Tautologies (necessary truths) and contradictions
(necessary falsities) provide compelling illustrations:

(2.5) a. Superman is Clark Kent.


b. Two plus two equals four.

(2.6) a. Superman is not Clark Kent.


b. Two plus two equals seven.

First, explain why these might be problematic for truth-conditional approaches. Second,
imagine that you are a proponent of truth-conditional semantics. How would you react to
these examples?

2.3.3 Tarski’s hierarchy [philosophical question]


Suppose that we wanted to analyze our metalanguage using the tools of semantic analysis.
That is, suppose we wanted to turn it into an object language. To do this within the current
framework, we would then require a metalanguage. Okay, suppose we wanted to analyze
that language. . . . We have begun climbing Tarski’s hierarchy. Do you anticipate any
problems, linguistic or philosophical, if we work within this general framework?
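
To close this handout: since any sufficiently explicit formal language can serve as a metalanguage, here is a minimal sketch, in Python, of the kind of statement made in (2.2) and (2.3). The membership facts are stipulated purely for illustration; nothing in this handout fixes them.

    # Truth conditions stated in a formal metalanguage rather than in English.
    # The tiny "model" below (who counts as a linguist) is invented for illustration.
    linguists = {"Lisa", "Chris"}          # the set {x | x is a linguist}, by stipulation

    def interpret_is_a_linguist(subject):
        """Return 1 iff the subject is in the set of linguists, else 0 (cf. (2.2))."""
        return 1 if subject in linguists else 0

    print(interpret_is_a_linguist("Lisa"))   # 1
    print(interpret_is_a_linguist("Bart"))   # 0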


Handout 3

Compositionality


3.1 Overview
Here's a very broad, oft-repeated statement of the principle of compositionality in linguistic semantics:

(3.1) The meaning of an expression is a function of the meanings of its parts and the way they are syntactically combined. (Partee 1984:153)

This seems simple enough. The definition harbors some technical notions behind common language ('is a function of', 'syntactically combined'), but the intuition behind it is clear: we want predictability, and we want determinism. Our meaning for Chris smiled should be unique and it should be fully determined by the meaning of Chris, the meaning of smiled, and some general principle or principles for putting these two meanings together.
Of course, for all we know, Chris and smiled might each be complex expressions themselves. This seems right, in fact, for smiled. But compositionality is in effect here as well: the meaning of smiled should be determined by the meanings of its parts and their mode of combination. Once we have that meaning, we can use it to derive the meaning of Chris smiled.
Where does this process of decomposition end? That's an empirical question. For example, it seems obvious that Chris is semantically atomic. This means that we stipulate its meaning. It's where we bottom out. It also seems likely that smiled is, as suggested above, a complex expression. We could claim that it is made up of the root smile and the past-tense morpheme. But there are plenty of cases in which it is not obvious where we would 'bottom out'. Idioms raise this question: when kick the bucket is used to mean 'die', should we, as semanticists, be looking at the syntactic subphrases of this expression, or should we just assign a meaning to the whole?
The other part of the empirical aspects of compositionality concerns the syntax. In a properly-designed linguistic theory, the 'parts' mentioned in (3.1) should be given to us by the syntactic theory. For these notes (as for most linguists), we'll say that the parts correspond perfectly to the nodes in syntactic structures. This approach has much to recommend it, and it is also a useful way to be precise about how compositional interpretation is controlled.

3.2 Kinds of compositionality
Over the years, linguists, logicians, and semanticists have collectively discovered that there is little or no empirical bite to definition (3.1). Linguists have developed intuitions about what sort of analysis respects it and what sort of analysis doesn't respect it, but these intuitions are perhaps impossible to make precise. Technically speaking, just about anything can be brought into accord with (3.1).

    In its most general form, the principle is nearly uncontroversial; some version of it would appear to be an essential part of any account of how meanings are assigned to novel sentences.
    But the principle can be made precise only in conjunction with an explicit theory of meaning and of syntax, together with a fuller specification of what is required by the relation "is a function of". If the syntax is sufficiently unconstrained and the semantics is sufficiently rich, there seems to be no doubt that languages can be described compositionally. (Partee 1984:153)

But there are alternative statements of the general idea. Let's explore a few of them now. It will turn out that context-free compositionality is the most promising as a genuine, enforceable constraint on analyses.

3.2.1 Unconstrained compositionality
Statement (3.1) above is a version of unconstrained compositionality. For additional statements like it, see Gamut 1991:5–6, Heim and Kratzer 1998:§1, and Janssen 1997, among many others. The idea is central to the work of Richard Montague, perhaps the most important figure in the history of linguistic semantics. Here is Barbara Partee commenting on compositionality in Montague grammar:

    Montague's paper 'Universal Grammar' [UG] [. . . ] contains the most general statement of Montague's formal framework for the description of language. The central idea is that anything that should count as a grammar should be able to be cast in the following form: the syntax is an algebra, the semantics is an algebra, and there is a homomorphism mapping elements of the syntactic algebra onto elements of the semantic algebra. This very general definition leaves a great deal of freedom as to what sorts of things the elements and the operations of these algebras are.
    [. . . ]
    It is the homomorphism requirement, which is in effect the compositionality requirement, that provides the most important constraint on UG in Montague's sense [. . . ]. (Partee 1996:15–16)
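
Here is a minimal sketch of what the homomorphism idea amounts to in practice: the meaning of a mother node is computed from the meanings of its daughters by a general rule. The toy lexicon, the tuple encoding of trees, and the single combination rule are all invented for illustration.

    # A toy compositional interpreter: each node's meaning is a function of the
    # meanings of its daughters and their mode of combination.
    LEXICON = {
        "Chris":  "chris",               # an individual (stipulated)
        "smiled": {"chris", "lisa"},     # the set of smilers (stipulated)
    }

    def interpret(tree):
        """Interpret a binary-branching tree given as nested tuples of words."""
        if isinstance(tree, str):                 # a leaf: look up its stipulated meaning
            return LEXICON[tree]
        left, right = tree                        # a branching node: combine the daughters
        subject, predicate = interpret(left), interpret(right)
        return 1 if subject in predicate else 0   # one general rule: set membership

    print(interpret(("Chris", "smiled")))         # 1: fully determined by the parts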


3.2.2 Rule-to-rule compositionality
(Dowty 2003):

    For each syntactic rule the grammar specifies a unique corresponding semantic rule that applies to the meanings of the input expressions to yield a meaning for the newly formed expression. The nature of each compositional rule is not dependent on the form of the syntactic rule (though it must observe type-theoretic well-formedness).

3.2.3 Context-free semantics
(Dowty 2003):

    When a rule f combines α, β (. . .) to form γ, the corresponding semantic rule g that produces the meaning γ′ of γ, from α′ and β′, may depend only on α′ "as a whole"; it may not depend on any meanings from which α′ was formed compositionally by earlier derivational steps (similarly for β′).

[Note: It is a convention in Montague semantics to use primes for meanings. Thus, α is a syntactic constituent and α′ is its semantics — either a function or a logical expression with a functional interpretation.]

3.3 Questions and discussion topics

3.3.1 Idioms
Complex idioms are obvious challenges for compositionality in any of its forms. It seems that, on their idiomatic uses, none of the expressions in (3.2) has a meaning that is predictable from the meanings of its parts:

(3.2) a. Ed kicked the bucket. (Ed died.)
      b. Ed bought the farm. (Ed died.)
      c. Ed kept tabs on Joe. (Ed tracked Joe's actions.)

One strategy would be to say that phrases like kick the bucket are lexical items. Thus, they are where compositionality bottoms out: atomic items with primitive (non-derived) meanings. But the following examples seem to reveal that at least some idioms are syntactically complex:

(3.3) a. ?? The bucket was kicked by Ed. (slightly odd on the idiomatic reading)
      b. Tabs were kept on Joe (by Ed).

Articulate why these facts are challenging for compositionality, and outline some possible resolutions.

3.3.2 Indexicals
First- and second-person pronouns, items like here and now, and adverbials like actually are called indexicals. One of their most prominent features is that we must appeal to the context — to an utterance situation — to determine what their meanings are on individual instances of use. Does this mean that their semantics is inherently non-compositional? Why or why not? (This is a thorny topic; I'm after a thoughtful discussion, not an articulation of some preconceived, fixed answer.)

3.3.3 (Almost) half-empty or (almost) half-full?
[This exercise is adapted from handouts by Barbara Partee.]

  i. Are the phrases half-empty and half-full synonymous? How can you tell?
  ii. Are the phrases almost half-empty and almost half-full synonymous? How can you tell?
  iii. What are the constituent structures of almost half-empty and almost half-full? Support your answers with evidence.
  iv. How are the answers you provided for (i)–(iii) relevant for compositionality?


Handout 4

Direct and indirect interpretation


4.1 Overview
Very broadly speaking, there are two ways that one can move from a natural-language expression to its interpretation: one can go directly, or one can stop off at an intermediate logical language. There are advantages and disadvantages to both, and it might even be possible to distinguish the two approaches empirically, though that remains one of the great open conceptual questions of linguistic semantics.

4.2 Direct interpretation

4.2.1 Schematic overview
In a system of direct interpretation, we map syntactic nodes directly to model-theoretic objects:

                          [[·]]
(4.1)  natural language    =⇒    model

We use [[·]] for the interpretation function (see handouts 6 and 8).

4.2.2 Lexical items, directly interpreted

(4.2) Bart is interpreted as [a picture of Bart Simpson]

(4.3) Simpson is interpreted as { [pictures of the five Simpsons] }

4.3 Indirect interpretation
In a system of indirect interpretation, syntactic nodes map to logical formulae (expressions in our meaning language). These logical formulae are then interpreted (mapped to) model-theoretic objects:

                                    [[·]]
(4.4)  natural language  ⇝  logic    =⇒    model

We use ⇝ for the translation function.

4.3.1 Lexical items, indirectly interpreted

(4.5) Bart translates as bart
      [[bart]] = [a picture of Bart Simpson]

(4.6) Simpson translates as simpson
      [[simpson]] = { [pictures of the five Simpsons] }

4.4 Discussion
Many practitioners of indirect translation countenance a stopping-off point — the logical formulae — merely for convenience. The logical formulae might have a more obviously systematic structure than the natural-language expressions. Or the researcher might want to stay clear of debates about syntactic phrasing, category-labels, and so forth. In these systems, the assumption is usually that we have two regular, structure-preserving mappings: translation and interpretation. The composition of these two operations is also a regular, structure-preserving mapping. That is, the intermediate step is dispensable.
But it's possible to imagine systems in which the intermediate logical language is not dispensable. It might provide information that is not present in the natural-language syntax but that is nonetheless crucial for interpretation.
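
The contrast between (4.1) and (4.4) can be sketched very roughly as follows. The toy model and the toy logical language (bare constant names) are invented for illustration; the point is only that, when translation and interpretation are both regular mappings, their composition matches the direct route.

    # Stipulated denotations standing in for a model.
    MODEL = {"bart": "BART", "simpson": {"BART", "LISA", "HOMER"}}

    # Direct interpretation: natural language ==> model
    def interpret_directly(word):
        return MODEL[word.lower()]

    # Indirect interpretation: natural language ~~> logic ==> model
    def translate(word):                 # the translation function (the squiggly arrow)
        return word.lower()              # e.g. maps Bart to the logical constant 'bart'

    def interpret_formula(formula):      # the interpretation function [[.]] for the logic
        return MODEL[formula]

    # Composing translation with interpretation gives the same result as going
    # directly -- the sense in which the intermediate step is dispensable.
    assert interpret_directly("Bart") == interpret_formula(translate("Bart"))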

Part II

Tools


Handout 5

Basic set theory in linguistics


5.1 Overview
This handout reviews some basics of set theory. However, its aims are very specific — I merely want to establish a connection between natural language semantics and the tools and concepts of set theory. For a more technical, systematic introduction to set theory in linguistics, I recommend Partee et al. 1993.

5.2 Sets
A set is an abstract collection of objects. These can be real-world objects, concepts, other sets, etc.

5.2.1 Notation

Curly braces
By convention, sets are specified using curly braces. Commas usually separate the members. For example, here is a depiction of the set containing Bart Simpson, the Russian character б, and the number 47:

(5.1)  { [Bart], б, 47 }

And here is a picture of the set whose members are Lisa Simpson and the set above:

(5.2)  { [Lisa], { [Bart], б, 47 } }

The empty set
There are, by convention, two equivalent ways of specifying the empty (null) set: with ∅ (the empty-set symbol) and with { } (empty curly braces). The empty set is simply the set with no members. There is only one empty set.

Venn diagrams
The curly-brace notation is abstract, and it has some misleading properties (e.g., it makes it look as though the objects are ordered, when they are not — see below). And, especially for visual thinkers, it can be hard to work with. Venn diagrams provide a more intuitive depiction of sets. They are simply circles. We'll work with them a lot. For now, here's a Venn diagram of the set specified in (5.2):

(5.3)  [Venn diagram of the set in (5.2): a circle containing Lisa and a smaller circle that contains Bart, б, and 47]

Predicate notation
Very often, we don't know, or can't specify, the complete membership of a set. Predicate notation is useful in these cases. For instance, here is a specification of the set of all natural numbers using predicate notation:

(5.4)  { x | x is a natural number }

This is glossed as 'the set of all x such that x is a natural number'. So the curly braces tell us that we are talking about a set, and the vertical line (sometimes a colon, :) is read as 'such that'. It's important to keep sight of the fact that this specification does not tell us about any specific x. The choice of x as the symbol in this specification is arbitrary. All of the following are equivalent to (5.4):

(5.5)  a. { y | y is a natural number }
       b. { n | n is a natural number }
       c. { † | † is a natural number }

A note of caution, though: use the variable symbol systematically. The following is different from (5.4) and (5.5):

(5.6)  { x | y is a natural number }

To understand this specification, we do need to know what object y picks out. If y is a natural number, then everything is a member of this set. If y is not a natural number, then nothing is a member of this set.
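
The notational conventions above line up closely with familiar programming constructs. Here is a minimal sketch; the particular sets are invented for illustration, and the "predicate notation" analogue has to be restricted to a finite search space.

    finite_set = {"Bart", "b", 47}                            # listing the members, as in (5.1)
    empty_set = set()                                         # the empty set ({} would be an empty dict)
    evens_below_ten = {n for n in range(10) if n % 2 == 0}    # cf. { n | n is even and n < 10 }

    print(47 in finite_set)                    # membership holds: True
    print({"Bart", 47, "b"} == finite_set)     # the order of listing is irrelevant: True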


Recursive definitions
Recursive definitions are another way of specifying infinite sets in a compact yet complete way. Consider for instance the following definition of a very boring fragment of English, here called E:

(5.7)  a. pigs fly is a member of the set E
       b. if S is a member of the set E, then Chris knows S is a member of E
       c. nothing else is a member of E

Here's another way of specifying this set:

(5.8)  { pigs fly, Chris knows pigs fly, Chris knows Chris knows pigs fly, . . . }

The ellipsis dots indicate that the pattern continues in this way.

5.2.2 Set membership
To indicate that an object is a member of a set, we use a rounded lowercase Greek epsilon: ∈. For instance, the following formula asserts that Bart Simpson is a member of the set of Simpsons:

(5.9)  [Bart] ∈ { a | a is a Simpson }
       'Bart is a member of the set of all a such that a is a Simpson'

A slash through the set-membership symbol (or just about any other logical connective) is its negation. Thus, (5.10) asserts that Burns is not a member of the set of Simpsons.

(5.10)  [Burns] ∉ { d | d is a Simpson }
        'Burns is not a member of the set of all d such that d is a Simpson'

5.2.3 Central properties of sets

Unordered
The members of a set are not ordered in any way. So the following are depictions of the same set, namely, the set containing Bart and Lisa:

    { [Bart], [Lisa] }        { [Lisa], [Bart] }

When we induce an ordering on a set, the result is an ordered tuple. These are discussed below.

No repetitions
When specifying a set, repetitions of the same object are meaningless. For example, each of the following depicts the set containing only Bart Simpson:

    { [Bart] }        { [Bart], [Bart] }

If we need repetitions to be meaningful, we must again look to ordered tuples.

5.2.4 Basic linguistic application
The most fundamental application for sets in linguistic semantics is that they provide us with a foundation on which to build meanings for a wide variety of lexical items. The simplest meanings of this sort are those for adjectives, common nouns, and intransitive verbs. For all of these classes of lexical items, we can advance the simple semantic claim that they denote (mean) sets of entities. The adjective orange denotes the set of all orange things, the intransitive verb run denotes the set of all runners, and so forth.
Thus, we are now in a position to state truth conditions (handout 2) for simple sentences using set theory as our metalanguage:

(5.11)  Bart is short is interpreted as 1 iff [Bart] ∈ { y | y is short }

(5.12)  Lisa runs is interpreted as 1 iff [Lisa] ∈ { z | z runs }

(5.13)  Maggie is a baby is interpreted as 1 iff [Maggie] ∈ { u | u is a baby }

5.3 Ordered n-tuples
An ordered n-tuple is a finite sequence of n objects of any kind. We use angled brackets to specify tuples. Thus, the ordered 2-tuple (ordered pair) whose first member is Bart and whose second member is Homer is written like this:

(5.14)  ⟨ [Bart], [Homer] ⟩

Ordered tuples are distinguished from sets in two important ways. First, they are (of course) ordered, so the following are different tuples:

    ⟨ [Bart], [Homer] ⟩        ⟨ [Homer], [Bart] ⟩


Second, repetitions are meaningful for tuples. Example (5.15a) represents the 1-tuple whose first and only member is Marge Simpson, whereas (5.15b) represents the ordered pair both of whose members are Marge Simpson:

(5.15)  a. ⟨ [Marge] ⟩
        b. ⟨ [Marge], [Marge] ⟩

Ordered tuples are the members of relations, which are another fundamental building block of meaning. Relations are our next topic.

5.4 Relations
Transitive verbs, like intransitive verbs (and nouns and adjectives), are interpreted as sets. But now the sets contain ordered pairs. For example:

(5.16)  tease is interpreted as { ⟨[picture], [picture]⟩, ⟨[picture], [picture]⟩, ⟨[picture], [picture]⟩ }

(5.17)  annoy is interpreted as { ⟨[picture], [picture]⟩, ⟨[picture], [picture]⟩ }

(5.18)  confuse is interpreted as { ⟨[picture], [picture]⟩ }

More generally

(5.19)  tease is interpreted as { ⟨x, y⟩ | x teases y }

5.5 Set-theoretic relations
Now that we have a grip on sets, we can start to study the relationships that hold between sets. Our motivation for this is the notion that these relations might provide us with ready-made meanings for certain lexical items found in natural languages. Let's proceed with healthy skepticism, though.

5.5.1 Intersection

(5.20)  Set intersection
        The intersection of a set A with a set B is the set of all things that are in both A and B. In symbols, A ∩ B.

Here are two equivalent ways of specifying the intersection of the set containing Bart and Lisa with the set containing Lisa, Maggie, and the number 17.

(5.21)  a. { [Bart], [Lisa] } ∩ { [Lisa], [Maggie], 17 }
        b. [Venn diagram of the two sets, with their intersection drawn as the smallest circle]

The intersection is the smallest circle, the one containing just Lisa.

Hypothesis   The English word and is interpreted as ∩.

5.5.2 Union

(5.22)  Set union
        The union of a set A with a set B is the set of all things that are in A or B. In symbols, A ∪ B.

Here are two equivalent ways of specifying the union of the set containing Bart and Lisa with the set containing Lisa, Maggie, and the number 17.

(5.23)  a. { [Bart], [Lisa] } ∪ { [Lisa], [Maggie], 17 }
        b. [Venn diagram of the two sets; the union takes in everything in either circle: Bart, Lisa, Maggie, and 17]

Hypothesis   The English word or is interpreted as ∪.

5.5.3 Complementation

The universe of discourse
To define negation, we must first establish a domain of entities to talk about. The domain is a set of individuals. I use U to pick it out:

(5.24)  U = the set of all entities in the discourse
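
Before defining complementation, here is a minimal sketch of §§5.4–5.5.2: predicates as sets, transitive verbs as sets of ordered pairs, and ∩ and ∪ as candidate meanings for and and or. The membership facts are stipulated purely for illustration.

    bart_lisa      = {"Bart", "Lisa"}
    lisa_maggie_17 = {"Lisa", "Maggie", 17}

    print(bart_lisa & lisa_maggie_17)      # intersection, as in (5.21a): {'Lisa'}
    print(bart_lisa | lisa_maggie_17)      # union, as in (5.23a)

    # A transitive verb as a set of ordered pairs, as in (5.19):
    tease = {("Bart", "Lisa"), ("Bart", "Maggie")}

    def holds(relation, x, y):
        """1 iff the pair <x, y> is in the relation."""
        return 1 if (x, y) in relation else 0

    print(holds(tease, "Bart", "Lisa"))    # 1
    print(holds(tease, "Lisa", "Bart"))    # 0: order matters for tuples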


Set-theoretic difference
The difference between two sets A and B is defined as follows:

(5.25)  A − B = { x | x is in A and x is not in B }

When the first argument to − is the universe U, this operation is called set complementation.

Linguistic connection: Negated predicates
Recall that our predicates pick out sets of entities — subsets of the domain of discourse U. So, what it means to be happy is simply to be a member of the set of happy things. What does it mean to be not happy? Our hypothesis is that it means being a member of the complement of the set of happy things:

(5.26)  for any predicate P with semantic value A, not P is interpreted as U − A

An example

  i.   U = { [pictures of six individuals] }
  ii.  happy is interpreted as { [three of those individuals] }
  iii. not happy is interpreted as U − { [those three] } = { [the other three] }

5.5.4 Entailment
A sentence S entails a sentence S′ if and only if the truth of S necessarily (in all imaginable situations) guarantees the truth of S′.

The subset relation   We can model entailments using the subset relation between sets. A is a subset of B if and only if every member of A is also a member of B. In symbols, A ⊆ B.

Some subset relations

(5.27)  Homer whines and complains entails Homer whines, because
        ( { x | x whines } ∩ { y | y complains } ) ⊆ { x | x whines }

(5.28)  { x | x is a bachelor } ⊆ { y | y is unmarried }

A special case to keep in mind   Every set is a subset of itself (A ⊆ A, for any set A).

5.5.5 Synonymy
Two predicates are synonymous if and only if their interpretations are necessarily identical. We can use ⊆ to express equality, which is our current theory of synonymy.

(5.29)  A = B iff A ⊆ B and B ⊆ A

For example

(5.30)  bachelor is synonymous with unmarried man
        { x | x is a bachelor } = { y | y is unmarried } ∩ { z | z is a man }

5.6 Questions

5.6.1 Ditransitive verbs, part 1
What kind of denotation should we assign to ditransitive verbs like give and show? What are the truth conditions for Bart introduced Lisa to Burns?

5.6.2 Ditransitive verbs, part 2
Does the view of ditransitive verbs developed here allow us to specify the meanings of VPs (i.e., the result of the verb combining with its direct object) in a compositional manner? If so, how do we do this? If not, why not?

5.6.3 and and ∩
Assume that the meaning of and is ∩. What is the denotation of the coordinate phrase admire and respect? What are the truth conditions for Marge admires and respects Lisa?


5.6.4 Challenges for the and-as-∩ hypothesis
We seem to have initial motivation for the hypothesis that the meaning of and is ∩. But some examples seem to challenge this idea too. What is the relevance of the facts about ∩ in (5.31) to the examples in (5.32)? What is the appropriate response to this situation?

(5.31)  a. ∩ is commutative:
           A ∩ B is equivalent to B ∩ A for all sets A and B
        b. ∩ is associative:
           (A ∩ B) ∩ C is equivalent to A ∩ (B ∩ C) for all sets A, B, and C

(5.32)  a. Ed studies Russian and studies German.
        b. Ed studies Russian and teaches German.
        c. Ed doesn't study Russian and German.
        d. Driving home and drinking three beers is less dangerous than drinking three beers and driving home.
        e. Sue got married and fell in love.
        f. Sue fell in love and got married.
        g. Ed likes peanut butter and jelly and pickles.

5.6.5 Characterizing modifiers set-theoretically
Match each adjective with the meaning postulate governing its behavior (and others in its class).

    Canadian       [[ADJ N]] = [[N]]
    counterfeit    [[ADJ N]] = [[ADJ]] ∩ [[N]]
    skillful       [[ADJ]] ∩ [[N]] = ∅
                   [[ADJ N]] ⊆ [[N]]
                   [[ADJ N]] ⊆ [[ADJ]]
                   [[ADJ N]] = [[ADJ]] ∪ [[N]]

Bonus (not required): do any of the unused meaning postulates on the right correctly characterize real natural language items?

5.6.6 Russell's Paradox [philosophical]
It was once thought that any property could define a set. The philosopher Bertrand Russell exploded the notion. He asked us to consider the following set, call it A.

(5.33)  A = { X | X ∉ X }

The set A seems sensible enough. The set of all linguists is not itself a linguist, so it is a member of A. In contrast, the set of all abstract concepts is an abstract concept, so it is a member of itself, hence it is not a member of A.
But what about the set A itself? Herein lies the paradox. Assume A is a member of itself, i.e., that it is true that A ∈ A. The only way that can be is if A ∉ A, as this is the defining property of A. Now we've asserted both A ∈ A and A ∉ A. Contradiction.
Okay, assume that A is not a member of itself, i.e., that A ∉ A. But this is the defining property for A. Hence, we must conclude A ∈ A. We've again contradicted ourselves.
Do you see a way out of the paradox?
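
Looking back at §§5.5.3–5.5.5, here is a minimal sketch of complementation, entailment-as-subsethood, and synonymy-as-mutual-subsethood. The universe U and the predicate denotations are invented for illustration.

    U         = {"Homer", "Marge", "Bart", "Lisa", "Maggie", "Burns"}
    happy     = {"Bart", "Lisa", "Maggie"}
    whines    = {"Homer", "Bart"}
    complains = {"Homer"}

    not_happy = U - happy                       # (5.26): not P denotes U - A
    print(not_happy)                            # the three individuals outside 'happy'

    # (5.27): "whines and complains" entails "whines", via the subset relation.
    print((whines & complains) <= whines)       # True: the intersection is a subset

    # (5.29): identity (our stand-in for synonymy) as mutual subsethood.
    def synonymous(a, b):
        return a <= b and b <= a

    print(synonymous(happy, {"Maggie", "Lisa", "Bart"}))   # True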


Handout 6

Propositional logic


6.1 Why PL?
Propositional logic (PL) is a very simple tool, but one that is nonetheless surprisingly useful. It provides a helpful illustration of the principle of compositionality (and its leniency). And it is the logical foundation for the more flexible and widely employed logics that we will ultimately use to develop semantic theories (lambda calculi and Discourse Representation Theory).

6.2 PL: The logic

6.2.1 Propositional letters [lexicon]

    p, q, r, p′, q′, r′, . . .

6.2.2 Well-formed formulae [syntax]

  i. Every propositional letter is a sentence of PL. [atomic formulae]
  ii. If φ is a sentence of PL, then ¬φ is a sentence of PL. [unary connective]
  iii. If φ and ψ are sentences of PL, then all of the following are also sentences: [binary connectives]
       a. (φ ∧ ψ)
       b. (φ ∨ ψ)
       c. (φ → ψ)
       d. (φ ↔ ψ)
  iv. Only that which can be generated by these clauses (= the grammar) in a finite number of steps is a sentence of PL.

6.2.3 Interpretation [semantics]

  i. If p is a propositional variable, then [[p]] is 1 or 0.
  ii. [[¬φ]] = 1 iff [[φ]] = 0
  iii. [[φ ∧ ψ]] = 1 iff [[φ]] = 1 and [[ψ]] = 1
  iv. [[φ ∨ ψ]] = 1 iff [[φ]] = 1 or [[ψ]] = 1
  v. [[φ → ψ]] = 1 iff [[φ]] = 0 or [[ψ]] = 1
  vi. [[φ ↔ ψ]] = 1 iff [[φ]] = [[ψ]]

6.3 Commentary

6.3.1 Syntax

  i. In definition 6.2.2, φ and ψ are metavariables. They stand for sentences of PL of any size and complexity. Anything that can be built using the rules can be put in the place of a metavariable (and nothing else can be so placed).
  ii. An important difference between PL and any natural language: for PL, it is not an empirical matter whether a string is in the language or not. For NLs, it is an empirical matter: we must survey speakers and probe intuitions.
  iii. Like natural-language sentences, PL sentences have tree structure (daughters shown indented beneath their mothers):

        ((p ∧ (p ↔ q)) → ¬r)
            (p ∧ (p ↔ q))
                p
                ∧
                (p ↔ q)
                    p
                    ↔
                    q
            →
            ¬r
                ¬
                r

6.3.2 Semantics

  i. Without a semantics, PL would just be a set of symbols, arranged in an orderly way but without any meaning.
  ii. The interpretation function [[·]] takes all and only the well-formed formulae of PL as its arguments, and the only objects it can return as values are 1 and 0.
  iii. We stipulate the values for the propositional letters. The rest of the values follow from the definition in 6.2.3.
  iv. We consider only valuations that correspond to the intuitive meanings for the connectives. That is, if [[p]] is 1 and [[q]] is 1, then [[(p ∧ q)]] should be 1.
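
Definitions 6.2.2 and 6.2.3 can be rendered as a short program. In the sketch below, formulae are encoded as nested tuples and valuations (interpretation functions) as dictionaries; that encoding is mine, not part of the definition. A small helper also checks the kind of equivalence discussed in 6.4.3.

    from itertools import product

    def value(formula, valuation):
        """Compute [[formula]] relative to a valuation, following 6.2.3."""
        if isinstance(formula, str):                    # clause i: a propositional letter
            return valuation[formula]
        op = formula[0]
        if op == "not":                                 # clause ii
            return 1 if value(formula[1], valuation) == 0 else 0
        left = value(formula[1], valuation)
        right = value(formula[2], valuation)
        if op == "and":                                 # clause iii
            return 1 if left == 1 and right == 1 else 0
        if op == "or":                                  # clause iv
            return 1 if left == 1 or right == 1 else 0
        if op == "->":                                  # clause v
            return 1 if left == 0 or right == 1 else 0
        if op == "iff":                                 # clause vi
            return 1 if left == right else 0
        raise ValueError("not a well-formed formula")

    def equivalent(f1, f2, letters):
        """True iff f1 and f2 get the same value on every valuation (cf. 6.4.3)."""
        return all(
            value(f1, dict(zip(letters, vs))) == value(f2, dict(zip(letters, vs)))
            for vs in product([1, 0], repeat=len(letters))
        )

    # (p and q) is equivalent to not(not p or not q):
    print(equivalent(("and", "p", "q"),
                     ("not", ("or", ("not", "p"), ("not", "q"))),
                     ["p", "q"]))                       # True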


6.4 Truth-tables

6.4.1 Overview
Truth-tables are a perspicuous way to represent the semantics of PL. The rows represent ways of valuing the propositional letters. If there is just one letter, as in the case of ¬, then we have just two distinct interpretation functions. If there are two letters, the minimal requirement for a binary connective, then we have four distinct interpretation functions. And so forth.

6.4.2 The tables

(6.1)            p    ¬p
        [[·]]a   1    0
        [[·]]c   0    1

(6.2)            p    q    p ∧ q          (6.3)            p    q    p ∨ q
        [[·]]a   1    1    1                      [[·]]a   1    1    1
        [[·]]b   1    0    0                      [[·]]b   1    0    1
        [[·]]c   0    1    0                      [[·]]c   0    1    1
        [[·]]d   0    0    0                      [[·]]d   0    0    0

(6.4)            p    q    p → q          (6.5)            p    q    p ↔ q
        [[·]]a   1    1    1                      [[·]]a   1    1    1
        [[·]]b   1    0    0                      [[·]]b   1    0    0
        [[·]]c   0    1    1                      [[·]]c   0    1    0
        [[·]]d   0    0    1                      [[·]]d   0    0    1

6.4.3 Calculating equivalences
Two connectives are equivalent iff they assign identical values to identical arguments on all interpretations. Thus, for example, (p ∧ q) and ¬(¬p ∨ ¬q) are equivalent. Truth-tables make this clear:

(6.6)            p    q    p ∧ q    ¬p    ¬q    ¬p ∨ ¬q    ¬(¬p ∨ ¬q)
        [[·]]a   1    1    1        0     0     0           1
        [[·]]b   1    0    0        0     1     1           0
        [[·]]c   0    1    0        1     0     1           0
        [[·]]d   0    0    0        1     1     1           0

6.4.4 Intensionality in PL
In intensional approaches to semantics, the models are composed of possible worlds. The name 'world' is a bit misleading. It would be more accurate to talk about possible universes or realities. In any event, the possible worlds depict different realities. They allow us to analyze discourses like I'm not at the party, but if I were at the party, I would be dancing.
In intensional logic, propositions — the meanings for sentences — are sets of possible worlds (equivalently, functions from possible worlds into truth-values).
PL, though extremely limited, can provide us with a measure of intensionality. We merely need to regard the interpretation functions — the rows of the truth tables — as possible worlds. In turn, the columns represent a notion of proposition:

(6.7)            p                    q                    p ∧ q
        [[·]]a   1                    1                    1
        [[·]]b   1                    0                    0
        [[·]]c   0                    1                    0
        [[·]]d   0                    0                    0
                 {[[·]]a, [[·]]b}     {[[·]]a, [[·]]c}     {[[·]]a}

Below each column, I've given the propositional denotation for each of the formulae. The individual interpretation functions still map these formulae to 1s and 0s. But if we consider all these interpretations, we obtain a richer notion of meaning.

6.5 Compositionality
PL is compositional by the definition given on handout 3. But it's not a very interesting version of compositionality:

  i. In PL, the smallest units are, intuitively, the semantic translations of entire sentences. Thus, PL can assign a workable meaning to Chris enjoys cycling, but it can't say anything about the meanings of Chris, enjoys, or cycling, nor can it get at the semantic commonalities between this sentence and Chris enjoys pizza.
  ii. The connectives in this version of PL have no meaning. They are introduced syncategorematically. That is, they are introduced as part of definitions, but we are not told what they mean in isolation — and therefore they have no meaning in isolation. But intuitively they have meanings. (The parsetree above indicates that we are pressing the limits of compositionality, since some nodes in that tree have no independent meanings.) This is a deficiency that we can correct by moving to a different implementation of PL. (See handout 10.)

6.6 Questions

6.6.1 PL and set theory
Find set-theoretic correspondents (handout 5) for the basic connectives of PL (∧, ∨, ¬, and ↔). Try to find differences between the two formalisms that might have linguistic significance.


6.6.2 Sheffer stroke


The Sheffer stroke ('nand') is symbolized with |. p|q is true iff both p and q are false.
Construct the truth table for this connective. What special property does it have? (Hint:
which connectives are definable in terms of |.) Do any other definable connectives have
this property?

6.6.3 Exclusive disjunction


The normal disjunction in PL is called an inclusive disjunction because it evaluates p ∨ q
as true when both p and q are true. Exclusive disjunction, here symbolized with ⊕, returns
a false value when both its arguments are true and is otherwise like inclusive disjunction.

Step 1
Provide the truth-table for exclusive disjunction.

Step 2
What value does your exclusive disjunction return for p ⊕ (q ⊕ r) under an interpretation
in which [[p]] = [[q]] = [[r]] = 1?

Step 3 (challenging)
It is sometimes suggested that English or has the semantics of exclusive disjunction. This
is based on the intuition that speakers sometimes judge disjunctive statements to be false
where both disjuncts are true. And it is true that Chris is in Petersburg or Chris teaches
semantics sounds odd when used to describe the present situation.
However, the result you obtained for step 2 of this problem can be used to cast doubt
on the idea that or is best treated as ⊕. How?


we can form its characteristic function. These views are equivalent, so semanticists tend
to switch back and forth between them freely based on what seems most likely to convey
their ideas efficiently and clearly.

7.2.1 Characteristic sets


Handout 7
If f is a function into the domain {0, 1}, then the characteristic set of f is the set of all
objects d such that f (d) = 1.
Functions
7.2.2 Characteristic functions
If A is a set and U is the universe of objects in the same domain as A, then the characteristic
7.1 Technical specifications function of A is the f such that f (d) = 1 if d ∈ A, and f (d) = 0 if d ∈
/ A, for al d ∈ U .
It’s crucial that we know what the universe of discourse is, so that we know which
Here’s a useful depiction of the function that maps Bart, Lisa, and Maggie to 1 and Burns objects to map to 0. (The objects that map to 1 are just those that are in A.)
to 0:
 




7.3 Why a theory of functions?
 
 
  7.3.1 Meanings for one and all
 
 1 
(7.1)  
  We can assign functional meanings to any lexical item. This brings us a long way towards
 
  the ideal of compositionality (handout 3).
 0 
 
 
 
7.3.2 Ontological flexibility
• A relation R is a function iff each x in the domain of R is mapped by R to at most one Functions are extremely general; many things in the universe can be characterized as func-
element in the range of R. (For more on relations, see handout 5.) tions. This generality means that, in an important ontological sense, we claim very little
about the nature of meaning when we say that lexical items denote (mean) functions. If it
• A function f is total iff every element in the domain of f has a value in the range of turns out that meanings are abstract objects, or mental representations, or brain states, or
f . If f fails to meet this condition, it is called a partial function. whatever, our theory of linguistic meaning can adapt easily.
Bottom line: Functions let us know how meanings (whatever they are) interact in lin-
• A function f is onto iff every element in the range of f is the value of some element
guistic contexts.
in the domain of f .
• A function f is one-to-one iff no member of the range is assigned to more than one
member of the domain. 7.3.3 Keenan’s functional principle
• A function f is bijective or a one-to-one correspondence iff it is onto and one-to-one. (7.2) The Functional Principle (Keenan 1974:298)

i. The reference of the argument expression must be determinable indepen-


7.2 Functions and sets dently of the meaning or reference of the function symbol.
There is a close correspondence between functions into the domain of truth values {0, 1} ii. Functions which apply to the argument however may vary with the choice
and sets: for each function into {0, 1}, we can form is characteristic set. For each set, of the argument (and so need not be independent of it).


Partee (1984:161) paraphrases:

     Keenan (1974) enunciates the interesting claim that the (form and) interpre-
     tation of a function word may depend on the (form and) interpretation of its
     argument but not vice versa.

Examples

(7.3)  flat tire, flat beer, flat note (Partee)

(7.4)  in time, in the news, in the basket

(7.5)  throw a baseball, throw support behind a candidate, throw a boxing match (i.e.,
       take a dive), throw a party, throw a fit (Marantz 1984:25, Kratzer 1996)

A picture

  [[throw(x)]]  =   the property of organizing [[x]],                if [[party]]([[x]]) = 1

                    the property of intentionally losing [[x]],      if [[sporting-event]]([[x]]) = 1

                    ...
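
To see the shape of such an argument-dependent meaning in code, here is a toy Python sketch (an
illustration added here; the particular phrases, the property descriptions, and the default clause
are simplifying assumptions, not claims from the handout): the function inspects a property of its
argument and returns a different property description accordingly.

    # Toy illustration of an argument-dependent function meaning, modeled on [[throw(x)]].
    PARTY = {"a party", "a gala"}
    SPORTING_EVENT = {"a boxing match", "a race"}

    def throw(x):
        if x in PARTY:
            return "the property of organizing " + x
        if x in SPORTING_EVENT:
            return "the property of intentionally losing " + x
        return "the property of propelling " + x + " through the air"   # default sense, simplified

    print(throw("a party"))           # the property of organizing a party
    print(throw("a boxing match"))    # the property of intentionally losing a boxing match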
 
7.4 Questions

7.4.1 Characteristic set

Specify the characteristic set for the function represented graphically in (7.6).

(7.6)  [diagram: a function from pictured individuals into {0, 1}; the pictures are not
       reproducible here]

7.4.2 Characteristic function

Assume that the domain of inquiry is the set in (7.7). What is the characteristic function of
the set specified in (7.8)?

(7.7)  [a set of seven pictured objects; the pictures are not reproducible here]

(7.8)  [a set of three of the objects in (7.7); the pictures are not reproducible here]

7.4.3 Is it a function?

For each of (7.9)–(7.15), say whether or not it is a function. If it is a function, say also
whether it is an onto function and whether it is a total function.

(7.9)–(7.13)  [diagrams of mappings involving pictured objects and the values 1 and 0;
              the pictures are not reproducible here]

(7.14) the relation R from nodes to nodes in tree structures that maps each node to its
       daughter(s)

(7.15) the relation R⁻¹ from nodes to nodes in tree structures that maps each node to its
       mother(s)


Handout 8

Lambda calculus

8.1 A lambda calculus defined

The following is a basic lambda calculus with just two basic types — the standard exten-
sional types e (for entities) and t (for truth-values).

8.1.1 Types

  i.   e and t are (basic) types
  ii.  if σ and τ are types, then ⟨σ,τ⟩ is a type
  iii. nothing else is a type

8.1.2 Expressions

Constants

  i.   bart, lisa, homer, . . . are expressions of type e
  ii.  smile, run, laugh, . . . are expressions of type ⟨e,t⟩
  iii. red, happy, irate, . . . are expressions of type ⟨e,t⟩
  iv.  tease, see, like, . . . are expressions of type ⟨e,⟨e,t⟩⟩
  v.   mother, friend, . . . are expressions of type ⟨e,⟨e,t⟩⟩
  vi.  the is an expression of type ⟨⟨e,t⟩,e⟩
  vii. [. . . ]

Variables

For every type τ, we have an infinite stock of variables of type τ. Some conventions:

  i.   x, y, z are variables of type e
  ii.  f, h are variables of type ⟨e,t⟩
  iii. R is a variable of type ⟨e,⟨e,t⟩⟩
  iv.  p, q are variables of type t

Combinatorics

  i.   if α is of type ⟨σ,τ⟩ and β is of type σ, then (α(β)) is an expression of type τ
  ii.  if α is an expression of type τ and ν is a variable of type σ, then (λν. α) is an
       expression of type ⟨σ,τ⟩

8.1.3 Domains

  i.   the domain of type e is De, the set of all entities
  ii.  the domain of type t is Dt = {0, 1}
  iii. the domain of a functional type ⟨σ,τ⟩ is the set of all functions from Dσ into Dτ

8.1.4 Models

A model for the lambda calculus is a pair ⟨D, [[·]]⟩, where D is the infinite hierarchy of
domains defined in section 8.1.3 above, and [[·]] is an interpretation function, taking the
constants defined in section 8.1.2 into D and obeying the following condition:

(8.1)  If α is a constant of type σ, then [[α]] ∈ Dσ

8.1.5 Assignments

The assignment function maps variables into objects in our hierarchy of domains, just as the
interpretation function does. We use g, g′, etc., to name assignments. Like the interpretation
function, the assignment obeys the following constraint:

(8.2)  If ν is a variable of type σ, then g(ν) ∈ Dσ

The expression g[x ↦ d] names the assignment function that is just like the assignment
g except that the variable x maps to the entity d.
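
The definitions in 8.1.1, 8.1.3, and 8.1.5 translate almost directly into code. Below is a small
Python sketch (an illustration added here, with a made-up three-entity domain): a type is either
'e', 't', or a pair (σ, τ); domains are computed recursively; and g[x ↦ d] is assignment update.

    # Sketch of types, domains, and assignment update (8.1.1, 8.1.3, 8.1.5), with a toy D_e.
    from itertools import product

    E = ("bart", "lisa", "burns")          # a toy D_e
    T = (0, 1)                             # D_t

    def domain(tau):
        """Domain of a type: 'e', 't', or a pair (sigma, tau) for the functional type <sigma,tau>."""
        if tau == "e":
            return list(E)
        if tau == "t":
            return list(T)
        sigma, tau2 = tau
        dom, ran = domain(sigma), domain(tau2)
        # all functions from dom into ran, each represented as a dict
        return [dict(zip(dom, values)) for values in product(ran, repeat=len(dom))]

    def update(g, var, d):
        """g[var -> d]: the assignment just like g except that var is mapped to d."""
        g2 = dict(g)
        g2[var] = d
        return g2

    print(len(domain(("e", "t"))))         # 2**3 = 8 functions of type <e,t>
    g = {"x": "bart", "y": "lisa"}
    print(update(g, "x", "burns"))         # {'x': 'burns', 'y': 'lisa'}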


8.1.6 Interpretation

  i.   if α is a constant of type σ, then [[α]]g = [[α]]
  ii.  if ν is a variable of type σ, then [[ν]]g = g(ν)
  iii. [[α(β)]]g = [[α]]g([[β]]g)
  iv.  if λν. α is of type ⟨σ,τ⟩, then [[λν. α]]g = the f ∈ D⟨σ,τ⟩ such that f(d) = [[α]]g[ν↦d],
       for all d ∈ Dσ

8.2 Commentary

8.2.1 Well-formed expressions

Above, I left the definition of well-formed expressions open-ended. Since we're building
a linguistic theory, rather than a logical one, we want to have flexibility with regard to our
examples. We also want the logical definitions to be as efficient as possible. I just wanted
to suggest how one might assign semantic types to the translations for natural-language
sentences. But I've obviously only provided the barest beginnings. In later handouts, we'll
systematically extend this definition, so that we can handle quantifiers, additional modifiers,
and a range of other phenomena.

8.2.2 Functional application

Functional application, defined by the clauses repeated in (8.3), is arguably the most
important mode of semantic composition:

(8.3)  a. if α is of type ⟨σ,τ⟩ and β is of type σ, then (α(β)) is an expression of
          type τ
       b. [[α(β)]]g = [[α]]g([[β]]g)

In the definition of compositionality, this is usually the function that is used to combine
expressions to form new expressions.
     Here's a picture of the semantics of functional application:

       [a ↦ f, b ↦ g, c ↦ h](a) = f
       [a ↦ f, b ↦ g, c ↦ h](b) = g
       [a ↦ f, b ↦ g, c ↦ h](c) = h

8.2.3 Lambda abstraction

Functional application saturates the arguments of functions. It has a kind of inverse in
lambda abstraction, which opens up argument slots (typically those that have been saturated
by variables).

The syntax of lambda abstraction (three equivalent views)

(8.4)  λx. α — equivalently, a tree whose mother node λx. α has the single daughter α, or
       a tree in which the binder λx and α are sister nodes

The semantics of lambda abstraction

(8.5)  a. cyclist : ⟨e,t⟩
       b. [[λx. cyclist(x)]]g = the f such that f(d) = [[cyclist(x)]]g[x↦d], for all d ∈ De

Graphically: [diagram: a function given as a table a ↦ . . . , b ↦ . . . , c ↦ . . . , paired with
its lambda abstract λx. f(x); the pictured values are not reproducible here]
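
Clauses (i)–(iv) of 8.1.6, together with functional application and lambda abstraction, can be
prototyped directly. The sketch below is an illustration added here (a simplified toy fragment,
not the handout's official implementation): constants are looked up in [[·]], variables in g,
application is carried out on dict-valued functions, and abstraction (over type-e variables only)
builds a dict by interpreting the body under updated assignments.

    # Sketch of the interpretation clauses for a tiny lambda-calculus fragment.
    D_e = ["bart", "lisa", "burns"]
    I = {                                   # the interpretation function [[.]] for constants
        "bart": "bart",
        "cyclist": {"bart": 1, "lisa": 1, "burns": 0},
    }

    def interpret(expr, g):
        kind = expr[0]
        if kind == "const":                          # clause (i): constants ignore g
            return I[expr[1]]
        if kind == "var":                            # clause (ii): variables are looked up in g
            return g[expr[1]]
        if kind == "app":                            # clause (iii): [[a(b)]]g = [[a]]g([[b]]g)
            func, arg = interpret(expr[1], g), interpret(expr[2], g)
            return func[arg]
        if kind == "lam":                            # clause (iv), restricted to type-e variables
            var, body = expr[1], expr[2]
            return {d: interpret(body, {**g, var: d}) for d in D_e}
        raise ValueError(expr)

    # [[lambda x. cyclist(x)]]g comes out as the same function as [[cyclist]], as in (8.5).
    expr = ("lam", "x", ("app", ("const", "cyclist"), ("var", "x")))
    print(interpret(expr, {}))    # {'bart': 1, 'lisa': 1, 'burns': 0}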
Here’s a picture of the semantics of functional application:

     
a .→ f a .→ f a .→ f
< = < = < =
     
 b →
. g  a =f  b →
. g  b =g  b .→ g  c = h
c .→ h c .→ h c .→ h

8.2.3 Lambda abstraction


Functional application saturates the arguments of functions. It has a kind of inverse in
lambda abstraction, which opens up argument slots (typically those that have been saturated
by variables).


8.2.5 Substitution and assignment update

Two paths to the same proposition.

    [diagram: on one path, bart is substituted for x in the syntax, yielding the assignment-
    independent wild(bart), which is then interpreted; on the other path, wild(x) is
    interpreted directly under an assignment that maps x to Bart. Both paths end at the
    value that [[wild]] assigns to Bart.]
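
The equivalence pictured above can be spelled out in a couple of lines. This is only an
illustration added here (with a made-up [[wild]]), but it shows why the two routes cannot come
apart:

    # Substituting bart for x, versus interpreting wild(x) under x |-> Bart: same result.
    WILD = {"bart": 1, "lisa": 0}        # a toy [[wild]]

    def interp_wild_of_x(g):             # [[wild(x)]]g = [[wild]](g(x))
        return WILD[g["x"]]

    value_via_assignment = interp_wild_of_x({"x": "bart"})   # assignment-update route
    value_via_substitution = WILD["bart"]                     # [[wild(bart)]], no assignment needed
    assert value_via_assignment == value_via_substitution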
8.3 Questions

8.3.1 Typing

What are the two possible types for α?

           ⟨e,t⟩
          /     \
     α : ?      β : ⟨e,⟨e,t⟩⟩

8.3.2 Novel types

Fill out the following semantic analysis with types and logical expressions:

(8.6)  Chris bet Ali $500 that the earth is flat.

       [tree: an unlabeled binary-branching structure whose terminal translations are
       chris, bet, ali, $500, and flat(the(earth))]

Using lambda notation, specify a meaning for the logical expression bet as used in the
above tree.

8.3.3 All and only Lisa's properties

Suppose that Lisa is young, intelligent, and literate. She is not angry, and she is not tall.
Assume that there are no other properties besides these. Using these facts, construct the
following function using the square-bracket notation:

       λf. f(lisa)

8.3.4 Assignments

Fill out these equality expressions:

(8.7)  [[x]] relative to an assignment [x ↦ . . . , y ↦ . . . ]  =

(8.8)  [[y]] relative to an assignment [x ↦ . . . , y ↦ . . . ]  =

       [in the original, the assignments are given as bracketed diagrams whose values are
       pictured individuals; the pictures are not reproducible here]


(8.9)  [[x]] relative to an assignment [x ↦ . . . , y ↦ . . . ]  =

(8.10) [[happy(y)]] relative to an assignment [x ↦ . . . , y ↦ . . . ]  =

(8.11) [[λy. happy(y)]] relative to an assignment [x ↦ . . . , y ↦ . . . ]  =

       [as with (8.7)–(8.8), the assignments in the original are diagrams whose values are
       pictured individuals]

8.3.5 Variable names

Without lambdas

Can the following expressions differ model-theoretically?

(8.12) happy(x)

(8.13) happy(y)

With lambdas

Can the following expressions differ model-theoretically?

(8.14) λx. happy(x)

(8.15) λy. happy(y)

8.3.6 Assignments and contexts

Use the concept of assignment function to articulate why the following expressions are
context-dependent (and thus strange when presented in isolation like this):

(8.16) a. She smiled.
       b. Ali can!
       c. Sam remembers.

8.3.7 What's wrong with these sentences?

If our semantic theory is on the right track, it should separate the good from the bad (the
grammatical from the ungrammatical), just as syntactic and phonological theories purport
to do. And we might ask even more of a semantic theory: that it help us get a grip on the
notion of infelicitous relative to a context.

The task   Provide explanations for the deviance of each of the examples in (8.17)–(8.21).

(8.17) *Ed devoured.
       (but what about Ed ate?)

(8.18) *I saw Sue and that it was raining.

(8.19) *Ed glimpsed the dog the printer.

(8.20) #It's not raining, but Sue realizes it's raining.

(8.21) #The A-train suffered an existential crisis.
       (cf. I dreamed that the A-train suffered an existential crisis.)

A few things to keep in mind

• We are not (necessarily) after a unified theory of the deviance seen above. It seems
  clear, for instance, that (8.19) is different from (8.20).

• It might be the case that some, even all, of the deviance of these examples should be
  explained by something other than semantic theory (syntactic theory and pragmatic
  theory are likely competitors). That's fine. You are welcome to advocate such a
  position, as long as your answer also explains what a semantic account would look
  like.

• If you're unsure of how to analyze a constituent semantically and it isn't important
  to your argument how it is analyzed, then translate it into a single predicate. For
  example, The A-train ↝ the-train.

• As usual, there is no unique answer or set of answers that I expect you to converge
  on. Think about the theory, think about the intuitively-felt meanings involved, and
  work to construct an analysis that seems reasonable to you.


8.3.8 The job of semantics?


What should we do about the semantically marked or outright ungrammatical sentences
below? Should we use features of the lambda calculus to account for these contrasts? If so,
which features? If not, why not?

(8.22) a. Hippos gathered in the square.


b. # Alonzo gathered in the square.

(8.23) a. the idea (is) that pigs fly


b. # the essay (is) that pigs fly

(8.24) # Colorless green ideas sleep furiously.

8.3.9 Building a fragment


In linguistic semantics, a fragment is a complete theory of a (very) small chunk of a natural
language. Your goal for this part is to construct a fragment that handles all the intransitive
verb constructions in (8.25). (Ignore all issues relating to tense.)

(8.25) a. Bart burps.


b. Maggie giggles.
c. Lisa muses.

Your fragment should have the following parts:

(8.26) a. A specification of the class of well-formed expressions of your logical lan-


guage
b. A type theory for organizing the logical expressions
c. A specification of the domains for each of the types
d. An interpretation function that takes the logical expressions to objects in
the domains for the types (in a way that respects typing)
e. A translation procedure for mapping English phrases to expressions of the
logical language

To show readers how your fragment works, you should provide a derivation of some kind
for one of the sentences in (8.25).
Strive for generality. If your fragment works for the examples in (8.25), it will also
work for lots of other intransitive sentences. Either sketch how your fragment could be
generalized to new intransitive sentences with proper-name subjects or (better) define your
fragment so that it has this level of generality built into it.


Handout 9

Semantic types

9.1 Types: Your semantic workspace

9.1.1 Syntax

Semantic types play much the same role as syntactic categories do in the realm of syntax:
they organize the expressions of the theory, thereby allowing us to control their interactions
and state broad generalizations.

(9.1)  dog : N          ‘the lexical item dog is of category N’

(9.2)  dog : ⟨e,t⟩      ‘the expression dog is of semantic type ⟨e,t⟩’

We can identify the category N with the set of all lexical items with that category specifica-
tion, and we can identify the type ⟨e,t⟩ with the set of all logical expressions with that type
specification.

9.1.2 Semantics

Types do more than just categorize expressions. They are also important in categorizing
denotations (meanings). In a typed, interpreted system, each type has a corresponding
denotation domain.

(9.3)  a. the domain of type e is De, the set of all entities
       b. the domain of type t is Dt = {0, 1}
       c. the domain of a functional type ⟨σ,τ⟩ is the set of all functions from Dσ
          into Dτ

This in turn leads to a natural constraint on the sort of interpretation functions we are willing
to consider:

(9.4)  α is of type σ iff [[α]] ∈ Dσ
specification.
(9.7) %◦, †&

9.1.2 Semantics (9.8) %◦, •&


L M NO
Types do more than just categorize expressions. They are also important in categorizing (9.9) †, †, %†, ◦&
denotations (meanings). In a typed, interpreted system, each type has a corresponding
denotation domain.
9.3.2 What’s the difference?
(9.3) a. the domain of type e is De , the set of all entities
In the literature, one finds both of the following type-specifications for properties:
b. the domain of type t is Dt = {0, 1}
c. the domain of a functional type %' , ( & is the set of all functions from D ' (9.10) %e, %s,t&&
into D(
(9.11) %s, %e,t&&
This in turn leads to a natural constraint on the sort of interpretation functions we are willing
to consider: What is the relationship between these two types?
1 This meaning is actually for but X (e.g., but Oscar ). If we represented the application of [[but]] on its first
(9.4) ! is of type ' iff [[! ]] ∈ D' argument, the type would be even higher.


9.3.3 Types in PTQ

Montague's (1973) type definition is basically as follows:

(M)   i.   e and t are types.
      ii.  If σ and τ are types, then ⟨σ,τ⟩ is a type.
      iii. If σ is a type, then ⟨s, σ⟩ is a type.
      iv.  Nothing else is a type.

For each of (9.12)–(9.18), indicate whether or not it is in (M). (Yes or no is fine; no need to
explain why.)

(9.12) ⟨s, ⟨e,t⟩⟩

(9.13) ⟨e, ⟨s,t⟩⟩

(9.14) s

(9.15) ⟨s, ⟨e, ⟨e, ⟨s, ⟨e,t⟩⟩⟩⟩⟩

(9.16) t

(9.17) ⟨t, e⟩

(9.18) ⟨e, s⟩

9.3.4 Expressive types

The following type definition is used in Potts and Kawahara 2004, a study of honorification
in Japanese.

(9.19) a. e and t are regular types.
       b. ε is an expressive type.
       c. If σ and τ are regular types, then ⟨σ,τ⟩ is a regular type.
       d. If σ is a regular type, then ⟨σ, ε⟩ is an expressive type.
       e. Nothing else is a type.

Characterize the distribution of the ε type. What are its freedoms and limitations? What
are the (potential) model-theoretic consequences of this distribution? (Suggestive question:
can anything ever take scope over an ε thing? Why or why not?)


Handout 10

PL to lambdas

10.1 Propositional logic from a functional perspective

10.1.1 Functions

Propositional logic is our core; its models can be presented as a class of functions. This
doesn't change the underlying logic, but it has nice conceptual advantages:

  i.   Its compositionality is obvious.
  ii.  It permits us to come closer to natural language syntax.
  iii. It reveals that this is a subsystem of the more complex logics we'll use later.

The following presentation is adapted and expanded from the one in Gamut 1991:§2.7.

The functional model M

  i.   Basic entities: {0, 1}

  ii.  The one-place functions:

       [1 ↦ 0, 0 ↦ 1]   [1 ↦ 1, 0 ↦ 0]   [1 ↦ 1, 0 ↦ 1]   [1 ↦ 0, 0 ↦ 0]

  iii. The two-place functions: the sixteen functions mapping each of 1 and 0 to one of
       the four one-place functions above (4 × 4 = 16 in all).

10.2 Syntax for PL

10.2.1 Types for PL

  i.   t is a type.
  ii.  ⟨t,t⟩ is a type.
  iii. ⟨t,⟨t,t⟩⟩ is a type.
  iv.  Nothing else is a type.

10.2.2 Terms for PL

  i.   aaron-abandons-abalone, aaron-abases-abalone, aaron-abates are formulae, type t.
  ii.  ¬, I, ⊤, and ⊥ are one-place connectives, type ⟨t,t⟩.
  iii. ∧, ∨, →, ↔, |, ↓, and ∨̄ are two-place connectives, type ⟨t,⟨t,t⟩⟩.
  iv.  If † is a one-place connective and φ is a formula, then †(φ) is a formula, type t.
  v.   If † is a two-place connective and φ is a formula, then †(φ) is a one-place connective,
       type ⟨t,t⟩.
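
Since the listing above compresses the handout's full tables, here is a short Python sketch (an
illustration added here) that regenerates the inventory: the four one-place truth functions and the
sixteen curried two-place functions built from them.

    # Regenerate the one-place and two-place truth functions of 10.1.1.
    from itertools import product

    one_place = [ {1: a, 0: b} for a, b in product([0, 1], repeat=2) ]
    print(len(one_place))        # 4; e.g. negation is {1: 0, 0: 1}

    # A two-place function maps each of 1 and 0 to a one-place function (currying).
    two_place = [ {1: f, 0: g} for f, g in product(one_place, repeat=2) ]
    print(len(two_place))        # 16; classical conjunction is {1: {1: 1, 0: 0}, 0: {1: 0, 0: 0}}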


10.3 Semantics for PL

Type interpretation

  i.   The interpretation of the type t is Dt = {0, 1}.
  ii.  The interpretation of a type ⟨σ,τ⟩ is the set of all functions from Dσ to Dτ.

10.3.1 Term interpretation

  [[¬]]M = [1 ↦ 0, 0 ↦ 1]   (negation)
  [[I]]M = [1 ↦ 1, 0 ↦ 0]   (identity)
  [[⊤]]M = [1 ↦ 1, 0 ↦ 1]   (universal)
  [[⊥]]M = [1 ↦ 0, 0 ↦ 0]   (never)

  [[∧]]M = [1 ↦ [1 ↦ 1, 0 ↦ 0], 0 ↦ [1 ↦ 0, 0 ↦ 0]]   (classical conjunction)
  [[∨]]M = [1 ↦ [1 ↦ 1, 0 ↦ 1], 0 ↦ [1 ↦ 1, 0 ↦ 0]]   (classical disjunction)
  [[→]]M = [1 ↦ [1 ↦ 1, 0 ↦ 0], 0 ↦ [1 ↦ 1, 0 ↦ 1]]   (classical material conditional)
  [[↔]]M = [1 ↦ [1 ↦ 1, 0 ↦ 0], 0 ↦ [1 ↦ 0, 0 ↦ 1]]   (classical material biconditional)
  [[∨̄]]M = [1 ↦ [1 ↦ 0, 0 ↦ 1], 0 ↦ [1 ↦ 1, 0 ↦ 0]]   (classical exclusive disjunction)
  [[|]]M = [1 ↦ [1 ↦ 0, 0 ↦ 1], 0 ↦ [1 ↦ 1, 0 ↦ 1]]   (Sheffer stroke, ‘nand’)
  [[↓]]M = [1 ↦ [1 ↦ 0, 0 ↦ 0], 0 ↦ [1 ↦ 0, 0 ↦ 1]]   (‘neither . . . nor’)

  [[φ(ψ)]]M = [[φ]]M([[ψ]]M)

10.4 Parsetree comparison

Assume that only lexical (categorematic) items appear as terminals.

   regular                  functional

    φ ∧ ψ                    ∧(ψ)(φ)
    /   \                    /     \
   φ     ψ                  φ      ∧(ψ)
                                    /  \
                                   ∧    ψ

    φ → ψ                    →(ψ)(φ)
    /   \                    /     \
   φ     ψ                  φ      →(ψ)
                                    /  \
                                   →    ψ

10.5 Compositional parsetree interpretation

  [diagram: the functional parsetree for ¬(∧(ψ)(φ)), with each node paired with its
  semantic value. The terminals ¬ and ∧ carry the negation and conjunction functions
  from 10.3.1; with ψ valued 1 and φ valued 0, the node ∧(ψ) denotes [1 ↦ 1, 0 ↦ 0],
  the node ∧(ψ)(φ) denotes 0, and the root ¬(∧(ψ)(φ)) denotes 1.]
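
The bottom-up calculation described in 10.5 can be replayed mechanically. The following Python
lines are an illustration added here (with ψ set to 1 and φ to 0, as in the tree just described);
each node's value comes from applying the function denoted by one daughter to the value of the
other.

    # Compositional interpretation of the functional parsetree for ¬(∧(ψ)(φ)).
    NEG  = {1: 0, 0: 1}
    CONJ = {1: {1: 1, 0: 0}, 0: {1: 0, 0: 0}}     # classical conjunction, curried

    phi, psi = 0, 1                               # toy values for the atomic formulae

    conj_psi     = CONJ[psi]                      # the node ∧(ψ):      {1: 1, 0: 0}
    conj_psi_phi = conj_psi[phi]                  # the node ∧(ψ)(φ):   0
    root         = NEG[conj_psi_phi]              # the root ¬(∧(ψ)(φ)): 1
    print(root)                                   # 1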


Handout 11

Expanding one's expressive power

11.1 In search of a suitable “machine” (Kaplan 1989:541)

Very often, one constructs a model only to find that it is not expressive enough to make
certain distinctions. Semanticists often respond by lifting the entire semantic system. Ex-
amples:

Individuals

• Problem: Basic propositional logic has only expressions of type t. There is no way
  to talk about Bob or Carol or Ted or Alice.

• Response: Lift from the domain of Dt = {0, 1} to the domain of functions from
  entities to truth-values, D⟨e,t⟩.

Worlds

• Problem: Even D⟨e,t⟩ isn't sufficient. We cannot, for instance, give a semantics for
  belief statements in these terms.

• Response: Add a new set of entities, Ds = W, the set of possible worlds. Sentence
  meanings are now functions from worlds into truth values; VP meanings are now
  functions from entities to sentence meanings; etc.

Times

• Problem: A model with no elements for representing time cannot give a semantics
  for any explicitly temporal expressions.

• Response: Add a new set of entities, Ds = R, the class of times. Sentence meanings
  are now functions from times into something else.

And so on; how high will we lift?

Lift recipe: Suppose a certain class of expressions E, of type τ, is missing a crucial
element A.

  i.   Add to your type definition a new type a for A.

  ii.  Add to your models a domain Da — the interpretation of the type a.

  iii. Redefine all the expressions in the class E so that their types are now ⟨a, τ⟩.

  iv.  Find all the expressions that you formerly had taking τ-type expressions as argu-
       ments and redefine them so that they can accept ⟨a, τ⟩ arguments.
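
As a very schematic rendering of step (iii) of the recipe, here is a Python sketch added as an
illustration (the type representation and the toy lexicon are assumptions made for the example):
lifting the class of sentence-type expressions just rewrites their type from t to ⟨s,t⟩.

    # Schematic sketch of lift-recipe step (iii): rewrite the type tau of a class E to <a, tau>.
    def lift_type(tau, a="s"):
        """Return <a, tau>; e.g. lifting t to <s,t> when worlds are added."""
        return (a, tau)

    E = {"it-rains": "t", "bart-smiles": "t"}        # a toy class of sentence-type expressions
    lifted = {expr: lift_type(tau) for expr, tau in E.items()}
    print(lifted)    # {'it-rains': ('s', 't'), 'bart-smiles': ('s', 't')}
    # Step (iv) would then retype anything that took t-type arguments so it accepts ('s', 't').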


Bibliography

Ayer, A. J. 1936. Language, Truth and Logic. Harmondsworth: Penguin.

Dowty, David. 2003. What is ‘direct compositionality’? Paper presented at the Workshop
  on Direct Compositionality, Brown University, June.

von Fintel, Kai. 1993. Exceptive constructions. Natural Language Semantics 1(2):123–148.

von Fintel, Kai. 1994. Restrictions on Quantifier Domains. Ph.D. thesis, University of Mas-
  sachusetts, Amherst, MA. Reproduced and distributed by the GLSA, UMass, Amherst,
  Linguistics Department.

Gamut, L. T. F. 1991. Logic, Language, and Meaning, Volume 1. Chicago: University of
  Chicago Press.

Groenendijk, Jeroen and Martin Stokhof. 1991. Dynamic predicate logic. Linguistics and
  Philosophy 14(1):39–100.

Harnish, Robert, ed. 1994. Basic Topics in the Philosophy of Language. Englewood Cliffs,
  NJ: Prentice Hall.

Heim, Irene and Angelika Kratzer. 1998. Semantics in Generative Grammar. Oxford:
  Blackwell Publishers.

Hodges, Wilfrid. 2001. Tarski's truth definitions. In Edward N. Zalta, ed., The Stanford
  Encyclopedia of Philosophy (Winter 2001 edition).
  URL http://plato.stanford.edu/archives/win2001/entries/tarski-truth/.

Janssen, Theo M. V. 1997. Compositionality. In Johan van Benthem and Alice ter Meulen,
  eds., Handbook of Logic and Language, 417–473. Amsterdam: Elsevier.

Keenan, Edward. 1974. The functional principle: Generalizing the notion ‘subject of’. In
  Michael W. La Galy, Robert A. Fox, and Anthony Bruck, eds., Papers from the Tenth
  Regional Meeting, 298–309. Chicago: Chicago Linguistics Society.

Kratzer, Angelika. 1996. Severing the external argument from its verb. In Johan Rooryck
  and Laurie Zaring, eds., Phrase Structure and the Lexicon, 109–137. Dordrecht: Kluwer.

Lewis, David. 1976. General semantics. In Barbara H. Partee, ed., Montague Grammar,
  1–50. New York: Academic Press.

Marantz, Alec. 1984. On the Nature of Grammatical Relations. Cambridge, MA: MIT
  Press.

Montague, Richard. 1973. The proper treatment of quantification in ordinary English. In
  Jaakko Hintikka, Julius Matthew Emil Moravcsik, and Patrick Suppes, eds., Approaches
  to Natural Language, 221–242. Dordrecht: D. Reidel. Reprinted in Montague (1974),
  247–270.

Montague, Richard. 1974. Formal Philosophy: Selected Papers of Richard Montague.
  New Haven, CT: Yale University Press. Edited and with an introduction by Richmond
  H. Thomason.

Partee, Barbara H. 1984. Compositionality. In Fred Landman and Frank Veltman, eds.,
  Varieties of Formal Semantics, 281–311. Dordrecht: Foris. Reprinted in Partee (2004),
  153–181. Page references to the reprinting.

Partee, Barbara H. 1996. The development of formal semantics in linguistic theory. In
  Shalom Lappin, ed., The Handbook of Contemporary Semantic Theory, 11–38. Oxford:
  Blackwell.

Partee, Barbara H. 1997. Montague semantics. In Johan van Benthem and Alice ter Meulen,
  eds., Handbook of Logic and Language, 5–91. Cambridge, MA and Amsterdam: MIT
  Press and North-Holland.

Partee, Barbara H. 2004. Compositionality in Formal Semantics: Selected Papers of Bar-
  bara H. Partee, Volume 1 of Explorations in Semantics. Oxford: Blackwell Publishing.

Partee, Barbara H., Alice ter Meulen, and Robert E. Wall. 1993. Mathematical Methods in
  Linguistics. Corrected 1st edition. Dordrecht: Kluwer.

Potts, Christopher and Shigeto Kawahara. 2004. The performative content of Japanese
  honorifics. In Kazuha Watanabe and Robert B. Young, eds., Proceedings of the 14th
  Conference on Semantics and Linguistic Theory. Ithaca, NY: CLC Publications.

Tarski, Alfred. 1944. The semantic conception of truth and the foundations of semantics.
  Philosophy and Phenomenological Research IV. Reprinted in Harnish (1994), 536–570.
