
The Elements of Mathematical Semantics

Trends in Linguistics
Studies and Monographs 66

Editor
Werner Winter

Mouton de Gruyter
Berlin · New York
The Elements of
Mathematical Semantics

by
Maurice V Aldridge

Mouton de Gruyter
Berlin · New York 1992
Mouton de Gruyter (formerly Mouton, The Hague)
is a Division of Walter de Gruyter & Co., Berlin.

Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability.

Library of Congress Cataloging in Publication Data

Aldridge, M. V. (Maurice Vincent)


The elements of mathematical semantics / by Maurice V.
Aldridge.
p. cm. — (Trends in linguistics. Studies and
monographs ; 66)
Includes bibliographical references and index.
ISBN 3-11-012957-4 (acid-free paper)
1. Semantics — Mathematical models. 2. Mathematical
linguistics. 3. Language and logic. 4. Pragmatics.
5. Categorial grammar. I. Title. II. Series.
P325.5.M36A43 1992
4 0 1 4 1 Ό 1 5 1 - dc20 92-23202
CIP

Die Deutsche Bibliothek — Cataloging in Publication Data

Aldridge, Maurice V.:


The elements of mathematical semantics / by Maurice V.
Aldridge. — Berlin ; New York : Mouton de Gruyter, 1992
(Trends in linguistics : Studies and monographs ; 66)
ISBN 3-11-012957-4
NE: Trends in linguistics / Studies and monographs

© Copyright 1992 by Walter de Gruyter & Co., D-1000 Berlin 30


All rights reserved, including those of translation into foreign languages. No part of this
book may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopy, recording, or any information storage and retrieval
system, without permission in writing from the publisher.
Disk Conversion: D. L. Lewis, Berlin. — Printing: Gerike GmbH, Berlin.
Binding: Lüderitz & Bauer, Berlin. — Printed in Germany
Acknowledgements

In the preparation of this book, I was generously supported by a grant from the Human Sciences Research Council as well as by the University of the Witwatersrand.
I have received invaluable assistance in the form of academic advice and
technical help from many colleagues and friends, among whom I owe a
special debt to Professor J. Heidema of the department of mathematics at
Rand Afrikaans University. Above all, my thanks go to Christine Aldridge
whose tireless aid in the technical preparation of the book has been beyond
reckoning and to Oliver Aldridge who was responsible for dragging me into
the computer age.

Johannesburg, June, 1992 Maurice V. Aldridge


I dedicate this book to St. Dunstan's. St. Dunstan's is an organisation devoted to the care and assistance of the military blinded and, without their endless support, I would never have been able to set out upon the road of scholarship, let alone become, in some small degree, a linguist.
Contents

1 Some topics in semantics 1


1.1 Aims of this study 1
1.2 Mathematical linguistics 1
1.3 A functional view of meaning 2
1.4 Truth conditions and truth values 8
1.5 Counterfactuals 11
1.6 Compositionality and syntax 11
1.7 Pragmatics 13
1.8 Propositional relations 14
1.9 Ambiguity 16
1.10 Formal logic and natural languages 17
1.11 Universal semantics 18

2 Background notions from mathematics 20


2.1 Purpose of this chapter 20
2.2 Sets 20
2.3 The cardinality of a set 23
2.4 Product sets 24
2.5 Relations and functions 25
2.6 Equivalence relations 27
2.7 Boolean algebras 29
2.8 Isomorphisms and homomorphisms 32
2.9 Effective processes 33

3 Background notions from formal logic 37


3.1 Scope of this chapter 37
3.2 The calculus of propositions 37
3.3 The nature of propositions 41
3.4 Monotonicity 42
3.5 The predicate calculus 43
3.6 Modal logic 50
3.7 Lambda abstraction 58

3.8 Montague's intensional logic 61

4 Vagueness and ambiguity 78


4.1 Background 78
4.2 Ambiguity 80
4.3 Structural ambiguity 85
4.4 De dicto vs de re 91
4.5 Intensions and temporal quantifiers 98
4.6 Modalities 100
4.7 Regimentation 102

5 Logical form in binding theory 104


5.1 Levels of representation 104
5.2 Logical form 105
5.3 Wellformedness in binding theory 107
5.3.1 Some typical problems of coreference 107
5.3.2 Some conditions on coreference 108
5.3.3 Wh-questions and quantifiers 116
5.4 Case 120
5.5 Logical form in semantic representation 125

6 Pragmatics 126
6.1 Definition of pragmatics 126
6.2 Indices 127
6.3 Contextual properties 128
6.4 Performatives 132
6.5 Fuzziness 137
6.6 Presuppositions 137
6.7 Types of semantic presupposition 139
6.8 Truth-value gaps 147
6.9 Primary and secondary presuppositions 152
6.10 Presuppositions and questions 153
6.11 Pragmatic presuppositions 155

7 Categorial grammar 162


7.1 Categorial grammar 162
7.2 A severely limited grammar 163
7.3 Some category assignments 166
7.3.1 Nominals and intransitive verbs 167

7.3.2 Nominals and transitive verbs 168


7.3.3 Some verb types 170
7.3.4 Wh-words 174
7.3.5 Common nouns, quantifiers and of 176
7.3.6 Mass and abstract nouns 181
7.3.7 Adverbs and adjectives 182
7.3.8 Relatives and appositives 185
7.3.9 Comparison 186
7.3.10 Intensifiers 190
7.3.11 Equatives 191
7.3.12 Degree complements 193
7.4 Abbreviations 194
7.5 Spelling-out rules 196
7.6 The lexicon 199

8 Semantic rules 201


8.1 Semantic rules 201
8.2 Logical and nonlogical connectives 203
8.3 Nominals 214
8.3.1 Unitary nominals 214
8.3.2 Common nouns and intransitive verbs 215
8.3.3 Logical quantifiers 217
8.3.4 Proportional quantifiers 222
8.3.5 Partitives and genitives 226
8.4 Some verb types 227
8.5 Wh-words 234
8.6 Adjectives 235
8.7 Adverbs 238

Bibliography 243

Index 251
Chapter 1
Some topics in semantics

1.1 Aims of this study

The central preoccupation of this study is semantic. It is intended as a modest contribution to the development of a general theory of meaning as presented
in certain proposals made over the last two decades and principally associated
with the work of Richard Montague (Thomason 1974). I choose to give this
approach the neutral name "mathematical semantics" but, in keeping with
common usage, it could also be called "Montague semantics" or "Montague
grammar" in recognition of the central role which Montague's writings play
in its foundation.
In this chapter, I present a preliminary survey of the kinds of semantic
phenomena with which I shall be concerned. However, since the general
approach adopted for the solution of these basic questions constitutes a sub-
discipline of mathematical linguistics, it is appropriate to begin with a few
informal remarks on that more general enterprise.

1.2 Mathematical linguistics

In this book, mathematical linguistics is assumed to be a noncalculative discipline primarily concerned with the formal modelling of language. It is
noncalculative in the sense that calculations of proportion play no role in its
methodology and it is mathematical in that it makes extensive use of certain
subdisciplines of mathematics, notably, set theory and formal logic. As the
approach is noncalculative, it involves no discussion of statistical linguistics,
word counts, probability grammar, etc.
The essence of mathematical linguistics is, in my view, most clearly set
forth in Gladkij-Mel'cuk (1969). Those authors present an imaginary sit-
uation in which a mathematician resolves to construct a model of human
linguistic behaviour. His observations lead him to establish, on the one hand,

a plane of content - a set of meanings - and, on the other, a plane of expression - a set of texts, or utterances.
Our mathematician, further, observes that the set of meanings corresponds
to the set of texts in a nonrandom fashion. For some texts there are certain
meanings and for others one meaning only and vice versa. This observation
leads him to hypothesise the existence of a set of rules which map the one
set into the other and it is the unambiguous and explicit formal statement of
these rules which becomes the central goal of his study.
It seems plausible to equate the concept, Language, with the system of
rules itself. This equation is reminiscent of de Saussure's approach (1915),
though it is not entirely compatible since he did not see content and expression
as necessarily distinct.
Whether or not we are prepared to accept the equation above, the close
parallel between the two planes of content and expression and the traditional
levels of deep and surface structure is unmistakable. In particular, Gladkij
and Mel'cuk's approach is reminiscent of the work of Cresswell (1973) who
adopted the term "deep structure" as a name for semantic representations
which are converted by a simple set of rules into "shallow structures". It is
also, of course, close to the concepts of "logical form" and "surface representation" in generative semantics. The advantages of such multilevel approaches
are very considerable and I shall adopt a parallel treatment in this study.
Being thus concerned with the description of rules, it is natural that the
formal linguist should approach her/his task within the framework of formal
logic (chapters 2 and 3) and that discipline, therefore, plays a central role in
the metatheory and methodology.
Although, in my opinion, Gladkij and Mel'cuk's book is among the best
texts on mathematical linguistics available, it was prepared before the influ-
ence of Montague's work on the discipline became universally apparent and
thus, concentrating, as it does, on formal grammars - devices for the descrip-
tion or generation of syntactically grammatical utterances - , it makes little
contribution to the exploration of semantics, which was Montague's primary
interest and is the focus of this book.

1.3 A functional view of meaning

The philosophical complexity of the nature of meaning is so profound and its specialised discussion so extensive that it seems wisest, in a linguistic study,
to treat it on a somewhat naive level. Rather than attempting, for example, to

investigate the nature of existence or to explore the relation between the sign
and what is signified, it appears, to me, more profitable simply to present
a particular theory of meaning almost as if it enjoyed the status of "God's
truth" and allow the pages which follow to constitute its elaboration and
justification. In this preliminary exercise, I shall make use of several notions,
such as Function, Intension, and Proposition, which are not discussed in detail
until later chapters.
Consider, first, what is involved in understanding the sentence:

(1) Mon crayon est noir.

As Carnap (1961) stressed, to understand (1) it is necessary to know what the world would have to be like in order for it to be true. It is certainly not
necessary to know whether it is true in fact. If we understand (1), then, in an
important sense, we can be said to know what it means.
Of course, the problem of knowledge is one of immense complexity.
For instance, should we distinguish between world-knowledge and linguistic-
knowledge, or is there no difference? I shall not explore such questions here.
However, it seems obvious that the depth of our understanding is dependent
upon the depth of our knowledge and that we do not need complete knowl-
edge to know of a sentence that it can be true. Thus, I do not understand
(2) in a profound sense, yet I know that it is true and, hence, know what it
means.

(2) The centre of mass of the universe is a point.

Given the above reservations, let us say that to know the meaning of
a sentence like (1) is sufficiently to know the set of conditions which would
have to hold in order for it to be true. This is most certainly not to say
that the meaning of the sentence is its truth value. In the orthodox view, a
sentence is false if it is not true. Thus, in this orthodoxy, there are two and
only two truth values, truth and falsehood. It would be nonsensical to make
the same claim for sentence meanings. That meaning is more than mere truth
or falsehood is clearly demonstrated by Lewis (1970) who points out that, if
it were not so, we would be obliged to consider all tautologies as having the
same meaning and likewise for contradictions.
Obviously, just because a certain state of affairs actually is the case, we
are not always obliged to accept that things must be that way. Thus, many
elements in the set of truth conditions which provide the foundation of our
understanding are only contingent conditions, they are not necessary. Sets
of such conditions may be thought of, metaphorically, as possible worlds of
which the actual world is but one. Thus, for example, while, in the actual

world, London is the capital of England, other worlds are possible in which
that is not so.
Alongside contingent conditions, we have a subset of necessary conditions.
Thus, for example, if a proposition, p, is true, then it necessarily follows that
the disjunction "p or q" is true, no matter whether q be true or false. This is
so because, by necessity, the disjunction of any true proposition with any true
or false one always results in a true proposition. Propositions which are true
by necessity are "logically" or "tautologically" true. Similar considerations
also hold, mutatis mutandis, for contradictions. Thus, any proposition of the
form (p and not-p) is false by necessity. We may think of the set of necessary
conditions as the set of all possible worlds.
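The point can be verified mechanically. The following sketch, with invented propositional variables, enumerates every truth-value assignment and confirms that the disjunction of a true proposition with any proposition is true, while (p and not-p) is false throughout:

```python
from itertools import product

# All truth-value assignments to the pair (p, q).
valuations = list(product([True, False], repeat=2))

# Wherever p is true, "p or q" is true: a necessary consequence.
print(all(p or q for p, q in valuations if p))    # True

# "p and not-p" is false under every assignment: a contradiction.
print(any(p and not p for p, _ in valuations))    # False
```

Exhausting the valuations in this way is just what "true by necessity" amounts to in the propositional case.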
Carnap's example (1) is apt in a book written in English because it facil-
itates the demonstration of the obvious, but sometimes neglected, point that
what a sentence denotes is a proposition, which could often be denoted by
other sentences. Thus, the proposition in (1) is equally well denoted by:

(3) My pencil is black.

as it is by several sentences such as:

(4) La couleur de mon crayon est noire.

or:

(5) My pencil is black in colour.

Given that a sentence denotes a proposition, it is common to think of that proposition as a function which, having the set of conditions as input,
yields truth or falsehood as its value. Call such a function an "intension".
Then, the intensions corresponding to the sentences so far given have the
form, F(W), where W is a set of possible worlds. As remarked above, the
value of such a function - its "extension" - is usually taken to be truth or
falsehood, though my evaluations will not always be so restricted.
Sentences - or the propositions they denote - are not primitive. They
have internal structure which reflects the structure of the possible worlds
which determine their values. Thus, in analysing the meaning of a particular
sentence, it is necessary to determine both the meanings of its parts and the
manner of its composition. This principle, the principle that meanings are
"compositional", reflecting the influence of Frege, is at the heart of current
work in semantics and much of this book will constitute an instance of its
application.
It can be argued that, at the most basic level, the operation of composition-
ality is observable in the morphology of many languages. Thus, for instance,

in English, the suffix -er may, among other things, combine with a verb, say
walk, or kill, to form an agentive noun, walker, killer. The meanings of such
nouns are, thus, arrived at on the basis of the meanings of the individual
morphemes. However, to argue consistently for a compositional approach to
word-meaning is frequently very difficult and apart from some discussion of
case in chapter 5 and the morphology of comparison in chapters 7 and 8, I shall
largely ignore morphology in this book.
Taking morphology for granted, the constitutive parts of sentences are
made up of words. What determines the semantic and syntactic ability of a
word to combine with others is the syntactic category to which it belongs. In
order to exploit the compositional principle in arriving at the meaning of a
particular sentence, therefore, it is necessary to determine the meanings of its
words as governed by their syntactic function. Thus, a fundamental aim in the
development of a semantic theory is the determination of the possible range
of meanings of the categories on which the structure of sentences depends.
The simplest sentential structure is that consisting merely of a proper noun
and an intransitive verb, as in:

(6) Jack runs.

To provide a semantic account of such a sentence, it is necessary to state the conditions for its evaluation on the basis, first, of the words in isolation and,
then, as they combine in a subject-predicate structure.
Let us use the term "individual" as a synonym of "thing". Evidently, any
possible world will contain one or more individuals, which will be endowed
with certain properties and stand in particular relations to each other.
We may loosely think of proper nouns, like Jack, as having unique values.
More exactly, proper nouns denote constant functions which, given the set
of all individuals as input, always yield a particular member as output.
If we regard the property of running as characteristic of certain individuals,
we may think of the predicate runs as denoting a function which, given the set
of individuals, picks out a given subset. Thus, the proposition, runs(Jack),
has the value true if and only if the individual denoted by Jack is in the
set of running individuals. If the proposition, runs(Jack), is true, then, the propositional function, runs(x), is "satisfied" when Jack is substituted for x in its argument place. That is to say, the proposition that something has
the property of running is satisfied by at least one individual, namely, the
individual picked out by the constant function assigned to Jack.
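These remarks can be rendered as a small model. The domain, the set of runners, and the names below are invented for illustration; the proper noun denotes a constant function and the verb the characteristic function of a subset of the domain:

```python
# A toy domain of individuals (invented for illustration).
individuals = {"jack", "jill", "fido"}

def jack(domain):
    """Constant function: given the set of individuals, always yields Jack."""
    return "jack"

# The extension of "runs" in this world: the subset of running individuals.
runners = {"jack", "fido"}

def runs(x):
    """Characteristic function of the set of running individuals."""
    return x in runners

# runs(Jack) is true iff the individual denoted by Jack is in that set.
print(runs(jack(individuals)))    # True
```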
As a more interesting example, consider the following.

(7) The president of France is French.



Without discussing the internal structure of the president of France, it is apparent that (7), on one interpretation, claims that any individual picked
out by the function denoted by the subject noun phrase has the property of
being French. Indeed, even if there were no such individual at the time of
utterance, (7) would be meaningful and, therefore, capable of being judged
true or false, or, perhaps, pointless.
Thus, we think of phrases like the president of France in terms not merely
of their extension, their denotation, but also in terms of their intension or
sense. Put more precisely: the extension of a given noun phrase function is
the individual or set of individuals which it picks out at a given possible
world and moment. The intension of such a function is all of its possible
extensions in all worlds and moments of time. Similarly, the extension of a
given intransitive verb function, say that denoted by runs or be French, is
the set of individuals it picks out at a given instant. The intension of such a
function is all of its possible extensions.
From these remarks, it is evident that the notion of an intension is usu-
ally more general than that of an extension. Reverting to the looser way of
speaking: to know the intension of the president of France is to know what
conditions must be met for any individual to be the value of the function it
denotes. To know the extension of the same term at a given moment, it is
necessary only to know which particular individual its associated function
picks out at that moment. Parallel remarks hold, of course, in respect of the
intensions and extensions of predicates.
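On this picture, an intension can be modelled as a table from world-time indices to extensions. The worlds, dates, and office-holders below are illustrative assumptions only:

```python
# Intension of "the president of France": a function from (world, time)
# indices to individuals. All entries are invented for illustration.
president_of_france = {
    ("actual", 1992): "mitterrand",
    ("actual", 1970): "pompidou",
    ("w1", 1992): "someone_else",
}

def extension_at(intension, world, time):
    """The extension at an index is the value the intension takes there."""
    return intension[(world, time)]

# One intension, different extensions at different indices.
print(extension_at(president_of_france, "actual", 1992))  # mitterrand
print(extension_at(president_of_france, "actual", 1970))  # pompidou
```

Knowing the whole table is knowing the intension; knowing one entry is knowing the extension at that index.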
Of course, the sentences of a natural language are not all as simple as (6)
or (7). The propositions they denote are frequently concerned with relations
between individuals. Thus, for instance, (8) asserts that Scott stands in the
relation is the author of to Waverley.

(8) Scott wrote Waverley.

Like noun phrases and intransitive verbs, transitive verbs, such as write, have
extensions and intensions. The extension of the write-function is the set of
ordered pairs of individuals, e.g. < Waverley, Scott >, which is its value at a
given world-time pair. Its intension is the range of its possible extensions. If
the verb necessarily involves three individuals - I here ignore the instrument
involved in writing - for example, give, it denotes a function which has a set
of ordered triples as its value, and so forth.
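A sketch of relational extensions, with invented individuals: the text writes the pair as < Waverley, Scott >, but the coordinate order is purely conventional, and the code below uses (subject, object):

```python
# Extension of a two-place verb at a world-time pair: a set of ordered pairs.
wrote = {("scott", "waverley"), ("austen", "emma")}

# Extension of a three-place verb: a set of ordered triples.
gave = {("jack", "sally", "a_book")}

def wrote_pred(subject, obj):
    """True iff the pair stands in the writing relation."""
    return (subject, obj) in wrote

print(wrote_pred("scott", "waverley"))  # True
print(wrote_pred("scott", "emma"))      # False
```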
In the case depicted in (8), the verb can be treated, from a semantic point
of view, as purely extensional since both Scott and Waverley must actually
exist in order for the proposition to be true. However, such extensionality

does not always characterise transitive verbs. Thus, for example, while find
in (9) is extensional, seek in (10) is not, unless a specific whale was involved.

(9) Jonah found a whale.

(10) Jonah sought a whale.

At first glance, the nonextensionality of seek in (10) might appear to demonstrate nothing more than the need for intensions as well as extensions
in semantic analysis. However, extended to further instances, it soon becomes
evident that much more is involved than a simple case of ambiguity. Consider,
for example, the following case:

(11) Necessarily, the prime minister of England's residence is the prime minister of England's residence.

That this sentence is true is obvious from the tautological status of:

(12) The prime minister of England's residence is the prime minister of England's residence.

The same cannot be said of:

(13) *Necessarily, the prime minister of England's residence is no. 10.

Although no. 10 and the prime minister of England's residence have exactly
the same extension - a particular house in Downing Street, London - they
do not mean the same. Thus, while, in the actual world, the sentence:

(14) The prime minister of England's residence is no. 10.

is true, it is not so by necessity.


Cases like these figure prominently in the philosophical and linguistic
literature and will frequently be the focus of attention in this book. They
were first discussed by Frege in the context of Leibniz's law of substitution.
Briefly, Leibniz's law says that two terms having the same denotation may be
substituted for each other without affecting truth values. The fact that (13) is
false shows that the two phrases in question are not completely interchange-
able. Thus, it is evident that the notion of meaning cannot be simply equated
with extension. While the function denoted by no. 10 Downing Street has an
extension which coincides precisely with its intension, the same cannot be
said of that denoted by the prime minister of England's residence and, thus,
the terms are not completely interchangeable.
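The failure of substitution can be made concrete by modelling the two terms as intensions over a pair of worlds; the worlds and values are invented, with no. 10 treated as rigid:

```python
worlds = ["actual", "w1"]

# A rigid designator: the same extension at every world (an assumption of
# this sketch, following the discussion above).
no_10 = {"actual": "house_in_downing_street",
         "w1": "house_in_downing_street"}

# A non-rigid description: its extension varies from world to world.
pm_residence = {"actual": "house_in_downing_street",
                "w1": "another_house"}

def necessarily_equal(a, b):
    """True iff the two intensions coincide at every possible world."""
    return all(a[w] == b[w] for w in worlds)

# The extensions coincide at the actual world...
print(no_10["actual"] == pm_residence["actual"])   # True
# ...but substitution under "necessarily" fails: the intensions differ.
print(necessarily_equal(no_10, pm_residence))      # False
```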
Yet another instance of the need for intensions which results from the
failure of Leibniz's law is provided by:

(15) Jack believes that 1+3 = 4.

Since the expression (1 + 3) has the same extension as (2 + 2), i.e. the successor of 3, we might expect that the two could be substituted in (15) without affecting its meaning. However, reflection shows that this is not so. If (15) is true, then Jack has a belief about (1 + 3) and, unless he is totally irrational, it is necessarily true that he believes that (1 + 3) = (1 + 3). However, it certainly does not follow that his belief about (1 + 3) requires that he hold that (2 + 2) also has the value 4. Rather, therefore, than saying that the object of Jack's belief is the extension of the expression (1 + 3) in (15), we claim, for the present, that his belief has to do with its intension. I
return to such cases and especially Cresswell's sophisticated treatment (1985)
later, chapter 4, section 4.
Thus, we see that a verb of "propositional attitude" like believe is like
such transitive verbs as seek in failing to be fully extensional. Like such
verbs, its possible values will not be ordered pairs of extensions, but ordered
pairs of an extension - the subject - and an intension - the object.
According to the functional view of meaning espoused here, then, the
denotation of a sentence is a proposition and it is this denotation which
constitutes the meaning of the sentence. A proposition is a function which
takes, as argument, possible worlds and denotes a value, typically true or false.
These evaluations are assigned to propositions relative to possible worlds and
it is our apprehension of such worlds which enables us to say which value
is assigned in a given case. The meanings of propositions are arrived at
compositionally through the meanings of their parts and the manner of their
composition. The meanings of the parts are, themselves, functions, some of which have straightforward extensions as their values and others of which have, as values, either intensions, or n-tuples consisting of extensions and intensions.

1.4 Truth conditions and truth values

Of course, the very idea of a semantics based, albeit not exclusively, on truth
values must presuppose a theory of truth. The discussion of such theories in
its most profound philosophical form lies well beyond the legitimate concerns
of a linguistic essay on semantics. As in the previous section, therefore, my
remarks in this area may appear rather superficial.

An issue which is clearly central is the difference between the notions of truth condition and truth value.
Whatever is doubtful about the nature of meaning as a concept, we are
certain that for two sentences to be synonymous, they must mean the same
thing - they must assert the same truth. Thus, if we can establish what it is
for a sentence to refer to a truth, we have the basis for a theory of synonymy
and, hence, of meaning.
A common-sense view is that the propositional content of a sentence
evaluates to truth if it reflects the way things actually are - including, of
course, the fact of some things' being possible and of others being necessary,
etc. Tarski's famous example (1956):

(16) Snow is white.

has the following truth condition:

(16) a. "Snow is white." is true just in case snow is white.

This may be generalised as:

(17) Sentence s is true just in case p.

Although (16a) represents the truth condition for (16), it does not represent
its truth value.
It is usually assumed that the connective just in case in such formulae as (17) is material equivalence. That relation yields the value true only when both sides have the same truth value. This is, in fact, a source
of difficulty since, construing the relation thus, we seem obliged to accept
infinitely many nonsensical conditions such as:

(18) "Snow is white." is true just in case grass is green.

Such combinations come out true under this construal of the connective, but they clearly do not
require us to accept that the antecedent depends for its meaning upon the
consequent. The difficulty disappears - or at least recedes - if we accept that truth conditions are not in a one-to-one relation with truth values (Fodor 1977): the one-to-one view is incorrect; truth conditions stand, rather, in a many-to-one relation to truth values. It is patently erroneous to claim that because there are only two truth values - ignoring the possibility of an in-between value - there are only two truth conditions.
It seems, to me, reasonable to hold that a given proposition has a particular
truth as its meaning, which is not to say that that particular truth is its truth
value. Truth values are associated with particular truths or falsehoods, they
are not those particulars themselves.

This simple approach has the advantage of avoiding the complex philo-
sophical problems surrounding the role of necessary truth and contingent
truth in the analysis of such formulae as (16a). It is often held that we can
exclude such nonsense instances of (17) as (18) by narrowing down the kind
of truth involved to necessary truth. Obviously, the truth of (16a) is neces-
sary. However, the philosophical complexities of necessity are considerable -
I offer some discussion in chapter 3, section 6 - and it seems unnecessary
to call upon it here. We do, of course, rely upon the notion of necessity
when we invoke Leibniz's law in arguing for the reality of intensions, but
the motivation there is altogether more powerful.
Moreover, it seems unwise, in the present connection, to call upon the
notion of nonlogical necessity or analyticity, Katz (1964). An expression, p,
is nonlogically necessary if the meaning of its predicate is fully included in
its subject or vice versa. Thus, (19) is analytic:

(19) A spinster is a woman who never married.

While such cases are patently analytic, analyticity, itself, does not provide
a foundation for a theory of meaning since it relies upon a prior notion of
meaning for its own definition - comparable objections could, doubtless, be
levelled at my own appeal to synonymy above.
Given that there is only one world, one actual state of affairs, we might be
tempted to say that propositions are true if they express facts. The notion of
a fact is difficult to disentangle from what I referred to above as a "particular
truth". For simplicity, let us say that facts are actual truths and that anything
which is only possible is not a fact. Appeal to facts is usually appeal to
extensions and, as we have seen already, it is improper to adopt an exclusively
extensional view of meaning. Frege showed that, while the evening star and
the morning star have the same extension, they certainly do not have the same
sense. Moreover, we frequently speak not of actualities but of possibilities
and whether these possibilities are viewed as reasonable - perhaps there are
ten planets - or unreasonable - there could be fiery dragons - their expression
requires intensions, not extensions. Thus, the intensional approach to meaning
seems very sensible. It is worth mention, here, that Montague himself (1970a)
tried, without success, to construct an extensional semantics, only to abandon
it in his famous (1973) paper.

1.5 Counterfactuals

To introduce possible worlds, as we have done, into semantics is not entirely uncontentious. One interesting problem which it raises concerns cross-world
identity. Lewis (1973) discusses in considerable detail counterfactual sen-
tences, for example:

(20) If Jack were a rich man, he would marry Sally.

This sentence can be used appropriately if and only if, in the actual world,
Jack is not a rich man. If it is true, it must be so with respect to a nonactual
world and, in such a case, is the individual denoted by Jack the same individ-
ual in both worlds? Lewis proposes that we accept that the worlds concerned
be as similar as possible, differing only sufficiently to make the sentence
counterfactual. We should, therefore, trace the counterfactuality of (20) to
variation in nonessential properties of the individual constantly denoted by
Jack.
It is apparent that the problem of cross-world identity, although fairly
simple in cases like (20), can assume enormous dimensions. Even if the dis-
tinction between essential and accidental properties were clear-cut, it would
still be difficult indeed to account for such extreme cases as:

(21) If London were a rich man, it would marry Sally.

While, therefore, I shall follow Montague's practice and regard proper nouns
as rigid designators, I am conscious of at least some of the metaphysical
difficulties.
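Lewis's proposal can be caricatured computationally: evaluate the consequent at the most similar world at which the antecedent holds. The worlds, facts, and similarity ranking below are invented for illustration:

```python
# Invented worlds, each a bundle of facts.
worlds = {
    "actual": {"jack_rich": False, "marries_sally": False},
    "w1":     {"jack_rich": True,  "marries_sally": True},
    "w2":     {"jack_rich": True,  "marries_sally": False},
}

# An assumed similarity ordering: lower means closer to the actual world.
distance = {"actual": 0, "w1": 1, "w2": 2}

def counterfactual(antecedent, consequent):
    """True iff the consequent holds at the closest antecedent-world."""
    antecedent_worlds = [w for w, facts in worlds.items() if facts[antecedent]]
    closest = min(antecedent_worlds, key=lambda w: distance[w])
    return worlds[closest][consequent]

print(counterfactual("jack_rich", "marries_sally"))  # True: w1 is closest
```

Everything of philosophical substance is, of course, hidden in the similarity ordering, which this sketch simply stipulates.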

1.6 Compositionality and syntax

I regard the central task of semantics as the study of the rules by which
word-meanings are combined into complex units - which is not to say that it
takes no heed of word-meanings. In order for the theory to be compositional,
it must take account of how lower-order meanings combine to form higher-
order meanings and, ultimately, the semantic values of sentences themselves.
A compositional semantics has, therefore, to be framed in such a way that it
relates the two levels of content and expression in accordance with the syn-
tactic properties of sentences. Hence, part of the task of providing a semantic
description is the provision of a syntactic analysis.

It is important to stress that the syntactic analysis has the sole function
of enabling the semantic description. Syntax is not an end in itself in the
framework of Montague's programme and, in this, that programme differs
fundamentally from the work of the transformationalists and other formal lin-
guists. The syntactic model to be used is chosen for its ability to characterise
the compositional nature of sentence-meaning and not for its ability to ex-
plain formal syntactic phenomena or to predict syntactic relations. Syntactic
phenomena are interesting only if they have a bearing on semantic issues. It
is in this spirit that I devote a substantial part of this study to the syntax,
especially chapter 7.
It will, however, be immediately apparent that the question of what con-
tribution to sentence-meaning is made by syntax is not always a straight-
forward one. Carnap (1961) emphasised that the analyst should strive for a
one-to-one relation between form and meaning, to establish an "intensional
isomorphism" between the two. A major task in such an enterprise is to
determine which syntactic variations are meaningful and which are purely
formal - among others, Fodor (1977) discusses alternatives like the father
of my father and my father's father. This task also requires, of course, that
we settle questions of synonymy between apparently equivalent word and
phrase-pairs, e.g. spinster and woman who never married. The establishment
of such an isomorphism is exactly what our opening vignette supposes to be
the central aim of mathematical linguistics.
At the moment, however, it seems unlikely that Carnap's aim is within
practical reach. On the one hand, it is clear that rather than thinking in
terms of isomorphy, we must explore the link between form and meaning
in terms of the weaker relation of homomorphy - chapter 2. There is not a
general one-to-one relation between expressions and meanings, but a many-
to-one relation in many instances. The semantic content of a sentence - its
"prepositional content" - is, as we saw earlier, often expressible in a number
of alternative ways. The examples provided earlier mostly involved different
language encodings of the same proposition. However, it is obvious that many
different syntactic patterns may be equivalent semantically, including Fodor's
examples referred to above and, in many cases, such alternative formulations
as active and passive.

1.7 Pragmatics

An adequate semantics for any language could not, of course, be restricted to
the analysis of sentences whose lexical items have fixed values. In most of the
examples so far, it is permissible to view the parts as having restricted values.
Such restriction may be achieved by setting up a model whose individuals,
their properties, and the relations holding between them, are predetermined.
However, natural languages make use of lexical variables whose domains
cannot be so easily fixed. The set of personal pronouns, for instance, forms
a class of indexicals whose references vary from one context to another.
Carnap's sentence, quoted at the outset, illustrates the point. Another example
is:

(22) You are tired.

(22) may be true for an addressee on a particular occasion and false for
the same addressee on another. Evidently, the semantics must incorporate
a theory of language in use. This theory, the Pragmatics, appears, at first,
to be dangerously vague. However, given Montague's own formal account
of pragmatics (1968) and the work of other scholars, especially Cresswell
(1973), this aspect of meaning can be fairly rigorously described and need
not result in unwanted vagueness.
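The context-dependence of (22) can be pictured, purely as an informal sketch in the Python programming language, by treating the meaning of the sentence as a function from contexts of use to truth values. The function name and the context keys below (`you_are_tired`, `addressee`, `tired_individuals`) are illustrative inventions, not part of any formal system discussed in this book:

```python
# A toy model of indexical evaluation: the meaning of "You are tired"
# is a function from contexts to truth values. All names are
# illustrative only.

def you_are_tired(context):
    # The indexical "you" picks out the addressee of the context;
    # the predicate is then checked against the facts of that context.
    addressee = context["addressee"]
    return addressee in context["tired_individuals"]

# Same addressee, two occasions, two truth values.
monday = {"addressee": "Jack", "tired_individuals": {"Jack"}}
friday = {"addressee": "Jack", "tired_individuals": set()}

print(you_are_tired(monday))   # True
print(you_are_tired(friday))   # False
```

The point of the sketch is simply that one and the same sentence receives different values relative to different indices, which is what a formal pragmatics in the style of Montague (1968) makes precise.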
The pragmatics is needed, in any case, if the semantics is to be capable
of accounting for temporal and place indexicals. Such systems are excluded
from standard logics since they introduce contingency into the evaluation of
propositions and, hence, make truly deductive reasoning impossible. Thus,
for example, (23) is usually treated as if it made no reference to time:

(23) Jack married Sally.

More radically, (24) could not be treated at all in standard logic because of
the indeterminacy of the adverb.

(24) London is here.

A somewhat similar situation is presented by cases like (25) which depend
for their evaluation on the subjective opinion of the speaker.

(25) Paris is beautiful.

Gradables like wise ought not to figure in an exclusively truth-functional sys-
tem, but they clearly cannot be excluded from a natural-language semantics.

Another limitation of standard logics is their failure to treat sentence types
other than declaratives. Declaratives are, of course, of especial importance in
philosophy since they are viewed as expressing propositions uncluttered by
such modalities as interrogation. However, current work in Government and
binding theory, building on earlier studies both in Transformational Linguis-
tics and Generative Semantics, provides crucial insights into the semantics
of other sentence types, especially questions. I shall attempt a fairly detailed
discussion of such work in chapter 5 and return to questions in chapter 6.
Central to much contemporary work in pragmatics is Speech-act Theory,
Austin (1962) and Grice (1975). One of the most difficult problems in making
use of this theory is the formulation of "felicity conditions" and their inclu-
sion in a formal system. Fortunately, however, the work of several scholars,
including Lakoff (1972, 1975), has demonstrated that it is proper to broaden
the scope of satisfaction to include appropriateness or felicity and I shall
make use of this extension.
The question of how we might include Gricean implicatures into a formal
semantics is far less clear. Evidently, when a maxim like relevance is de-
liberately broken, the semantic result is all-important. However, as Fodor's
discussion (1977) suggests, such implicatures do not fit readily into a formal
semantics and I shall, therefore, have nothing to say about them.
Even more contentious is the question of whether to include other aspects
of language in use such as connotations and metaphors in a formal descrip-
tion of natural language. It has not, to date, seemed possible to build these
phenomena into the semantics for lack of a rigorous theory of their function.
Thus, while it is obvious that old maid is metaphorical in:

(26) John is an old maid.

and that its connotational force is paramount in:

(27) Mary, who is thirty, is an old maid.

such matters are not reflected in mathematical semantics. Recent work, such
as Kittay (1987), shows that the essential formal theory is at least in prospect.
Even so, I shall not attempt to pursue such matters here.

1.8 Propositional relations

The semantics discussed so far is primarily concerned with the sentence as
the basic unit - basic in the sense that only sentences have truth values.
The basic status of sentences does not, however, imply exclusion of inter-
est in inter-propositional relations which, as Katz (1966) and many others
have argued, constitute a vital part of the subject matter of semantics. In-
deed, within the confines of Montague's own work - as with most other
formal linguists, including Keenan - Faltz (1985) - certain relations which
hold between propositions figure prominently.
Especially important is the relation of entailment. It is obvious that we
must be able to say which propositions logically follow from any assertion,
as (29) from (28).

(28) John loves Mary and Mary or Jane loves Oliver.

(29) John loves Mary.

However, from (28), we cannot infer:

(30) Mary loves Oliver.

Paraphrasing Keenan - Faltz, we may say that a proposition, p, entails
another, q, if and only if p is at least as informative as q. Thus, (28) contains
all of the information conveyed by (29). In addition, (28) asserts the disjunc-
tion of two other propositions either of which may be false, so prohibiting
the inference of (30).
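Because the relation here is purely truth-functional, the entailment from (28) to (29), and the failure of the entailment to (30), can be verified mechanically by checking every assignment of truth values. The following is an informal sketch only: the letters L, M and J stand for the atomic propositions 'John loves Mary', 'Mary loves Oliver' and 'Jane loves Oliver', and `entails` is a name invented here:

```python
# Brute-force check of truth-functional entailment: p entails q iff
# no assignment of truth values makes p true and q false.
from itertools import product

def entails(p, q, variables):
    return all(q(*vals)
               for vals in product([True, False], repeat=len(variables))
               if p(*vals))

p28 = lambda L, M, J: L and (M or J)   # (28)
p29 = lambda L, M, J: L                # (29)
p30 = lambda L, M, J: M                # (30)

print(entails(p28, p29, "LMJ"))  # True: (28) entails (29)
print(entails(p28, p30, "LMJ"))  # False: (28) does not entail (30)
```

The counterexample blocking the second entailment is the assignment on which L and J are true but M is false.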
The examples just given may appear rather uninteresting, since they reflect
purely logical properties of the truth-functional connectives and and or. How-
ever, questions of entailment can be highly complex. Consider, for example,
the relation between (31) and (32) in which causal factors are involved:

(31) The storm caused the hayrick which was made by John to collapse.

(32) The storm caused John to make a hayrick which collapsed.

compared with:

(33) Panic caused the elephant to trumpet loudly.

(34) Panic caused the elephant to trumpet.

Obviously, (34) follows from (33), but (32) is not entailed by (31). This is
so because, in (31), the object of caused is an infinitive complement with to
collapse as its main verb, but, in (32) the object of caused is an infinitive
with to make as its main verb.
The relation of Presupposition has been an important part of philosophical
enquiry for many years and, more recently, linguistic discussion has also
focussed on this topic. Probably the most famous example is Russell's (1905):

(35) The present king of France is bald.

If there exists a unique individual who is the present king of France, then
(35) is obviously true or false. However, if such a person does not exist, it
is arguable that (35) is merely pointless. I return to this topic at some length
in chapters 4 and 6.
As we have already seen, Leibniz's law plays a central role in establishing
the degree to which noun phrases are interchangeable. The relation of identity
is of great semantic interest. It is, for example, important to our understanding
of a set of problems concerning the evaluation of certain types of argument.
An instance from an early paper by Montague-Kalish (1959) - see also the
discussion in Quine (1960) - is:

(36) The number of planets is 9. Kepler was unaware that the number of
planets exceeds 6. Therefore, Kepler was unaware that 9 exceeds 6.

Clearly, this is nonsense. In spite of the attention such puzzles have received,
their resolution is still to be achieved to everyone's satisfaction and my own
treatment, chapter 4, will, I hope, not be without interest.

1.9 Ambiguity

A most crucial factor in the semantic analysis of natural languages which has
a bearing on all of the kinds of issues referred to above is their pervasive
tendency towards ambiguity. It is an essential task of the theory to resolve
ambiguities as a sine qua non to the assignment of values to expressions. The
formulation of a mechanism for the resolution of ambiguity therefore plays a
dominant role in the grammar, being fundamental to the compositional view
of meaning.
Of course, problems of ambiguity have featured in language studies for
centuries. Linguists have been concerned with lexical ambiguity, as in:

(37) The seal was heavy.

and structural ambiguity, as in:

(38) John saw the girl in the street.

In current transformational studies, the emergence of Government and
binding theory has been largely dependent on the investigation of a particular
source of ambiguity, namely, ambiguity of coreference, illustrated by:

(39) John said that he had won the prize.

The solutions which have been proposed in the theory for such problems
have, in my view, enormous importance for semantics generally and I shall
discuss them at length in chapter 5.
Montague's treatment of ambiguity is not novel, although he puts more
emphasis than is customary on some types, notably intensional ambiguity, as
in the sentences discussed above involving the verb seek and ambiguities of
quantifier scope, as in:

(40) Every man loves a woman.

It could, however, be argued - I shall not do so - that his treatment is
unsatisfactory since it ignores the kind of lexical ambiguity reflected in (37).
In fact, Montague's work largely ignores the structure of particular noun
intensions, verb intensions, etc. Thus, he showed little interest in differences
in meaning between given adjective pairs, such as tall and short or adjective
types such as thin and correct, or verbs such as walk and run. Naturally,
given his predominantly formal interests, he was careful to make explicit
the differences between the uses of logical words such as necessarily and
possibly, or the and a/an. For the rest, the decision that a given lexical item
should have this or that meaning is simply taken in light of the particular
problem under investigation.
In my view, ambiguity is arguably the most important problem in se-
mantics and I shall devote most of chapter 4 to examining some of the types
involved. However, while I shall, in chapter 8, draw up some "semantic rules"
for individual words, the words themselves will be representative only and
will, therefore, be few in number. Further, the rules will relate to fairly for-
mal questions having to do with the satisfaction of propositional functions -
what is required for them to become true or appropriate propositions - and
will offer very little of interest to the study of ambiguity in the context of
lexicography.

1.10 Formal logic and natural languages

A semantics based, in large part, on truth-evaluations obviously requires the
support of a formal logic. In particular, such a logic may be expected to
provide a rigorously defined, nonambiguous language into which the natural-
language expressions can be translated, so that, if we have the semantics
for the logical expressions, and if the translations are precise, then, in an
important degree, we have the semantics for the natural-language expressions
also.
The logic which Montague employs, besides being intensional, is both
modal and tensed. However, it rests upon the standard logics of the propo-
sitional and predicate calculi and I shall, therefore, offer a fairly broad
discussion in chapter 3 of such logics as well as a more detailed account of
Montague logic proper. The version of the latter upon which my exposition
will be based is that set out in Montague (1973). I shall also draw on the
work of other scholars, most notably, Cresswell (1973, 1985), whose use
of lambda abstraction in linking syntactic and semantic representations, or
"logical form representations", is especially valuable.

1.11 Universal semantics

The brief outline of semantics provided in this chapter may give the impres-
sion that the primary concern is with English. While this book will be entirely
based on English data, it is certainly not intended to be solely about English
as an individual language. My reason for confining myself to English exam-
ples is merely a function of my status as a native speaker of the language
and a lack of confidence in my non-English intuitions.
Although two of Montague's papers contain the word "English" in their
titles, the theory is intended to be universal in the broadest possible sense. In-
deed, the title of his philosophically most important paper Universal grammar
(1970b) is more reflective of his programme than any other. For Montague, it
is vital that semantic theory be maximally general and thus, for him, the term
"universal grammar" embraces all languages, artificial as well as natural. In
this usage, "universal" does not refer, as is common in linguistics, to features
of all natural languages. Mathematical semantics, in its purest form, is not
at all concerned with establishing the actual universals found in natural lan-
guage. The theory is, therefore, not concerned with psychological issues of
language acquisition, nor yet with statistical probability. The focus is solely
upon formal universals and this focus is based upon the assumption that there
is no essential difference between the artificial languages of mathematics, in-
cluding logic, and those we call natural. This view does, in fact, have a long
tradition in philosophy; see for example, the classic paper by Church (1951).
Thus, Mathematical semantics is, in essence, the semantic theory of lan-
guage in general and, as such, is as much part of mathematics as it is of
philosophy or linguistics. In this spirit, its scope does not include the kind of
psychological dimension referred to above. However, from the stand-point
of natural-language studies, it may be that the exclusion of psychological
considerations is ultimately impoverishing - it is certainly not in line with
the currently popular interest in so-called "Cognitive Linguistics". Even so,
I shall not, in this book, make any attempt to bring such considerations into
the discussion.
The kinds of issues touched on in this chapter will constantly recur as
central themes in this book. I do not pretend to provide an exhaustive ac-
count of them, let alone an explanation of the philosophical problems which
surround them.
I now turn from the discussion of the scope of the semantics to an account
of some of the background notions from mathematics, logic, and linguistics,
which are required to appreciate both Montague's work in particular and the
enterprise of mathematical semantics in general. Naturally, to those readers
who are well versed in the various disciplines concerned, I shall have little
of interest to say in the preliminary chapters.
Chapter 2
Background notions from mathematics

2.1 Purpose of this chapter

In writing this chapter, I have been sharply conscious of the fact that it might
be considered by some readers to be superfluous. To some, its content will
be profoundly familiar. For such readers, it would probably be best to pass
over the next few pages altogether. To others, the discussion may appear
unnecessary because the next chapter on logic deals with most of the issues
commonly associated with formal semantics.
Indeed, it is not strictly necessary to know anything about mathematics in
order to follow the remarks, arguments and technical developments in this
book and there are parts of this chapter which will scarcely receive mention
in the sequel. My motivation for presenting a brief account of background
notions from mathematics is that these notions, besides being helpful in ap-
preciating the highly technical discussions of Montague and other scholars
working in mathematical linguistics, such as Cresswell, serve to place the
more familiar discussion of the next chapter in a different perspective and,
in doing so, considerably deepen one's understanding.

2.2 Sets

The development of set theory from the ideas of Cantor (1932) has been
among the major achievements of modern mathematics and its application in
much formal semantics is central.
Intuitively, a set is a collection of elements whose membership is deter-
mined by some characteristic which they share with all others in the set.
Thus, among numbers, the positive numbers form one set which contrasts
with that of the negative numbers. The wellformed expressions of a natural
language constitute a set which contrasts with the ill-formed ones and so forth.
Intuition also serves to assure us that set membership is not always clearly
defined. Many sets, e.g. the set of beautiful things, are "fuzzy". I shall not
discuss such sets here, though their existence will be taken for granted in the
sequel.
If S is any set and a, b, c, ... are elements of S, then S = {a, b, c, ...} and,
for any a contained in S, a is a member of S, written (a ∈ S).
The members of a set may be grouped in terms of the notion of a subset,
i.e. some grouping of elements in respect of a given characteristic. If S
contains the subset a, a is a "proper subset" of S, written (a ⊂ S), if and
only if, "iff", S contains elements not in a. If it turns out that there are no
elements in S which are not also in a, a is a mere subset of S, indicated by
(a ⊆ S). If everything in S is also in a and vice versa, then, obviously,
(a = S).
In accord with the above, we say that two sets, S and T, are "identical"
iff ((S ⊆ T) & (T ⊆ S)), i.e. if S and T have exactly the same members.
If a is a proper subset of S, then the set of elements in S which are not
in a constitute the "complement" of a, written "-a". Thus, if S is the set of
numbers and a is the subset of natural numbers, then the subset in S which
comprises the non-natural numbers is the complement, -a, of a. Similarly, if
S is the set of wellformed sentences of English and a is the subset of positive
sentences, then the subset in S of negative sentences is the complement of a.
In fact, it is common practice for the notion of a complement to be iden-
tified with negation both in formal logic and in linguistics. This is not, of
course, to be confused with the metalinguistic convention in which "com-
plement" is the label for a constituent acting as the object of a verb or
preposition, though that usage ultimately derives from the same notion of set
completion.
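The membership, subset, identity, and complement notions just defined map directly onto the set operations built into a modern programming language. The following Python fragment, offered purely as an informal restatement of the definitions above, uses arbitrary example sets S and A:

```python
# The set-theoretic notions just defined, restated with Python's
# built-in set type. S and A are arbitrary examples.
S = {"a", "b", "c", "d"}
A = {"a", "b"}

print("a" in S)             # membership, a ∈ S: True
print(A < S)                # proper subset, A ⊂ S: True
print(A <= S)               # subset, A ⊆ S: True
print(S - A == {"c", "d"})  # complement of A relative to S: True
print(A == {"b", "a"})      # identity ignores order of listing: True
```

Note that the complement is here computed relative to S, just as the complement of the natural numbers in the text is taken relative to the set of numbers.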
It is customary to accept the existence of a set containing no elements at all.
This set, the "empty", "void" or "null" set, is symbolised "0". Frequently,
the notation (S ∈ 0) is used to indicate that S has no members, i.e. is a
member of the null set. Thus, the first prime number > 2 which is even is an
element of 0. An analogous instance from English is provided by the set of
sentences consisting of a single word - ignoring ellipsis. Another example of
a member of the null set is the fabulous unicorn which figures so prominently
in philosophical and linguistic discussion.
From the above example, it will be apparent that, if U is the universal set,
then the value of any statement which is true is a member of U. Since the
complement, -U, is the empty set, it contains the values of all false statements.
Thus it is that the empty set, 0, is frequently employed to represent falsehood,
while " 1 " often symbolises truth.
Zermelo (1908) required that the empty set be a subset of every set. Hence,
any set S, in addition to its characteristic elements, a, b, c, ...,
contains the empty set as a subset. Further, since every set is a subset of
itself, the null set has exactly one subset, namely, itself.
The "intersection" or "meet" of two sets S and Τ is the subset of S which
is contained in Τ plus the subset in Τ contained in S. This subset may be
symbolised {S & T} and may be verbalised as the set whose members are
both in S and T.
Since a given element, a, must belong to both S and T to be in the
intersection of S and T, it follows that the intersection may contain fewer
elements than either or both of the intersecting sets. Given the set, S, of
books and the set, T, of works of art, then, clearly, the set of objects which
are both books and works of art is smaller than either S or T - at least this
is so in our world.
When two sets, S and T, are "joined", the result is the "logical sum" or
"union" of S and T, symbolised {S ν T}. The "joint" of two sets contains
those elements which are in either or in both of those sets. Thus, the union
of two sets is always greater than is either one in isolation. The set of things
which are either books or works of art or both clearly is more numerous than
either set taken alone.
It is evident from the above that the relations of intersection and union
correspond, in the field of sets, to the relations of conjunction and disjunc-
tion in natural languages. It is, further, clear that intersection corresponds to
logical conjunction and that the status of union is precisely that of inclusive
disjunction, not the exclusive variety, see chapter 3.
Intersection and union correspond mathematically to multiplication and
addition - they are commonly called the "logical product" and "logical sum"
respectively - and since their natural-language equivalents are conjunction
and disjunction, it is not unusual, e.g. Reichenbach (1947), to symbolise
and as * and or as +. To see the plausibility of these equations, it is only
necessary to consider the result of adding 0 to 1 and of multiplying 0 by 1.
Obviously, (1 + 0) = 1, while (1 * 0) = 0. Any disjunction, (p or q), is true,
i.e. has the value 1, if either disjunct is 1. Any conjunction, (p & q), is false,
i.e. has the value 0, if either conjunct is 0.
Thus, using p and q as propositional variables, the following equivalences
hold:
(a) (p & q) = (p * q);
(b) (p ∨ q) = (p + q).
These equivalences are particularly important in the context of certain
rules of equivalence, especially the distributive laws (see below for further
discussion and chapter 3).
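The correspondence between intersection and the logical product, and between union and the logical sum, can be checked directly. The brief Python sketch below is an informal illustration only; note that the Boolean sum must be capped at 1 (so that 1 + 1 counts as 1), which is the one point at which truth-value addition departs from ordinary arithmetic:

```python
# Intersection and union beside their arithmetical counterparts.
S = {1, 2, 3}
T = {2, 3, 4}
print(S & T == {2, 3})        # intersection, the "logical product": True
print(S | T == {1, 2, 3, 4})  # union, the "logical sum": True

# With truth values coded as 1 and 0, conjunction behaves like
# multiplication; disjunction behaves like addition, capped at 1.
for p in (0, 1):
    for q in (0, 1):
        conjunction = p * q
        disjunction = min(p + q, 1)
        print(p, q, conjunction, disjunction)
```

The printed table reproduces the familiar truth tables for and and or with 1 for truth and 0 for falsehood.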
When two sets, S and T, share no elements in common, they are "disjoint".
Thus, the set of natural numbers and the set of English sentences are disjoint:
their intersection is the null or void set. The term "disjoint" is thus frequently
used in the sense 'distinct'.
Since it is possible to confuse the names of set elements with the elements
themselves, it is common practice to interpret, say, "S = {a, b, c, ...}" as
standing for the set of distinct objects denoted by a, b, c, ... rather than the
set of names themselves. Following Carnap's (1961) usage, a set is taken "in
extension" unless otherwise specified.

2.3 The cardinality of a set

Using Russell's illustration (1919), an exceedingly simple way of viewing
the operation of addition is in terms of bringing the members of one set into
a one-to-one correspondence with the members of an ordered set. Thus, if
we take the set of natural numbers, N = {0, 1, 2, 3, ..., n, ...}, as being an
ordered set containing subsets, e.g. {1,2,3,4,5}, we may bring another set,
say the fingers of one hand, into a one-to-one correspondence with it. This
process in which the elements of the one set are paired with those in the
other establishes a relation of "similarity" between the two.
Among the real numbers, the set of rational numbers is, of course, infinite
since there is no highest number. However, since, by definition, that set
can be counted, it is said to be "denumerable" and any set which can be
brought into a one-to-one correspondence with it is a "denumerable set", or a
"denumerable infinity". In contrast, the set of irrational numbers, the "endless
decimals", as Cantor showed, is nondenumerable.
To establish the "cardinality" of a set, we have only to establish the simi-
larity relation just referred to between it and some subset of the set of natural
numbers. If a set contains no members, as already mentioned, it is said to be
"empty", or "null" - equivalently, "void" - with cardinal number 0. If a set
has only one member, it is called a "unit set" and is usually symbolised "1".
In ordinary counting, it is universal practice to take the last member of a
natural number subset which is brought into a one-to-one relation with the
members of some other set as the number of that set. Thus, we say that
5 is the number of fingers on one hand, rather than specifying the set as
{1,2,3,4,5}. When we employ this abbreviatory convention, we arrive at the
cardinal number of the set or the cardinality of the set.
Of course, since the subsets of a set may themselves be collected into a set,
it is possible to count not just the individual elements, but their groupings
as well. The set of all subsets
of a set, including the set itself and the void set, is called the "power set".
Thus, it follows that the cardinality of a power set is 2ⁿ, so that, for example,
a set with three elements will have a power set whose cardinal number is the
cube of 2, i.e. 8. It is to be noted here that the ordering within subsets is not
significant. Hence, {a,b} = {b,a}, etc. To calculate all possible permutations
is a more complex and, for our purposes, irrelevant process.
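The 2ⁿ count can be confirmed by generating a power set explicitly. In this informal Python sketch, `power_set` is a helper defined only for illustration:

```python
# The power set of a three-element set has 2**3 = 8 members,
# counting the empty set and the set itself; order within
# subsets is immaterial.
from itertools import combinations

def power_set(s):
    elems = list(s)
    return [set(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

P = power_set({"a", "b", "c"})
print(len(P))      # 8, i.e. the cube of 2
print(set() in P)  # True: the void set is included
```

Since subsets are unordered, {a,b} and {b,a} are generated only once each, which is why the count is 2ⁿ rather than the larger number of permutations.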
It is important to note that in order to know that two sets, S and T, are
cardinally equivalent - have the same cardinal number - it is not always
necessary to know what that number is. To illustrate, again using an example
from Russell (1919): assuming that a given shoe-shop has no broken pairs in
stock, we know that the set of right shoes is cardinally equivalent to the set of
left shoes. Another illustration of the same point, this time from Reichenbach
(1947) is provided by the seats of a theatre and the size of the audience. If
all the seats are taken and no patron is standing, we know that the set of
seats and the set of patrons have the same cardinal number, even if we are
ignorant of the number involved.

2.4 Product sets

By a "product" or "Cartesian" set, {S X T}, is understood the set of all


ordered pairs, < s, t > , where s e S and t e T. The notion of an ordered pair
will be defined below in terms of the notion, Function. For the moment, it
is sufficient to remark that: if a pair is ordered, for example, < s.t > , then
that pair is not equivalent to any other ordering of the same elements, e.g.
< t,s >.
Mathematically, the members of a product set, {S × T}, are such that
each member from S occurs as the first member paired with each element in
T. Thus, if S has three elements and T has five, then each element of S will
appear in five separate pairs. Thus, if S = {a,b,c} and T = {d,e,f,g,h}, then
{S × T} comprises the pairs:

(1) {< a,d >, < a,e >, ... < a,h >; < b,d >, < b,e >, ... < b,h >;
< c,d >, < c,e >, ... < c,h >}.

Hence, the cardinality of the product set, {S × T}, is the cardinal number of
the one set multiplied by that of the other.
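The pairing described above, and the resulting cardinality, can be reproduced directly with the element names of example (1). The Python fragment below is an informal illustration only:

```python
# The Cartesian product {S X T}: every element of S paired with
# every element of T; its cardinality is the product 3 * 5.
from itertools import product

S = ["a", "b", "c"]
T = ["d", "e", "f", "g", "h"]
pairs = list(product(S, T))

print(pairs[:3])   # [('a', 'd'), ('a', 'e'), ('a', 'f')]
print(len(pairs))  # 15 = 3 * 5
```

Each of the three elements of S appears as the first member of exactly five pairs, just as the text describes.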
2.5 Relations and functions

The notion of a product or Cartesian set provides a natural background for
the consideration of relations and functions.
Essentially, a relation holds between two elements in such a way as to
bring them into a pair, which may or may not be ordered.
Thus, a relation like < brings the elements of the set, N, of numbers into
a set of ordered pairs such that one is less than the other. Similarly, the
relation, is the author of, brings the elements of the set, S, of writers into an
ordered relation with elements of the set, T, of written works, such that, for
each pair, the one is the author of the other. It is to be observed that, in this
example, the relation, is the author of is not equivalent to is an author of.
In contrast to these examples, the relations, = and is married to, bring
two elements into an unordered pair. If "a = b" is true, or, if "a is married
to b" is true, then the inverses of these statements are also true.
Symbolically, if we use R to indicate a relation and the variables x and
y for the elements concerned, we may write "y R x" to mean that y stands
in the relation, R, to x. Further, since a set of ordered pairs of elements is
a subset of the relevant product set, we may say that, for the relation in
question, (R ∈ {S × T}) or, if one set is involved, (R ∈ {S × S}).
When a relation holds between the members of one set, say the set, N,
of numbers, it is said to hold "in" that set. When the relation holds between
the members of two sets, it is said to be "from" the one set "into" the other.
Hence, < is a relation in the set of numbers, while is the author of is a
relation from the set of writers into the set of written works. A relation of
one set into itself is often referred to as a "transformation".
It is to be remarked that, in stating that a relation, R, holds, the ordering
of the variables in the symbolic expression is opposite to what would seem
to be intuitively natural. Instead of "y R x", we might have expected "x R y".
The former ordering is, however, conventional in mathematics. Thus, given
the numbers 1 and 2 and the relation <, the ordered pair < 2, 1 > satisfies
the relation since "1 < 2" is a true statement. Similarly, if y = Dickens and
x = Oliver Twist, then the pair < Oliver Twist, Dickens > satisfies the
relation, is the author of.
The conventional ordering described above seems more natural when con-
sidered in respect of the interpretation of graphs. If the number of elements in
a relation is finite, we may exhibit them in graph form with the intersections,
or lattice points, indicating each ordered pair. As is customary, the horizontal
axis is taken to be the x-axis and the vertical the y-axis. On the x-axis, the
independent variables are written and, on the y-axis, their dependent coun-
terparts. In reading such a graph, it is practice to read the elements on the
x-axis first.
The elements in a relation which comprise the first components of the
associated pairs - the elements on the x-axis - are known as the "domain" of
the relation, while those making up the set of second components - elements
on the y-axis - are referred to as its "range" or its "value". Thus, if the
relation, R, is is the author of, the domain is the set of written works and the
range is the set of writers. Finally, the set of elements which together make
up the domain and the range is called the "field" of the relation.
Obviously, the relation, <, is potentially multi-valued. Thus, if U is a set
of numbers, and x = n, for some n, there may be a number of elements, y,
which pair with n to satisfy the relation. Similarly, the relation, is an author
of, will be multi-valued since books, etc. may be co-authored - hence the
indefinite article, an, in the English name for this relation.
A relation which is uniquely valued is conventionally known as a "func-
tion". Thus, for instance, the relation between any number and its square,
cube, etc. is a function, as is the relation, husband of, in a monogamous
society.
If we regard the set of pairs satisfying a relation as constituting the re-
lation - an equation which is proper since an exhaustive list of such pairs
constitutes a definition of the relation - we may rephrase the above definition
of a function as a uniquely valued relation by saying that a function is a set
of ordered pairs sharing no common first member. As an illustration, let U
= { 1,2,3 }. Then the cube-function defined over U is: { < 1,1 >, < 2,8 >,
< 3,27 > }. By contrast, the relation, is the square root of, is not a function
since any positive number has both a positive and a negative square root.
Hence, for instance, 4 has the square roots 2 and -2, so that 4 appears as the
first member of two distinct pairs.
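The no-repeated-first-member test is easy to mechanise; a minimal Python sketch, in which the helper is_function and the toy data are invented for illustration:

```python
def is_function(pairs):
    """A relation (a set of ordered pairs) is a function iff no two
    pairs share the same first member."""
    firsts = [first for first, _ in pairs]
    return len(firsts) == len(set(firsts))

U = {1, 2, 3}
cube = {(x, x ** 3) for x in U}     # {(1, 1), (2, 8), (3, 27)}
assert is_function(cube)

# A multi-valued relation repeats a first member: 4 has the square
# roots 2 and -2, and 9 has the square roots 3 and -3.
square_root_of = {(4, 2), (4, -2), (9, 3), (9, -3)}
assert not is_function(square_root_of)
```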
This is a useful way of looking at functions, in the context of natural
language, since it emphasises the functional vs. relational status of given
expressions. Thus, Oliver Twist is the first member of only one pair defining
the function, is the author of, i.e. < Oliver Twist, Dickens >, since the
book in question had only one author. By contrast, the expression written
by denotes a relation since, for any argument, it may have several values.
Thus, written by includes many pairs sharing the same first member, e.g.
< Dickens, Oliver Twist >, < Dickens, Great Expectations >, ...
Functions may be generalised symbolically as F(x), where F is the function
variable and χ the domain variable. Thus, if F is the cubing function, F(3)
= 27. If F is the function, husband of, then F(Mrs. Thatcher) = Dennis
Thatcher. This notation is the one customarily employed by logicians. Further,
since the value of F(x) is y, it is common to employ the function expression
in place of y itself. Thus, F(x) = y.
A function such as cube is a simple function in the sense that only a single
operation is performed on a given argument. It is obviously possible to create
complex functions, which involve two or more operations. Such complex
functions may be viewed as comprising functions whose domain contains
other functions and their domains. As illustrations, consider the following, in
which F is the squaring function and G the factorialising function, F(G(2)),
i.e. 4; G(F(2)), i.e. 24.
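The two composites just mentioned can be computed directly; a minimal Python sketch with F and G as in the text:

```python
from math import factorial

def F(x):
    """The squaring function."""
    return x * x

def G(x):
    """The factorialising function."""
    return factorial(x)

assert F(G(2)) == 4     # G(2) = 2! = 2, then F(2) = 4
assert G(F(2)) == 24    # F(2) = 4, then G(4) = 4! = 24
```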
Complex functions are not, of course, restricted to mathematics. An ex-
cellent example from kinship relations, provided by Thomason (1974), is the
following in which F = mother of and G = father of Combining these func-
tions with respect to an appropriate argument yields either F(G(x)) or G(F(x)).
Given the definitions, the first of these expressions yields the paternal grand-
mother of x, while the value of the second is the maternal grandfather of x.
In theory, there is no limit to the degree of complexity which such functions
may assume. In stating the semantic rules for representative English words,
chapter 8, many of the functions - values assigned to words - are complex
functions. Thus, for example, seldom denotes a complex function, having,
as its domain, propositions standing as argument to the function denoted by
often.
It is to be remarked at this stage that, although the term "function" is re-
served in mathematics and in the writings of Montague for uniquely valued
relations, in other disciplines, including formal logic, the expression is some-
times used more loosely for relations with multivalued arguments. In fact,
some scholars, including Reichenbach (1947), employ the term "function" as
a synonym for "predicate" or "verb". Thus, run is a one-place function, kill
is a two-place function.

2.6 Equivalence relations

From the viewpoint of individual members of a set, the important relation
between them and the set itself is membership. Thus, we say "s ∈ S" just so
long as there is some characteristic property by virtue of which s is a member
of S, even if that property is only membership itself.
When we consider relations between sets and subsets, the most fundamen-
tal is that of equivalence or equality. Thus, the expression "S = T" claims
that:

(D.1) s ∈ S iff s ∈ T & t ∈ T iff t ∈ S.

Of course, since every member of S is in S, the definition of
equivalence just given implies that S = S.
When we say that a set is equal to itself, we assert that the equivalence
relation is a "reflexive relation". Equivalence, however, is not the only re-
flexive relation. Thus, parallel to is also usually held to be reflexive since
any line is taken to be parallel to itself. Similarly, the relation as big as, or
that denoted by born on the same day as, are also reflexive relations.
In addition to being reflexive, it is intuitively obvious that equality is
"transitive", by which is meant:

(D.2) if set S = T and T = U, then S = U.

If line a is parallel to line b and b is parallel to line c, then a is parallel to c.


If x is born on the same day as y and y on the same day as z, then, clearly,
x is born on the same day as z.
Finally, it is apparent that the definition of equality involves the assumption
of symmetry. We say that a relation, such as equivalence, is a "symmetrical
relation" if it holds in both directions. Thus:

(D.3) if S = T, then T = S.

If line a is parallel to line b, then b is parallel to a. If x is born on the same
day as y, then y is born on the same day as x.
Relations, such as parallel to, which share these properties with
equivalence are called "equivalence relations". Such relations will always
have the properties of reflexivity, transitivity and symmetry.
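For a finite relation, the three defining properties can be checked exhaustively; a minimal Python sketch (the helper names and the toy population are invented), using born on the same day as as the test case:

```python
def is_reflexive(R, A):
    return all((x, x) in R for x in A)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R
               for (x, y) in R for (z, w) in R if y == z)

def is_equivalence(R, A):
    return is_reflexive(R, A) and is_symmetric(R) and is_transitive(R)

# A toy population: each person is a (name, birthday) pair.
people = {("x", "1 May"), ("y", "1 May"), ("z", "2 May")}
same_day = {(p, q) for p in people for q in people if p[1] == q[1]}
assert is_equivalence(same_day, people)
```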
A relation which is not reflexive is sometimes called an "irreflexive" re-
lation. Thus, < is irreflexive, since, if (x < x) were true, then (x = x) would
be false, which is patent nonsense. As well as being irreflexive, < is clearly
"nonsymmetrical". This is so because, if a < b, then the reverse cannot hold.
Although < is irreflexive and non-symmetrical, it does have the property
of transitivity since, if a < b and b < c, then a < c.
Clearly, a relation which is irreflexive, non-symmetrical and intransitive
is not an equivalence relation. Thus, son of is a nonequivalence relation, as
is parent of. Hence the need for the prefix grand in these and similar cases
to express a transitive-like relation as in:
(2) John is the son of Peter and Peter is the son of Jack. Therefore, John
is the grandson of Jack.

(3) Mary is a parent of Jean and Jean is a parent of Sally. Therefore,
Mary is a grandparent of Sally.

It is to be observed, here, that the question of whether a given relation
has any one or all of the properties mentioned above is not always a simple
yes/no matter, especially in respect of relations denoted by natural language
expressions. Thus, the relation brother of, while it is irreflexive and transitive,
may or may not be symmetrical, as can be seen from the following examples:

(4) *John is his own brother.

(5) John is a brother of Jack and Jack is a brother of Fred. Therefore,
John is a brother of Fred.

(6) *John is a brother of Mary. Therefore, Mary is a brother of John.

Following Reichenbach's practice (1947), we may use the prefix "meso-"
to indicate such inbetween values. Hence, brother of is a mesosymmetrical
relation.
It is customary to further classify relations according to the correspondences
between their arguments. Thus, a relation like husband of is a one-to-one relation
in a monogamous society and is thus a function in Montague's strict usage.
The relation is a finger of is also, stricto sensu, a function since it is many-
to-one. The relation native language of, on the other hand, is one-to-many,
while pupil of is many-to-many.
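This classification, too, can be mechanised for finite relations by looking for repeated first and second components; a minimal Python sketch with invented miniature examples:

```python
def classify(pairs):
    """Classify a finite relation by looking for repeated first and
    second components among its ordered pairs."""
    firsts = [a for a, _ in pairs]
    seconds = [b for _, b in pairs]
    rep_first = len(firsts) != len(set(firsts))     # a first member recurs
    rep_second = len(seconds) != len(set(seconds))  # a second member recurs
    if rep_first and rep_second:
        return "many-to-many"
    if rep_first:
        return "one-to-many"
    if rep_second:
        return "many-to-one"
    return "one-to-one"

husband_of = {("John", "Mary"), ("Peter", "Sue")}
finger_of = {("thumb", "John"), ("index finger", "John")}
assert classify(husband_of) == "one-to-one"
assert classify(finger_of) == "many-to-one"
```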
Finally, it is worth noting that natural languages do not always have spe-
cific terms to denote given relations. In English, orphan denotes the child of
deceased parents, but there is no corresponding term for the relation of being
a bereaved parent.

2.7 Boolean algebras

In a sense, we may consider our discussion of sets as a discussion of an
algebra. In that algebra, the elements are sets and the operations are those on
sets. In general, any algebra is a system (A,F) consisting of elements, A, and
operations, F, defined on A. Thus, if A = the set of numbers and F = { + ,
* , - , / } and the relation = is defined, then (A,F) is the algebra of arithmetic.
An algebra which is an algebra of sets is known as a "Boolean algebra"
after George Boole (1854). The elements of the algebra may be indifferently
interpreted as actual sets, or as propositions, as sentences of a natural lan-
guage, etc. (Langer, 1953, offers an amusing demonstration of the fact that
the slicing of a cake can be thought of as a Boolean algebra!).
In fact, it is the case that any system in which the relation, ≤, is defined
and which has the three operations, disjunction, conjunction and negation,
i.e. ∨, & and -, is a Boolean algebra if it satisfies the following postulates.

(7) Boolean postulates

a. 1 and 0 are distinct.

b. (x & y) = (y & x); (x ∨ y) = (y ∨ x).

c. (x & (y & z)) = ((x & y) & z); similarly for disjunction.

d. (x & (y ∨ z)) = ((x & y) ∨ (x & z)).

e. (x & -x) = 0; (x ∨ -x) = 1.

f. (x ∨ 0) = x; (x & 1) = x.
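Since the two-element algebra over {0, 1} is finite, the postulates can be checked exhaustively; a minimal Python sketch with conjunction, disjunction and negation interpreted as min, max and complement (helper names invented):

```python
# Interpret conjunction, disjunction and negation over {0, 1}
# as min, max and complement respectively.
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)
def neg(x): return 1 - x

values = (0, 1)
for x in values:
    assert conj(x, neg(x)) == 0 and disj(x, neg(x)) == 1       # (7e)
    assert disj(x, 0) == x and conj(x, 1) == x                 # (7f)
    for y in values:
        assert conj(x, y) == conj(y, x)                        # (7b)
        assert disj(x, y) == disj(y, x)
        for z in values:
            assert conj(x, conj(y, z)) == conj(conj(x, y), z)  # (7c)
            assert disj(x, disj(y, z)) == disj(disj(x, y), z)
            assert conj(x, disj(y, z)) == disj(conj(x, y), conj(x, z))  # (7d)
```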

A selective commentary on this axiomatic system seems appropriate.


Postulate (7a) establishes the disjoint status of 1 and 0. It is the fact that
these elements are disjoint which, of course, permits postulate (7e). Since 1
and 0 = truth and falsehood respectively, postulate (7a) also requires that
these values be distinct, thereby guaranteeing the law of excluded middle.
Postulate (7b) is the familiar rule of commutation. Thus, in arithmetic, the
operations of addition and multiplication are commutative since the result of
adding two numbers or of their multiplication is indifferent to the ordering
involved. Similarly, in a natural language, the operations of disjunction and
conjunction are commutative, provided they are of the logical variety (see
chapters 3/8).
Postulate (7c) is the rule of association. According to this rule, groupings
are irrelevant in multiplication and addition and, indeed, the associative law
permits the equation of bracketed and bracket-free expressions under the
appropriate operations. In natural languages, the operations of disjunction and
conjunction are also generally associative, as can be seen from the following:

(8) John came and Mary and Jane left. = John came and Mary left and
Jane left. (So too for or.)

The fourth postulate is the law of mathematical distribution. It is to be
noted that formal logic and natural languages allow for a second distributive
law in which ∨ and & supplant each other in (7d). Clearly, this second law
does not hold in arithmetic since (x + (y * z)) ≠ ((x + y) * (x + z)).
The postulates in (7e) proclaim, on the one hand, the contradictory status
of a conjunction of a proposition and its negation and, on the other, the
tautological status of the disjunction of a proposition and its negation.
In terms of sets, it is obvious that any element, a, cannot belong both to
a set and to its complement. That is to say: the set of elements belonging
to S and -S is the void set, 0, and any statement to the contrary is false,
i.e. has the value 0. Conversely, the set of elements belonging either to S or
its complement -S is the universal set 1 and any statement to that effect is
true, i.e. has the value 1. I return again to the question of contradictions in
natural languages. Here, it is to be noted that, in spite of their apparent lack
of informativity, tautologies such as (9) and (10) occur fairly frequently in
natural discourse as expressions of emotions such as resignation.

(9) Either the government will fall or it will not fall.

(10) If Sally is upset, she is upset.

The postulates in (7f) are especially interesting from the viewpoint of
natural language since they enable the systematic identification of conjunction
with multiplication and disjunction with addition. In chapter 3, I shall provide
some discussion of and and or in the context of formal logic. It will be seen
that, in that context, two statements joined by and result in a false statement
if either is false. If two statements are joined by or, the result is a false
statement only if both disjuncts are false.
Let x = 1 and y = 0 and let 1 and 0 represent truth and falsehood
respectively. Since (1 * 0) = 0, while (1 + 0) = 1, it follows that truth
times/and falsehood equals falsehood and truth plus/or falsehood equals truth.
Thus, as noted earlier, multiplication equals and and addition equals or. It
is, however, important to note that, if the conjuncts are not propositions, the
(and = *) equation fails. Thus, in English, and is often used to mean + in
informal statements of arithmetic, as in:

(11) 7 and 7 make 14.

Like any formal system, including a logical system, Boolean algebras must
be consistent and complete. A system is consistent if and only if, for any
formula a which can be derived in it, the negation -a is not also derivable.
For the system to be complete, any valid wellformed formula a which is not a
postulate must be derivable in it as a theorem. This is possible only, of
course, if the postulates are valid - true under any interpretation. It also
requires that all rules of derivation be truth preserving. I return to rules of
derivation, or inference, in chapter 3.

2.8 Isomorphisms and homomorphisms

Any two algebraic systems may be related in a number of ways. The simplest
cases to consider are those in which there is one operation only, which may
arbitrarily be treated as though it were either addition or multiplication. Since
it is convenient, at this point, to consider number systems, the additive or
multiplicative operations do, in fact, correspond to their arithmetical uses.
However, it is important to realise that this correspondence is not necessary.
Thus, the expressions "a * b" or "a + b" may be taken as designating any
operation on two arguments, so that "*" = "+".
Given two systems, say of numbers, one, S, may correspond precisely
to the other, S', just in case we are able to associate each element in S in
a one-to-one relation with each element in S' and if, further, the operation
is preserved under the correspondence. Such a relation is referred to as an
"isomorphism" and may be simply illustrated by the following case.
Let S be the set of natural numbers and S' be the set of their common
logs - logs to the base 10. Then, for every a and b in S - for arbitrary a and
b - there will correspond a unique element a', b' in S'. If, for example, a
= 10, then a' = 1. If b = 100, then b' = 2. If, now, the operation in S is
multiplication and that in S' is ordinary addition, the relation between S and
S' will be isomorphic, as Table 1 testifies.

Table 1. An isomorphism

S <—> S'
a = 10 <—> a' = 1
b = 100 <—> b' = 2
(a * b) = 1000 <—> (a' + b') = 3

Since the isomorphism established above is perfect, there is, from an
algebraic point of view, no distinction to be made between the two systems
S and S'. They are, in effect, equivalent. It should be noted, however, that
this equivalence holds only because no account is taken of the properties of
the elements and operations. Thus, it makes no difference to the isomorphic
status of the relation that the operation is multiplication in the one case and
addition in the other.
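The correspondence in Table 1 can be checked numerically; a minimal Python sketch using the standard library's log10, with variable names mirroring the table:

```python
from math import log10, isclose

a, b = 10, 100
a_prime, b_prime = log10(a), log10(b)    # 1.0 and 2.0

# Multiplication in S corresponds to addition in S'.
assert isclose(log10(a * b), a_prime + b_prime)    # log10(1000) = 3.0

# The operation is preserved for arbitrary pairs, not just the two above.
for m in range(1, 20):
    for n in range(1, 20):
        assert isclose(log10(m * n), log10(m) + log10(n))
```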
A more general and, therefore, weaker relation between two systems is
that in which the correspondence between the elements of the two is not one-
to-one, although the operation is still preserved under the correspondence. A
relation of this kind is called "homomorphic". The following illustration is
from Moore (1962).
Let S be the set of natural numbers and S' be the set of integers comprising
just 1 and -1. Let every even number in S be associated with 1 in S' and every
odd number in S with -1 in S'. If the operation in S is ordinary addition and
that in S' is ordinary multiplication, then these operations will be preserved
under the correspondence, as Table 2 shows.

Table 2. A homomorphism

S —> S'
a = 1 —> a' = -1
b = 2 —> b' = 1
(a + a) = 2 —> b', i.e. (a' * a') = 1
(b + b) = 4 —> b', i.e. (b' * b') = 1
(a + b) = 3 —> a', i.e. (a' * b') = -1
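Moore's parity example can be verified exhaustively over a small range; a minimal Python sketch in which the hypothetical helper h carries out the even-to-1, odd-to--1 association:

```python
def h(n):
    """Associate every even number with 1 and every odd number with -1."""
    return 1 if n % 2 == 0 else -1

# Addition in S is preserved as multiplication in S'.
for a in range(20):
    for b in range(20):
        assert h(a + b) == h(a) * h(b)
```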

As observed in the previous chapter, the homomorphic relation is important
in mathematical semantics since it plays a fundamental part in the
reconstruction of the relation between meaning and expression. Here, it is
necessary only to point out that, since the isomorphic relation is narrowly
defined, i.e. insists upon a one-to-one relation between the elements, it is to
be viewed as a type of homomorphism. This is so because there is noth-
ing in the latter to require that the correspondence be literally many-to-one
and, thus, one-to-one relations are included among those with homomorphic
status.

2.9 Effective processes

In chapter 1, I referred to the work of Gladkij and Mel'cuk (1969) in which
language is identified with the set of rules which map objects on the plane
of meaning onto objects on the plane of expression. The ultimate aim of the
linguist should be the description and explanation of this set of rules.
The system of rules itself may, as Gladkij and Mel'cuk suggest, be re-
garded as a function, or "mapping", with a very complex structure. To de-
scribe it in its entirety requires that we identify and describe all of those
simple functions which combine to make up the whole and, ultimately, to
account for the manner of their combination. Among others, we should de-
scribe:

a. The function which maps semantic descriptions onto syntactic structures.
b. The function which maps syntactic structures onto actual strings of words.
c. The function which maps abstract phonological forms onto their phonetic
realisations.
d. The function which maps suprasegmental features, such as intonation, onto
surface representations.

The preoccupation of semantic enquiry - and, hence, of this book - is,
of course, with the first two of these tasks and the methodology which it
employs must be algorithmic, that is to say, it must take the form of a
step-by-step procedure. In the terminology of computer science, the descrip-
tion of the semantic-to-form function must be "effective".
The study of effective procedures is a specialised branch of modern math-
ematics to which a brief description cannot do justice - Curry (1976) gives
a detailed exposition. However, the fundamental principle is simple enough.
A process is said to be effective if and only if it leads to the achievement
of a given goal, say the syntactic description of a sentence, from an initial
set of elements, say abstract syntactic categories, in a finite number of steps.
Such a process, moreover, may involve no ambiguity which requires appeal
to an infinite class of possibilities.
If we call the elements on which the process can operate "admissible
elements", then we can outline what is required of such a system as:

a. We must know exactly which elements are admissible elements.
b. We must know what transformations apply to the elements and their effect.
c. We must know when the goal has been reached, i.e. when the process is
complete.

Given these requirements, the effective process is an algorithmic system
consisting of a set of initial elements, B, often called the "base", and a set of
transformations, or "commands", M, referred to as the "generators". Among
the elements in M, will be a subset of transformations which terminate a
derivation and one which initiates the derivation. A derivation is complete
when it outputs a terminating transformation. If no further transformations
can be applied to a given string which has not terminated, we say that the
derivation is "blocked".
As a simple example of such an algorithm, based on Curry, consider
the following reduplicating system in which the members of B are upper-
case letters and the lower-case equivalents are auxiliary symbols - symbols
which mark given elements or strings of elements as legitimate inputs to
given commands. The goal is simply to copy an expression, E, consisting of
elements in B.

(12) A simple copying algorithm


a. aA ==> AaA.
b. XxY ==> XYx.
c. a ==> -.
d. - ==> a.

Here, command (12d) is the start-command, having the void set to the left
of the arrow, and (12c) the stop command, having void to the arrow's right.
Command (12b) merely ensures that elements and auxiliary-marked elements
interchange to permit the operation of (12c). Command (12a) generates the
requisite number of copies and, since it may take its output as input, it may
be applied an infinite number of times. In fact, of course, the number of
applications of (12a) will be determined by the number of elements in E.
Thus, if E = AA, its copy will be "AA", and so forth.
The above algorithm makes explicit reference to an alphabet in B con-
sisting only of A. It should, however, be clear that the number of admissible
elements is not limited. Thus, if E is the English word cat, then we can derive
its copy simply by expanding the algorithm to allow for the fact that three
letters are involved as follows:

(13) a. cC ==> CcC.
b. aA ==> AaA.
c. tT ==> TtT.

This expansion will yield:

(14) CcCAaATtT.

which will input to the interchanging command eventually to yield:

(15) CAcCaATtT.
This string will input to the terminating command, suitably expanded, which
erases the auxiliary alphabet to yield:
(16) CATCAT.

The algorithmic technique is, of course, very familiar to the modern lin-
guist in the form of a standard phrase-structure grammar and its application
will be assumed in the remainder of this study.
Chapter 3
Background notions from formal logic

3.1 Scope of this chapter

The following discussion of background notions from formal logic has, of
necessity, to be very brief and, in consequence, sometimes rather superficial.
The science of formal logic is of immense complexity and contains many
subdisciplines, including, according to some, e.g. Whitehead-Russell (1910-
13), mathematics itself. It is also fitting to add that there is a vast number
of works both introductory and advanced on particular logics and that, since
some of these at least have a very wide readership across several disciplines,
including linguistics, it is not appropriate to attempt the repetition of their
content here.
I shall, therefore, offer a broad account only of those systems which are
particularly relevant to linguistic semantics. Only in my discussion of Mon-
tague's intensional logic (1973) shall I attempt a reasonably detailed presen-
tation.

3.2 The calculus of propositions

The calculus of propositions includes a system whose atomic elements are
propositions and whose operations are the functions: conjunction, negation,
disjunction, implication and equivalence. The propositional variables are taken
from the set { p, q, r, . . . } and the operations are symbolised by: &, ∨,
Λ, —> and <—>, respectively, where Λ is exclusive disjunction and <—>
symbolises "iff".
The syntactic formation rules of the system restrict the wellformed for-
mulae, wff, as follows.

(1) Formation rules

(R.a) p is a wff.
(R.b) If p and q are wffs, then (p & q) is a wff, as are: -p, (p ∨ q), (p Λ
q), (p —> q), (p <—> q).

(R.c) Only formulae constructed in accordance with the formation rules
are wff.

Complexes of atomic propositions are punctuated by parentheses, as in:
(p & (q & r)), ((p ∨ q) ∨ r), . . .
The logical constants are defined by means of truth tables which display
the truth values, {0, 1}, to be assigned to nonatomic propositions on the basis
of the values of the atomic propositions.

Table 1. Conjunction, &

p q (p & q)
1 1 1
1 0 0
0 1 0
0 0 0

Table 2. Negation, -

p -p
1 0
0 1

Table 3. Inclusive disjunction, ∨

p q (p ∨ q)
1 1 1
1 0 1
0 1 1
0 0 0

Table 4. Exclusive disjunction, Λ

p q (p Λ q)
1 1 0
1 0 1
0 1 1
0 0 0

Table 5. Implication, —>

p q (p —> q)
1 1 1
1 0 0
0 1 1
0 0 1

Table 6. Equivalence, <—>

p q (p <—> q)
1 1 1
1 0 0
0 1 0
0 0 1
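Truth tables of this kind can be generated mechanically; a minimal Python sketch in which 1 and 0 stand for truth and falsehood as in the tables, and the dictionary keys are informal ASCII renderings of the connectives:

```python
from itertools import product

# Each connective is modelled as a two-place Boolean function.
connectives = {
    "p & q":   lambda p, q: p and q,
    "p v q":   lambda p, q: p or q,
    "p A q":   lambda p, q: p != q,         # exclusive disjunction
    "p -> q":  lambda p, q: (not p) or q,   # material implication
    "p <-> q": lambda p, q: p == q,
}

for name, op in connectives.items():
    print(name)
    for p, q in product((1, 0), repeat=2):
        print(p, q, int(op(bool(p), bool(q))))
```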

The linguistic literature contains many discussions of the relations between
these constants and their natural language counterparts - see McCaw-
ley (1981) for a particularly detailed exposition. Here, I note a few facts
only.
English and frequently appears to be used nonlogically as a temporal
connective, as in:
(2) The plane took off and circled the airport.

That the conjunction is here nonlogical is confirmed by the fact that the
meaning changes if the order of the conjuncts is reversed, as in:

(3) The plane circled the airport and took off.

While it would be possible to regard and in such cases as a pragmatic operator,
meaning 'and then', it is historically more plausible to attribute the tem-
poral basis of the relation between the conjuncts to features of their semantic
representation. Perhaps, each conjunct in (2) and (3) should be assigned mu-
tually exclusive temporal indices as part of the auxiliary.
The distinction between inclusive and exclusive disjunction is usually sig-
nalled in English by means of periphrases as in the following pair:

(4) a. You can have coffee or tea or both.

b. You can either walk or ride, but not both.

Very frequently, however, disjunctions are ambiguous in this respect, as in:

(5) Iris plays bridge or tennis.

Since the ambiguous cases seem to be the rule rather than the exception,
most logicians take or as inclusive unless otherwise stated.
As in other natural languages, the relation between English if and logical
implication is tenuous. If carries connotations of causality which make it
difficult to equate with —>. Only the second line of Table 5 seems natural.
Adherence to the other lines yields such strange truths as:

(6) If 7 is an odd number, then London is in England.

(7) If squares have three sides only, then 2 + 2 = 4.

(8) If squares have three sides only, then 2 + 2 = 5.

While conditional combinations of falsehoods as in (8) are employed in natural
language for special emphatic purposes - emphasising the incorrectness
or unlikelihood of the antecedent - they are not used to assert truths in the
ordinary sense and combinations like (6) and (7) are probably not used at all
with intention. All three cases, according to the logical account of implication,
however, express truth. In order to arrive at an approximation to English if
we require a connective which is not strictly truth-functional and, for that
purpose, we should appeal to modal logic rather than the classical systems
of the propositional calculus (section 3.5).
While it is possible to prove any theorem by means of truth tables, it
is usually more efficient to do so with the aid of inference rules. We may
derive all such rules from the following set of axioms - the system is due to
Whitehead-Russell (1910-13).

(9) Axioms

a. (p ∨ p) —> p.

b. q —> (p ∨ q).

c. (p ∨ q) —> (q ∨ p).

d. (q —> r) —> ((p ∨ q) —> (p ∨ r)).


Given these axioms, derivation may be by: uniform substitution - substi-
tution of one variable for another; substitution by definition; application of
the rule of "detachment", modus ponens; and "adjunction".
In the system, three equivalences provide the definitions for "substitution
by definition".

(10) Substitution by definition

(D.1) (p & q) = -(-p ∨ -q).

(D.2) (p —> q) = (-p ∨ q).

(D.3) (p <—> q) = ((p —> q) & (q —> p)).

The rule of detachment is:

(11) (p —> q). p. Therefore, q.

The adjunctive rule merely states that, if both p and q are true, then (p
& q).
As a simple example of how the calculus works, consider the following
proof of (12).
(12) Theorem: (p —> -p) —> -p.

a. Uniformly substitute -p for p in axiom a, giving: (-p ∨ -p) —> -p.

b. Substitute (p —> -p) for (-p ∨ -p) in (a) by definition (D.2). QED
As in the case of the Boolean algebra outlined in the previous chapter, the
above system is both complete and consistent.
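Because the calculus is truth-functional, axioms and theorems can also be checked by brute force over truth values; a minimal Python sketch (the helper names implies and is_tautology are invented):

```python
def implies(p, q):
    """Material implication: (p -> q) = (-p v q)."""
    return (not p) or q

def is_tautology(f):
    """Exhaust the truth values of a one-variable formula."""
    return all(f(p) for p in (True, False))

# Axiom (9a): (p v p) -> p.
assert is_tautology(lambda p: implies(p or p, p))
# Theorem (12): (p -> -p) -> -p.
assert is_tautology(lambda p: implies(implies(p, not p), not p))
```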
The semantics of the propositional calculus is, indeed, impoverished. Since
the logical constants denote particular functions which have been assigned
to them, say by a function, V, we may regard that assignment as comprising
part of the semantics of the system. Moreover, since the axioms depend upon
the logical relations, we might think of them as semantical. However, since
the propositional variables are uninterpreted, they are without meaning. Thus,
a formula like:

(13) (p ∨ -p).

is valid no matter what p stands for, while:

(14) (p & -p).

is invalid by necessity. This is not to say, of course, that such formulae can
never find expression in every-day discourse. As remarked in chapter 2, the
tautological status of (13) may be exploited to express resignation, as in:

(13) a. I will get the job or I won't get the job.

and the contradiction in (14) is frequently used to express a mean of gradable
properties, as in:

(14) a. John is young and he isn't young.

3.3 The nature of propositions

Although, being without interpretation in the propositional calculus, propo-
sitional variables are of no semantic interest, the notion of a proposition is,
as the discussion in chapter 1 suggests, fundamental to the semantics of nat-
ural languages. It seems appropriate, therefore, to comment briefly - though
derivatively - upon this general question at this point.
In chapter 1, I took a simple view of a proposition as a function with
domain in the set of possible worlds. Cresswell (1973) provides a more
sophisticated discussion of the nature of propositions and of their properties:
necessity, impossibility, identity, etc. I shall largely confine myself to his
discussion.
Oversimplifying for the moment, Cresswell equates propositions with the
sets of possible worlds in which they are true - where a given world is said
to be "possible" iff it is consistent with the logic concerned.
Thus, on this view, a proposition is a set and has the properties of sets.
For example, if a proposition, p, implies another, q, then the members of p,
i.e. worlds in which p is true, form a subset of q, i.e. worlds in which q is
true. This is so because, if the implication holds and p is true, then q must
be true under the usual interpretation of —>. If q is true, however, p need
not be so, again according to our material understanding of —>. Hence, the
axiomatic status of the equation ((-p ∨ q) = (p —> q)).
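The subset view of implication can be illustrated directly; a minimal Python sketch with invented worlds w1-w4 and the hypothetical helper implies_in_every_world:

```python
# Invented worlds; a proposition is the set of worlds in which it is true.
worlds = {"w1", "w2", "w3", "w4"}
p = {"w1", "w2"}           # p is true in w1 and w2
q = {"w1", "w2", "w3"}     # q is true in w1, w2 and w3

def implies_in_every_world(p, q, worlds):
    """(p -> q), i.e. (-p v q), holds at each world."""
    return all((w not in p) or (w in q) for w in worlds)

# p implies q in every world exactly when p is a subset of q ...
assert implies_in_every_world(p, q, worlds) and p <= q
# ... but the converse implication fails, here at w3.
assert not implies_in_every_world(q, p, worlds)
```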
This way of looking at propositions is most illuminating, especially when
we come to consider such modalities as logical entailment (section 3.6).
The simple equation of a proposition with a set of possible worlds is, how-
ever, preliminary in Cresswell's discussion and he later abandons it in favour
of a relation between propositions and the more complex notion, Heaven.
This is done to avoid unwanted consequences, such as there being only one
necessarily true proposition and to allow for the solution of semantic prob-
lems such as those surrounding belief-type statements. I shall describe this
elaboration later in the discussion of modal logic (section 3.7). Here, it is to
be noted that there are other candidates for the role in question, including
moments of time (Prior, 1968). There are also those, including Montague
(1973), who take possible worlds as primitive notions. I shall not attempt to
evaluate these alternatives in this study.

3.4 Monotonicity

Before turning from the calculus of propositions altogether, it is worth
remarking that its inferential arguments, like those of all classical systems of
logic, are monotonic. A system is said to be "monotonic" when the validity
of its arguments is in no way affected by the introduction of new facts. In
everyday reasoning, by contrast, the underlying system is nonmonotonic. For
example, the argument (15) has a conclusion which may be "defeated" by
the introduction of a hitherto unknown fact, say, that Jennifer is a part-time
journalist.

(15) Jennifer, a linguist, earns £400 a month. Therefore, her annual in-
come is £4,800.

Clearly, the nonmonotonicity of arguments in natural - nonformal - discourse
is an important aspect of human psychology. I doubt, however, that it
can, or should, be incorporated into the semantic description of natural lan-
guages. Certainly, I do not regard this major difference between formal and
natural systems as disqualifying the former from a central role in describing
the latter.
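The defeat of (15) can be caricatured in a few lines; the fact strings and the single default rule below are invented for illustration:

```python
def conclusions(facts):
    """Draw default conclusions from a set of facts (a toy sketch)."""
    concluded = set(facts)
    # Default rule: annual income is 12 x monthly salary, unless some
    # extra source of income is known.
    if "earns 400/month" in facts and "has second job" not in facts:
        concluded.add("annual income is 4800")
    return concluded

base = {"earns 400/month"}
assert "annual income is 4800" in conclusions(base)
# Adding a new fact defeats the earlier conclusion: the set of conclusions
# is not monotone in the set of facts.
assert "annual income is 4800" not in conclusions(base | {"has second job"})
```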

3.5 The predicate calculus

The calculus of propositions treats propositions as atomic elements and is,
therefore, in no way concerned with their internal structure. The predicate
calculus - also known as the calculus of functions, or the calculus of classes -
contains the propositional calculus as a subpart. This calculus, however, goes
further in that it treats propositions in terms of their internal structure. Es-
sentially, this means that the calculus analyses propositions in terms of their
functions - verbs - and the arguments to those functions - noun phrases.
It seems appropriate to begin the discussion with a brief presentation of
one of the oldest classifications of statements in formal logic. Aristotle distin-
guished four classes of statement based upon the types of quantification and
the positive/negative polarities. Statements are either universally quantified,
symbolised "∀", or particularly quantified, symbolised "∃". In addition, they
are either positive or negative.
The four types of statement are:

(16) Universal affirmative: All birds have feathers. Symbol: A.

(17) Particular affirmative: Some birds fly. Symbol: I.

(18) Universal negative: No birds are mammals. Symbol: E.

(19) Particular negative: Some birds do not fly. Symbol: O.

Using argument variables, function variables, quantifiers, logical constants
and appropriate brackets, each of these statements may be symbolised as
follows:

(16) a. (∀,x) (B(x) → F(x)).

(17) a. (∃,x) (B(x) & F(x)).

(18) a. (∀,x) (B(x) → -M(x)).

(19) a. (∃,x) (B(x) & -F(x)).

These symbolisations may be verbalised as:

(16) b. For all x, if x is a bird, then x has feathers.

(17) b. There exists at least one x such that x is a bird and x flies.

(18) b. For all x, if x is a bird, then it is false that x is a mammal.



(19) b. There exists at least one x such that x is a bird and it is false that x
flies.

Since a universally quantified statement is equivalent to a particular one in
which it is denied that a single x exists that fails to . . . , both of the universal
statements above may be replaced by:

(16) c. -(∃,x) (B(x) & -F(x)).

(18) c. -(∃,x) (B(x) & M(x)).

These conjunctive expressions are verbalised as:

(16) d. It is false that there exists at least one x such that x is a bird and
does not have feathers.

(18) d. It is false that there exists at least one x such that x is a bird and x
is a mammal.
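The equivalence between a universal statement and the denial of a counterexample can be spot-checked over a finite domain. A sketch with an invented three-individual domain and invented predicates B ("is a bird") and F ("flies"):

```python
# A small, invented domain of individuals.
domain = ["robin", "ostrich", "whale"]
B = lambda x: x in {"robin", "ostrich"}   # "x is a bird"
F = lambda x: x in {"robin"}              # "x flies"

# (16a) (for all x)(B(x) -> F(x)) versus
# (16c) not (there is an x)(B(x) & -F(x)).
universal = all((not B(x)) or F(x) for x in domain)
denied_exception = not any(B(x) and not F(x) for x in domain)

# The two always agree; here both are False, the ostrich being the
# non-flying bird that defeats the universal claim.
assert universal == denied_exception
assert not universal
```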

The statements considered so far have been simple in that they have in-
volved only one argument and, consequently, only one quantifier. If a given
function's degree > 1, then, clearly, more than one variable may be required
and each will have to be bound by its own quantifier. Thus, the statement:

(20) Everybody loves someone.

requires, for its symbolisation, a universal and a particular quantifier along
with the requisite variables and restricting functions, as follows:

(20) a. (∀,x) (∃,y) ((H(x) & H(y)) → L(x,y)).

In this expression, the inner clause restricts x and y to humans, while the outer
clause asserts that x loves y under the condition stated in the inner clause.
(20) is, of course, an A-statement since it makes a claim about all humans
and so the universal quantifier has the particular quantifier in its scope, i.e.
to its right. It is to be observed that, even if there are those who love only
themselves, the symbolisation is still correct since there is no rule that only
one variable may be used for a given individual. The rule is, rather, that if
different values are involved, they must be represented by different variables.
Thus, the symbolisation of (21) below is incorrect.

(21) Everyone loves someone else.

a. *(∀,x) (∃,x) ((H(x) & H(x)) → L(x,x)).

Symbolisation (21a) would, in fact, mean:



b. Everyone loves himself/herself.

When a statement is universal, as in (20a), the universal quantifier must, of
course, have any other in its scope and the main clause must be conditional.
In fact, however, (20) is ambiguous. On another reading, it is equivalent to
the normal sense of (22), which, being particular, is symbolised, along with
the alternative reading of (20), as (22a). In this symbolisation, the particular
quantifier has the universal in its scope and the main clause is conjunctive.

(22) Someone is loved by everybody.

a. (∃,y) (∀,x) (H(y) & (H(x) → L(x,y))).

The difference in quantifier scope between (20a) and (22a) reflects a funda-
mental difference in meaning between the two expressions - one that features
frequently in the linguistic literature. In (20a) it is claimed that, for everybody,
there is at least one person he/she loves, even if it be only himself/herself.
In (22a), on the other hand, it is asserted that there exists at least one person
of whom it is true to say that everybody loves him/her.
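The scope contrast between (20a) and (22a) can be made concrete by evaluating both readings over a small, invented "loves" relation:

```python
# "loves" as a set of (lover, loved) pairs; the data is invented.
people = ["ann", "bob", "cy"]
loves = {("ann", "ann"), ("bob", "ann"), ("cy", "cy")}

# (20a): for everybody there is someone (perhaps themselves) they love.
everyone_loves_someone = all(
    any((x, y) in loves for y in people) for x in people)

# (22a): there exists a single person loved by everybody.
someone_loved_by_all = any(
    all((x, y) in loves for x in people) for y in people)

assert everyone_loves_someone     # true for this data
assert not someone_loved_by_all   # but no one is loved by all
```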
It is appropriate, at this point, to refer briefly to the oldest part of the
calculus of classes, namely, syllogistics. Of the three types of syllogism,
the categorial is part of the predicate calculus, while the hypothetical and
disjunctive are, properly speaking, part of the propositional calculus. The
categorial syllogism is important in the present context not only because of
its pervasive exploitation in everyday argument, but because the semantic
problems associated with such syllogisms form an important part of current
philosophical and linguistic discussion and frequently figured in Montague's
writings.
Consider the following syllogism:

(23) All angels fly. All cherubim are angels. / All cherubim fly.

This argument is in the mood, "Barbara", since its premises and conclusion
are A-statements.
The terms of the syllogism are identified as follows:

(24) The "minor" term is the subject of the conclusion. The "major" term
is the predicate of the conclusion. The "middle" term appears in the
premises but not in the conclusion.

According to (24), the minor term of (23) is cherubim, the major term is
fly and the middle term is angels.
We may summarise the rules of validity for a categorial syllogism as
follows:

(25) Rules of syllogistic validity

(R.a) No conclusion follows from two negative premises.

(R.b) If one premise is negative, the conclusion must be negative.

(R.c) No conclusion follows from two particular premises.

(R.d) The middle term must be distributed at least once.

(R.e) Any term distributed in the conclusion must be distributed in the
premises.

(R.f) No term may be ambiguous.

We may say that a term is "distributed" if and only if it applies to each and
every member of a class. In an A-statement, the subject alone is distributed. In
an I-statement, neither term is distributed. In an E-statement, both terms are
distributed. Finally, in an O-statement, the predicate term alone is distributed.
Armed with the above definitions of terms and rules of validity, it is simple
to decide whether or not a given syllogism is valid - that is, whether the
conclusion necessarily follows from the premises. Thus, for example, the following argument,
although it has about it the ring of truth, is clearly not valid since it infringes
rules (R.c) and (R.d).

(26) Some humans are women. Some women have long hair. / Some hu-
mans have long hair.

In this case, common sense tells us that the syllogism is not valid since
it is nowhere stated that all women are humans - the middle term is not
distributed - and it might well be the nonhuman women only who are blessed
with long hair. This example suggests, moreover, that rule (R.c) is superfluous
in view of rule (R.d) which, by the definition of distribution, disallows a
syllogism with two I-statements as premises.
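The distribution table and rules (R.a)-(R.e) lend themselves to a mechanical check. The encoding below is my own sketch, not the book's; each statement is a triple of type, subject term and predicate term:

```python
# Which positions each statement type distributes (per the text's table).
DIST = {"A": ("subj",), "I": (), "E": ("subj", "pred"), "O": ("pred",)}

def distributed(stype, term, subj, pred):
    roles = DIST[stype]
    return (term == subj and "subj" in roles) or \
           (term == pred and "pred" in roles)

def valid(major, minor, concl):
    """Each argument is a triple (type, subject, predicate); checks R.a-R.e."""
    (t1, s1, p1), (t2, s2, p2), (tc, sc, pc) = major, minor, concl
    middle = ({s1, p1} & {s2, p2}).pop()        # term shared by the premises
    neg = {"E", "O"}
    if t1 in neg and t2 in neg:                        # R.a
        return False
    if (t1 in neg or t2 in neg) and tc not in neg:     # R.b
        return False
    if t1 in {"I", "O"} and t2 in {"I", "O"}:          # R.c
        return False
    if not (distributed(t1, middle, s1, p1) or
            distributed(t2, middle, s2, p2)):          # R.d
        return False
    for term in (sc, pc):                              # R.e
        if distributed(tc, term, sc, pc):
            prem = major if term in (s1, p1) else minor
            if not distributed(prem[0], term, prem[1], prem[2]):
                return False
    return True

# (23) Barbara: All angels fly; all cherubim are angels; / all cherubim fly.
assert valid(("A", "angels", "fly"), ("A", "cherubim", "angels"),
             ("A", "cherubim", "fly"))
# (26) infringes R.c and R.d, so it is invalid.
assert not valid(("I", "humans", "women"), ("I", "women", "long-haired"),
                 ("I", "humans", "long-haired"))
```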
It is to be noted that proper nouns may be taken as distributed terms, so
that, for instance, a statement like (27) is an A-statement:

(27) Michael is an angel.

Thus, (27) has the same quantified status as:

(28) All that is Michael is an angel.

Thus, the famous syllogism:

(29) All Greeks are mortal. Socrates is a Greek. / Socrates is mortal.



is a syllogism in the mood, Barbara. In fact, there are those, e.g. Russell
(1946), who object to this analysis, but it appears to enjoy general acceptance
and is in accord with Montague's (1973) treatment in which a proper name
denotes the set of properties which constitutes a unique individual.
An argument variable which is not bound by a quantifier is said to be "free"
and a formula containing free variables is said to be "open". The formulae
given so far have all been "closed" and, therefore, represent propositions. It
is common to call an open formula an "open proposition", but I shall, for
the time being, continue to use "propositional function" or "formula" for formulae
with free variables.
When we consider the formation rules for the predicate calculus, we find
that they correspond to those for the propositional calculus save for the fol-
lowing additional rules needed to govern the construction of open and closed
expressions.

(30) Formation rules

(R.a) If T1, . . ., Tn are terms, constants or variables, and F is a predicate of n
arguments, then F(T1, . . ., Tn) is a wff, i.e. a proposition or formula.

(R.b) If x is a variable and σ is a formula in which x alone is free, then
((∃,x) σ) is a wff, i.e. a proposition.

(R.c) If x is a variable and σ is a formula in which x alone is free, then
((∀,x) σ) is a wff, i.e. a proposition.

Turning to semantic issues, the proving of theorems in the predicate calculus
is achieved by converting expressions containing quantifiers into atomic
propositions as in the propositional calculus and proceeding with the
calculation through the normal rules of inference. However, whereas in the
propositional calculus the truth values of propositions are taken as given, in
the calculus of classes, the situation is considerably more complex. This is
so because we are predicating properties of individuals. Thus, in order to say
whether a given assertion is true, it is necessary to know whether a given
individual exists and does, in fact, have the predicated property.
In order to establish such facts, we assume a particular state of affairs
or possible world. Part of such a state of affairs will be the individuals
it contains. Let us extend our earlier notion of individuals (chapter 1) to
include whatever might be called "things". Thus, the individuals in a state of
affairs may include, beside people, buildings, trees and the like, ideas, days
of the week, even propositions. Let us further assume that the properties of

the respective individuals are fixed and that the relations holding between
them are also part of the specification of the state of affairs.
We say that the set of individuals in a given state of affairs is a "domain",
δ. We may then say that a predicate, F, has its domain in δ. While most
proper nouns will have unique values in δ, some, e.g. Pegasus, may have no
value at all in a given state of affairs. Further, there is likely to be a large
number of elements in the domain which do not have names of their own.
If F(x) is a propositional function with domain in δ, then an individual,
a, in δ, satisfies F(x) if and only if F(a/x) is true - where "a/x" means 'a
substitutes for x'. The set of all individuals satisfying a given function is its
extension.
If V is a function which assigns values, i.e. denotations, to names, Ω is
a function assigning values to predicates and G assigns values to variables,
then <V, Ω, G> is an "Interpretation", I. If F is a one-place predicate,
then Ω(F) is an unordered set and, if F is an n-place predicate, then Ω(F)
is an n-tuple and will frequently be ordered. If L is some language, say a
fragment of English, then L' is an interpretation of L. I return to a more
formal treatment of these assignments in section 6.7 below.
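These assignments can be sketched as ordinary mappings; the domain, names and extensions below are all invented for illustration:

```python
# A toy state of affairs: a domain delta and valuations for names
# and predicates (V and Omega in the text's terms).
delta = {"fido", "felix", "tweety"}

V = {"Fido": "fido"}                        # names -> individuals
Omega = {                                   # predicates -> extensions
    "Dog": {"fido"},
    "Animal": {"fido", "felix", "tweety"},
}

def satisfies(a, F):
    """a satisfies F(x) iff F(a/x) is true, i.e. a lies in F's extension."""
    return a in Omega[F]

def extension(F):
    """The extension of F(x): the set of all individuals satisfying it."""
    return {a for a in delta if satisfies(a, F)}

assert satisfies(V["Fido"], "Dog")
assert extension("Animal") == delta
```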
Given our understanding of the particular and universal quantifiers, we
may say that the following propositions are true with respect to a given
interpretation, I, if and only if the first is satisfied by at least one element in
δ and the second by each individual in the extensions of the predicates.

(31) ((∃,x) (F(x) & H(x))).

(32) ((∀,x) (F(x) → H(x))).

It will be apparent that the properties of →, specifically that p does
not have to be true for (p → q) to be true, permit us to make universally
quantified statements about individuals which do not actually exist in a given
state of affairs, as well as actual members of δ. Thus, (33) may be vacuously
true even for a world lacking flying horses.

(33) All flying horses are immortal.
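The vacuous truth of (33) falls out of the standard reading of the universal quantifier, as a one-line check over a horse-only domain shows (all names invented):

```python
# No individual in this domain is a flying horse, so the universal claim
# (33) "All flying horses are immortal" comes out vacuously true.
domain = {"secretariat", "trigger"}        # ordinary horses only
flying_horse = lambda x: False
immortal = lambda x: False

# all() over an empty generator is True: there is no counterexample.
assert all(immortal(x) for x in domain if flying_horse(x))
```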

McCawley (1981) discusses some interesting problem cases which arise
from the vacuous truths which the nature of if sometimes forces upon us.
Two of his examples are (34) and (35).

(34) Every person who loves all of his children is saintly.

(35) Every parent who loves all of his children is saintly.



Clearly, under the normal value assignments of English, these two sentences
are synonymous. However, if, with McCawley, we are prepared to accept
that a man who has no children is a person who loves all of his children,
then, it would seem that if (34) and (35) are true, so is (36).

(36) Every man who has no children is saintly.

Since the equation of (36) with (34) and (35) is ridiculous and leads to
contradictions, we may wish to restrict universal quantification to individuals
judged, in some way, to be relevant to the purposes of the discourse. This is
a sensible principle, but it is important to recognise that it is one of use, not
of formal logic.
While it is easy to state the conditions under which a simple particularly
or universally quantified statement is true, the situation is less straightforward
when two or more quantifiers are involved, as in (37).

(37) Some horses hate all stable-hands.

(37) has the following symbolisation, where F = horse, K = stable-hand and
H = hate.

(37) a. (∃,x) (∀,y) (F(x) & (K(y) → H(x,y))).

In order to evaluate this expression, we must first give a value for the inner
propositional function, ((∀,y) (K(y) → H(x,y))). This is simple with respect
to the values assigned to y, but not so for x since that variable is free.
A way out of this difficulty is suggested by the axiom referred to earlier
in this chapter, namely (p → (p ∨ q)). Let the inner formula in (37a) be
satisfied by a/y. If F(a/y) is true, then so is (F(a/y) ∨ F(a'/x)), where a' is any
arbitrary assignment.
Once the inner propositional function in (37a) has been satisfied, it is as
easy to evaluate the remaining function as for any expression involving a
single quantifier. We simply assign an appropriate value to x which may, but
need not, be a'. Since this technique may be applied repeatedly in the evalu-
ation of a given expression, there is no bound upon the levels of complexity
which can be accommodated.
The calculus outlined here quantifies over individuals only - even though
the notion of an individual is very broadly conceived. A system limited in
this way is known as a "first order" calculus. It is possible to extend the first
order calculus to quantify over properties. Such a "higher order" or "second
order" calculus has all of the expressive power of the classical calculus, but,
in addition, can analyse sentences like (38).

(38) Some birds are bright yellow.

Let F = yellow, K = bright and let B = bird. A possible symbolisation for
(38) would be:

(38) a. (∃,F) (∃,x) (B(x) & (F(x) & K(F))).

Reichenbach (1947) suggested that adverbs could be treated in a similar
manner. Thus, rapidly in (39) is a property which may be predicated of the
verb runs, so that the verb is an argument to a function.

(39) Percy runs rapidly.

Such a sentence could be symbolised as:

(39) a. (∃,F) (F(Percy) & K(F)).

where F = runs and K = rapidly.
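A second-order formula such as (39a) can be approximated by letting property variables range over ordinary functions. The encoding below is illustrative only, not the book's notation:

```python
# Properties are modelled as Python functions; K is a predicate OF
# properties, picking out the "rapid" ones. All data is invented.
def runs(x):
    return x == "Percy"

rapid_properties = {runs}            # which properties count as rapid
K = lambda F: F in rapid_properties

properties = [runs]                  # the range of the property variable F

# (39a) (there is an F)(F(Percy) & K(F)):
# some rapid property holds of Percy.
assert any(F("Percy") and K(F) for F in properties)
```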


Clearly, if adjectives and adverbs can be treated in this manner, so can
sentence modifiers like necessarily which take propositions as arguments, as
do such clauses as it is strange that. In Montague's work, including (1973),
the principles of the higher order calculus are central and will be assumed in
this study.
It is important to stress that the interpretation of expressions in the pred-
icate calculus is, classically, extensional. Although we may think of it as
based upon the notion of a possible world, that is not to say that we are to
pass beyond that simple idea and conclude that the richer notion of sets of
possible worlds - alternative assignments - plays a part in such interpreta-
tions. It is only when the calculus is supplemented by a modal system that
the possibility of alternative interpretations for given sentences becomes a
reality.

3.6 Modal logic

The notion of a possible world, or state of affairs, provides the foundation of
so-called "Modal logic". In this logic, both the calculus of propositions and
that of classes are supplemented by a set of modalities, the most important
of which are Necessity and Possibility. Thus, a proposition, p, is said to be
true or false by necessity. Alternatively, p may possibly be true or possibly
false.

Maintaining, for a while, the simple view of propositions referred to in
section 3.2, a proposition is necessarily true if it is true in all possible worlds
and is necessarily false if it is true in none. We may reformulate these alter-
natives as follows. A proposition, p, is necessary if it has all possible worlds
as members, i.e. contains the universal set. A proposition is necessarily false
if it has no possible worlds as members, i.e. is equal to the void set. It follows
that p is possibly true if there is at least one possible world in which it is
true, i.e. if it has at least one member. As noted earlier, the notion, Possible,
is interpreted as 'consistent with some logic', McCawley (1981).
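On the set view, the three modal statuses reduce to simple set predicates. A toy sketch over an invented three-world space:

```python
W = frozenset({"w1", "w2", "w3"})     # all possible worlds (a toy space)

def necessary(p):     # true in all possible worlds: contains the universal set
    return p == W

def impossible(p):    # true in none: equal to the void set
    return p == frozenset()

def possible(p):      # true in at least one world: has at least one member
    return len(p) > 0

assert necessary(W)
assert impossible(frozenset())
assert possible(frozenset({"w1"}))
```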
It will be recalled that, earlier (section 3.2), certain complex proposi-
tions were said to be valid and others invalid. Since validity is rooted in
truth-functionality, we may say, with Cresswell (1973), that a proposition is
logically valid iff the semantic values assigned to its logical constants are the
same in all possible worlds. Thus, using by now familiar examples, (40) is
logically valid in all classical logics - "L" = Necessarily:

(40) L(p ∨ -p).

This is so since, in such logics, a proposition, p, must either be true or not
true. By contrast, (41), being contradictory, is logically invalid in classical
logics:

(41) (p & -p).

If we consider the statuses of (40) and (41), it is apparent that they exhibit
a kind of necessity which is purely formal. Such propositions depend for
their status upon the functions represented by the logical constants, not upon
the content of the atomic propositions involved.
The kind of necessity exemplified by (40) and (41) contrasts with "con-
ceptual necessity" exhibited in:

(42) If Mary is a mother, then she has borne children.

Obviously, (42) is necessary in all possible worlds in which the assignments
of semantic values to the words mother and has borne children are the same.
However, equally obviously, this necessity is not preserved under substitution,
as (43) demonstrates:

(43) If Mary is mature, then she has borne children.

The point of the above examples, of course, is that the substituted an-
tecedent in (43) is taken to have the same truth value as its counterpart in
(42), but the complex expressions differ in truth value. We observe, therefore,
that conceptual necessity, unlike logical validity, is nontruth-functional. It is

this nontruth-functional necessity which gives modal logic its chief interest
for the linguist.
The examples (42) and (43) are both complex. However, certain simple
propositions can also be necessary. Thus, reverting to a problem touched on
in chapter 1, Frege's famous sentence (44) is true by necessity, but (45) is
false, even though the semantic assignments are referentially the same in both
cases.

(44) Necessarily, the evening star is the evening star.

(45) *Necessarily, the evening star is the morning star.

As these sentences demonstrate, the necessity operator has the important
property of disallowing the substitution of referentially equivalent expressions
whose senses are not identical - the so-called "Leibniz' law". Along
with other structures, they create "opaque" contexts and, for that reason, fig-
ure prominently in the discussion of central issues in semantics, including
intensionality.
Hughes-Cresswell (1968) employ the necessity operator in combination
with material implication, →, and conjunction, &, to define two other
important logical notions, namely, Logical entailment and Equality.
If we allow the symbol "⇒" to represent entailment, then it has the following
definition.

(D.4) If (p ⇒ q), for any p and any q, then L(p → q).

Using the definition of a proposition as a set, we can gloss "Logical
Entailment" as the relation between a subset, p, and its containing set, q,
such that the one is necessarily a subset of the other, i.e.:

(D.5) If p and q are sets and (p ⇒ q), then (p ⊆ q).

To illustrate:

(46) If Jack broke the window with his fist, then Jack broke the window.

Clearly, the antecedent of (46) logically entails the consequent. If it is
the case that Jack broke the window with his fist, then it necessarily follows
that he broke the window. The reverse relation does not hold. Moreover,
if the consequent is false, then the antecedent is false: if (p ⇒ q), then
(-q ⇒ -p). In chapter 6, I shall discuss the relation of presupposition which, though
somewhat like entailment and often associated with it, turns out to have a
somewhat like entailment and often associated with it, turns out to have a
rather different logical structure.
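Example (46) can be replayed in the set model: the fist-breaking worlds form a proper subset of the breaking worlds (the world labels are invented):

```python
W = frozenset({"w1", "w2", "w3", "w4"})   # a toy space of worlds

# "with his fist" narrows the act of breaking down to a subset of worlds.
p = frozenset({"w1"})                     # Jack broke it with his fist
q = frozenset({"w1", "w2"})               # Jack broke the window

assert p <= q                  # p entails q: p is a subset of q
assert not (q <= p)            # the reverse relation does not hold
# Contraposition: if the consequent is false, the antecedent is false.
assert (W - q) <= (W - p)      # -q entails -p
```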

It will be recalled, from chapter 1, that Keenan-Faltz (1985) define
entailment in terms of the notion of informativity. p entails q if p is at least as
informative as q. This intuitive account is in accord with the more formal,
set-theoretic definition given above. When we claim that p is at least as
informative as q, we refer to the fact that it may be semantically more detailed.
The force of the adverbial in (46) is to narrow down the act of breaking to
a particular subset of such acts.
Second, let the symbol, "⇔", have the following definition:

(D.6) If (p ⇔ q), for any p and q, then L((p → q) & (q → p)).

According to (D.6), the set of possible worlds which are the proposition,
p, is a subset of those which are the proposition, q, and vice versa. Therefore,
p is equivalent to q and q is equivalent to p. Thus, human being is equivalent
to person, but pianist merely entails musician. This relation is often known as
"strict implication", McCawley (1981), and is regarded by many to be closer
to the meaning of English if than is material implication. Like entailment,
of course, if the consequent, q, is false, then so is the antecedent, p.
To symbolise Logical Impossibility, or invalidity, we merely prefix the
necessity operator to a negated proposition, as in:

(47) L-(p).

What (47) asserts is that the proposition, p, simple or complex, is the empty
set. It should be noted that logical impossibility, invalidity, is not the same
thing as conceptual impossibility. Whereas the former is a matter of logic,
the latter is not.
As with the other operators, impossibility must be related to the notion of
possible worlds, i.e. worlds consistent with some logic. I am not, here, thinking
merely of physical impossibility.
If a proposition, p, is not impossible, then it is a possibly true proposition.
Thus, using M to stand for the possibility operator, (48) is a true statement
if p is not impossible:

(48) M(p).

The assertion here is, of course, that there is at least one possible world in
which p is true, i.e. p contains at least one member.
It is obvious from the gloss of "possible" as 'not impossible', that M is
equivalent to L flanked by negatives. Thus, an alternative to (48) is:

(49) -L-(p).

Similarly, L must be equivalent to M flanked by negatives, so that an assertion
that p is necessary could legitimately be symbolised:

(50) -M-(p).
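The two dualities can be verified exhaustively over a small space of worlds, checking every proposition, i.e. every subset of the (invented) world space:

```python
from itertools import combinations

W = frozenset({"w1", "w2", "w3"})     # the set of all possible worlds (toy)

L = lambda p: p == W                  # necessity: true in every world
M = lambda p: len(p) > 0              # possibility: true in some world
neg = lambda p: W - p                 # -p holds exactly where p fails

# Every proposition over W is some subset of W.
props = [frozenset(c) for r in range(len(W) + 1)
         for c in combinations(sorted(W), r)]

# M(p) iff -L-(p), and L(p) iff -M-(p), for every proposition.
assert all(M(p) == (not L(neg(p))) for p in props)
assert all(L(p) == (not M(neg(p))) for p in props)
```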

Notions of necessity, impossibility and possibility can clearly be conceived
of in a number of ways other than those discussed above. In every-day usage,
necessity is frequently thought of in terms of presumed knowledge. We say
that, because we know, or believe we know, that such and such is the case,
something else must be the case. Thus, (51) is necessarily true, given the
truth of the fact about Harvey, but it is not logically so:

(51) Harvey discovered the circulation of the blood. So, before Harvey,
that blood circulated was not known.

This kind of necessity, "Epistemic Necessity", is a type of conceptual
necessity. It is at the heart of nonmonotonic reasoning and is central to our
understanding of belief-statements and others involving verbs of "proposi-
tional attitude".
Yet another way in which we may think of necessity is in terms of be-
haviour. If we live in a world with a certain moral code, then that code places
obligations upon us as well as permitting us to make certain choices. It is
obvious that the notions of obligation, permission and prohibition have much
in common with those of necessity, possibility and impossibility. Thus, in a
rather loose way, (52) and (53) correspond to necessary and possible truths
in formal logic.

(52) Honour thy father and mother.

(53) A man may own more than one ox.

A system based on moral concepts is known as a "Deontic logic". It is, of
course, important to acknowledge that such a system is not entirely parallel
to modal logic. Most importantly, the operators, Obligation and Permission,
are based not upon the notion of a logically possible, that is a consistent,
world, but on that of a morally ideal one.
While modal logic has obvious applications to the formal study of natural
language, it may, at first, seem that deontic and similar logics have little to
do with such studies. As Allwood et al. (1977) point out, however, deontic
concepts such as obligation are reflected in language in much the same way
as their logical counterparts. Thus, for instance, the modal verbs in English
are used to express obligation etc. as well as other types of necessity. Hence,
the ambiguity of (54) between epistemic and deontic interpretations.

(54) The charity must receive support.

It is also true that such systems can be valuable tools in the furthering
of our understanding of fundamental concepts such as the nature of possible
worlds and the relations which may hold between them.
Thus far, I have taken for granted the plurality of possible worlds. In
fact, it was not until Kripke's famous paper (1959) that the idea of basing
a semantics of modal logic on sets of possible worlds was accepted. Before
then, modal logic had been founded on one-world systems and so was of
rather limited interest from the viewpoint of natural language studies. The
introduction of multiple worlds into the system made it possible to apply
modal logic to the analysis of natural language expressions in which mean-
ing is not wholly extensional. Thus, for instance, (55) represents a common
type of expression which cannot be accounted for within the one-world, i.e.
extensional, framework of ordinary logics.

(55) Perhaps the Peking martyrs will be avenged.

However, once propositions are interpreted in terms of sets of worlds
rather than a single world, questions naturally arise concerning the relations
which hold between these worlds. In chapter 1, I referred to the problem
of cross-world identity. Another important question is that of accessibility.
Given a world, w1, as a starting-point, what characterises sets of other worlds
which are accessible from it?
In so far as classic logics are concerned, we might say, borrowing from
Cresswell (1970), that certain worlds are "classical" in that the laws of validity
hold in them. Thus, in a classical world, & will always behave in accordance
with the standard truth table for conjunction, and similarly for negation and
the other logical functions. It follows, therefore, that so far as such functions
are concerned, any classical world is accessible from any other. However,
we may wish to allow even what are usually logical words such as and and
not to represent functions which do not behave in the standard manner and,
if that is so, then the sets of worlds involved will not be accessible from
classical worlds. Of course, it does not follow that because a classical world
is accessible from another classical world that the same holds for nonclassical
worlds.
Another way of looking at the notion of accessibility is in terms of com-
patibility. A world in which there is no distinction between a square and a
circle is not compatible with one in which those figures are distinct. Cresswell
(1973) uses a situation much like this to introduce heavens into his system as
the constitutive elements of propositions. Consider the following assertion.

(56) Jack believes that circular objects are square.

If we retain the notion of a proposition as a set of possible worlds, we
are obliged to say that the proposition which Jack believes is the empty set,
namely, a world in which there are square circles. This would seem to amount
to saying that Jack believed nothing! Moreover, it could be taken to mean
that (56) had exactly the same meaning as:

(57) Jack believes that (6 + 3 = 3).

The unwarranted equation of (56) and (57) seems to be forced upon us
simply because, if a proposition is a set of possible worlds, then the propo-
sitions in the complements of both of these sentences contain the same set,
namely, the empty one. Such considerations oblige us to regard as identi-
cal propositions which are logically equivalent. Clearly, two propositions are
identical only if they contain the same set of possible worlds and, since the
same requirement defines the relation of logical equivalence, it seems to fol-
low that propositional identity equals propositional equivalence. This is very
undesirable. We certainly would not wish, for example, to claim that because
(58) is true, then (59) must be true.

(58) John believes that p.

(59) John believes that -(-p).

Similar considerations hold, of course, for logically valid propositions. If
such a proposition is a set of possible worlds, then it is the set of all such
worlds and, thus, all valid propositions are erroneously regarded as identical.
This error is like that referred to in chapter 1, namely, that the meaning of a
proposition is its truth value.
It would appear, then, that a proposition cannot be a set of possible worlds
in a straightforward way.
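
The collapse just described is easy to reproduce in a toy model. The sketch below (the worlds and example propositions are invented purely for illustration) represents a proposition directly as the set of possible worlds at which it is true:

```python
# A toy model in which a proposition is the set of possible worlds
# at which it is true. Worlds and propositions are invented.
WORLDS = {"w1", "w2", "w3"}

# "Circular objects are square" is true at no world: the empty set.
square_circles = frozenset()
# "6 + 3 = 3" is likewise true at no world.
bad_arithmetic = frozenset()

# The two propositions come out literally identical - the unwelcome
# result discussed above: the objects of Jack's two beliefs collapse.
identical_contradictions = square_circles == bad_arithmetic

# Valid propositions fare no better: each is the set of all worlds.
p_or_not_p = frozenset(WORLDS)
q_or_not_q = frozenset(WORLDS)
identical_validities = p_or_not_p == q_or_not_q

print(identical_contradictions, identical_validities)
```

Both comparisons come out true, which is precisely the unwanted identification of logically equivalent propositions.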
Cresswell's proposal is that we think of propositions as sets of heavens
which are, themselves, made up of "protopropositions". Thus, in the case of
(56), Jack's belief-heaven would contain at least the protopropositions:

(56) a. The elements of the set S are circular.

(56) b. The elements of the set S are square.

In this discussion, it is presumed that the lexical items concerned have
their standard usage. It is a trivial fact that, for example, circular and square
could be made synonymous.
Modal logic 57

In this theory, protopropositions are sets of possible worlds and may not,
therefore, be anomalous. However, heavens are mere collections of proto-
propositions. There is no requirement that they should not contain contradic-
tory protopropositions such as those above.
In order to distinguish between expressions like (56) containing contra-
dictions and those conforming to possibility, Cresswell establishes a subset
of heavens which he calls "world-heavens". A world-heaven is a set of pro-
topropositions which jointly determine one and only one world. Hence, we
conclude that propositions are sets of heavens and that some of these heavens
are world-heavens, i.e. define logically consistent sets of possible worlds.
For two propositions to be identical, they must contain the same heavens.
This is a much stronger condition than that on equivalence which requires
only that the propositions contain the same world-heavens. If two propositions
are identical, they are, of course, equivalent - equivalence is a necessary
condition of identity - but the reverse does not hold.
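
Cresswell's remedy can likewise be sketched. In the toy model below (the data are invented, and a world-heaven is simplified - my simplification, not Cresswell's - to a heaven whose protopropositions have a nonempty joint intersection):

```python
# Protopropositions are sets of worlds; a heaven is a mere collection
# of protopropositions; a world-heaven is a heaven whose members are
# jointly satisfiable. All data here are invented for illustration.
WORLDS = frozenset({"w1", "w2"})

circular = frozenset({"w1"})   # "the elements of S are circular"
square = frozenset({"w2"})     # "the elements of S are square"

h_contradictory = frozenset({circular, square})  # a heaven only
h_consistent = frozenset({circular})             # a world-heaven

def is_world_heaven(heaven):
    """Simplified test: the protopropositions jointly determine at
    least one world, i.e. their intersection is nonempty."""
    worlds = WORLDS
    for proto in heaven:
        worlds = worlds & proto
    return bool(worlds)

def world_heavens(prop):
    return frozenset(h for h in prop if is_world_heaven(h))

# Identity demands the same heavens; equivalence only the same
# world-heavens. These two propositions are equivalent, not identical.
p1 = frozenset({h_consistent, h_contradictory})
p2 = frozenset({h_consistent})

print(p1 == p2, world_heavens(p1) == world_heavens(p2))
```

The first comparison fails while the second succeeds, mirroring the claim that equivalence is a necessary but not sufficient condition of identity.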
It is to be stressed that these remarks concern propositions, not the actual
sentences which denote them. Thus, we can scarcely claim that two statements
are identical just because they denote propositions which contain exactly the
same set of heavens. Obviously, at the most general level, a French and
English sentence denoting the same proposition are not identical and, on a
narrower view, two different statements may, in the same language, have
identical referents without themselves being identical. I return to the difficult
question of identity in chapter 4.
Cresswell's approach outlined above has much in common with that of
Hintikka (1962). In essence, Hintikka proposes that a sentence like (56) be
analysed in terms of the operator Believe and a nucleus consisting of a propo-
sition whose content may be part of John's belief-world. The proposition is,
then, true if it is compatible with John's belief-world. I shall return to Hin-
tikka's analysis and a more recent study by Cresswell (1985) later (chapter
4).
As I suggested earlier, for modal logicians, much of the interest in this
logic centres around the question of accessibility or compatibility. Among
the issues which they address are those pertaining to the properties of the
accessibility relation within a given system. For example, within propositional
logic itself, any possible world must obviously be reflexive with respect to
accessibility. This is a clear requirement of logical consistency. Hence, in
such a system, the following axiom holds:

(60) (L(p) → p).


Moreover, within such a logical system, the relation of accessibility must
also be transitive and symmetrical.
In contrast to the situation within propositional logic, that in deontic logic
is less clear. Thus, for instance, since the worlds in a deontic system are
morally ideal worlds, the actual world is not accessible from its ideal self.
Thus, when the actual world is included in a deontic system, the accessibility
relation is not reflexive (McCawley, 1981). Hence, under such circumstances,
the axiom (60) does not hold.
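
These properties of the accessibility relation can be checked mechanically. In the sketch below (worlds, valuation and relations are all invented), the deontic-style relation fails reflexivity and, with it, the pattern of axiom (60), namely that what is necessary is true:

```python
from itertools import product

def reflexive(W, R):
    return all((w, w) in R for w in W)

def transitive(R):
    return all((a, c) in R
               for (a, b), (b2, c) in product(R, R) if b == b2)

def symmetric(R):
    return all((b, a) in R for (a, b) in R)

def L(p, W, R):
    """Necessarily p: the worlds all of whose accessible worlds satisfy p."""
    return {w for w in W if all(v in p for u, v in R if u == w)}

W = {"actual", "ideal"}
p = {"ideal"}                    # p holds only in the ideal world

# An alethic-style relation: reflexive, transitive and symmetrical.
R_alethic = {(a, b) for a in W for b in W}
# A deontic-style relation: each world sees only the ideal world,
# so the actual world is not accessible from itself.
R_deontic = {("actual", "ideal"), ("ideal", "ideal")}

# The pattern of (60), L(p) -> p, holds everywhere iff L(p) is a
# subset of p.
holds_alethic = L(p, W, R_alethic) <= p
holds_deontic = L(p, W, R_deontic) <= p
print(reflexive(W, R_alethic), holds_alethic)
print(reflexive(W, R_deontic), holds_deontic)
```

With the non-reflexive deontic relation, the "actual" world makes L(p) true but p false, so the axiom fails, as the text observes.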

3.7 Lambda abstraction

At this point, it will be useful briefly to present the outlines of a technique
which makes possible the conversion of a propositional function into an
expression of another type, frequently a predicate. This technique, known
as "lambda abstraction", is employed extensively in much current work in
natural language formalisation, including Montague (1973), Lewis (1970) and
Cresswell (1973, 1985).
Consider, first, a rule which states that a given category of expressions, γ,
is formed by the combination of a functor category, β, and n arguments of
category α. For the purposes of exposition, let γ be the category of Sentence,
β that of Intransitive verb and α that of Nominal. Such a rule would combine
a nominal like Percy with a verb like runs to yield a sentence:

(61) Percy runs.

A system of rules which builds up complex expressions in this way is
known as a "Categorial grammar" and its discussion will figure prominently
in much that follows - especially chapter 7. For the present, let us assume
that we have such a grammar and that it can be used to provide the sentences
of a natural language with descriptions of their syntactic derivations.
In cases like (61), the situation is perfectly straightforward. However, (61)
is, by no means, representative of the majority of sentential constructions.
Recalling the discussion of the interpretation of sentences involving multiple
quantification (section 3.5), how would one derive a sentence like (62)?

(62) Everyone is related to someone.

One symbolisation of (62) in the predicate calculus would be:

(62) a. (∀x)(∃y)((H(x) & H(y)) → R(x,y)).


Lambda abstraction 59

As in the earlier examples, however, (62) is ambiguous - some might insist
that everyone is related to God - so that an alternative representation is also
required, namely:

(62) b. (∃y)(∀x)(H(y) & (H(x) → R(x,y))).

Obviously, since (62) is ambiguous, it must be provided with two distinct
semantic representations corresponding to (62a) and (62b). These alternatives
must both include the formula, (R(x,y)), which is common to (62a) and (62b).
At the same time, we must provide (62) with syntactic derivations which, as
far as possible, mirror its semantic representations.
As a first step, we might say that a verb like is related to is of category,
3, and that it combines with two words of category a, such as everyone and
someone, to form a sentence. This simple approach is not, in itself, sufficient
to accommodate the facts of (62a) and (62b), including the formula which
they share and the relative orderings, in which they differ, of the quantifiers.
Let us say that we have a rule which says that, when a formula contains
a variable which is within the scope of the lambda operator, λ, the whole
has the status of an intransitive verb. Such a verb would have the following
form:

(63) (λx (F(x))).

A structure like (63) might, for example, be exemplified by the following
lambda abstract:

(63) a. (λx (Runs(x))).

If we wished to represent the structure of a simple sentence like (61), using
lambda abstraction, we would have:

(61) a. (Percy (λx (Runs(x)))).

Presuming that the individual denoted by Percy satisfies the propositional
function, (Runs(x)), (61a) can be converted into the standard logical equiv-
alent of (61) simply by the substitution, Percy/x, and the deletion of the
operator, λ, along with the superfluous brackets. If, instead of performing
the actual substitution, we include the variable among the deletions, we have
(61) exactly.
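
In computational terms, the conversion just described is nothing more than function application. A minimal sketch (the predicate and its domain are stipulated, not drawn from the text):

```python
# The lambda abstract of (63a) is, in effect, a one-place function;
# (61a) applies it to Percy. Facts are stipulated for illustration.
runners = {"Percy"}               # who runs, by stipulation

def runs(x):
    return x in runners

# (63a): the abstract corresponding to the intransitive verb.
runs_abstract = lambda x: runs(x)

# (61a): (Percy (λx (Runs(x)))) - lambda conversion amounts to
# substituting Percy for x, i.e. applying the function.
sentence_61 = runs_abstract("Percy")
print(sentence_61)
```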
The representation of (61) as (61a) is, however, more complex than either
the syntax or the semantics requires. We gain nothing whatever from its
employment. However, consider again the more complex problem of (62)
with its alternative readings. Since lambda abstraction allows us to create
verbs from propositional functions and, since it can be used to reflect scope
relations between quantifiers, we can provide (62) with alternative semantic
representations as follows:

(62) c. (Everyone (λx ((λy (R(x,y))) someone))).

(62) d. ((λy (Everyone (λx (R(x,y))))) someone).

At first sight, these structures seem rather cumbersome. However, quite
apart from being easily assigned a semantic interpretation, their power resides
in the fact that, while the scope relations are reflected correctly, the linear
positions of the English words correspond to that of the actual sentence (62).
Again, as with the representation of (61), (62c,d) may be converted into
structures equivalent - assuming the use of the logical signs - to (62a,b),
simply by the substitutions, Everyone/x and someone/y, and the indicated
deletions. Alternatively, the natural English string, (62), may be obtained by
deleting all logical symbols.
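
The difference between the two readings is just a difference in the order of nesting, which can be seen by evaluating both over a small invented domain in which the readings come apart in truth value:

```python
# Two scope readings of (62) over an invented domain and an invented
# relatedness relation chosen so that the readings differ.
people = {"a", "b", "c"}
related = {("a", "b"), ("b", "c"), ("c", "a")}

def R(x, y):
    return (x, y) in related

# Wide-scope "everyone", as in (62a)/(62c): for every x there is
# some y related to x.
reading_c = all(any(R(x, y) for y in people) for x in people)

# Wide-scope "someone", as in (62b)/(62d): there is a single y to
# whom every x is related.
reading_d = any(all(R(x, y) for x in people) for y in people)

print(reading_c, reading_d)
```

On these facts the first reading is true and the second false, so the grammar must indeed keep the two representations apart.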
Probably the best known exponent of the technique just outlined is Cress-
well (1973, 1985). Cresswell shows that, by employing lambda abstraction,
an enormous array of natural-language structures can be represented so as
closely to reflect the link between syntax and semantics. I shall follow his
example in this study.
There are, of course, several technical aspects to lambda abstraction which
I have not mentioned. I shall, in chapter 7, show how Cresswell incorporates
lambda abstraction directly into a categorial language in such a way as to
allow for the construction of any category of expressions. Here, I shall com-
ment only on the principle of Lambda conversion.
Clearly, if an expression, α, is to be converted into another, α′, then α
must be semantically equivalent to α′. Cresswell (1973) discusses, at length,
an equivalence relation, due to Church (1941), called "bound alphabetic vari-
ance". The details of this relation are complicated, but the basic principle is
not. Two expressions, α and α′, are bound alphabetic variants if and only if
they differ just in respect of free variables.
The equivalences in question will be as follows.

a. If α differs from α′ only in having free x where α′ has free y, then
α = α′.

b. If α differs from α′ only in having free variables where α′ has none,
then α = α′.

c. If α is ((λx (γ x)) β), for some β, and α′ is (γ β), then α = α′.


Montague's intensional logic 61

Any expressions, α and α′, meeting one of these equivalences are bound
alphabetic variants and may be converted into each other.
As indicated, from the viewpoint of mathematical semantics, the attrac-
tion of lambda abstraction is that, incorporated within categorial grammar, it
makes possible the construction of semantic representations in terms of the
syntax. Thus, it permits the straightforward association of the two planes of
meaning and expression.

3.8 Montague's intensional logic

If we combine the apparatuses of the higher-order predicate calculus and an
enlarged version of modal logic which allows for intensional interpretations,
along with lambda abstraction, we obtain an Intensional logic. If, further, we
enrich such a system with tenses, it becomes a Tensed intensional logic. Mon-
tague (1973) employs such a logic to provide the semantic representations of
natural-language expressions. This section will focus upon his development.
The enlargement of the modal logic takes the form of the addition of two
more operators, one for intensionality and the other for extensionality. Using
Montague's symbolisation, we indicate the former, for any expression α, by
∧α and the latter by ∨α.
This enlargement is made possible by virtue of the interpretation of modal
systems in terms of sets of possible worlds rather than an arbitrary world as
in the case of the predicate calculus. Instead of thinking of the denotation of
a given expression, say a definite description, as some entity in a possible
world - its extension in that world - we may regard the expression as referring
to the set of its extensions in all possible worlds.
To repeat the commentary in chapter 1: the range of its possible extensions
is what is understood to be the intension of an expression. Thus, the intension
of the evening star is the set of all of its possible extensions, including, in our
world, the planet Venus. The intension of the morning star is its extension in
each possible world, including the planet Venus in our world. We conclude,
therefore, that while the evening star and the morning star have the same
extension in our world, they might have different extensions in other possible
worlds and, thus, they have different intensions. This is why Frege's famous:

(64) *Necessarily, the evening star is the morning star.


is false, i.e. denotes a false proposition, whereas, as we have already noted:

(65) Necessarily, the evening star is the evening star.

is true, i.e. has a true proposition as its denotation.


The intension/extension distinction is an ancient one in philosophy (see,
for example, Dowty et al. 1982). However, in its modern form, it is usually
traced to Carnap's development of Frege's idea that an expression has both a
sense and a reference. In Carnap's original treatment (1947), the intension of
an expression was a function with state descriptions or models as its domain
of arguments and extensions in its range of values. Lewis (1970), along with
other scholars, including Montague (1973), refined on the Carnapian view of
this function by making its domain an n-tuple of relevant factors - relevant
to the determination of meaning - including possible worlds and moments
of time. Such n-tuples are called "indices" and their elements "co-ordinates".
In chapter 6, I shall discuss indices in some detail, including Lewis's
system. Here I shall simplify by saying that the domain of a function is a set
of indices made up of a possible world and moment of time and its values
are extensions. I take it that certain expressions may have extensions which
coincide with their intensions. Thus, a sentence denotes a proposition and
has a proposition as its intension, i.e. a function from indices to truth values.
The intension of many proper nouns, e.g. Scott, is a function from possible
worlds and moments of time to a unique individual and is, thus, coincident
with its extension.
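
The distinction can be sketched by treating an intension as a function from indices to extensions. The worlds, times and astronomical facts below are invented for illustration; in the actual world both descriptions pick out Venus:

```python
# An intension as a function from indices (world, time pairs) to
# extensions. Indices and facts are invented for illustration.
indices = [("actual", "t1"), ("other", "t1")]

def evening_star(index):
    world, _time = index
    return "Venus" if world == "actual" else "Mars"

def morning_star(index):
    world, _time = index
    return "Venus"      # Venus at every index, by stipulation

# Same extension at the actual index...
here = ("actual", "t1")
same_extension_here = evening_star(here) == morning_star(here)

# ...but different intensions, since the functions diverge somewhere.
same_intension = all(evening_star(i) == morning_star(i) for i in indices)

print(same_extension_here, same_intension)
```

The two expressions coincide at the actual index yet differ as functions, which is just the text's point about (64) and (65).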
As the remarks in the opening chapter suggested, the notion of an intension
is fundamental to an understanding of the notion of Meaning. Again following
Lewis (1970), we may say that to say what a meaning is is, in part, to say
what it does. Since intensions relate possible worlds to extensions, they are
obviously part of what a meaning does.
Since this chapter is devoted to background notions in formal logic, it
would be pleasing to present intensional logic as a purely formal system.
However, one of Montague's major interests in developing his system lay in
its applicability to natural language. It seems strained, therefore, to discuss
this logic without referring to its application. What follows will, therefore,
constitute an important - albeit derivative - part of what I have, myself, to
say about the semantics of natural languages.
In the previous section, I introduced, in very general terms, the notion of
a categorial grammar. While I shall not elaborate on this notion until chapter
7, it is again convenient to use it here. Essentially, a categorial grammar is a
simple context-free grammar which allows for the construction of infinitely
many "derived" categories from a small number of "basic" ones. Such
grammars were developed by Ajdukiewicz (1935) who was indebted to Lesniewski
and, ultimately, to Husserl.
Montague (1973) assumes two basic categories, e and t. We may think
of these as the categories of entities and sentences respectively. The letter
"t" stands for "sentence" because only sentences denote truth-bearing expres-
sions.
From these two basic categories, we derive others of the form, (a/a) or
(a/b). In terms of functions, such derived categories have their range of values
in the category named by the leftmost symbol and their domain of arguments
in the category represented by the rightmost symbol. Thus, (a/a) takes, as
argument, a member of category, a, and yields a value in a. Similarly, (a/b)
takes a member of b as argument and has a value in a.
The utility of such a system is that it readily lends itself to the con-
struction of complex expressions in a manner which explicitly reflects their
compositional structure. Such a system, therefore, lends itself naturally to the
exploitation of Frege's principle of compositionality.
As a simple instance of a derived category in English, let (t/t) represent the
category of sentential adverbs, including modal operators, such as necessarily.
Such expressions concatenate with sentences to form sentences, as in (64).
Functionally: a sentential adverb is a function whose argument is a sentence,
member of t, and whose value is a sentence, member of t. In like fashion,
we may think of a one-place predicate, such as walks, as a function with
argument in e and range in t. Thus, walks, being of category (t/e), takes
an expression of category e, say Percy, as argument and yields, as value, a
member of t, namely the sentence:

(66) Percy walks.
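
The functor-argument mechanics can be sketched directly, treating an expression of category (t/e) as a function from nominals to sentences (the lexicon and the concatenation rule here are illustrative only):

```python
# A derived category (a/b) modelled as a function from b-expressions
# to a-expressions. Here (t/e) is the category of intransitive verbs.
def make_verb(verb_form):
    """Build a (t/e) functor: nominal (category e) in, sentence
    (category t) out."""
    def apply_to(nominal):
        return f"{nominal} {verb_form}."
    return apply_to

walks = make_verb("walks")      # category (t/e)
sentence = walks("Percy")       # category t, i.e. (66)
print(sentence)
```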

Obviously, there is more to a categorial grammar than the above para-
graphs suggest. For instance, it proves useful to set up a system of rules of
graphs suggest. For instance, it proves useful to set up a system of rules of
functional application which spell out the applications of functions to argu-
ments and any morphological or other changes those applications may bring
about. In addition, it is necessary to provide a lexicon which lists the basic
expressions belonging to each category, basic or derived. The lexicon will,
of course, be language-specific.
Let us assume, in addition to the two syntactic categories, e and t, the
existence of a semantical object, s. S may be thought of as the set of senses
or intensions. Let Y represent the set of "Types". Then Y is the smallest set
such that:

(67) Members of Y

a. e and t ∈ Y;

b. where a, b ∈ Y, < a,b > ∈ Y;

c. if a ∈ Y, then < s,a > ∈ Y.

Self-evidently, (67) allows for both basic and derived types.
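
The recursive definition (67) can be mimicked by generating types up to a bounded depth, representing a derived type < a,b > as a pair and an intensional type < s,a > as a pair headed by "s" (the depth bound is an artefact of the sketch, since Y itself is infinite):

```python
# Generating members of Y by the clauses of (67), to a given depth.
def types_up_to(depth):
    Y = {"e", "t"}                            # clause (67a)
    for _ in range(depth):
        new = set(Y)
        new |= {(a, b) for a in Y for b in Y}  # clause (67b)
        new |= {("s", a) for a in Y}           # clause (67c)
        Y = new
    return Y

Y2 = types_up_to(2)
# The intransitive-verb type < e,t > and its intensional counterpart
# < s,< e,t >> are both generated.
print(("e", "t") in Y2, ("s", ("e", "t")) in Y2)
```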


The intensional logic employs constants - primed tokens of their natural-
language counterparts, e.g. man' = man - and variables of each type. Each
type is written in the reverse order to that used for the syntactic categories
and '/' is replaced by ','. Finally, derived types are enclosed in angles. Thus,
the type notation corresponding to (a/b) is < b,a >.
(c) of (67) permits intensional types of any category, basic or derived.
Montague uses an assignment function, F, whose domain is the set of all
constants such that:

(68) If a ∈ Y and α ∈ Constant_a, then F(α) ∈ < s,a >.

Thus, F assigns to each constant an intension. For instance, Atlantis' no
longer denotes a member of type e, but an intension of type < s,e >. Verbs
like looks for', which create opaque, or "oblique", contexts, take as object
argument a member of < s,e >, say F(Atlantis'), and yield, as value, an
intransitive verb, e.g. looks for'(F(Atlantis')), which denotes a member of
type, < s,<< s,e >,t >>. Such a verb, in its turn, takes a member of
< s,e > as argument, say F(Percy'), and yields, as value, a proposition,
member of < s,t >, i.e. the proposition denoted by:

(69) Percy looks for Atlantis.

Since < s,t > is a function with domain in possible worlds and moments of
time and range in truth values, the value of the proposition denoted by (69)
will be in {1,0}.
This account of (69) is greatly simplified and departs from Montague's
treatment on several counts, including the fact that he would take for as a
separate item - an intensional preposition. The point is that the verb looks
for does not presuppose the existence of its object and the fact that it takes
a member of < s, e >, rather than the extensional type, e, allows for this. In
addition, the subject of the verb must be rendered extensional by a suitable
postulate, see below.
As another illustration, consider an oblique context created by the senten-
tial adverb necessarily. Let necessarily' be of type << s,t >,t >. That
is to say: let it be an intensional adverb. Accordingly, necessarily' takes a
member of < s,t >, a proposition, as argument and yields a proposition as
value. This permits the solution of Frege's evening star paradox. Necessarily
claims universal truth or falsehood of its complement. Necessarily', there-
fore, demands that its argument be an intension not an extension, which is
particular to some arbitrary world and moment of time.
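
On this treatment, necessarily' inspects its argument at every index, which is exactly why that argument must be an intension. A sketch (the indices and the two star-intensions are invented, as before):

```python
# necessarily' as a function on intensions: true just in case its
# argument is true at every index. Data invented for illustration.
indices = ["actual", "other"]
evening_star = {"actual": "Venus", "other": "Mars"}  # intension as table
morning_star = {"actual": "Venus", "other": "Venus"}

def necessarily(proposition):
    """Argument: an intension (index -> truth value), not an
    extension tied to one arbitrary index."""
    return all(proposition(i) for i in indices)

# (64): necessarily, the evening star is the morning star - false,
# since the identity fails at some index.
frege_64 = necessarily(lambda i: evening_star[i] == morning_star[i])
# (65): necessarily, the evening star is the evening star - true.
frege_65 = necessarily(lambda i: evening_star[i] == evening_star[i])

print(frege_64, frege_65)
```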
We tend to think of oblique contexts in terms of classes of verb, adverb, ad-
jective, etc. which create them. However, as Cresswell (1973) demonstrates,
the necessity for "intensional objects" is not restricted to such constructions.
He gives the following example of a sentence which depends, for its ambi-
guity, crucially upon the fact that the subject noun phrase denotes a function
whose values change with its temporal arguments.

(70) The prime minister of New Zealand will always be a British subject.

Whether such sentences provide "the most plausible evidence" of the need
for intensional objects, or whether they are, at some level of analysis, of
a kind with the more obvious opaque cases, is unclear. However, it is very
apparent that opacity must be of central concern in semantics and Montague's
intensional logic was constructed with this concern at its heart. I take up the
topic, including Cresswell's examples, again in chapter 4.
I gave the type for necessarily' above as << s,t >,t >. This is,
however, a simplification. In fact, the function, F, assigns all constants a
member of type < s,a >, so that necessarily' will actually be of type,
< s,<< s,t >,t >>, in which << s,t >,t > = a. This manoeuvre
proves, ultimately, to lead to simplification in the grammar. Its effect is to
make a completely general treatment possible in which all expressions are
initially treated intensionally. During the computation of the meaning of any
sentence, those intensions which are inappropriate are reduced to extensions
through the use of the operator ∨ in a manner to be discussed later in this
section. Thus, while, on one reading, the object noun phrase in (69) must
remain intensional, that in (71) must be extensionalised. This is so because
photograph, like find mentioned in chapter 1, requires an extensional object.

(71) Percy photographed Atlantis.

One alternative to complicating the types would be to complicate given
semantic rules so that they sometimes apply to intensions and, at others,
to extensions. This alternative is, however, undesirable since it diminishes
the generality of the analysis without resulting in a significant reduction in
complexity. Yet another alternative is to interpret the class of individuals so
widely as to include intensional objects which may, then, act as arguments to
functions just as ordinary extensional objects do. This is the approach which
I adopt in later chapters of this study.
It is to be noted that (c) of (67) allows for an infinite iteration of intensions.
This is so because, if < s,a > is a type, then so is < s,< s,a >> and so on.
This unwanted iteration is of no semantic consequence since, by definition,
an intension is a function from all possible worlds and moments of time
to extensions. Hence, the intension of an intension is a constant function
and endlessly to apply it to itself leads to nothing more than an infinity of
identicals.
In the interests of readability, I shall often omit one or all tokens of the
intensional symbol, s, when referring to given types unless they are central
to the point at issue.
It will be evident that the operation of a categorial grammar, as sketched
earlier, is completely general. There is nothing to prevent the generation of
arbitrary sequences most of which would be ungrammatical. If we say that,
where a and b are categories, (a/b) is a category, then, since adverbs and
common nouns are categories, so is (adverb/common noun). This sanctions
the creation of such odd strings as necessarily leopard. Indeed, if we treat all
words as part of the lexicon, including articles and other grammatical words,
we could even generate such monsters as leopard the, or such the and. There
is nothing wrong with this liberality in the context of universal grammar.
Indeed, its presence is to be welcomed since we can never be sure that
a given concatenation will not be required. One consequence is, of course,
that, in concert with most current theory, we are led to view language-specific
rules as filters which permit the passage of only a subset of the grammar's
output. Parallel considerations hold for the peculiar semantic types which are
assigned to the relevant expressions. However, it may be that the semantic
rules which remove nonsense are less language-specific than their syntactic
counterparts. I discuss a logical example below.
The intensional logic, like any other formal system, must contain a set
of formation rules which specify those expressions which are well-formed.
Such expressions are called "meaningful expressions" and their specification
is such as to allow for the licence referred to above, while retaining the
customary restraints of standard logics. These rules provide for the inclusion
of lambda expressions and the usual wffs of the predicate calculus as well as
the modal, tense and intensional operators. I reproduce Montague's formation
rules below, commenting only on those which are of special interest or whose
significance is not immediately apparent.

(72) Formation rules

(R.a) Every variable and constant of type a ∈ ME_a.

Comment: The intensional logic employs denumerably many variables of
each type and each is a meaningful expression of that type. Among such
variables are those which range over predicates as in the higher-order predi-
cate calculus. The logic also employs infinitely many constants of each type.
It is necessary that these sets be denumerable since each variable is indexed
by a natural number. The idea of a language with an infinite vocabulary is
not, as Cresswell (1973) shows, implausible, especially if we regard numbers
as words.

(R.b) If α ∈ ME_a and u is a variable of type b, then (λu(α)) ∈ ME_<b,a>.

Comment: This rule permits lambda expressions as meaningful expressions
and determines their status. For example, suppose that α is a formula, i.e.
of category t, and u is a variable of type e, then (λu(α)) is a verb, that
is, a member of ME_<e,t>. As we saw in the previous section, this verb, a
function, applied to an argument of the appropriate type, namely of e, yields
a sentence.

(R.c) If α ∈ ME_<a,b> and β ∈ ME_a, then α(β) ∈ ME_b.

Comment: This rule stipulates the domain of arguments for any function and
gives the status of the value for those arguments. To illustrate: let α be a
verb, i.e. a function of type < e,t >, and β an appropriate argument, i.e. a
member of e, then the value of α for that argument is a member of t. If α =
runs' and β = Percy', then runs'(Percy') ∈ ME_t.
Although rule (R.c) dictates an inflexible condition on wellformedness -
the argument for a given function must be of a specified type - it leaves free
the specification of function and argument. Thus, paralleling the earlier dis-
cussion, we could, conceivably, have a function, < e,< t,t >>, represented,
say, by necessarily", which took as argument a member of e, say Percy', to
yield a sentential adverb, i.e. (necessarily" Percy'). This adverb would then
concatenate with a proposition in the usual way to yield a proposition, e.g.:

a. *((Necessarily Percy) it is snowing).

(R.d) If α, β ∈ ME_a, then (α = β) ∈ ME_t.

(R.e) If φ, ψ ∈ ME_t and u is a variable, then: -(φ), (φ & ψ), (φ ∨ ψ),
(φ → ψ), (φ ↔ ψ), ((∀u)(φ)), ((∃u)(φ)), (L(φ)), (W(φ)), (H(φ))
∈ ME_t.

Comment: This rule provides for all of the customary wffs of the predicate
calculus - of any order. It also allows for the operators of modal logic -
represented by L, so that possibly must be derived - and tense operators.
Montague's tense provision is rather slight, allowing only for future, sym-
bolised W, and past, symbolised H, with present unmarked. Cresswell (1973)
argues that so-called "Tense logic", even in Prior's own formulation (1968),
is not particularly helpful in the analysis of the tenses of natural language.
Dowty et al. (1981) provide a full discussion of tense in the context of Mon-
tague grammar and I shall briefly return to his treatment below.

(R.f) If α ∈ ME_a, then (∧α) ∈ ME_<s,a>.

(R.g) If α ∈ ME_<s,a>, then (∨α) ∈ ME_a.

Comment: (R.f) and (R.g) establish the wellformedness of expressions pre-
fixed by the intensional and extensional operators. It is obvious that ∨ is
productive only if α denotes an intension - its application to an extension-
denoting expression would be vacuous.
It will be seen below that the utility of these operators is in providing
flexibility in applying functions to arguments, in spite of the rigid constraint
of (R.c) above.

(R.h) Nothing is a meaningful expression except as provided for by (R.a)
to (R.g).

Having provided the syntax of the logic, through the formation rules, it
is now time to turn to its semantics. First, the possible semantic values -
possible denotations - for the different types must be provided. This is done
by the following definition, where A, I, J represent the set of individuals,
the set of possible worlds and the set of moments of time respectively. The
notation D_a^{A,I,J} is to signify 'the set of possible denotations of a with respect
to A,I,J' - for brevity, I shall often simply write D_a. As usual, the notation
x^y signifies a function with domain in y and values in x.

(73) Possible denotations of types

a. D_e^{A,I,J} = A.

Comment: This simply establishes the set of possible semantic values for any
expression of type e as the set of individuals.
As noted earlier, since the set of individuals is taken in the context of all
possible worlds and moments of time, A must include individuals which do
not occur in the actual world, such as Pluto. The inclusion of such nonactual
individuals is, of course, necessary if the logic is to be useful in the analysis
of natural language. It will also be recalled that many things are included in
the set of individuals which may not, normally, be thought of as individuals,
including propositions and formulae.
Since we can, and often do, talk about possible worlds and moments of
time, these entities also may be members of A. Thus, to say that an expression
has such-and-such a semantic value relative to < A,I,J > is to say that it
has that denotation with respect to the set A. The subsets I and J are picked
out for especial mention because they are denotation-determining in many
cases. That is to say, they are used to fix the extensions of expressions.
I should, however, mention that the intensional logic does not contain
expressions which denote ordered pairs of I and J, indices, directly. This point
is made by Dowty et al. (1981) who refer to a thesis by Gallin (1972). It seems
that this reflects the fact that "natural languages do not make explicit reference
to indices". Thus, for example, the phrase here and now has absolutely no
meaning in the absence of a context of use.

(73) b. Dt = {0,1}.

Comment: As in most current work in formal semantics, the denotation of a
proposition is asserted by Montague to be a truth value.
It is to be noted that Montague's intensional logic employs a binary truth
system, falsity/truth. His commitment here is a profound one since it involves,
among other things, adherence to the Russellian theory of definite descriptions
(see chapters 1, 4 and 6).
This approach obviously has significant implications for the semantics. As
we saw earlier, it is clearly possible to take another view, namely, that adopted
by Strawson (1950). This alternative claim amounts to saying that sentences
like Russell's example, which suffer from presuppositional failure - the sentence
presupposes the existence of a king of France - are neither true nor false. This approach
leads naturally to the adoption of a multi-valued system in which, in addition
to 0 and 1, we might include an indeterminate value, or even a range of
approximations to truth or falsehood. The standard introduction to such logic
is probably still Rescher (1969), at least for non-Polish speakers. I return to
Russell's account and the controversy it generated frequently, especially in
chapter 6.
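A fragment of such a multi-valued system can be sketched in a few lines of Python. The tables below follow a strong-Kleene-style scheme, with None standing for the indeterminate value; this is one illustrative choice among the systems Rescher surveys, not a rendering of any particular one:

```python
# Truth values: 1 (true), 0 (false), None (indeterminate).
def neg(p):
    return None if p is None else 1 - p

def conj(p, q):
    if p == 0 or q == 0:          # falsity dominates
        return 0
    if p is None or q is None:    # otherwise indeterminacy spreads
        return None
    return 1

# 'The king of France is bald' receives None under presuppositional failure,
# so its negation is likewise None rather than 1.
print(neg(None))      # None
print(conj(0, None))  # 0
```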

(73) c. D<a,b> = Db^Da.

Comment: (c) says that the denotation of a complex type <a,b> is a
function from the possible denotations of a to the possible semantic values
of b. Thus, if <a,b> is an intransitive verb, member of <e,t>, then
its possible denotation is a function from individuals or sets of individuals
70 Background notions from formal logic

to truth values. If <a,b> is a transitive verb, member of <e,<e,t>>,
its denotation is a function from individuals to a function from individuals
to truth values. The denotation of loves' is a function from A - taken as
objects - into a function from A - taken as subjects - into {0,1}.
To say that the values of certain functions are truth values is, of course,
to say that those functions are propositions.
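This functional treatment of verb denotations can be pictured in a few lines of Python. The toy model below is, of course, merely my illustration - the domain and the loves relation are invented - but it shows how a transitive verb denotes a curried function from objects to functions from subjects to truth values:

```python
# A toy model: A is the set of individuals.
A = {"john", "mary"}

# An intransitive verb, type <e,t>: a function from individuals to truth values.
walks = lambda x: x == "john"

# A transitive verb, type <e,<e,t>>: a function from individuals (objects)
# to functions from individuals (subjects) to truth values.
loves = lambda obj: (lambda subj: (subj, obj) in {("john", "mary")})

print(walks("john"))          # True: John walks
print(loves("mary")("john"))  # True: John loves Mary
```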

(73) d. D<s,a> = Da^(I×J).

Comment: The possible denotations of type < s,a > are functions from all
indices - ordered pairs of possible worlds and moments of time - into the
possible denotations of a.
As discussed earlier, the need for such functions in the semantics stems
from opaque contexts in which we cannot say that given expressions have
ordinary extensions as their semantic values. Montague uses the term "individual
concept" for members of <s,e>; "property" for members of
<s,<a,b>>; and "proposition" for <s,t>.
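Such functions from indices to extensions can be made concrete by modelling an intension as a table from world-time pairs to values. The Python sketch below anticipates the temperature example discussed next; the worlds, times and values are, needless to say, invented:

```python
# Indices are pairs <i,j> of a possible world and a moment of time.
I = {"w1", "w2"}   # possible worlds
J = {1, 2}         # moments of time

# An individual concept, type <s,e>: a function from indices to individuals.
# Its value shifts from index to index, as with 'the temperature'.
the_temperature = {
    ("w1", 1): 20, ("w1", 2): 25,
    ("w2", 1): 20, ("w2", 2): 20,
}

def extension_at(intension, i, j):
    """The extension of an intension at the index <i,j>."""
    return intension[(i, j)]

print(extension_at(the_temperature, "w1", 2))  # 25
```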
Montague provides further justification for his use of individual concepts
as denotations for nouns by his treatment of certain "extraordinary common
nouns" such as temperature. Such nouns, he claims, denote functions with
shifting values. As he, himself, expressed it (1971b/1973: p. 264): "the indi-
vidual concepts in their extensions would in most natural cases be functions
whose values vary with their temporal arguments". The most interesting cases
are those in which the noun in question occurs in subject position, as in:

(74) The temperature is rising.

I return to the analysis of sentences like these later (chapter 4). In the mean-
time, it is apparent that there is much in common between Montague's "ex-
traordinary common nouns" and Cresswell's extraordinary complex proper
nouns such as the prime minister of New Zealand.
Of course, the extraordinary behaviour of some nouns in subject position
does not alter the fact that, in the majority of cases, to predicate a property
of something does imply that thing's existence. Thus, it is a convenient
simplification to say that a verb denotes the set of elements of which it is
true. Montague capitalises on this convention by allowing any expression γ
of type <a,t> to denote the extension of a. Thus, if α ∈ a, then γ(α)
asserts that the things denoted by α are, in fact, members of the set denoted
by γ. Thus, if γ is an intransitive verb, then α ∈ e and γ denotes a set of
individuals. It is easy to adjust this formulation so that, if γ is a member of
<a,<b,t>>, then γ denotes a set of ordered pairs, i.e. is a two-place
verb.

Convenient simplifications aside, we have seen already that the expression,
<s,a>, is completely general. There is no reason why a should not stand
for a derived type, including <<s,e>,t>. In such a case, <s,a> would
represent the type, <s,<<s,e>,t>>. Such properties are the denotations,
stricto sensu, of intransitive verbs and are functions from world-time pairs
to functions from individual concepts to truth values, i.e. propositions.
As observed in section 6.5, properties may, themselves, have properties.
Thus, we might say that, in (75a) the property of walking has the property
of being universally instantiated by human beings.

(75) a. Every man walks.

In such cases, every man has the complex type <s,<<s,<<s,e>,
t>>,t>>. This type looks formidable indeed. However, if its s-tokens are
ignored, it will easily be seen to represent a type which takes an intransitive
verb, <e,t>, to form a sentence, t.
The elegance of this way of treating quantifier phrases can readily be
appreciated by looking at an example. Let π be a property of the type just
described and let the notation, π{x}, mean 'x has such-and-such a property'.
Then, the logical equivalent of (75a) will be:

(75) b. (λπ (∀x) (man'(x) → π{x}))(walks').

Here, the lambda expression is a noun phrase and (75b), after lambda
conversion, becomes the proposition:

(75) c. (∀x) (man'(x) → walks'(x)).

In great measure, the beauty of (75b) resides in the clarity with which it
shows the status of every man as a higher-order property of the verb walks.
This is not, however, to claim that it fully represents the meaning of every. In
chapter 7, I shall discuss further the status and structure of quantifier phrases.
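The conversion from (75b) to (75c) can be verified mechanically over a finite model. In the Python sketch below the domain and the man and walks sets are invented, intensionality is ignored, and a property is simply a function from individuals to truth values:

```python
A = {"john", "percy", "fido"}
man = lambda x: x in {"john", "percy"}
walks = lambda x: x in {"john", "percy", "fido"}

# (75b): 'every man' denotes a function from properties to truth values.
every_man = lambda prop: all((not man(x)) or prop(x) for x in A)

# (75c): the lambda-converted formula quantifies over individuals directly.
direct = all((not man(x)) or walks(x) for x in A)

print(every_man(walks) == direct)  # True: the two formulations agree
```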
In fact, Montague extends the term "property" to refer to the denotation
of any type <s,<a,b>>. In this generalised use, we may, for example,
have properties of propositions, i.e. members of <s,<<s,t>,t>>. As noted
earlier, a sentential adverb such as necessarily' would be of this type.
As a final case, consider an expression, γ, which is of type <s,<a,<
b,t>>>. If we allow the second member of this expression to be of type
<<s,e>,<<s,e>,t>>, it will be seen that the denotation is a two-
place relation. Montague calls a relation of this kind a "relation-in-intension".
Of course, for many transitive verbs, as noted already, their status in the final
computation will have to be extensional.

(73) e. By Sa, or the set of senses of type a, is understood D<s,a>.

Comment: Dowty et al. (1982) explain the difference between Montague's
"sense" and "intension" as that between the set of possible intensions of
a given type and the actual intension for a given expression of that type.
The assignment function, F, which has all constants as its domain, yields as
value for any constant, α, of type a, a member of Sa. Thus, F(leopard') has a
common-noun intension selected from the possible common-noun intensions,
i.e. one which picks out the set, {x: leopard(x)}, in each world-time pair.
Evidently, intensions will always be senses, but there may well be senses
which, being never selected, are not intensions in a given language. Thus,
by setting up the set of senses, Montague frees the logic from any particular
language and, indeed, from the confines of human languages in general, be
they natural or artificial.
Having established the possible denotations for each type, the system is
completed by the construction of an "interpretation" or "intensional model"
which, as with the models of modal logic, assigns extensions to actual mean-
ingful expressions. Such an intensional model, symbolised M, is a quintuple
<A, I, J, →, F>. Here, A, I, J are as before; → is a simple ordering on J,
i.e. an ordering on moments of time; F is the assignment function referred
to above. Obviously, the crucial element of M which distinguishes it from a
mere denotation-function is the ordering →.
As well as nonlogical constants, such as walks' and leopard', the logic
employs denumerably many variables and these, as in the predicate calculus,
are assigned values by a function, g. Thus, g(u) ∈ Da whenever u is a variable
of type a. For example, if he_n is the nth variable of type e, then the value
of g(he_n) is the individual denoted by he_n under the assignment.
In the presentation of the semantic rules, it proves convenient to adopt
notational conventions which explicitly indicate an intensional vs extensional
interpretation-assignment of semantic values. Thus, by α^M,g is understood
the intension of the meaningful expression, α, with respect to M and g. By
contrast, by α^M,i,j,g is meant the extension of α with respect to M and g.
In this latter notation, <i,j>, a "point of reference", represents a member
of the product set, I×J. To illustrate: let α = the king of France', then
α^M,g is the function which picks out an individual at each possible world
and moment of time, or returns 0. On the other hand, α^M,i,j,g denotes some
individual at a particular world-time pair, or returns 0.
Montague's semantic rules which spell out the precise interpretation for
any given meaningful expression are elaborations of and extensions to those
for the predicate calculus discussed impressionistically in section 5. I present
them below with comment where this seems necessary.

(76) The semantic rules

(R.a) If α is a constant, then α^M,g is F(α).

Comment: As stated above, the possible denotations of nonlogical constants
are intensions, not extensions.

(R.b) If α is a variable, then α^M,i,j,g is g(α).

Comment: Naturally, the possible denotations of variables are extensions, not
intensions. Hence the specification of the pair, <i,j>, in the rule.

(R.c) If α ∈ MEa and u is a variable of type b, then (λu α)^M,i,j,g is that
function, h, with domain Db, such that, whenever x is in the domain,
h(x) is α^M,i,j,g', where g' is the M-assignment like g except for the
possible difference that g'(u) is x.

Comment: This is a formal statement of Tarski's (1941) strategy by which
the truth of a formula is established by satisfaction, outlined in section 5.
Of course, (R.c) is concerned with lambda-expressions and such expres-
sions are functions. Thus the equation of the lambda-expression with the
function, h, whose value for some element, x, in the domain of the variable,
u, is the interpretation of α under the adjusted assignment g'.

(R.d) If α ∈ ME<a,b> and β ∈ MEa, then (α(β))^M,i,j,g is α^M,i,j,g(β^M,i,j,g),
that is, the value of the function α^M,i,j,g for the argument β^M,i,j,g.

Comment: If α is the constant functor walks' and β is the constant Percy',
then the value of α for the argument β is a truth value. In general: the inter-
pretation of any functor is a function which gives a value for an appropriate
argument.
(R.e) If α, β ∈ MEa, then (α = β)^M,i,j,g is 1 iff α^M,i,j,g is β^M,i,j,g.

Comment: Since α and β may be any meaningful expressions whatever, the
arguments to the equality relation are not specified. It is required only that
they be of the same type.
Thus, if α and β denote extensions, then (α = β) is true at a world-time
pair <i,j> iff α and β have the same extension at that world-time pair.
Hence, the sentence:

(77) Mr Major is prime minister of Britain.



is true at present, but was not always so. If, on the other hand, α and β
denote intensions, then (α = β) is true iff α and β have the same extension
at each world-time pair. It follows, as Dowty et al. (1982) point out, that to
claim intensional equality for two expressions is far stronger than merely to
claim their extensional equality.
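The contrast can be dramatised with two toy intensions, modelled in Python as tables from indices to extensions; the indices and the values assigned to the two descriptions are invented for the illustration:

```python
indices = [("w1", 1), ("w1", 2)]   # one world, two moments of time

# Two intensions: functions from indices to extensions.
the_prime_minister = {("w1", 1): "major", ("w1", 2): "thatcher"}
mr_major           = {("w1", 1): "major", ("w1", 2): "major"}

def equal_at(a, b, index):
    """Extensional equality: sameness of extension at one index."""
    return a[index] == b[index]

def equal_everywhere(a, b):
    """Intensional equality: sameness of extension at every index."""
    return all(a[i] == b[i] for i in indices)

print(equal_at(the_prime_minister, mr_major, ("w1", 1)))  # True
print(equal_everywhere(the_prime_minister, mr_major))     # False
```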
It will be noted that Montague defines equality in terms of is. I shall
discuss some of the problems surrounding this verb in the next chapter. Here,
it suffices that Montague distinguishes two uses of the substantive verb, viz.
the be of predication and the be of identity, symbolised, = . In view of the
earlier discussion and that which is to follow, it would appear that his usage
is somewhat loose here.

(R.f) If φ ∈ MEt, then (¬φ)^M,i,j,g is 1 iff φ^M,i,j,g is 0; and similarly for &,
∨, → and ↔.

(R.g) If φ ∈ MEt and u is a variable of type a, then ((∃u) φ)^M,i,j,g is 1 iff
there exists x ∈ Da such that φ^M,i,j,g' is 1, where g' is as in (R.c); and
similarly for ((∀u) φ).

(R.h) Modal and Tense Rules

(R.h.1) If φ ∈ MEt, then (Lφ)^M,i,j,g is 1 iff φ^M,i',j',g is 1 for all i' ∈ I and
j' ∈ J.

Comment: Thomason (1974) observes, in a footnote, that we must thus in-
terpret "L" as 'always necessary'. This is so, of course, because Montague
includes moments of time in his models.
As remarked earlier, Montague does not employ the possibility operator,
M, so that that modality must be symbolised "¬L¬". The interpretation for
"¬L¬" will be like that for "L", except that the universal quantification over
members of I and J must be made particular. Clearly, for ¬L¬, i' and j' must
be an ordered pair, member of I×J, and there is no reason for freeing L
from the same condition even though it is not technically necessary.
(R.h.2) (Wφ)^M,i,j,g is 1 iff φ^M,i,j',g is 1 for some j' such that j → j' and
j ≠ j'.

Comment: Take j as the moment of utterance and j' as some moment later
than j, then:

(78) It will snow.

is true at the time of utterance, j, iff at j' the sentence:



(79) It is snowing.

is true.

(R.h.3) (Hφ)^M,i,j,g is 1 iff φ^M,i,j',g is 1 for some j' such that j' → j and
j' ≠ j.
Comment: If j is, again, the moment of utterance, then a past tensed sentence
will be true at j iff its present tense form was true at some moment, j', earlier
than j.
Dowty et al. (1981) show how this simple tense system can be expanded
very considerably without the necessity of introducing additional symbols -
though we might wish to use them for stylistic reasons. Thus, for instance, it
will always be can be obtained by flanking W by negation signs and similarly
for it has always been, using H. Naturally, these extensions lead to increas-
ingly complex interpretive rules as the moment of utterance's relation to the
time at which the present tense sentence is true becomes more complicated.
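The behaviour of W and H, and the derived 'it will always be' reading, can be simulated over a toy ordering of moments; the moments and the snow facts below are invented for the illustration:

```python
J = [1, 2, 3]   # moments of time, ordered by <

snowing = {1: False, 2: True, 3: False}   # 'It is snowing' at each moment

def W(phi, j):
    """Future operator: phi holds at some moment later than j."""
    return any(phi[k] for k in J if k > j)

def H(phi, j):
    """Past operator: phi held at some moment earlier than j."""
    return any(phi[k] for k in J if k < j)

def always_will(phi, j):
    """'It will always be': W flanked by negation signs."""
    return not W({k: not v for k, v in phi.items()}, j)

print(W(snowing, 1))            # True: it will snow
print(H(snowing, 3))            # True: it snowed
print(always_will(snowing, 2))  # False: moment 3 is snowless
```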

(R.i) If α ∈ MEa, then (∧α)^M,i,j,g is α^M,g.

Comment: Since ∧ is the intensional operator, when prefixed to α, the re-
sulting expression denotes an intension which, by definition, is the set of all
possible extensions of α at all possible worlds and moments of time. Hence,
at any world-time pair, <i,j>, ∧α denotes an intension and so no particular
member of I×J is mentioned in the second part of the rule.

(R.j) If α ∈ ME<s,a>, then (∨α)^M,i,j,g is α^M,i,j,g(<i,j>).

Comment: If α denotes an intension, then prefixing α with the extensional
operator converts it into an extensional expression. This is indicated in the
rule by specifying the function's argument, <i,j>.
We may illustrate the mechanics of the two operators, ∧ and ∨. It will
be recalled that (R.c) in (72) lays down a strict condition by which the type
specifications of argument and function must match. Thus, a function of type
<<s,e>,t> must have an argument of type <s,e>, while <e,t>
must take an e as argument.
In the semantic analysis of natural-language expressions, this strictness is
at times inappropriate. The operators, ∧ and ∨, are prefixed to an expression
to reverse its intensional/extensional status - as indicated in (R.i) and (R.j).
Thus, ∧ converts e into <s,e> and ∨ converts <s,e> into e. Hence, if
α ∈ <<s,e>,t> and β ∈ e, then α(∧β) is of type t. If γ ∈ <e,t> and
δ ∈ <s,e>, then γ(∨δ) is of type t.
I have already used the notation, φ{x}, to mean that x has such-and-such
a property, φ. This brace-notation is an abbreviation for ∨φ(x).
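The mechanics of the two operators and of the brace-notation can be sketched in Python, with intensions once more modelled as tables from indices to extensions; the expressions and values are invented for the illustration:

```python
indices = [("w1", 1), ("w2", 1)]

def up(values_by_index):
    """The intensional operator: package extensions into an intension."""
    return dict(values_by_index)

def down(intension, index):
    """The extensional operator: extract the extension at an index."""
    return intension[index]

# A property intension for walk', giving the set of walkers at each index.
walk = up({("w1", 1): {"john"}, ("w2", 1): set()})

def braces(phi, x, index):
    """phi{x}: an abbreviation for applying the extension of phi to x."""
    return x in down(phi, index)

print(braces(walk, "john", ("w1", 1)))  # True
print(braces(walk, "john", ("w2", 1)))  # False
```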

These manoeuvres prove useful in the semantic representation of many
expressions. To use a familiar example from Montague (1973), the ambiguous
sentence (80) requires that, on one reading, there must be at least one unicorn
to be true - the de re reading - but, on the other - the de dicto reading - this
condition is not required. Montague's representations - slightly simplified -
are (80a) and (80b). In each, u is an individual variable of type e and J is a
constant of type e.

(80) John seeks a unicorn.

a. (de re): ((∃u) (unicorn(u) & seek(J,u))).

b. (de dicto): (seek(∧J, (λφ (∃u) (unicorn(u) & φ{∧u})))).

In the first representation, the existential quantifier phrase has the entire sen-
tence in its scope and the whole may be glossed as:

(80) c. There exists at least one individual, u, such that u is a unicorn and
John seeks u.

The second representation is rather more complex. First, the oblique context
is created by writing the verb at the beginning with the consequence that
its arguments must be intensionalised, ∧J and ∧u. The object argument is
the lambda-expression which asserts that the object of John's search is some
individual concept which has the property of being a unicorn.
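The truth-conditional difference between the two readings can be dramatised in a model containing no unicorns. The Python sketch below is a crude stand-in for Montague's intensional treatment - the de dicto relation is simply taken to hold between a seeker and a property label - but it shows why only the de re reading demands an actual unicorn:

```python
A = {"john"}                    # a domain with no unicorns
unicorn = lambda x: False

# De re: there is an actual unicorn which John seeks.
sought_individuals = set()      # pairs (seeker, individual sought)
de_re = any(unicorn(u) and ("john", u) in sought_individuals for u in A)

# De dicto: John stands in the seek relation to the property of being
# a unicorn; no actual unicorn is required.
sought_properties = {("john", "unicorn-hood")}
de_dicto = ("john", "unicorn-hood") in sought_properties

print(de_re)     # False: no unicorn exists
print(de_dicto)  # True: the search may still be on
```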

(R.k) If φ is a formula (that is, member of MEt), then φ is true wrt
<M,i,j> iff φ^M,i,j,g is 1 for every M-assignment g.

Comment: (R.k), which Montague listed separately, states that a propositional
function is satisfied with respect to a model if it is true for every assignment
of values to its variables in that model.
Montague's system is completed by a set of meaning postulates which filter
out interpretations of the intensional logic which would not be "reasonable
candidates" for interpretations of English. These postulates lay down the
conditions which must hold in order for given propositions to be true. Thus,
for example, a proposition involving the verb find can be true iff both its
subject and object are extensional. By contrast, the verb seek requires merely
that its subject be extensional. Moreover, a proposition in which seek figures
is true iff another which is just like it save that its verb is try to find is true.
While several of the postulates are very general, for example, proper nouns
are rigid designators, others are designed to handle semantic problems which
are particular to Montague's data. Thus, for example, the rule which states
that the subject of an intransitive verb, e.g. run, must be extensional ex-
plicitly excludes the verbs rise and change from that requirement. Within
Montague's theory, this special treatment is afforded to such verbs in order
to accommodate the behaviour of nouns like price and temperature which,
as we have seen, Montague took to be extraordinary.
Chapter 4
Vagueness and ambiguity

4.1 Background

There can be few indeed who are not aware of the fact that all natural
languages are pervaded by vagueness and ambiguity. Indeed, there have been
those, including Frege, who have seen these hallmarks of natural language
as shortcomings so severe as to render them unsuited to exact reasoning.
While it would not be appropriate, here, to enter into a discussion of the
psychological merits and demerits of ambiguity and vagueness, it is to be
noted that there have been many scholars, e.g. Sweet (1891), who have seen
these not as disadvantages of natural systems, but as contributing to their
fundamental purpose, everyday communication.
From the viewpoint of mathematical semantics, however, there can be
no equivocation. The characteristic vagueness and ambiguity of natural lan-
guages provide the greatest challenge to the hypothesis that there is no es-
sential difference between them and formal, artificial systems. It is a major
preoccupation, therefore, of formal linguistics to search for ways to overcome
this challenge. From the perspective of general linguistics also, the ability to
resolve ambiguities and cope with vagueness are essential features of the hu-
man linguistic capacity, so that to account for them should be a fundamental
aim of any theory of language.
In fact, progress to date has been almost entirely in the description and
explanation of ambiguity, with barely any significant advance toward a formal
theory of vagueness. This is in spite of the current enthusiasm for Prototype
theory among cognitive linguists - see, for instance, Taylor (1989).
One reason for the relative lack of progress in respect of vagueness is
that the majority of linguists have concerned themselves with the structural
properties of sentences rather than with theories of lexical content and it is
on the latter level that vagueness is most obviously manifested. Ambiguity,
on the other hand, though it frequently derives from meanings at the word
level, is prominent at the higher, structural level. Structure in sentences is a
question of dependencies and connections and these phenomena are governed
by rules. Such rules are of central interest to formal linguists, mathematicians,
psychologists and philosophers of language alike. They are, in considerable
measure, language independent and their study, therefore, offers insights into
universal grammar. Lexical content, by contrast, seems to be a language-
specific matter so that neither lexicography nor psycholexicology provide
firm bases from which to draw universalistic inferences.
In the framework of mathematical semantics, the question of vagueness,
at least for the overwhelming majority of items, may be treated as a sec-
ondary problem whose solution can be put off until questions of structural
meaning are settled. We are, after all, free to assign arbitrary meanings to the
basic items of the lexicon and arbitrarily to decree that those assignments be
distinct, so in effect, ruling vagueness out of consideration. With only one
significant exception, I follow this general approach, though I obviously do
not totally ignore questions of word meaning, including in my remarks on
ambiguity. Indeed, my discussion of semantic rules in chapter 8 has rele-
vance to the meanings of lexical items though it is primarily concerned with
general assignments to parts of speech in the context of truth and pragmatic
valuations.
The exception to which I have just referred is the set of indefinite quantifier
compounds whose members include: something and everything as well as the
temporal quantifiers such as always. These items, by virtue of their logical
status, quite apart from the fact that they are of very high frequency, cannot
be passed over. While it would be possible cavalierly to assign them arbitrary
initial values which are later adjusted, it seems best to treat them differently
since, being indefinite, their domains are restricted by the sentential context.
Quine (1966) outlines the method for restricting the domain of these
quantifier-noun compounds. An instance is provided by:

(1) Percy drank something.

Clearly, something cannot, here, be allowed to range over just anything there
is. For the symbolisation to make sense, its domain must be restricted to
drinkables. Thus, a reasonable symbolisation would be (lb) rather than (la).

(1) a. (∃x) (drank(Percy,x)).

b. (∃x) (drinkable(x) & drank(Percy,x)).

Appropriate treatments for negative occurrences involving the suppletive any-
thing are assumed, as in:

(2) Percy didn't drink anything.

So too for the universal compound everything in:



(3) Percy lost everything at the races.

(4) Percy didn't give everything to the tax man.

The universal and particular temporal quantifiers always and sometimes,
along with the negative cases, including never, receive analogous treatments,
being restricted to moments of time.

(5) Percy always wears a bowler.

This may be roughly symbolised as:

(5) a. (∀x) ((time(x) & {wears a hat at x}(Percy)) → {wears a bowler
at x}(Percy)).
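The effect of these contextual domain restrictions can be mimicked over a small model; the domain, the drinkable set and the hypothetical belongings set below are all invented for the illustration:

```python
domain = {"beer", "watch", "wallet", "Eiffel Tower"}
drinkable = {"beer"}
drank = {("Percy", "beer")}
lost = {("Percy", "watch"), ("Percy", "wallet")}

# (1b): 'Percy drank something', with something confined to drinkables.
something_drunk = any(x in drinkable and ("Percy", x) in drank
                      for x in domain)

# (3): 'Percy lost everything', with everything confined to his belongings.
belongings = {"watch", "wallet"}
lost_everything = all(("Percy", x) in lost
                      for x in domain if x in belongings)

print(something_drunk, lost_everything)  # True True
```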

The classic account of ambiguity in English is Quine's (1960). Innumer-
able writers have based their discussions of the subject on his ideas. While I
shall, hopefully, be able to expand somewhat on what he had to say, drawing
as I shall on more recent work, my general approach will be cast in the mould
of his discussion.

4.2 Ambiguity

Ambiguity is usually thought of as operating on both the word and structural
level. Thus, to take a classic illustration, it is common to claim that the noun
bachelor is so-many ways ambiguous and, on the basis of this fact, to derive
the ambiguity of:

(6) John is a bachelor.

While this illustration is sanctioned by tradition, it is unrevealing. Quine
suggests that we may use truth assignment as a criterion for ambiguity. Thus,
if (6) is true when taken to say that John never married, but false when
asserting that he has a university degree, then (6) is ambiguous. The point is
that words in isolation have nothing to do with truth. We should, therefore,
strictly speaking, reserve the term "ambiguity" for truth bearing expressions
and use the term "polysemy" to refer to the potential of individual words
to contribute to ambiguity. I shall not, however, always be consistent in
reflecting this distinction simply for reasons of style.
Accepting the need for a word to be polysemous in order to contribute
to ambiguity, it seems proper to distinguish between genuine polysemy and

pseudopolysemy which results from homonymy. Thus, Quine's (1960) ex-
ample:

(7) Our mothers bore us.

is not truly ambiguous since bore, past of bear, is a mere homonym of to
bore. Similarly, the noun pen 'enclosure' is homonymous with pen 'writ-
ing instrument'. The doubt which arises in interpreting cases like (7) comes
not from ambiguity but from confusion of identical sounds/spellings, which
scarcely seems to be a semantic issue.
Homonymy is of interest in etymology, but is not a condition for ambigu-
ity. It is important, of course, to be clear on what one means by "homonymy".
As Lyons (1968) points out, it would be wrong to assert that the word ear
'organ of hearing' is homonymous with ear 'spike of corn' just because there
were originally two separate words involved. The two words came together
so long ago that their original independence is quite unknown to the vast
majority of the present speech community.
In spite of the labours of lexicographers, when polysemy is extreme, it
constitutes mere generality and this does not lead to ambiguity. Quine (1960)
makes this point with reference to such adjectives as hard. There seems to
be no good reason to say that because hard may combine with question and
also with chair it results in ambiguity. Indeed, as Quine says, there is an "air
of solipsism" about such conjunctions as:

(8) Some questions and chairs are hard.

This is an important point, but it is a contentious one if applied to some
terms, notably, true and false. Quine's (1960) suggestion is that the difference
between true sentences and true laws of logic is just that between sentences
and laws of logic. Certainly, at the very least, it is necessary to distinguish
between the use of these adjectives in modifying, say, love and proposition.
True love is love and false love is not love. To assert that a proposition is
true, however, is to say much more than that it is a proposition. Similarly,
to say that a proposition is false is not to say that it is not a proposition. I
return to these predicates below, section 4.
We tend to think of ambiguity in terms of senses, but it is also possible
for it to arise as a consequence of function. Particularly striking examples of
this are provided by structures consisting of an agentive noun modified by
certain adjectives. Quine's illustration (1960) is poor violinist. When poor
is used predicatively, then the expression refers to a violinist who is
impoverished. When poor is used attributively, the phrase refers
to a violinist who plays badly. This kind of ambiguity is, probably, rare in

actual use. I do not think one would normally use a phrase like poor violinist
unless the property of being a musician was the point at issue. It would be
odd to say:

(9) John is a poor violinist.

to assert that John is an impoverished violinist, though, as always, possible
contexts of situation can be thought up which would favour such a reading.
The ambiguities considered so far have involved so-called "lexical" words,
but ambiguity is also created by "grammatical" words. Particularly interesting
are polysemous quantifiers like the article in a leopard. In such cases, the
quantification may be particular or universal, though the ambiguity is often
resolved by the sentential context. Thus, (10) is taken as particular because
the verb is present progressive, but (11) would normally be universal
because the verb is simple present.

(10) A leopard is crossing the path.

(11) A leopard hunts at night.

The same is true if the is substituted for a in these examples. I shall return to
the interpretation of quantified phrases in chapter 8.
A grammatical word which figures very prominently in the philosophical
and linguistic literature in the context of ambiguity is the substantive verb
be. I referred in chapter 3 to the fact that Montague (1973) treated this verb
as having two uses, predication or identity. Thus, in (12) is is predicative,
but in (13) it denotes the relation of identity.

(12) John is in Canada.

(13) John is Dr Welsh.

Hintikka (1989) points to the fact that some philosophers, including Frege
and Russell, treat the substantive verb as four-ways ambiguous. For such
scholars, be may be predicative, or existential. It may denote identity or
class inclusion. Existential be usually appears in combination with there as
in:

(14) There is a mouse in the pantry.

The be of class inclusion occurs in such sentences as:

(15) A leopard is a cat.

According to Hintikka (1989), the important claim in the Frege-Russell
theory is that the uses of be are logically irreducible, not that they appear in

different syntactic environments. Whatever the logical merits of this claim,
it does not carry over into natural language. On the one hand, it is often
difficult - Hintikka would say impossible - to distinguish one sense from
another and, on the other hand, the sentential environment is often decisive.
To do justice to the semantics of be requires a monograph in itself. I shall
limit myself to a few intuitively grounded remarks here.
Montague (1973) provides two examples of his analysis of be. They are:

(16) Bill is Mary: (Bill = Mary).

(17) Bill is a man: (Man(Bill)).

I shall defer discussion of (16) for section 4, when I take up the distinction
between equivalence and identity in the context of propositional attitudes.
In the meantime, it may be accepted that in (16), the equality symbol does,
indeed, denote the identity relation since the proper nouns which flank it
must be interpreted as without sense, i.e. extensionally only. Equivalently, in
the spirit of chapter 1, we may regard proper nouns as having senses which
are indistinguishable from their denotations. In many cases, however, such
an exclusively extensional interpretation is not possible and, hence, the verb
be predicates equivalence. The clearest instances of this are those involving
definite descriptions, as in the much quoted Fregean sentence:

(18) Necessarily, the evening star is the morning star.

Montague's predicative example (17) is instructive because it shows that
what might, at first, look like class inclusion can be thought of as mere
predication. Obviously, to assert (17) is to assert that Bill is a member of the
class of men, but the symbolisation which accompanies (17) claims that the
predicative be amounts to the same thing. Hintikka's (1983) treatment of be,
though very different from Montague's general development, displays a similar
attitude to such cases. Thus, for example, he treats is in (19) as predicative.

(19) Steffe is a school girl.

What distinguishes cases like (15) from others like (17) is that overt
quantification is involved in both noun phrases. The symbolisation should,
therefore, reflect this additional complication, but the predicative status of be is
not affected.
If the subject noun phrase of (15) were pluralised then the verb and
complement would have to agree in number, giving:

(20) Leopards are cats.


84 Vagueness and ambiguity

The only additional assertion which (20) seems to make over (15) is that
the class of leopards, like that of cats, has more than one member. If the
subject were definite and plural, then the reference would be either to a subset of
the set of leopards - say those immediately in view - or, perhaps, as Chafe
(1970) suggested, the class in question is felt to be limited to a relatively
small number. In all such variants, be is still predicative.
The case of existential be is very unclear. Hintikka (1979) shows that, in
his Game-theoretic semantics, it is necessary to replace the be in the phrase
there is by a be of predication. Thus, to use his own example:

(21) There is a school girl who can beat anyone at tennis.

may be related to the sentence:

(22) Steffe is a school girl and she can beat anyone at tennis.

This is not the place to attempt a discussion of Hintikka semantics. However,
it is quite apparent that he is correct in his analysis. Clearly, in order
to establish the truth of (21) one would need to test out various possible
satisfactions of the main clause and these would involve predicative be.
There remain troublesome cases like (23), where existential be does seem
to be required.

(23) I think. Therefore, I am.

In such cases - which are, of course, very rare in normal discourse - be can
be replaced by to exist. Searle (1971) argues that to exist is not, in fact, a
predicate at all and Russell's (1905) discussion demonstrates the same point.
Much though we might wish them further, however, such cases as (23) do
occur and they therefore create the need for existential be.
The ambiguities discussed so far may, as already suggested, be eliminated
by the simple expedient of decreeing that a given word shall have one and
only one interpretation. Just as we may assign different logical constants
to the polysemous connectives, or and if, so there is nothing to prevent us
deciding, for example, that there are four or five constants corresponding to
the English word bachelor and distinguishing each by indexing it. What is,
clearly, essential is that the vital senses are distinguished and the case of be
provides a paradigm instance.
We might, of course, treat be as a purely grammatical morpheme which
bestows predicate status on nonverbal constituents. Such a treatment certainly
has the appeal of being simple!

4.3 Structural ambiguity

On the level of syntax, ambiguity arises when there are alternative ways in
which parts of complex expressions may be inter-related. I shall discuss a
few only of such ambiguities.
Even though I shall not treat ambiguities of coreference, illustrated in (24),
until the next chapter, it seems appropriate, in the present general discussion,
to point out that they can be very complex in natural languages. Quine's
(1960) example (25) is a striking instance.

(24) John told Percy that he had lost the ticket.

(25) A lawyer told a colleague that he thought a client of his more critical
of himself than of any of his rivals.

Another source of syntactic ambiguity is uncertainty as to scope relations
within a given sentence. Thus, to use a very familiar example, again from
Quine (1960), it is not clear whether all has not in its scope or vice versa
in:

(26) All that glisters is not gold.

If we take all as having wide scope, including the negative, then the sentence
is false. This is so because it asserts that nothing which is gold glisters.
On the other hand, if we give the negative wide scope, including all, the
sentence is true. These contrasting scope assignments are clearly exhibited
in the alternative symbolisations:

(26) a. (∀,x) (Gl,x → ¬G,x): 'If any x glisters, it is not gold.'

b. ¬(∀,x) (Gl,x → G,x): 'It is false that, for each x, if x glisters, x is
gold.'

It will be seen that (26a) is an E-statement, while (26b) is equivalent to an
O-statement.
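The contrast between (26a) and (26b) can be checked mechanically against a
small model. The sketch below (a Python rendering of my own; the three-object
domain, and the spelling out of Gl and G as glisters and gold, are invented
purely for illustration) evaluates both symbolisations and shows that they
receive different truth values in the same model:

```python
# Toy model: three objects with the properties 'glisters' and 'gold'.
# The domain and the property assignments are invented for illustration.
domain = {
    "pyrite":    {"glisters": True,  "gold": False},
    "gold_ring": {"glisters": True,  "gold": True},
    "coal":      {"glisters": False, "gold": False},
}

def glisters(x): return domain[x]["glisters"]
def gold(x):     return domain[x]["gold"]

# (26a): (forall x)(Gl(x) -> not G(x)) -- the E-statement reading.
reading_a = all((not glisters(x)) or (not gold(x)) for x in domain)

# (26b): not (forall x)(Gl(x) -> G(x)) -- equivalent to an O-statement.
reading_b = not all((not glisters(x)) or gold(x) for x in domain)

print(reading_a)  # False: the gold ring both glisters and is gold
print(reading_b)  # True: pyrite glisters but is not gold
```

In this model (26a) comes out false, since the gold ring both glisters and is
gold, while (26b) comes out true, pyrite being a glistering non-gold.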
Another important and much discussed case of scope ambiguity involving
quantifiers and negation, again due to Quine (1960), is provided by (27):

(27) Sally didn't write to everybody.

This sentence has two possible readings. On one - the least likely - not is
in the scope of everybody and the whole means (27a). In the second, the
quantifier is in the scope of the negator and the meaning is equivalent to
(27b).

(27) a. (∀,x) (person(x) → ¬{wrote to}(Sally,x)).

b. ¬(∀,x) (person(x) → {wrote to}(Sally,x)).

(27a) is extremely unlikely because, in English, the proposition it represents
would probably be encoded as:

(28) Sally didn't write to anybody.

As Quine shows, in English, everybody has narrow scope, while anybody
has wide scope. Thus, to repeat his original examples, the scope of any in
(29) stretches over both clauses to include he as well as member. In (30), the
scope of every is just the referent of member.

(29) If any member contributes, he gets a prize.

(30) If every member contributes, I'll be surprised.

Ambiguities in scope relations are not, of course, restricted to the kind
illustrated by those above in which a quantifier and the negative are involved.
Two quantifiers may also compete for scope width. A much discussed instance
of this type, taken up in chapter 3, section 5, is:

(31) Everybody said something.

On one reading, everybody has something in its scope and something is,
therefore, nonspecific. On the other reading, something has everybody in its
scope and is, in consequence, specific.
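The two readings of (31) can be separated in the same mechanical way. In the
following Python sketch (the speakers and the things said are invented for
illustration), the reading on which everybody includes something in its scope
comes out true, while the reading on which something includes everybody in
its scope comes out false, since no one thing was said by every speaker:

```python
# said[person] = the set of things that person said; data invented for
# illustration.
said = {
    "Alice": {"hello"},
    "Bert":  {"goodbye"},
    "Carol": {"hello", "goodbye"},
}
things = set().union(*said.values())

# Reading 1: everybody has something in its scope (nonspecific something):
# for every person x there is some thing y such that x said y.
reading_1 = all(any(y in said[x] for y in things) for x in said)

# Reading 2: something has everybody in its scope (specific something):
# there is some one thing y such that every person said y.
reading_2 = any(all(y in said[x] for x in said) for y in things)

print(reading_1, reading_2)  # True False
```

The model thus verifies that the two scope orderings are genuinely distinct
propositions, not mere notational variants.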
Sentences like (31) have added interest to the linguist because they may
be transformed into the passive voice with a resultant change in the linear
order of the quantifiers, as in:

(32) Something was said by everybody.

The debate here is whether the passive structure retains the ambiguity of its
active counterpart. Many, e.g. Lakoff (1971a), have argued that, in fact, the
active/passive contrast, at least for most speakers, resolves the scope
ambiguity - the scopes are reflected in the linear ordering. Others, including
Cresswell (1973), maintain that the alternatives are both to be regarded as
ambiguous. In a classical transformational grammar, of course, this debate
reflects on the broader issue of the meaning-preserving properties of
transformations.
The fact that there is disagreement as to the ambiguous status of such cases
might, at first, appear worrying. It might be thought to bring into question the
practicality of attempting, in a principled way, to formalise natural languages.

After all, if we cannot agree on what is ambiguous and what not, how can we
ever reach a stage at which we can claim to have removed ambiguity? The
only sensible response is, probably, to ensure that our treatment is sufficiently
general that it provides not only for the clear cut cases but can be applied to
the less certain ones as needed.
An interesting clear cut case is presented by a sentence like (33), where
the negative can be construed as being within the scope of the propositional
attitude verb believes or vice versa.

(33) Percy doesn't believe that the universe is infinite.

Taken in the first sense, (33) is equivalent to:

(33) a. Percy believes that the universe is not infinite.

On the other reading, it is equivalent to:

(33) b. It is false that Percy believes that the universe is infinite.

While (33a) claims that Percy views the universe as finite, (33b) may mean
that he has no opinion one way or the other.
As we have already observed in connection with the sentence:

(34) John seeks a unicorn.

questions of ambiguity involving quantified phrases in opaque contexts can
also be analysed in terms of scope relations. The same phenomenon is
observed in the more complex (35), involving a that-clause - here the
complement of a verb of propositional attitude - where two different readings
are assigned to the noun phrase an aunt of yours, one specific and the other
not - the example is much like Quine's (1960):

(35) I understand that you visited an aunt of yours.

In the specific reading, an aunt of yours refers to some definite individual. In
the nonspecific reading, the reference is indefinite. Quine - and later,
Montague (1973) and Cresswell (1973) among others - resolves this ambiguity
by supposing that the noun phrase in question is interpreted either as being
outside the opaque context or as being within its scope.
Using Quine's (1960) convention - reminiscent of the notation of the
lambda calculus - we may display the difference as:

(35) a. Some aunt of yours is such that I understand that you visited her:
i.e. specific reading.

b. I understand that some aunt of yours is such that you visited her:
i.e. nonspecific reading.

In (35), an aunt of yours occupies object position in the embedded sentence.
It is to be noted that the ambiguity is not dependent on this fact. (36)
is ambiguous in a precisely analogous way:

(36) I understand that an aunt of yours visited you.

I referred above to the fact that something is interpreted either specifically
or nonspecifically in context with some other quantifiers. The same holds for
opaque constructions comparable to (35) and (36), as can be seen from:

(37) I understand that somebody visited you.

(38) I understand that you visited somebody.

In Quine's notation, these become:

(37) a. Somebody is such that I understand that he/she visited you.

b. I understand that somebody is such that he/she visited you.

(38) a. Somebody is such that I understand that you visited him/her.

b. I understand that somebody is such that you visited him/her.

The ambiguous status of opaque contexts involving quantifier words like
somebody seems to be confined to complement constructions. Thus, both
specific and nonspecific readings are possible in (39), as they were in (37)
and (38), but in (40), the interpretation is specific only.

(39) Percy is looking for somebody to help him.

(40) Percy is looking for somebody.

Thus, while (39) can be displayed in Quine's notation as ambiguous, (40)
cannot have the reading:

(40) a. Percy is looking for somebody such that Percy is looking for him/her.

Indeed, such an interpretation seems quite peculiar.


Ambiguities involving scope are not confined to quantification and nega-
tion. As is well known, groupings in modifier-head structures can also be
interpreted in alternative ways. A striking example based on Quine's (1960)
illustration is:

(41) Percy collects big European butterflies.



Here, it is not clear whether Percy collects European butterflies which are
big as European butterflies go, or European butterflies which are big as any
butterflies go. In the first case, big is assigned wide scope, namely (European
butterflies). In the second case, its scope is narrow, namely, (butterflies).
In a traditional, transformational grammar, the different scopes in (41)
would, of course, be explained on the assumption that in the wider of the two
European is derived from a restrictive relative clause, while, in the narrow
case, it comes from a nonrestrictive, appositive clause. This can readily be
seen from the punctuation of:

(41) a. Percy collects big butterflies which are European.

b. Percy collects big butterflies, which are European.

A similar, though less complex ambiguity is present in:

(42) The cheerful girls outnumbered the boys.

Ambiguities of grouping are notoriously common in adverbial constructions.
Thus, in (43) in the park may be grouped either with the verb or with
the object noun phrase:

(43) John saw Sally in the park.

This kind of ambiguity is not, of course, present if the object is a proper
noun. Hence, there is no uncertainty as to grouping in:

(44) John met Sally in the park.

Another source of grouping ambiguity is the option of conjunction reduction,
which is very frequently employed in English. The following warning
seen on a mountain pass in Africa illustrates:

(45) Stopping and feeding of baboons is prohibited.

This may be grouped in either of the following ways:

(45) a. (Stopping (and feeding of baboons) is prohibited).

b. ((Stopping and feeding of baboons) is prohibited).

Of course, the fact that common sense requires that we take one reading over
the other is neither here nor there.
Another important source of ambiguity involving conjunction is exhibited
in sentences like:

(46) Percy and his mother are playing cards.



It is not possible to say whether, in such a case, it is claimed that Percy
and his mother are playing cards together or separately. In the one reading,
we might wish to say that the conjunction is of two noun phrases and, in
the second, of two separate sentences. This latter suggests the operation of
reduction by the deletion of one member of an identical verb phrase pair,
though it is not easy to reconcile the notion of identical verb phrase with the
requirement that the participants be different.
One major motivation for reductions, including those exhibited above, is,
presumably, to avoid prolixity, even at the expense of creating ambiguity.
This natural goal seems often to be behind various nominalisations such as
the one illustrated in Chomsky's (1957) famous sentence:

(47) Flying planes can be dangerous.

As is well-known, of course, the ambiguity in (47) stems from the dual role
played by flying which may represent either a reduced relative clause:

(47) a. - which are flying -

in which case it is intransitive, or a gerundive:

(47) b. NP's flying of -

in which case, it is taken as transitive. An equally famous instance of an
ambiguous nominal is:

(48) The shooting of the hunters -

where hunters may be either the subject of the verb or its object.
The ambiguous status of (48) is immediately apparent. More subtle, at first
glance, is the ambiguity of (49), quoted by Charniak-McDermott (1985),
where the functional roles of what and the dog are unclear.

(49) What did you give the dog?

Of course, common sense - or experience perhaps - tells us that the most
likely reading is one in which what is the object and the dog is the indirect
object, but this reading is not mandatory. Without access to some deeper level
of analysis, a deep structure or a logical representation, in which to is present,
we can have no way of deciding what function is assigned to which noun
phrase. The important role played by experience/common sense in interpret-
ing such examples is revealed starkly by comparing (49) with (50) which is
grammatically similar, but favours precisely opposite role assignments to the
interrogative and lexical noun phrase.

(50) Who did you give the dog?

Although homonymy in itself is not the same thing as ambiguity, it sometimes
combines with grammatical function to produce ambiguous structures.
Thus, in (51), it is not clear whether film represents a verb or a noun,
functioning as subject or noun-modifier. It is unclear whether rushes is a verb or
a plural noun, or if like is a verb or prepositional adverb.

(51) Film rushes like a lunatic.

If film is a verb, then (51) is an imperative. If it is a noun acting as subject,
(51) is a statement - probably a newspaper headline - and rushes is a verb
and so on.

4.4 De dicto vs de re

Verbs of propositional attitude create opaque contexts which result in
ambiguity summarised in the ancient distinction between "de re" and "de dicto"
readings. The most obvious illustrations of the distinction are in alternative
interpretations of reported speech. In the de dicto reading of (52), the
sentential complement is taken to be exactly what Percy said. In the de re reading,
the complement is judged merely to be equivalent to what he actually said.

(52) Percy said that Sally cheats at cards.

Here, since we cannot be sure that Percy did not, for instance, say:

(53) The vicar's daughter cheats at cards.

we are obliged to regard (52) as ambiguous.


(52) probably represents the simplest case of this kind of ambiguity. If
embedding operations are performed on ambiguous structures, very dense
systems of complex ambiguity can be created. Consider, for example:

(54) Percy says that Jack heard that Sally wants to be introduced to the
butcher.

This sentence is ambiguous in several ways, depending on whether Jack,
Sally or the butcher are taken de re or de dicto either alone or together in
all combinations.

The terms "de re" and "de dicto" are not always to be taken literally. It
will, for example, be recalled from chapter 3, that Montague employed them
to stand for extensional and intensional readings of sentences like:

(55) Percy seeks a unicorn.

This usage is reminiscent of Russell (1905), who employs the terms "primary"
and "secondary occurrence" for "extensional" and "intensional" respectively.
The terms "de dicto" and "de re" are also used - e.g. Allwood et al.
(1977) - to refer to the ambiguity of cases like:

(56) John thinks that every prizewinner is a Mexican.

Here, in the de re reading, John's belief is, indeed, about actual prizewinners -
he is not mistaken in identifying any given person as a prizewinner. In the de
dicto reading, on the other hand, John's belief does not, necessarily, pertain
to all prizewinners. This is so because he may mistakenly identify at least one
loser as a prizewinner or, equally mistakenly, identify at least one prizewinner
as a loser. In the de re interpretation, therefore, each member of the set of
all prizewinners stands in the relation to John of being thought by him to
be Mexican. In the de dicto reading, on the other hand, John stands in the
relation of believer to a set of propositions only some of which concern
prizewinners.
A most extensive and sophisticated discussion of statements involving
propositional attitudes is provided by Cresswell (1985). One of his
fundamental claims is that the source of the ambiguity characteristic of such sentences
is to be located, not in the verb itself, but in the complement clause and,
specifically, in the complementiser that. This treatment may be illustrated by
Cresswell's example sentence:

(57) Percy believes that 5 + 7 = 12.

As already argued in chapter 1, the truth of (57) does not imply the truth of:

(58) Percy believes that 12 = 12.

In most semantic theories, including Montague (1973), the complement
object of verbs like believe is taken as a proposition, i.e. as being of type
< s, t >. Such an interpretation, however, does require that (58) follow from
(57) since, in this view, the meaning of < 5 + 7 > is understood to be its
referent, namely, the number 12.
Cresswell's solution is to assume that on one reading, the de re
interpretation, the object of Percy's belief is not a proposition, but rather the references
of the parts of the complement in terms of the structural relations holding

between them. In effect, Percy believes of the ordered pair < 5, 7 > that it
has the property of summing to 12. He believes that, given as argument the
ordered pair < 5, 7 >, the function denoted by "+" yields 12 as value. The
pair is ordered since the truth of (57) does not even require us to assume the
truth of:

(59) Percy believes that 7 + 5 = 12.

A Fregean approach would involve distinguishing between the "sense"
of the expression "5 + 7" and its extension, the number 12. In Cresswell's
analysis, the sense of "5 + 7" is the structure < 5, 7, + > and the sense of
the entire complement in (57) is the structure: < < 5, 7, + >, 12, = >.
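Structured senses of this kind can be modelled directly as ordered tuples.
The Python sketch below (the tuple encoding is a simplification of my own,
not Cresswell's official formalism) shows the extensions of "5 + 7" and
"7 + 5" coinciding while the senses themselves remain distinct:

```python
import operator

# A structured sense is rendered as a tuple of the senses of its parts;
# an atomic sense is here simply identified with its extension.
sense_5_plus_7 = (5, 7, operator.add)   # the structure < 5, 7, + >
sense_7_plus_5 = (7, 5, operator.add)

def extension(sense):
    """Evaluate a structured sense to its referent, recursively."""
    if not isinstance(sense, tuple):
        return sense
    a, b, f = sense
    return f(extension(a), extension(b))

# The extensions coincide: both structures evaluate to 12 ...
print(extension(sense_5_plus_7), extension(sense_7_plus_5))  # 12 12

# ... but the senses are distinct ordered structures, which is why the
# truth of (57) entails neither (58) nor (59).
print(sense_5_plus_7 == sense_7_plus_5)  # False

# The sense of the whole complement of (57): < < 5, 7, + >, 12, = >
complement_sense = (sense_5_plus_7, 12, operator.eq)
print(extension(complement_sense))  # True
```

On this encoding, identity of sense is identity of structure, while identity
of extension is merely identity of value under evaluation.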
It may, at first, seem implausible that < 5 + 7 > could have a sense
different from < 7 + 5 >. In fact, Cresswell's reconstruction of a situation
justifying the claim is, in my view, difficult to accept. He asks us to imagine
a particular division of a line on a map. If it is 5 kilometres from a to b and
7 from b to c, then, it clearly makes a difference to read the map from c to a
rather than from a to c. The weakness of the illustration comes from the fact
that it is really about distances, not about numbers. In effect, it amounts to
saying that distance a precedes distance b, going in one direction, whereas
distance b precedes distance a going in the other and this can surely have
little to do with +.
This objection is, however, a quibble and the general correctness of the
view that "< m + n >" could have a different sense from "< n + m >" seems
inescapable. Provided we are not restricting our attention to a community of
logically omniscient beings, there is no reason to deny the possibility that
someone should fancy that ordering is crucial in addition. He might, for
example, have learned a set of equations by heart without ever learning
about the function of addition itself.
The word that, in this solution, is a surface form corresponding to two
distinct underlying items. One of these, thats, is a function which operates
on the references of the parts of the complement sentence to yield its sense
which is, on one reading of (57), the extension of the clause:

(60) thats 5 + 7 = 12.

The other underlying item is thato which is a function whose domain and
range of values are in the set of propositions.
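The two underlying items may be sketched as functions. In the following
Python fragment (propositions are crudely identified with truth values and
senses with tuples; both identifications are simplifications of mine, not
Cresswell's), thato is the identity on propositions, which is why is true
adds nothing in a case like (61), while thats returns the structured sense
built from the references of the parts:

```python
import operator

def that_o(proposition):
    # that-o: the identity function on propositions; its value is
    # identical with its input.
    return proposition

def that_s(*parts):
    # that-s: operates on the references of the parts of the complement
    # to yield its sense, i.e. a structured meaning.
    return tuple(parts)

# The proposition expressed by '5 + 7 = 12', taken extensionally, is
# here just a truth value:
prop = (5 + 7 == 12)

# Because that-o is the identity function, applying it changes nothing:
print(that_o(prop) == prop)  # True

# that-s instead yields the structure < < 5, 7, + >, 12, = >, and the
# corresponding structure for '7 + 5 = 12' is a different object:
print(that_s(that_s(5, 7, operator.add), 12, operator.eq) ==
      that_s(that_s(7, 5, operator.add), 12, operator.eq))  # False
```

The sketch makes the division of labour plain: thato preserves propositions
unchanged, while thats introduces the finer-grained structured objects.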
The adoption of this solution has a number of consequences. On the level
of syntax, it requires that the word that be separated from the verb of
propositional attitude. This is in contrast with Montague's approach (1973), where
believe that is treated as a single syntactic unit. This is not a trivial issue

since we would wish to achieve a one-to-one relation between the syntactic
and semantic analyses. If we trace the source of ambiguity to that and insist
that that be part of the verb, then, presumably, we would be obliged to set
up two alternative underlying items for each such verb and this would be
implausible.
On the semantic level, one important problem which Cresswell's treatment
highlights is the status of certain predicates such as is true and its negative
is false. It would be pointless to try to reproduce his treatment of this subject
in detail. We may, however, note that these truth predicates have two uses,
one in which they apply to quotations - the usage in Tarski truth formulae,
chapter 1 - where paradoxes may result, and one in which they apply to
propositions as in:

(61) That 5 + 7 = 12 is true.

In this last usage, is true, being taken as applying to a proposition, obliges
us to take the complementiser that as representing the underlying item thato,
not thats. Since thato denotes the identity function, its value is identical with
its input and, hence, the complement clause:

(62) Thato 5 + 7 = 12

is simply:

(63) 5 + 7 = 12.

Thus, is true in (61) is semantically redundant. On this view, it could be said
that the function of is true is to bestow sentence status on the that clause.
As Cresswell (1985) points out, however, the redundancy of is true is not a
general property of that predicate since it is not redundant in cases like:

(64) Percy believes something which is true.

A major problem resulting from the requirement that non-quotational is
true be predicated of propositions is that we are then forced to accept that
the identity relation holds between, say, (57) and (58). This is so because, if
(64) is taken as a report of (57), then it must also be a report of (58) and, as
we have seen, this need not be so.
Examples like (57) and (58) are extremely interesting because they involve
the equality symbol " = " . I suggested in the previous chapter that the notion
of equivalence is somewhat weaker than that of identity. Let us assume that
the terms "equivalence" and "equality" denote the same relation.
From the viewpoint of the first-order predicate calculus, two expressions
are identical if the one can be substituted for the other without affecting truth

values. The kinds of sentences we have been considering in this section are
not, however, first-order. Verbs of propositional attitude make higher-order
statements about other statements or propositions. We thus need to define the
notion of identity more carefully.
In the tradition of Leibniz, we might say that two expressions are identical
if everything truly said of the one is truly said of the other, but this is very
rarely the case in sentences of propositional attitude. Thus, " = " is identity in
(58), but, in (57) it does not denote identity but some laxer notion. Even in
a simple case like:

(65) 4 + 4 = 8.

I would claim that " = " does not stand for identity since I could, for example,
say of "4 + 4" that it consists of three symbols and I cannot say the same
of "8". On a nontrivial level, "4 + 4" is not identical with "8" because, as
Frege would say, its sense is different and it is this profound fact which is
reflected in the superficial fact about the number of symbols. Further, it is
this profound fact that is so clearly captured by Cresswell's (1985) treatment
of that in which the notion of sense is linked with structure.
It seems, therefore, that, even in the case of mathematical statements, we
must be clear as to whether we are talking of sense or reference, or both, when
we claim that the relation involved is one of identity or mere equivalence.
While the value of < 4 + 4 > is equal to 8, " < 4 + 4 > " is not identical
with "8".
The need for caution in the interpretation of " = " is dramatically
demonstrated by a pseudoproblem which Montague-Kalish (1959) examined at
some length and for which they proposed a solution turning on the meaning
of that much like Cresswell's discussed above. I commented briefly on this
"puzzle" in chapter 1 and repeat it here as:

(66) The number of planets = 9. Kepler was unaware that the number of
planets > 6. Therefore, Kepler was unaware that 9 > 6.

The source of error in (66) is not at all mysterious. It is located in the
meaning of " = " which is, here, not identity. The major premise of (66)
asserts that the set of planets is in a one-to-one relation with the sequence
of numbers whose last member is 9. Whether we regard 9 as a set or merely
as the successor of 8, makes no difference. The point is that nowhere in the
syllogism is it claimed that the number of planets is identical with the number
9. The same situation holds with respect to "6" and "the number of planets"
in the minor premise. In the conclusion, however, "9" and "6" denote the

successors of 8 and 5, respectively. Hence, the conclusion does not follow
from the premises and the whole is invalid.
These remarks on " = " reflect the common-sense view that no two things
which are different are ever identical. This is truistic, but important even so.
For one thing, it raises the serious question: what constitutes a difference?
I do not pretend to have a deep answer to this question. However, it seems
reasonable to say that, from the linguistic point of view, the recognition of
difference depends upon our power to express degrees of difference in the
language at our disposal - such a language might, of course, be graphical,
gestural, etc. If any difference between two objects is so fine that it cannot
be stated, then, the objects have "the identity of indiscernibles".
This line of thinking suggests that " = " denotes identity rather rarely, as
in pointless assertions such as:

(67) 8 = 8.

Elsewhere, " = " denotes the laxer notion of equality. If we accept Cresswell's
(1985) treatment of that, it would be appropriate to say that " = " denotes
identity when in the scope of thato and equality in that of thats.
As we saw earlier, in English, be often does duty for " = " . The remarks of
the above paragraphs seem, therefore, to hold for that verb also. Thus, when
Montague (1973) treats is in (68) as denoting identity, he is claiming that
the object named by Bill is identical to that named by Mary and, presumably,
the purpose of the utterance is to make a linguistic point.

(68) Bill is Mary.

As suggested above, the case is rather different when the names have
sense, as in:

(69) The evening star is the morning star.

The referents of these expressions are intensional functions and, as
Kneale-Kneale (1962) say, each has a property which the other lacks. The referent
of the evening star has the property of being known by all rational men to
be identical with the referent of the evening star and likewise for the referent
of the morning star. Thus, in (69) is does not denote identity.
It is further clear that the same holds for be in such peculiar cases as:

(70) The number of wonders of the world is prime.

In this utterance, reminiscent of the Kepler paradox, is is predicative rather
than identifying. This much is obvious. However, the case is interesting
because its peculiarity arises from the mistaken belief that "7" could be
substituted for the number of wonders of the world to yield the true statement:

(71) 7 is prime.

Clearly, (71) is necessary, but (70) is certainly not. Even so, the erroneous
identification of the number of wonders of the world with the number 7 gives
the impression that we could legitimately construct the syllogism:

(72) 7 is prime. The number of wonders of the world is 7. Therefore, the
number of wonders of the world is prime.

In this strange argument, the is of the minor premise is not the is of identity,
so that, unless we assume some novel name for the successor of 6, the middle
term is ambiguous and the conclusion does not follow.
When the substantive verb is flanked by proper nouns in the complement
of a verb of propositional attitude, the semantic facts may not always be
so straightforward. As Cresswell's (1985) discussion demonstrates, in these
circumstances, the ambiguities which may arise can be subtle indeed. How
are we to interpret the object clause of the propositional attitude verb says
in (73)?

(73) Percy says that Venus is Aphrodite.

If this is a de re report of what Percy said, is may be predicative and Percy
may, in fact, have uttered a number of different sentences, such as:

(74) Aphrodite is the goddess of love.

In the de dicto interpretation, (73) must surely be a mere variant of:

(75) Percy says "Venus is Aphrodite."

This seems unlikely because (75) is barely acceptable. We do not use the
simple present says with a quotational object - unless the usage is historic
present, or a stage direction or some such. Even so, if (73) may be taken
as a mere variant of (75), we are forced to assume that be, in the de dicto
reading, denotes identity.
Consider the following set of circumstances. Percy, at time j, utters:

(76) Cytherea is Aphrodite.

At time j', later than j, John says to Sally:

(77) Aphrodite is Venus.

Sally, who is ignorant of the identity of Venus, says at time j'', later than j':

(78) Percy says that Cytherea is Aphrodite.

A reasonable rejoinder to this might well be (73).


This linguistic toing and froing might be disturbing. Cresswell is assuredly
right when he says that, in the case of verbs like believe, we should locate
ambiguity in the complement clause rather than in the verb of propositional
attitude itself. In the case of sentences like (73) through (78), however, the
ambiguity does seem to reside in the verb of the main clause.
The important point to emerge from such cases is that we must take into
account the fact that say has two senses - one corresponding to 'utter' and
one to 'claim'. In the 'claim' sense, say may be used in the simple present
and the object is de re or de dicto. In the 'utter' sense - save for special
uses such as the historic present - say is used only in the past tense
and the complement must be taken de dicto.
The complement clauses discussed so far in this section have all been
sentential. McCawley (1981) discusses cases like (79) and (80) where the
complement is infinitival.

(79) Monty wants to become the president.

(80) Monty wants to meet the president.

It is immediately apparent that (80) is ambiguous as between a de dicto
and a de re reading, depending on whether Monty wants to meet anyone who
is the president or a particular individual irrespective of whether he or she
remains president.
The case of (79) is, however, more subtle. As McCawley shows, this
sentence is three ways ambiguous. On one reading, a de dicto reading, Monty
wants himself to hold the office of president. On two other readings, both de
re, Monty wants (a) himself to become the person who is the actual president,
or (b) himself to become whoever might hold the office of president.
I shall return to the verb want in chapter 8, when I propose semantic rules
for selected items.

4.5 Intensions and temporal quantifiers

I referred, in chapter 3, to Cresswell's (1973) account of intensional objects -
Montague's individual concepts. These objects are of especial interest in that
they are functions whose values may vary with their temporal arguments.
Thus, the function which is the denotation of the king of France will have, as
value, individual x at time j and individual y at time j'. Since the function's
value is always a unique individual, the expression the king of France is a
complex proper noun. Let us overlook the tedious fact that this particular
function is a partial one with no value at present.
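Pictured computationally, such an intensional object is just a partial function from times to individuals. The following Python sketch is illustrative only - the times j and j' and the choice of office-holders are invented, and king_of_france is a name of my own choosing:

```python
# An individual concept in Cresswell's sense: a partial function from
# times to individuals.  Times and office-holders are purely illustrative.
REIGNS = {
    "j": "Louis XIV",    # value of the function at time j
    "j'": "Louis XV",    # a different value at the later time j'
}                        # no entry for "now": the function is partial

def king_of_france(time):
    """Return the function's value at `time`, or None where undefined."""
    return REIGNS.get(time)

print(king_of_france("j"))    # Louis XIV
print(king_of_france("now"))  # None
```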
Cresswell pointed out that sentences containing such complex proper
nouns may be ambiguous when a temporal quantification is involved. I shall
discuss only cases with always, ignoring sometimes and scores of others.
Thus, (81) is ambiguous in that, while it must be true of the intensional ob-
ject - the king-of-France function - it need not be so of all its values: he
who is presently king may abdicate and convert to some other religion.

(81) The king of France will always be a Christian.

In considering such a sentence, it is important not to confuse the functional
object with its arguments. Thus, one reading of (81) is symbolised as a
straightforward existential quantification over a set of people, (81a). On the
second, it is symbolised along the lines of (81b) - I ignore the uniqueness
clause.

(81) a. (∃x)(∀y)(king of France(x) & {time(y) → Christian at(y,x)}).

(81) b. (∀x)(∀y){(king of France at(y,x) & time(y)) → Christian at(y,x)}.
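The difference between the two symbolisations can be checked mechanically against a toy model in which the throne changes hands and the first incumbent later converts. In the Python sketch below (the model and the encoding are my own), the functional reading (81b) comes out true while the reading which quantifies over a single individual (81a) comes out false:

```python
from itertools import product

TIMES = ["j", "j'"]
PEOPLE = ["x", "y"]
KING_AT = {("j", "x"), ("j'", "y")}        # who holds the office at each time
CHRISTIAN_AT = {("j", "x"), ("j'", "y")}   # x converts: not Christian at j'

def reading_81a():
    # some one individual holds the office and is Christian at every time
    return any(
        any((t, p) in KING_AT for t in TIMES)
        and all((t, p) in CHRISTIAN_AT for t in TIMES)
        for p in PEOPLE
    )

def reading_81b():
    # whoever holds the office at a time is Christian at that time
    return all(
        (t, p) not in KING_AT or (t, p) in CHRISTIAN_AT
        for t, p in product(TIMES, PEOPLE)
    )

print(reading_81a(), reading_81b())  # False True
```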

It is evident that the degree to which sentences like (81) are felt to be
ambiguous depends as much on the nature of the property being predicated
as it does upon the presence of a temporal quantifier. Certain predicates
denote what are intuitively felt to be essential properties of the intensional
object concerned. Other predicates merely denote properties of any one or all
of its values. Thus, the property, Baldness, is not a necessary part of being
king of France and, hence, (82) though it is technically ambiguous, is likely
to be regarded as being either true or false of some individual.

(82) The king of France will always be bald.

Sentences (81) and (82) have future-time reference. If the tense is past,
the situation with respect to ambiguity changes, as can be seen from:

(83) The king of France has always been a Bourbon.

If (83) is taken to make a claim about the intensional object, then it is false -
there was a time when the kings of France were drawn from other families.
If it is taken as an assertion about some particular individual, then it may be
true or it may be false.

Again, if the predication in (83) is changed from one of being a Bourbon
to one of baldness, then the ambiguity is located not in the function/value
dimension, but in the present/past dimension.

(84) The king of France has always been bald.

In the one interpretation, (84) is a melancholy assertion about some present
individual. In the second interpretation, it is a general claim about all kings
of France, including, perhaps, some present one, any one or all of whom
may, at some stage, have boasted a fine head of hair.
In the present tense, when a quantifier like always is involved and the
verb denotes an essential property of the intensional object, only a functional
interpretation seems sensible, as in:

(85) The king of France is always head of state.

However, when the denotatum of the verb is of uncertain status, then either
interpretation seems possible. An instance is:

(86) The king of France always worships at Montmartre.

The situation, in English, is, however, complicated by the fact that the
simple present is frequently used to express habitual aspect. Thus, when no
temporal quantifier is involved, assertions may be interpreted as ambiguous
or not according as they are in simple present or present progressive. Thus,
in:

(87) The king of France rules over a powerful nation.

the reference may be equally well to the intensional object as to any of its
values. If it is true of the intensional object, it is obviously true of any value,
but the reverse situation does not hold. However, in:

(88) The king of France is going to Notre Dame.

it can only be some present individual who is in question - this case is similar
to one discussed earlier in section 4.2.

4.6 Modalities

Straddling the boundary between syntactic and lexical ambiguity is that which
arises from the set of modal verbs in English. These verbs function as
auxiliaries and may be treated either paradigmatically or syncategorematically,
according as they are viewed as lexical items or as logical operators.
Thus, for example, we may treat may as polysemous and include 'possi-
bility' and 'permission' among its senses, or we may take the ambiguity in
(89) below as reflecting differences as between logical modality and deontic
structure - see chapter 3, section 5.

(89) Percy may marry Sally.
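On the syncategorematic treatment, the two senses need not be stored in the lexicon at all: a single possibility operator is evaluated against two different accessibility relations in a Kripke-style model. A minimal Python sketch, with invented worlds and facts:

```python
MARRIES = {"w1"}                       # worlds where Percy marries Sally
EPISTEMIC = {"w0": {"w0", "w1"}}       # worlds compatible with what is known at w0
DEONTIC = {"w0": {"w2"}}               # worlds compatible with what is permitted at w0

def possibly(world, relation, fact):
    """Diamond operator: `fact` holds in at least one accessible world."""
    return any(w in fact for w in relation.get(world, ()))

print(possibly("w0", EPISTEMIC, MARRIES))  # True: the possibility sense of may
print(possibly("w0", DEONTIC, MARRIES))    # False: no permission in this model
```

The same formula thus receives two truth values according as the epistemic or the deontic relation is chosen, which is one way of cashing out the ambiguity of (89) without positing two lexical entries.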

Whatever the comparative merits of these approaches, it is clear enough
that the ambiguities to which modal verbs give rise are significant. Thus,
parallel to the two readings of (89), we have two readings for (90), one
involving permission, the other capability.

(90) Percy can play bridge.

Likewise, in (91), an interpretation involving necessity is as plausible as one
centred on obligation.

(91) The charity must be supported.

The analysis of ambiguities of this kind turns out to be a somewhat more
complex matter in English than would at first appear. One important
complicating factor is the idiosyncratic nature of the interaction between modal
verbs and negation. Thus, for example, while (91) has two readings, (92) is
unambiguous - at least in respect of must.

(92) Percy must not be supported.

The modality of (92) is, clearly, obligation. If we wish to retain the necessity
reading of (91) under negation, we must supplant must with can, as in:

(93) The charity can't be supported.

However, since can is, itself, many-ways polysemous, (93) is open to readings
over and above those appropriate to (91). Specifically, (93) can be interpreted
in terms of the modality of possibility as well as necessity and obligation. To
add to the confusion, can, in British English, denotes possibility in negative
constructions only, while, in American English, (94) could be read as meaning
'It is possible that . . . ' :

(94) My glasses can be in the desk.

To obtain a comparable reading in British English, can must be supplanted
by either may or could.

The interpretation of English modal verbs may, also, be influenced by
sentence-type. Thus, for example, must is not ambiguous in questions. In (95),
the only plausible interpretation is one involving obligation, not necessity.

(95) Must Percy be supported?

On the other hand, can in (96) is ambiguous between permission and ability.

(96) Can Percy be supported?

It might be objected that the remarks of this section are too specific to
English to merit inclusion in a general study in semantics. Indeed, a similar
objection could be brought against other problems which are addressed here,
such as the language-specific use of given quantifiers, article systems, con-
junctions and so forth. However, I cannot imagine how one would profitably
explore the semantics of natural languages without considering specifica of
natural languages. As with the remaining discussions of this study, I take
the problems associated with English modal verbs to be typical of issues in
universal semantics. If one were to consider specifica from other languages -
not necessarily languages which make use of auxiliary verbs - we would,
I am sure, encounter complexities which, while they might well differ in
magnitude, would, ultimately, be of similar kind.
Ideally, of course, a study in natural-language semantics should be based
upon representative data drawn from all of the world's languages. To meet
such an ideal seems, however, to be beyond the reach of a common mortal.
Less ideally, one should scrutinise data taken from several, preferably unre-
lated, languages. The difficulty is, of course, that to be sensitive to semantic
issues frequently requires a very high degree of competence - if not native-
speaker competence - in the object language. As I confessed in the opening
chapter, I am a native speaker of English only and it is that fact which per-
suades me that I should restrict myself to English data. It is certainly not my
intention to provide a detailed account of English semantics.

4.7 Regimentation

My brief remarks on the characteristic vagueness of natural languages and
my discussion of selected types of natural-language ambiguity have been
largely intuitive in the present chapter. Even so, it is evident enough that, if
we are to approach their formalisation, it will be necessary, in Quine's terms
(1960), to devise ways by which to "regiment" natural languages. Regimented
forms of, say, English will differ in the degree to which they depart from its
nonregimented, i.e. natural, form. The degree and nature of the regimentation
will, of course, depend upon the interests of the investigator.
Quine's use of regimentation is extremely subtle and involves several
devices ranging from special syntactic structures - such as the use of such
that referred to and employed in this chapter - to bracketing. Paraphrasing
English through such devices yields an artificial language which may be
of considerable value especially in "analytical studies of reference, belief,
desire" etc.
Quine's use of regimentation is, as the above quotation suggests, motivated
by the desire to provide a tool for the investigation of important questions
in epistemology. In that enterprise, the chief concern is with the sharpness
of the tool, not its own intrinsic properties. For Montague and other schol-
ars whose work inspires this study, the focus is upon language itself rather
than the linguistics of cognition. This difference in focus leads, necessarily,
to a difference both in the degree and nature of regimentation. Montague's
"disambiguated language" is to be regarded as a system for representing the
meanings of natural-language expressions unambiguously and in accordance
with the principle of compositionality. In order to achieve such an analysis,
we must be able to provide an unambiguous representation on both the syn-
tactic and semantic levels. Since several natural-language terms, for example,
personal pronouns, place and time adverbs, . . . can only be interpreted by ref-
erence to the context of use, we must, moreover, be in a position to consider
the contextual properties of utterances.
In pursuit of these priorities, I shall, in the next chapter, present a brief
overview of relevant work in Binding theory and, in the chapter following, a
slightly fuller account of linguistic pragmatics. After these preliminaries, I shall offer
a syntactic analysis of English within the framework of categorial grammar
which will enable me to draw up semantic rules for the language in such
a way as to come close, at least, to reaching the goal of syntactic-semantic
homomorphism to which this study aspires.
Chapter 5
Logical form in binding theory

5.1 Levels of representation

At the outset of this study, I described the central preoccupation of the lin-
guist as the description of the rules which connect the two planes of content
and expression. The issue of levels of representation is a major one in much
current research. How many levels are required? What do they contain and
how, if at all, can they be psychologically justified? While many, including
Chomsky (1981b, 1986) envisage the need for a large number, including sur-
face, deep, lexical, phonetic and logical form, others, such as Köster (1987),
appear to advocate the rejection of a multilevel approach in favour of one
based on the notion of Domain.
From the viewpoint of semantics, however, there is little need to argue
the merits of a multilevel approach. The prevalence of ambiguity in natural
languages, if nothing else, encourages the analyst in the direction of such a
methodology.
A fundamental requirement of the semantic analysis of any natural lan-
guage is that it reflect the compositional nature of sentence meaning. As
the mathematical parable in chapter 1 suggested, we most naturally think of
meanings in terms of a level of semantic representation on which, as Cress-
well (1985) put it: "there is a one-to-one correspondence between a sentence
and its senses". Clearly, this level cannot be that of surface structure itself,
but must be some underlying stratum.
Whether we call the plane of content "deep structure", as in Cresswell
(1973), or "logical form", as in Cresswell (1985), also Van Riemsdijk-
Williams (1986) and Chomsky (1986), is of no consequence. The minimal
requirements of such a representation are clear enough. It must display all
of the semantic information captured by tensed, intensional logic, including
those aspects specifically formulated in the predicate and modal calculuses,
such as function-argument structure and the scopes of quantifiers and op-
erators. It must provide an unambiguous representation for each sentence,
departing as little as possible from surface structure. Finally, it should take
account of those contributions to sentence meaning which are due to pragmatic
factors.
While much of the apparatus needed to satisfy these requirements is avail-
able in the formal logics already discussed in chapters 2 and 3, and while
my approach to syntactic derivation will be categorial rather than transfor-
mational, it is obvious that current work in Government and binding theory
is semantically crucial. I shall, therefore, briefly discuss some aspects of that
work before moving to more traditional preoccupations of formal semantics.
Though I shall draw on Chomsky's own writings, especially (1986), my most
important source is Van Riemsdijk-Williams (1986) and it will be that text
to which I refer unless otherwise stated.

5.2 Logical form

From the semantic point of view, the most important development within
research in the framework of transformational grammar has been the theory
of logical form.
Very similar, in some respects, to representations in Montague Grammar,
logical form represents the structural meanings of sentences in a logical lan-
guage without reference to lexical meaning. Unlike Montague Grammar -
broadly conceived - however, it disregards pragmatic factors and is uninter-
preted, so that the notion of Truth plays no part in its theory of meaning -
in so far as it can be said to have such a theory rather than taking meaning
for granted.
In essence, logical form, as described by Van Riemsdijk-Williams, is
an annotated version of the shallow structure of sentences. Shallow struc-
ture rather than absolute surface structure is chosen because it has not been
subjected to certain deletions - for instance, the deletion of complementiser
that - and various stylistic movement rules.
The semantic phenomena represented at the level of logical form in Bind-
ing theory have to do with coreference and scope assignment. These phenom-
ena may involve variables and depend upon the indexing of noun phrases.
Variables and indices, scope restrictions and brackets represent the annota-
tions of shallow structure and are subject to a system of rules. In current
versions, indexing is allowed to operate freely and, as in the case of Mon-
tague Grammar, inappropriate derivations are filtered out by wellformedness
conditions.

Variables are introduced into the scopes of quantifier phrases and wh-
phrases and, like any other noun phrases, receive indices.
Indices are also introduced to bind the results of movement rules which
leave traces. Thus, for example, the rule which moves an object noun phrase
to the front in a passive sentence leaves a trace behind which is coindexed
with the moved noun phrase, as in:

(1) Percyi was shocked ti by the news.

Other rules coindex two noun phrase constituents under certain conditions.
The rule for interpreting reflexives, for example, coindexes an antecedent
noun phrase with its reflexive pronoun. Another such rule coindexes a subject
noun phrase with an empty category Pro - not to be confused with "Pron"
'pronoun' - occupying subject position in an infinitive complement, as in:

(2) Percyi tried Proi to leave.

Disregarding the indexing of variables introduced into the scopes of
quantifiers and wh-phrases for the moment, it is clear that the coreference relations
holding between a moved noun phrase and its trace, or an antecedent noun
phrase and a reflexive, or between a subject noun and Pro are similar. This
fact is important because it means that the same constraints on wellformed-
ness which filter out inappropriate representations apply to all three sources
of coindexing.
In fact, the formulation of constraints on wellformedness may be regarded
as the major preoccupation of Binding theory, with very wide implications
for linguistics as a whole, including psycholinguistics and mathematical se-
mantics.
As mentioned above, the level of logical form also reflects quantifier in-
terpretation. Quantifiers are assigned scopes and, as in the predicate calculus,
they bind variables lying within their scopes. These bindings are also rep-
resented by indexing. Thus, quantifier interpretation has the effect (a) of
separating the quantifier phrase from the string which is its scope and (b) of
introducing variables into those strings. Typical of such representations are
(3a, b) which display alternative readings for (3).

(3) Everyone loves somebody.

(3) a. [Everyonei [somebodyj [xi loves yj]]].

b. [Somebodyj [everyonei [xi loves yj]]].
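The two bracketings correspond to the familiar difference of quantifier scope in the predicate calculus, and a toy model confirms that the readings genuinely diverge. A Python sketch, with an invented domain and loves relation:

```python
PEOPLE = ["a", "b", "c"]
LOVES = {("a", "b"), ("b", "c"), ("c", "a")}  # each loves someone, cyclically

def reading_3a():
    # everyone has wide scope: for every x there is some y that x loves
    return all(any((x, y) in LOVES for y in PEOPLE) for x in PEOPLE)

def reading_3b():
    # somebody has wide scope: some one y is loved by every x
    return any(all((x, y) in LOVES for x in PEOPLE) for y in PEOPLE)

print(reading_3a(), reading_3b())  # True False
```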

A striking feature of (3a, b) is that quantifier words, rather than the usual
logician's quantifiers, bind the variables. It is also to be noted that the scopes

retain the surface word order of the natural language expression - as they
do in Cresswell's lambda formulae, chapter 7. Unfortunately, however, the
quantifiers themselves do not always retain their surface order.
In some respects, the relation between a quantifier and its variable is like
that between a moved noun phrase and its trace. This is especially so when
wh-movement is involved. Thus, Chomsky (1986) points out that who, in (4),
is an operator binding its trace ti in the same way that a quantifier binds its
variable.

(4) Who did you meet ti?

Indeed, Chomsky actually identifies the trace with a variable, so that a pos-
sible representation for (4) could be:

(4) a. [For which xi [xi: person] did you meet xi]?

Here, the surface word order is perfectly preserved in the logical form rep-
resentation. In the case of who, restriction to persons is required to maintain
the who/what distinction.
The similarity between quantifier interpretation and wh-movement on the
one hand and ordinary trace binding on the other, however, does not extend
to all constraints on wellformedness. As we shall see later, important filtering
conditions do not apply uniformly over traces and variables alike.
Before turning to some of the constraints on wellformedness, I should
mention that, in the theory of binding, as in this study, pronouns are taken to
be base-generated rather than being introduced transformationally as substi-
tutions for lexical noun phrases. This lexicalist approach is adopted, in part,
because it avoids the problem of deriving pronouns in such sentences as:

(5) He eats cabbage.

In a sentence grammar, such cases are difficult to treat through substitutions.
In a context grammar, of course, their solution would not be far to find - he
indicates old information, which can, in principle, be recovered.

5.3 Wellformedness in binding theory

5.3.1 Some typical problems of coreference


Before considering some wellformedness constraints of Binding theory, it
may be useful to provide further examples of typical problems involving
coreference. I shall, for the moment, avoid problems involving quantification
and/or wh-questions.

(6) He admires Percy.

(7) Percy admires him.

(8) Percy persuaded him to retire.

In these sentences, the lexical noun may not be coreferential with the pronoun.
However, in the following, coreference is possible.

(9) Percy believes that he is unintelligent.

(10) Percy persuaded Bill to meet him.

In contrast to the disjoint references in (6-8) and the optional coreferences
in (9, 10), Percy and himself in (11) must have identical indices.

(11) Percy admires himself.

In (12), by contrast, Percy and himself must not share the same index.

(12) Percy persuaded Bill to admire himself.

If we consider the relevant relation holding between Percy and who in
(13), it is clear that it is one of coreference, but in (14), the same items have
disjoint reference.

(13) Percy, who is silly, admires film stars.

(14) Bill, who admires Percy, also admires film stars.

A comparable situation can also be seen to hold in the following:

(15) Percy tried to persuade Sally to marry him.

If we assume, as above, the presence of the empty category Pro, occupying
the subject position in each infinitive clause of (15), then the subject of to
persuade Sally must have the same index as Percy and be disjoint in reference
with the subject of to marry him, and vice versa for that subject and Sally.

5.3.2 Some conditions on coreference


In order to derive logical form representations, we can either construct rules
of indexing which contain constraints in their structural descriptions, or we
can allow indexing to apply freely and put the load of filtering on general

conditions. In fact, it seems that it is easier to follow the latter course. Let
us, then, assume an indexing rule of the form:

(R.1) Index noun phrases freely.

The constraints on wellformedness which filter out the host of unacceptable
representations flowing from the total - and uninteresting - liberality of (R.1)
are embodied in a set of principles. These principles, moreover, are taken to
belong to Universal Grammar so that they apply to all human languages.
I shall not attempt to discuss the psychological merits of this claim here.
However, I should acknowledge that, in the eyes of some, e.g. Köster (1987),
Chomsky and his followers, in formulating these essentially psychological
conditions, are working from a view of language which is fundamentally
opposed to that taken by Montague. If this really is so - some remarks in
Chomsky (1986) are particularly suggestive - then there would probably
be little sympathy among transformationalists for my desire to incorporate
such conditions into an account of semantics which has its inspiration in
Montague's work.
If we wish to search for universal principles of grammar, it seems natural
to do so from the starting point of phrase structure. One of the most significant
achievements of modern linguistics must, surely, be the insight that
linguistic rules, purely syntactic as well as semantic, are structure dependent.
Sentences are not mere linear sequences of lexical items, but hierarchies of
phrasal structures.
Just as computational operations such as question formation exploit struc-
ture, so too the same principle of dependency is at the heart of constraints
on coreference in logical form representations.
The simplest illustrations are provided by the behaviour of pronouns in
sentences like those listed in the previous section. As we know, a pronoun
may be free, in which case, its reference is determined by the extrasenten-
tial context. Alternatively, it may be bound, in which case, its reference is
determined by some antecedent in the sentence.
Using illustrations from Chomsky's (1986) lectures, the pronoun in (16)
is free, while who, in (17), is bound and it may either be bound or free.

(16) He wrote the book.

(17) The man, who wrote it, destroyed the book.

In (16), we interpret he by referring to the general context in which the
sentence is uttered. In (17), on the reading in which it is bound, we determine

the references of who and it by referring to the references of the man and
the book.
Such remarks, however, are impressionistic only. The crucial point is that
these facts about reference are reflected in facts about structure and these, of
course, can be formally stated.
Let us begin with the following three definitions:

(D.1) Domain of x: the domain of x is the smallest phrase in which x
appears.

(D.2) Command: x commands y if x immediately dominates y.

(D.3) C-command: x c-commands y if the first branching node dominating
x dominates y, and neither x dominates y nor y dominates x.

The definition (D.3) of c-command may be illustrated by the following
bracketing, in which Sz is seen immediately to dominate NPx, which
c-commands NPy. In the expression, Sz, of course, labels the sentence node,
NPx is the subject node and NPy is the object node.

(18) [Sz NPx [VP NPy]].
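Definitions (D.1)-(D.3) are mechanical enough to be computed over an explicit tree. The Python sketch below encodes the configuration of (18), with a verb added under VP so that VP branches; the encoding and labels are my own:

```python
# The configuration of (18), with each node as (label, list-of-children).
TREE = ("S", [("NPx", []), ("VP", [("V", []), ("NPy", [])])])

def subtrees(t):
    yield t
    for child in t[1]:
        yield from subtrees(child)

def node(tree, label):
    return next(n for n in subtrees(tree) if n[0] == label)

def dominates(t, label):
    """True if a node labelled `label` occurs properly below t."""
    return any(n[0] == label for c in t[1] for n in subtrees(c))

def ancestors(tree, label, chain=()):
    """The chain of nodes properly dominating the node labelled `label`."""
    if tree[0] == label:
        return chain
    for c in tree[1]:
        found = ancestors(c, label, chain + (tree,))
        if found is not None:
            return found
    return None

def c_commands(x, y, tree):
    # (D.3): the first branching node dominating x dominates y,
    # and neither x dominates y nor y dominates x.
    first_branching = next(a for a in reversed(ancestors(tree, x))
                           if len(a[1]) > 1)
    return (dominates(first_branching, y)
            and not dominates(node(tree, x), y)
            and not dominates(node(tree, y), x))

print(c_commands("NPx", "NPy", TREE))  # True: the subject c-commands the object
print(c_commands("NPy", "NPx", TREE))  # False: the object does not c-command it
```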

Van Riemsdijk-Williams (1986) give the following definitions of "bound"
and "free":

(D.4) Bound: x is bound if x is coindexed with a c-commanding NP.

(D.5) Free: not bound.

Chomsky (1986) uses these notions to state a very general constraint:

(R.2) A pronoun must be free in its domain.

Given these definitions, he must be free in (16). Similarly, him must be
free in (19).

(19) Percy greeted him.

This is so because the smallest phrase in which the object appears is the verb
phrase, where it is free.
The pronoun it in (17), however, may legitimately be bound - though it
need not be - because the smallest phrase in which it appears is the verb
phrase of the relative clause and it is, there, free. Hence, its binding in the
main clause is not a violation of principle (R.2).
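The division of labour between the liberal indexing rule (R.1) and the filtering conditions can be sketched computationally. In the Python fragment below, indexing applies freely and illicit assignments are discarded; the structural facts are hard-wired by hand rather than computed from trees, and I take the pronoun's domain in (7) to be its minimal clause, so this illustrates the architecture rather than any particular formulation:

```python
from itertools import product

def licit_indexings(nps):
    """(R.1): index each NP freely over {1, 2}; then filter out any
    assignment in which a pronoun is coindexed with an NP that
    c-commands it inside the pronoun's own domain (R.2)."""
    results = []
    for indices in product([1, 2], repeat=len(nps)):
        assignment = {name: i for (name, _), i in zip(nps, indices)}
        violates = any(
            assignment[c] == assignment[name]
            for name, commanders in nps
            for c in commanders
        )
        if not violates:
            results.append(assignment)
    return results

# (7) "Percy admires him": taking the pronoun's domain to be the whole
# clause, Percy c-commands him within it, so no surviving indexing
# makes the two coreferential.
simple = [("Percy", []), ("him", ["Percy"])]

# (9) "Percy believes that he is unintelligent": Percy lies outside the
# embedded clause which is the pronoun's domain, so coreference survives.
biclausal = [("Percy", []), ("he", [])]

print(any(a["Percy"] == a["him"] for a in licit_indexings(simple)))    # False
print(any(a["Percy"] == a["he"] for a in licit_indexings(biclausal)))  # True
```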
A parallel situation is reflected in the freedom of him in:

(20) Percy asked Sally to marry him.



Here, the pronoun may, but need not, be bound to the subject of the main
clause, but must be free in the infinitive clause - I return to infinitives shortly.
If we replace him in (20) by a lexical noun, as in:

(21) Percy asked Sally to marry Percy.

coindexing between the two tokens of Percy is clearly impossible. We
assume, therefore, the following general principle.

(R.3) A lexical noun must be free in all domains.

Principles (R.2) and (R.3) summarise the fundamental difference between
pronouns and lexical nouns in logical form representation. Let us refer to
them, along with a principle governing the reference of bound anaphors -
principle (R.4) to be stated shortly - as "the opacity condition".
The situation in respect of the relative pronoun who in (17) is quite dif-
ferent to that of it in the same clause. Quite obviously, who cannot be free
in that sentence. Intuitively, this is so since its reference must be identical to
that of the subject phrase the man of the main clause. We conclude, there-
fore, that who is here an anaphor bound to - coindexed with - the man, its
antecedent.
Selfevidently, to say that who is an anaphor in (17) is not to say very
much. If we consider the reference of himself in (22), it is obvious that that
pronoun also enjoys the status of a bound anaphor.

(22) Percy admires himself.

There must be some general principle of universal grammar underlying such
cases. Van Riemsdijk-Williams first define the prior relation "be in the
domain of" as follows:

(D.6) In the domain of: x is in the domain of y if x ≠ y and y c-commands x.

Their formulation of the bound anaphor constraint now is:

(R.4) An anaphor must not be free in the smallest domain of a subject in
which it occurs.

It is easy to see that the reflexive himself in (22) must obey this condition.
The bracketing in (23) displays the structural relations involved.

(23) [S Percy1 [VP admires himself1]].



Percy and himself must be coindexed since Percy is immediately dominated
by the smallest S - the only S in this case - so that himself is in the domain
of Percy and condition (R.4) is met.
We may display the dependency relation between the man and who in
(17) as follows:

(24) [S The man1 [S who1 wrote it] destroyed the book].

Here, it is clear that the pronoun who, while it is free in its own domain, the
relative clause, is not free in the smallest S of the subject of the main clause.
Hence, who is bound to The man and must obey condition (R.4).
Condition (R.4) also, as required, filters out some representations which
are inappropriate, such as:

(25) *[S Percy1 thinks that [S Sally2 admires himself1]].

Here, it is apparent that the anaphor is not bound in the smallest domain of
the c-commanding NP Sally and so the binding condition is violated.
Consider again the sentence:

(26) Percy tried to persuade Sally to marry him.

Making use of Pro, this could have the logical form representation:

(26) a. [S Percy1 [VP tried [Comp Pro1 to persuade Sally2] [Comp Pro2 to marry
him1]]].
As required by condition (R.2), the pronoun him is free in its domain, the
second complement clause and has disjoint reference with Pro2, which is the
subject of that clause. As mentioned earlier, being free in its own domain,
him may legitimately be bound to some antecedent noun phrase in a higher
clause, in this case, Pro1, the subject of the first complement clause. Pro1 is
immediately dominated by the VP node and, therefore, has the VP phrase as
its domain. It is free in this domain and can, therefore, be coindexed with
some antecedent outside, namely, the subject of the main clause Percy. Since
him is bound to Pro1, it, also, has Percy as its antecedent. Finally, Pro2 is
free in its own domain, the second complement, and may, thus, be bound to
the object Sally of the first complement clause.
While (26a) is a legitimate representation of (26), it is not the only possible
one. This is so, of course, because (26) is ambiguous. Instead of coindexing
him with Percy, the pronoun could be isolated in its reference. This situation,
as we have seen, is not blocked by the opacity condition. Of course, the more
deeply embedded the pronoun becomes in structures like (26), the greater the
number of ambiguities. Thus, (27) is three ways ambiguous:

(27) Percy got Bill to try to persuade Sally to marry him.

Each of these readings will be reflected in logical form by different indices
on him.
Consider, now, a case like:

(28) The man who Percy admires is a film star.

Superficially, this is like (17). Again, the pronoun who must be bound to
the subject of the main clause the man. A moment's reflection, however,
suffices to convince us that, in spite of this similarity, the two sentences are
not parallel.
While, in (17), who is the subject of the relative, in (28) it is the logical
object, which has advanced to the front of the clause by NP-movement,
leaving a trace behind. This trace must be coindexed with its head, suggesting
the representation:

(28) a. [S The man1 [REL who1 Percy admires t1] is a film star].

Here, the pronoun is an operator which binds the trace t1. Such traces are
bound anaphors subject to condition (R.4) and so are their heads.
The requirement that NP-traces be subject to (R.4) is demonstrated by
the ungrammaticality of (29), where the head Bill has been moved out of a
c-commanding position with respect to the trace so that the latter is free -
the example is Van Riemsdijk's - :

(29) *Bill1 was believed Sam to have seen t1.

The fact that who in (28) is an anaphor follows from arguments parallel to
those advanced in respect of the pronoun who in (17). The two cases are not,
however, the same. Since who in (28) is a fronted object, it may be deleted,
giving:

(30) The man1 Percy admires t1 is a film star.

Such a deletion in (17) would yield the ungrammatical string:

(31) *The man wrote it destroyed the book.

Consideration of the relative clauses in (30) and (31) shows that, while
in the former, the verb admires retains its lexical property of taking a subject
and an object - represented by the trace - the verb wrote in (31) has
lost an essential property since it has no subject. It is an important principle
of universal grammar, in this theory, that lexical items retain their essential
properties on all levels of representation. This principle, "the projection
principle", is infringed in (31) and so the string is not acceptable.
The projection principle can be employed to explain many facts. One
such is related to the distribution of reflexive pronouns. Such pronouns are
lexical and we may assume that, in the lexicon, they are marked as having
the property of requiring an antecedent noun phrase. The projection principle
will, then, rule out such ungrammatical strings as (32):

(32) *Themselves were swimming often.

As Van Riemsdijk-Williams point out, it is necessary that such a property
be stated for these anaphors since it is not embodied in principle (R.4).
It might, at first, appear that (R.4) should be restated so as to account for
the ungrammaticality of (32). While, however, this might be done, it would
not be strictly appropriate since (R.4) is a condition on representations, not
a constraint on syntactic rules.
Since the immediate discussion involves reflexives, this seems an appropriate
point to refer to certain problems which, in Hintikka's view (1989),
undermine the entire theory of binding based on coreference. According to
Hintikka, each of the following sentences is acceptable.

(33) Tom and Dick admire each other's gift to him.

(34) Tom and Dick admire each other's gift to himself.

(35) Tom and Dick admire each other's gift to them.

By contrast, (36) is, Hintikka claims, unacceptable.

(36) Tom and Dick admire each other's gift to themselves.

The importance of these examples is that, if Binding theory cannot provide
suitable representations for them, it cannot predict what is grammatical and
what is not.
Take (33) first. Hintikka's assertion is that, if him is taken anaphorically,
it must be coreferential with Tom and Dick or Tom or with Dick. The first
possibility is, clearly, ruled out by the number mismatch involved. The second
seems inappropriate since it would imply an ambiguity on the anaphoric
reading which is not felt to be present.
Hintikka's difficulty with (34) is similar in that himself, being a bound
anaphor, must obey principle (R.4) and the only head in its smallest governing
category to which it could be bound is each other. However, if this binding
is permitted, there is a number clash.

Even more worrying, string (36), which Hintikka thinks is unacceptable,
would be predicted as wellformed by Binding theory.
Each of these examples poses other difficulties of coreference. For in-
stance, in (34), both Tom and Dick must be coreferential with each other,
but then, there would be no meaning difference between (34) and (35).
If Hintikka is, indeed, correct, Binding theory is confronted with a serious
challenge by examples like these. Clearly, more work is needed. However,
I should say that I do not entirely trust Hintikka's intuitions. I do not, for
example, find (36) unacceptable. Indeed, I find it less so than (33), which
seems, to me, to be particularly odd if taken to mean:

(37) Tom admired Dick's gift to Tom and Dick admired Tom's gift to Dick.

However, as the literature shows, there is a range of sentences containing
reflexives which do seem to violate principle (R.4). Van Riemsdijk-Williams
cite, among others, the following:

(38) These pictures of himself pleased John.

Clearly, the antecedent of himself must be John, yet the latter does not c-
command the former. Such sentences, along with Hintikka's examples, obvi-
ously indicate the need for more detailed studies of reflexives and reciprocals
within the framework of Binding theory. The work of Jackendoff (1972) is
still very relevant. What seems certain, however, is that the opacity condition
has immense application and does, in spite of a number of minor exceptions,
constitute a "fundamental law of language".
Another recognised problem, though of a different sort, is posed by the
empty category Pro. This constituent occurs without antecedent in:

(39) To err is human.

Such sentences raise questions as to the status of Pro in the theory. It has
become standard, e.g. Chomsky (1986), to distinguish between two uses. In
one, Pro is a bound anaphor which must have an antecedent, as in:

(40) Percy1 tried Pro1 to leave.

In the other use, Pro is a pronoun which occurs free, as in (39). In its
pronominal use, Pro cannot be marked for case, so that it must occur in
subject position - see below, section 5.4. Interestingly, the Pro-pronoun seems
only to have human, or human-like, reference. Thus, while sentences like (39)
are commonplace, (41) is probably unacceptable and (42) must be interpreted
metaphorically:

(41) *To snow is delightful.

(42) To neigh is alarming.

5.3.3 Wh-questions and quantifiers


As indicated earlier in this chapter, a very important achievement of Binding
theory is its unification at the level of logical form of wh-questions and
overtly quantified sentences.
As noted, the rationale behind this unification is to be found in the fact that
wh-words and phrases, such as who or which leopard, bind trace variables in
much the same way as quantifiers in the predicate calculus bind term vari-
ables. Thus, it seems reasonable to represent wh-questions and overtly quan-
tified sentences similarly. I shall discuss the representation of wh-questions
first.
The detailed analysis of wh-questions in English is complex. My remarks
will not pertain to so-called "echoes", as in:

(43) You had what for lunch?

Nor will I refer again to multiple questions, as in:

(44) Who did what with what to whom ?

In order to derive the logical form representation of a wh-question, we
may follow a procedure very similar to that employed in deriving quantified
expressions. The wh-quantifier is prefixed to a string which is its scope and
in which it binds indexed variables.
Consider, first, (45) and (46) with their logical form representations.

(45) What did Percy find?

a. (?x1) [x1: thing [Percy find x1]]?

(46) Who did Percy meet?

a. (?x1) [x1: person [Percy meet x1]]?

In these representations, restriction is specified on the quantifier in accordance
with the relevant selectional restrictions on the wh-words, what
and who. In other cases, restriction is determined explicitly in the surface
structure, as in:

(47) Which leopard did you photograph ?

a. (?x1) [x1: leopard [you photograph x1]]?
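The derivation procedure just outlined lends itself to a mechanical sketch. The following Python fragment is purely illustrative and not part of the theory: the function name and data are my own. It prefixes a wh-quantifier to its scope, selects a restriction either from the selectional properties of the wh-word or from an explicit surface phrase (which leopard), and inserts the bound, indexed variable into the scope.

```python
# Illustrative sketch only: names and data are invented, not the author's.

# Restriction contributed by the selectional properties of the wh-word.
WH_RESTRICTION = {"who": "person", "whom": "person", "what": "thing"}

def wh_logical_form(wh_phrase, scope_words, gap_index, var="x1"):
    """Build a representation such as (?x1) [x1: thing [Percy find x1]]?"""
    words = wh_phrase.split()
    if words[0] == "which":           # explicit restriction: which leopard
        restriction = " ".join(words[1:])
    else:                             # implicit restriction: who / what
        restriction = WH_RESTRICTION[words[0]]
    scope = list(scope_words)
    scope.insert(gap_index, var)      # the wh-quantifier binds this variable
    return "(?%s) [%s: %s [%s]]?" % (var, var, restriction, " ".join(scope))

print(wh_logical_form("what", ["Percy", "find"], 2))               # cf. (45a)
print(wh_logical_form("which leopard", ["you", "photograph"], 2))  # cf. (47a)
```

The point of the sketch is simply that the same procedure covers both the implicit and the explicit restriction cases.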



If we confine our attention to cases like (45) and (46) for the moment, it
seems reasonable to assume that the question words who and what are to be
interpreted as having the status of proper nouns or definite noun phrases. This
is in spite of the fact that the range of appropriate responses to wh-questions
like (48) can include assertions involving reflexives, as in (49).

(48) Who does Jack admire?

(49) Jack admires himself.

Correctly to interpret cases like (48) requires that we take care not to confuse
the question with its possible responses.
The logical form representation for (48) should, thus, be:

(48) a. (?x1) [x1: person [Jack admire x1]]?

Since, on this view, wh-traces are indexed variables with the status of proper
nouns, they must, in accordance with the rules of coreference outlined in the
last section, be free in all domains. Van Riemsdijk-Williams offer a most
ingenious example to demonstrate this last claim.

(50) Who does he think he said he saw?

Clearly, if the wh-variable is equivalent to a proper noun, it cannot be construed
anaphorically with any of the pronouns in (50). Thus, representations
like the following have to be filtered out:

(50) a. *(?x1) [x1: person [he think he said he1 saw x1]]?

An anaphorical interpretation is, however, possible, though not necessary,
when the pronoun in question follows the variable, as in the optional representation
corresponding to (51).

(51) Who does Sally claim hopes that she will win ?

a. (?x1) [x1: person [Sally claims x1 hopes that she1 will win]]?

This structure, however grotesque, again accords with the referential freedom
bestowed on proper nouns by the opacity condition. The variable, x\, is free
in all domains and the pronoun may legitimately be bound to it.
The logical representations given so far have the pleasing property of re-
flecting the surface word order. It would clearly be advantageous to retain this
characteristic since the logical representations are then very easily associated
with their surface forms. The situation has, however, been rather simple thus
far. Consider, now, the more complicated case:

(52) Whose sister does Percy hope that he will marry?

The moved wh-phrase in this case consists of two parts, the wh-word
whose and its complement sister. If we regard the process of wh-interpretation
as the insertion of variables for traces, then we might propose a representation
as follows:

(52) a. (?x1) [x1: person [[x1's sister2] does Percy hope that he will marry
t2]]?

As Van Riemsdijk-Williams point out, however, such a representation
does not reflect the fact that the references of he and whose must be disjoint.
This is so because, unlike the trace t2, the variable x1 is not c-commanded by
he. A solution which they advance, without insisting upon, is to "reconstruct"
the logical form representation in such a way as to bring the pronoun and
variable into the desired c-command relation, as follows:

(52) b. (?x1) [x1: person [does Percy hope that he will marry x1's sister]]?

The disadvantages of this solution are, of course, that it necessitates the
statement of a complicated rule of reconstruction and divorces the word order
of the logical representation from that of the surface structure.
A number of alternative solutions have been suggested, including the elab-
oration of the coreference constraint on pronouns, Higginbotham (1983),
and premovement marking of disjoint reference, Van Riemsdijk-Williams
(1986).
A simpler alternative might be to allow wh-interpretation to insert variables
into wh-traces so that the traces are complex structures which might include
several variables as well as case specifications. Thus, the trace in (52) would
be something like {x1's, x2}. There is, of course, nothing objectionable in
marking variables for case. Although we do not normally think of a variable
in this way, as Chomsky (1986) has argued, traces, including trace variables,
have reference and do bear case information. I return to the question of case
later in more detail. For the moment, it suffices to remark that some very
general solution is necessary as evidenced by the widespread nature of the
problem. The following is another commonplace instance.

(53) To whom did Percy say he had written?

This sentence would seem to require a representation along the lines of (53a).

(53) a. (To (?x1)) [x1: person [did Percy say he had written {x1, obl}]]?

The inclusion of the wh-variable in the trace left behind by wh-movement
allows us to state the constraint on pronominal and variable coreference
in terms of Van Riemsdijk-Williams's "leftness condition". This condition
states that a pronoun may not be coreferential with a variable to its right. It
is needed to account for the referential facts of sentences like:

(54) Who did the news that she had got married shock?

Given that (54a) is a reasonable representation of (54), the leftness condition
will ensure that she and x1 are disjoint.

(54) a. (?x1) [x1: person [the news that she had got married shocked x1]]?

Cases like these clearly do not rely on the opacity condition for the disjoint
reference of the pronoun and variable since the pronoun does not, at any
level, c-command the variable.
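The leftness condition itself is simple enough to state programmatically. The following sketch is my own encoding, not Van Riemsdijk-Williams's formalism: it treats a linearised logical form as a sequence of positions and licenses coindexing only when the variable precedes the pronoun.

```python
# Sketch of the leftness condition (my encoding, not the author's):
# a pronoun may be coindexed with a variable only if the variable is to its left.

def leftness_ok(pronoun_pos, variable_pos):
    """True iff coindexing respects the leftness condition."""
    return variable_pos < pronoun_pos

# (54a): the variable follows she, so she and x1 must be disjoint.
lf54 = ["the", "news", "that", "she", "had", "got", "married", "shocked", "x1"]
print(leftness_ok(lf54.index("she"), lf54.index("x1")))   # False

# (55a): the variable precedes his, so coindexing is permitted.
lf55 = ["x1", "lost", "his", "wallet"]
print(leftness_ok(lf55.index("his"), lf55.index("x1")))   # True
```

The elegance of the condition noted below is reflected in the triviality of the test: a single linear-precedence comparison.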
The leftness condition is very elegant because it is so simple. It happens,
also, to be a condition with extremely wide application since it constrains
referential relations between pronouns and variables in wh-questions in which
no wh-movement has taken place. Thus, in (55), his may be coindexed with
the wh-variable since the latter is to its left, as (55a) shows.

(55) Who lost his wallet?

a. (?x1) [x1: person [x1 lost his wallet]]?

The wh-variable in (55) is not, of course, a trace in the proper sense since
no movement has taken place. It may, therefore, be advisable, with Van
Riemsdijk-Williams, to write a rule of wh-interpretation specially designed
to cover such cases. Since quantifier interpretation also does not involve
traces, the two rules would be similar.
The rule of quantifier interpretation adjoins a quantifier with restricting
and/or predicate clauses and inserts quantifier variables. The clause into which
the variables are inserted then becomes the scope of the quantifier. Logical
form theory thus displays well-known ambiguities of scope in much the
same way as the predicate calculus, except that the word order of the natural
language expression is usually preserved in the scope. To repeat an earlier
example, representations of the two scope assignments in (56) are given in
(56a) and (56b).

(56) Everyone loves someone.

a. (Everyone1), (someone2) [x1 loves x2].

b. (Someone2), (everyone1) [x1 loves x2].
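The rule of quantifier interpretation just described can be caricatured as the enumeration of quantifier orderings over a fixed scope. The Python sketch below is illustrative only; the string representations mimic (56a) and (56b), and the pairing of quantifier words with variable indices is my own device.

```python
# Illustrative sketch: enumerate scope orderings while preserving the word
# order inside the scope, after the fashion of (56a)/(56b). The string
# representations are plain text of my own devising.

from itertools import permutations

def scope_readings(quantifiers, scope):
    """quantifiers: (word, index) pairs in surface order; scope: the clause
    with quantifier variables already inserted."""
    readings = []
    for order in permutations(quantifiers):
        prefix = ", ".join("(%s%s)" % (w.capitalize() if i == 0 else w, n)
                           for i, (w, n) in enumerate(order))
        readings.append("%s [%s]." % (prefix, scope))
    return readings

for reading in scope_readings([("everyone", "1"), ("someone", "2")],
                              "x1 loves x2"):
    print(reading)
```

Note that only the prefix varies across readings; the scope, with its surface word order, is held constant, which is precisely the property the text attributes to logical form theory.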



It will be seen that one undesirable feature of these representations is that
the surface word order of the actual quantifiers has not been preserved. This,
as remarked above, can be remedied by the use of lambda abstraction.
An important finding of Binding theory is that the leftness condition which
plays such an important role in the interpretation of wh-questions is also
at work in quantified expressions. Thus, the pronoun he in (57a) can be
coindexed with the quantifier variable to its left, but such anaphorical linking
is not permitted in (58a) because the variable is to the right of the pronoun.

(57) Everyone says he will join in the march.

a. (Everyone1) [x1 says he1 will join in the march].

(58) He fights with everyone.

a. (Everyone1) [He fights with x1].

5.4 Case

Chomsky (1986) claims that the theory of case is part of the theory of univer-
sal grammar alongside Binding theory. One general principle of the theory of
case is that every referential expression must have case. It seems appropriate,
therefore, that logical form representations should make provision for case
marking. This is not, of course, at odds with the fact that many languages,
such as Chinese, do not employ overt cases, or, as in the case of English,
make use of a very impoverished overt system.
It might, however, be argued that to include case marking at the level
of logical form is not well motivated. Logical form representations have to
do with coreference and scope and, perhaps, case relations could be accom-
modated elsewhere, for example, at the level of shallow structure itself. I
shall not explore such alternatives further here. Indeed, in (6.3), I advocate
the inclusion of case at the level of logical form as a natural expression of
ordering among participants in an utterance.
As is well known from the intense work on case which followed Fillmore's
famous paper (1968), it is all too easy, in one's enthusiasm for distinguishing
finely between one semantic structure and another, to slide into a situation
in which there seem to be almost as many distinctions as there are different
verbs. Let us assume, for our immediate purposes, a rather general case sys-
tem based on the notions of Nominative, Accusative and Oblique. Following

Chomsky (1986), we may then set up the following general principles of case
assignment.

(59) Case assignment


a. Finite verbs assign nominative case to their subject and accusative to their
object.
b. Infinite verbs assign accusative case to their object.
c. Prepositions assign oblique case to their complement.
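The assignment principles in (59) amount to a small lookup from governor and grammatical role to case. The following Python sketch is a toy encoding of (59); the governor and role labels are simplifications of my own, not notation from the text.

```python
# A toy encoding of the case-assignment principles in (59); the governor
# and role labels are simplifications of my own.

def assign_case(governor, role):
    """Return the case assigned under (59), or None if none is assigned."""
    if governor == "finite_verb":
        return {"subject": "nominative", "object": "accusative"}.get(role)
    if governor == "infinite_verb":
        return "accusative" if role == "object" else None
    if governor == "preposition":
        return "oblique" if role == "complement" else None
    return None  # nouns and adjectives assign no case; hence vacuous of

print(assign_case("finite_verb", "subject"))     # nominative
print(assign_case("infinite_verb", "subject"))   # None: cf. *He to run ...
print(assign_case("preposition", "complement"))  # oblique: cf. for him
```

The None returned for the subject of an infinitive encodes the gap that, as the discussion below explains, the vacuous preposition for is recruited to fill.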

It is to be noted that finite verbs, unlike infinites, are marked for tense and
subject concord. We may assume, with Chomsky, that it is this tense marking
feature which assigns nominative case to the subject of finite verbs. This
assumption also explains, of course, why, save in exceptional circumstances,
the subjects of infinitives are not case marked. It follows from the general
principle that referential expressions must carry case that they cannot occupy
subject position in infinitive clauses. Hence, the ungrammaticality of strings
like:

(60) *He to run is unusual.

To make (60) grammatical, it is necessary to save the case on He and, in
English, this is done by use of the vacuous preposition for, which bestows
oblique case on its complement, as in:

(61) For him to run is unusual.

It is clearly important, from the standpoint of logical form, that we distinguish
vacuous prepositions from those with a genuine semantic function.
Chomsky (1986) provides a very useful discussion of a particularly prevalent
problem in English - and other European languages - namely, the use of of,
or its equivalent. The general nature of the problem may be illustrated by the
following pair.

(62) The middle of the road was covered in snow.

(63) Jack had a barrow full of snow.

In (62), of has a genuinely semantic function which might be glossed as
'partitive'. Thus, if the preposition is absent, the second article is inappropriate
and the meaning of the phrase is fundamentally different, requiring
that there be at least three roads. In (63), of is a vacuous preposition merely
bestowing oblique case on the complement of the head, full, of an adjective
phrase. The motivation for this use is to be found in the fact that Adjectives
do not assign case to their complements, hence, the use of the preposition
to carry out this function. Nouns do not assign case either, thus, we find the
use of of to case mark complements of noun heads in noun phrases such as
foot of the mountain. By contrast, both verbs and prepositions do assign case
to their complements and, hence, of is not employed vacuously in verb or
prepositional phrases.
An alternative to saying that nouns and adjectives do not assign case to
their complements is, of course, to say that they assign genitive case. There
probably are reasons for taking this alternative stand. However, the notion,
Genitive, does not seem to be particularly well defined beyond such broad
types as alienable and inalienable possession and, for our present purposes,
we might as well consider genitive to be a subcategory of oblique.
In the previous section, I referred to the fact that, in current Binding theory,
variables are regarded as referential expressions which must receive case
assignment. Wh-movement creates a "chain" in which the moved constituent
is the head. Chomsky (1986) makes the point that, since the head carries the
semantic content of its trace variable, it cannot be moved to a position which
already carries case. If this restriction were infringed, then the head would
be ambiguously marked for its semantic role.
An interesting case which clearly illustrates the above remarks is the pas-
sive construction. In English, the passive operation results in the logical ob-
ject of a transitive verb being moved to the position of grammatical subject.
Since subject position is assigned nominative case, the head of the chain must
be in the nominative. The trace, however, being in object position, carries
accusative case. This is why (64) is wellformed, but (64a) is illformed.

(64) He was arrested tacc during the march.

a. *Him was arrested tacc during the march.

If the logical subject is specified, it must, of course, appear in the oblique
case and this is achieved through the use of the preposition by, as in:

(65) He was arrested by her.

It was noted earlier that the empty category Pro can occur as a bound
anaphor, as in:

(66) They1 wanted Pro1 to leave.

Such cases are paralleled by others in which the subject of the complement
clause is disjoint from that of the main clause and is, therefore, specified, as
in:

(67) They wanted Jack to go.

When an infinitive clause complements a verb like believe, Pro may not
occur, so that (68) is unacceptable.

(68) *Sally believed to be unwanted.

In such cases, the reflexive must occur, as in:

(69) Sally believed herself to be unwanted.

The accusative and infinitive construction in (69) is similar to the alternatives
shown in (70):

(70) Sally believed him/Jack to be unwanted.

In part, the reason for the ungrammaticality of (68) resides in the fact that
Pro cannot be marked for case. Verbs like believe assign accusative case
even across clause boundaries, hence necessitating the case-marked reflexive
rather than the unmarked bound anaphor.
These remarks are, however, somewhat superficial. The complexities sur-
rounding clausal complements are considerable and frequently depend upon
whether the main verb is a true verb of propositional attitude, like believe or a
mere attitude verb, like want. Believe may take either infinitival complement,
as in the cases just cited, or sentential complement, as in:

(71) Sally believes that she is unwanted.

Want readily takes infinitive complements, as in (72), but only marginally
admits sentential complements, as can be seen from (73), in which case, the
subjunctive modal is required.

(72) Sally wants to go.

(73) ?Sally wants that she should go.

Believe may be followed by an oblique object, including a reflexive anaphor,
plus a sentential complement, as in (74) and (75), but this option is not open
to want, as seen from (76).

(74) Sally believes of Jack that he is unwanted.

(75) Sally believes of herself that she is unwanted.

(76) *Sally wants of Jack that he should go.

Since believe is a true verb of propositional attitude, it is necessary that its
complement be a proposition and this seems to motivate the specification of
an explicit object of which the proposition is or is not true. This specification
is provided, in case the complement is sentential, either simply by the subject
of the sentential complement itself, or, redundantly, with an additional oblique
object. If the complement is infinitival and the belief concerns the subject of
the main clause, a bound anaphor is required, i.e. a reflexive pronoun. If the
subject of the complement is disjoint from that of the main clause, it appears
as an accusative marked lexical noun or pronoun.
Want, on the other hand, since it is not a true verb of propositional attitude,
requires no such specification and, hence, tolerates the empty category, Pro.
Pro combines with intransitives to yield clauses which, rather than denoting
propositions, take "open propositions" as their referents. I discuss open
propositions later (6.3). For the moment, it suffices to say that they are not
true propositions, but, rather, unevaluated propositional functions.
These observations are, obviously, neither profound nor complete. They
do not, for example, quite fit the facts of a verb like claim which is usually
treated as a full propositional attitude verb, e.g. Cresswell (1985). However,
they do seem to have some intuitive value.
The fact that want permits infinitive complements with Pro gives rise to
a number of interesting ambiguities typified by Chomsky's sentence:

(77) They wanted to leave the meeting happy.

Here, the adjective may qualify the meeting. Alternatively, it may be taken to
refer to the "remote" subject they of the main clause. Chomsky's contention
(1986) is that adjectives may not modify heads across clause boundaries and
we must, therefore, assume the presence of Pro as subject of the infinitive.
It is this empty category which provides the head for happy, in the remote
reading, and Pro must, in this case, function as a bound anaphor.
The situation is different in cases like (78).

(78) To slander is sinful.

Here, an analysis involving Pro might seem implausible at first. However,
the idea that the infinitive is a complement with Pro as its subject receives
support from examples like:

(79) For Peter to slander is unusual.

The Pro in (78) is not, however, a bound anaphor since it has no antecedent.
Rather, it is the pronominal use referred to above. This is further suggested
by the possibility of extraposition, yielding:

(80) It is sinful to slander.



The relation between it and the infinitive in (80) also suggests that the latter
is to be treated as a nominal rather than as a verbal. I return to this issue in
(7.3.3).

5.5 Logical form in semantic representation

In this chapter, we have looked at logical form representations and their moti-
vation in Binding theory only superficially. However, it is evident, from these
few remarks, that the facts which such representations capture are central to
the meaning of many sentences. Facts of coreference and scope assignment
should find expression in semantic representation and, unless we decide to
treat the representations provided by Binding theory on a unique level, a com-
plete treatment should incorporate them into a more inclusive representation.
I shall not formally attempt such an incorporation here.
Indeed, the theory of logical form as developed in Binding theory is so
powerful that it may seem, at first, that there remains little to be accounted
for in a formal way. However, quite apart from the fact that they make no
appeal to truth conditions, the representations which the theory allows are
inadequate in several important respects. They do not reflect the composi-
tional nature of sentence meaning. They do not provide for all of the semantic
information embodied in logical formulations. The ambiguities which they
reflect are confined to those arising from scope assignment and coreference.
They make no provision for pragmatic factors. Finally, as formulated here,
they frequently depart radically from surface structures in respect of word
order. In what remains, some of these aspects of sentence meaning will be
taken up in greater detail.
Chapter 6
Pragmatics

6.1 Definition of pragmatics

The term "pragmatics" is widely employed in current linguistic research as


a synonym for "study of language use". In this very broad usage, pragmat-
ics covers an enormously wide spectrum of research from sociolinguistics,
psycholinguistics, speech-act theory and stylistics.
In the tradition of Montague (1968), Lewis (1970) and Cresswell (1973),
I use the term "pragmatics" as a name for the attempt to formulate means by
which to assign truth values to sentences which are dependent upon context of
use for their interpretation. However, unlike those authors, I also include, un-
der pragmatics, certain aspects of language use which make a given utterance
felicitous or appropriate. This enlargement sometimes leads to a conflict in
terminology, as in the case of presuppositions, customarily divided into two
classes, semantic and pragmatic. However, the practice of loosely equating
truth values with notions like felicity is well established - see for instance
Lakoff (1975) - and the terminological conflicts which occasionally result
seem, to me, to be harmless since the intention is always obvious.
A typical example of a context dependent sentence is provided by:

(1) Tomorrow, I leave here for Paris.

Clearly, such a sentence depends for its truth or falsehood not just upon the
assignment to the verb leave and to Paris which may be presumed to be
given, but also on the assignments to tomorrow, here and /, all of which
have denotations which vary according to the context of use.
An example of a different sort is provided by an imperative like:

(2) Fetch the luggage.

It is not possible to assign such performatives a semantic value within the
framework of traditional logic.
On a profound level, many have argued that performatives, including im-
peratives, do not have any direct relation to truth values. Thus, it would not
be appropriate to respond to (2) by saying "That's true/false.". On a more
superficial level, such imperatives appear not to meet the wellformedness
conditions on sentences in that they seem to be subjectless. However, they
clearly have meaning and so it must be possible to say what the world would
have to be like in order for them to at least be used appropriately. In such
cases, therefore, we need to appeal to notions other than truth and to con-
struct logical form representations which include the information which the
surface structure fails to provide. Such representations must, therefore, make
use inter alia of the words I and you.

6.2 Indices

Scott (1970) uses the term "point of reference" to refer to the n-tuple of factors
which determine the interpretation of a sentence. It will be recalled that, in
chapter 3, such points of reference, or "indices" in Montague's terminology,
included, as well as the set of individuals, possible worlds, I, and moments
of time, symbolised as J. Lewis (1970) uses the term "co-ordinate" to refer
to a member of such an index. Thus, I is the "possible world co-ordinate"
and J is the "time co-ordinate".
Evidently, in such an approach, I and J alone are not sufficient to provide a
semantic specification of a sentence containing "indexicals", such as the first
person pronoun I or the adverbs tomorrow and here. In order to treat these,
we would require a co-ordinate for the speaker, the time of utterance and the
place of utterance. In addition, in light of imperatives and such sentences as:

(3) You are wonderful.

it would seem that a co-ordinate for the addressee is also required.


It is to be noted that third person pronouns like he/she/it need not be
provided with special co-ordinates since they are assigned values by the
principles of coreference discussed in chapter 5. It is, of course, assumed
that, when such pronouns are free, it is normally possible to assign them
values on the basis of some antecedent recoverable from the context. If this
cannot be done, the pronoun is nonreferring and the sentence in which it
occurs is irresolvably ambiguous.
To accommodate these indexical demands, Lewis proposes a rich system of
co-ordinates which includes, in addition to those mentioned: co-ordinates for
deictic determiners, as in this/that palmtree; a previous mention co-ordinate
to account for phrases like the above mentioned palmtree; and, finally, an
assignment co-ordinate which provides values for variables, as in son of his.
128 Pragmatics

Given this enriched system, instead of saying that a sentence is true or
false with respect to the interpretation <A, I, J, g>, as in chapter 3, Lewis's
theory would claim that it has its value in respect of an octuple of co-
ordinates, ordered by him as:

(4) A Lewis referential index

a. a possible world,
b. a moment of time,
c. a place,
d. an addressor,
e. an addressee,
f. a (possibly empty) set of concrete objects which can be pointed at,
g. a segment of discourse,
h. an infinite sequence of things.

In such an octuple, (b-g) may be thought of as contextual and as, thus,
providing the information required by the pragmatics to assign values to
contextually dependent utterances.
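The octuple can be pictured as a simple record. The following is a minimal sketch in Python; all field names and sample values are illustrative inventions, not Lewis's, and the "infinite sequence of things" (h) is modelled as a function from variable positions to values.

```python
from dataclasses import dataclass
from typing import Callable

# A sketch of Lewis's octuple of co-ordinates (a)-(h).
@dataclass(frozen=True)
class LewisIndex:
    world: str                            # a. a possible world
    time: int                             # b. a moment of time
    place: str                            # c. a place
    addressor: str                        # d. an addressor
    addressee: str                        # e. an addressee
    indicated: frozenset                  # f. objects that can be pointed at
    discourse: str                        # g. a segment of discourse
    assignment: Callable[[int], object]   # h. values for variables

# Indexicals are then simply projections of the index:
def val_I(i: LewisIndex):
    return i.addressor

def val_you(i: LewisIndex):
    return i.addressee
```

On this picture, evaluating (3) at an index i amounts to predicating wonderful of val_you(i).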

6.3 Contextual properties

Cresswell (1973) argues that, in fact, it is not possible to predetermine what
co-ordinates can figure in an interpretation, simply because the scope of
contextual dependence is so unpredictable. As his discussion suggests, to
draw up a list in keeping with Lewis's proposal is likely to result not merely
in a very long catalogue of co-ordinates but, even worse, in an ad hoc one
requiring constant extension.
Given this general objection to Lewis's approach to the indexical problem,
Cresswell proposes that we regard a context of use as a property of an
utterance, where, by "utterance" is understood the speaking or writing of a
text on a given occasion.
In order to illustrate the general spirit of this approach - though not en-
tirely in line with Cresswell's original proposal and presented more loosely -
consider Montague's (1968) sentence:

(5) I am hungry.

The contextual property of such an utterance which determines the proposition
it denotes includes the properties of being uttered by a particular person
and at a specific moment in time.
Let φ be a contextual property of use. If φ includes both of the above
properties, then any two utterances of (5) which coincide on φ express the
same proposition. Thus, the meaning of (5) may be thought of as the function,
Fhungry, from contextual properties into propositions, such that Fhungry(φ) is
a proposition denoted by an utterance of (5).
Cresswell employs the term "open proposition" for an expression like
(5). Until it is uttered, an open proposition is an unevaluated propositional
function. Once its argument place has been filled by an appropriate contextual
property, it becomes a proposition which may, of course, be true or false.
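The idea can be made concrete with a toy model. Here the hunger facts, names, and worlds are invented for illustration: the meaning of (5) is a function from contextual properties (reduced here to a speaker and a time of utterance) to propositions, where a proposition is in turn a function from worlds to truth values.

```python
# Toy model: who is hungry, at which time, in which world.
hungry_facts = {("Percy", 0, "w0"), ("Sally", 1, "w0")}

def F_hungry(context):
    """The meaning of 'I am hungry': maps a contextual property
    to a proposition."""
    speaker, time = context
    # The resulting proposition is closed: its truth in a world no
    # longer depends on any feature of the context.
    return lambda world: (speaker, time, world) in hungry_facts

# Two utterances coinciding on the contextual property express
# the same proposition:
p = F_hungry(("Percy", 0))
q = F_hungry(("Percy", 0))
```

p and q assign the same value in every world, which is the sense in which two utterances coinciding on φ "express the same proposition".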
Thus, we conclude that sentences may denote open propositions, e.g. (5)
before it is uttered, or propositions, such as:

(6) Percy is hungry.

In Montague's terminology (1968), an open proposition would be the
"meaning" of an expression and the proposition which results from the filling
of its argument place would be its "sense". Thus, in line with what was said
earlier, to know what an expression involving indexicals means, to understand
it, involves knowing what contextual properties can be in the domain of the
relevant open proposition. To know whether or not such an expression is true
involves knowing precisely what values are assigned to the argument places
of the open proposition by virtue of its being uttered. Reverting to Carnap's
example, chapter 1, to understand the sentence (7), we need to know what
can be in the domain of mon. To be acquainted with its sense, we need to
know the identity of the value assigned and whether that value really does
have a black pencil.

(7) Mon crayon est noir.

Clearly, the properties which constitute the domain of an open proposition
may be ordered. What makes (8) true or false is not just its being uttered by
a particular individual to a particular person, but their being in an ordered
n-tuple:

(8) I am your brother.

In the spirit of the earlier discussion (5.4), such an ordering is best expressed
in terms of the notion of Case.

Working outside the Cresswell framework, Sgall (1975) suggests that the
pragmatics should generate representations which reflect the basic case no-
tions as developed by Fillmore (1968). Thus, for example, Sgall gives (9a)
as a possible semantic representation of (9).

(9) I made a canoe out of every log.

a. ((I)Ag made (a canoe)obj (every log)orig).

Such representations are not fully compatible with the treatment developed
in this study - they assume, for instance, that prepositions are surface reali-
sations of underlying cases, not base generated constituents of logical form.
Nonetheless, the exploitation of Fillmorean cases seems to be an obvious and
semantically justifiable way of expressing ordering within Cresswell's theory
of pragmatics.
Assuming the above proposal, let us say that a context of use, φ, generates
a "contextual index", δ, of properties, if for any such property, φ', φ permits
φ' to be a member of δ. Accordingly, no property, φ', can be in δ such that,
for some utterance, α, α(δ) is true but (φ' ∈ δ) is false.
As an illustration, consider the following utterance:

(10) */ shot yourself.

If φ' is the property of being uttered in the first person, it is incompatible
with the property, φ", of second person reflexivity. Obviously, both properties
cannot enter into any contextual index generated by φ. If the open proposition
corresponding to (10) is satisfied by φ', it is not satisfied by φ" and hence
cannot yield a proposition true in any possible world.
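The constraint can be sketched as a pairwise compatibility check on candidate members of δ. The property labels and the incompatibility table below are invented for illustration, standing in for φ' and φ" of example (10).

```python
from itertools import combinations

# Invented incompatibility table: pairs of contextual properties that
# can never co-occur in one contextual index.
INCOMPATIBLE = {
    frozenset({"uttered-in-first-person", "second-person-reflexive"}),
}

def generate_index(properties):
    """Return the contextual index delta if every pair of properties is
    compatible; return None if no index can be generated, i.e. the open
    proposition cannot yield a true proposition in any world."""
    for a, b in combinations(properties, 2):
        if frozenset({a, b}) in INCOMPATIBLE:
            return None
    return frozenset(properties)
```

An utterance like (10) then corresponds to a property set for which generate_index returns no index at all.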
In the above, it is, of course, assumed that φ is a context of use for a
simple open proposition. Clearly, there would be no question of bringing φ'
and φ" into the same contextual index if the one were a property of a main
clause and the other of a conjoined or embedded clause. The generation
of complexes of contextual indices is obviously determined by the usual
principles of compositionality.
Since the tense of an utterance is one of its contextual properties, the
generation of δ by φ must, as already implied, be partially determined by
time relations. Thus, (11) cannot denote a true proposition in any possible
world.

(11) This man had been the present king of France.



Given these intuitive remarks, we can formally state the conditions under
which an utterance, a, may be said to denote a possibly true proposition as
follows:

(12) a denotes a possibly true proposition, p, only if for some open
proposition, p', expressed by a, there is at least one contextual index,
δ, in the domain of p'.

Of course, we may well have false propositions resulting from falsifiable
valuations in respect of given open propositions. Thus, (12) states the condi-
tions for denoting a possibly true proposition, not for determining its value
in a particular world. If an open proposition, p', has no contextual index in
its domain, its value must, necessarily, be 0, or else it is not evaluated at all.
Further, it is obvious that the usual rules for tautology apply, e.g. the value
of the proposition yielded by (p' or -p') is always 1.
The question of whether an open proposition which has no contextual
index in its domain is capable of yielding a proposition which is assigned 0
or is simply left unevaluated is clearly pertinent to a discussion of Strawson's
(1950) theory of truth assignment. I return to Strawson's theory below.
To evaluate an utterance like (5), it is obviously necessary to assign values
to its constituents. Cresswell (1973) proposes that we regard words like I
as denoting functions which he calls "open individuals". Thus, the value
assigned to I is a function, FI, whose domain is the set of possible addressers
and whose value, for a given utterance, is an individual. We may apply this
to (5), informally, as follows. If φ generates a contextual index, δ, for (5) in
which φ' is the property of being in the first person and φ" is the property of
being asserted to be true at the moment of utterance, then hungry(δ) yields a
proposition which is true just in case the value of the open individual denoted
by I was hungry at the time (5) was uttered.
Although Cresswell does not, himself, explicitly link the notion of an open
individual with that of an intensional object, referred to earlier in chapter 4,
it seems to me that the two have enough in common to warrant such an
equation. I shall, therefore, regard the king of France also as having the
status of an open individual, though, at any moment, it is a function which
is uniquely valued, if it is valued at all.
It will be recalled, from chapter 3, that Montague made provision in his
semantics for the assignment of truth values to tensed expressions. Thus, to
repeat the earlier case, the sentence:

(13) It was snowing.



is assigned the value 1, if, at some time j' earlier than time j at which (13)
is uttered, the present tense sentence:

(14) It is snowing.

is true. Montague's treatment of time-dependent truth assignment is formu-
lated within the framework of an indexical model. If we adopt the contextual-
property approach advocated by Cresswell, the ordering function is recast in
terms of properties. Let φ be a contextual property of use for (13) and let it
include the property, φ', which provides the time of utterance. (13) is true if
and only if (14) is true for some context of use, ψ, just like φ except that ψ
has ψ' where φ has φ', and the time provided by ψ' is earlier than that given
by φ'.
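Recast in code, with an invented set of snowing times standing in for the model, and a context reduced to a dictionary carrying its time of utterance:

```python
# Times at which the present-tense 'It is snowing' is true (invented facts).
snowing_times = {3, 5}

def is_snowing(context):
    """Truth of (14) relative to a context of use."""
    return context["time"] in snowing_times

def was_snowing(context):
    """Truth of (13): true iff (14) is true for some context just like
    this one except that its time of utterance is earlier. Times are
    modelled as non-negative integers for simplicity."""
    return any(is_snowing({**context, "time": t})
               for t in range(context["time"]))
```

Uttered at time 7, (13) comes out true because (14) was true at time 3; uttered at time 2, it comes out false.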

6.4 Performatives

In section 1 of this chapter, I included an example of an imperative sentence
among the types of utterance which a pragmatic theory must accommodate.
The example was:

(15) Fetch the luggage.

This sentence is marked by the surface syntax as a performative equivalent
to:

(16) I order/command you to fetch the luggage.

Let us assume that (16) has been uttered. Given lambda abstraction, it is
easy to provide a representation for (16). Simplifying the internal structure
of the infinitive, this will be:

(16) a. (I(λ,x((λ,y(order x,y))you)(to pro fetch the luggage))).

It is plain that, for an utterance of (16) to constitute an imperative, the
following set of conditions must hold. For some context of use, φ, in the
domain of the relevant open proposition, there is a contextual index, δ, such
that an individual, a, in the domain of I utters (16) to another individual,
a', in the domain of you, at the moment determined by φ. The meeting of
these conditions is clearly necessary to being an imperative, but it is, equally
clearly, not sufficient. I return to the discussion of conditions on performatives
which are sufficient below.

Being without an explicit performative, the case of (15) is, however, more
difficult. Many linguists have suggested that such cases should be handled
by assuming that they are represented at the level of logical form as com-
plement clauses of explicit performatives. Under this proposal, the semantic
representation for (15) would be (15a).

(15) a. (I(λ,x((λ,y(order x,y))you)(that you will fetch the luggage))).

(15) will, obviously, have to meet the same set of conditions as (16) to qualify
as an imperative.
There have, at various times, been attempts to extend performative analysis
to all sentence types. Thus, for example, Lakoff (1971b, 1975) proposed that
the general format for the logical form representation of all sentence types
should be:

(17) (S (pred, argx, argy (propositional content))).

The options for pred would be represented by {order, ask, state, say} and x
= I and y = you. The main clause of (17) is, therefore, a template for an open
proposition. If all sentences are regarded as complements to such formulae,
it follows that no sample of natural-language sentences can be adequately
analysed, at the semantic level, in the absence of a theory of pragmatics.
Lakoff's provision for the performative analysis of questions, through the
inclusion of ask among his predicates, requires some adjustment to the ac-
count presented in chapter 5 and taken up again in 7. Some of these adjust-
ments seem to be notational rather than conceptual - for example, his ask
predicate could be regarded as a variant of the question prime on wh-words
in the more traditional formulations.
More important, the claim that questions are complements to performative
clauses of the form:

(18) I ask you ...

highlights the problem of whether, and, if so, how, truth values should be
assigned to performatives. I shall, in section 6.7, comment on the approach of
"erotetic" logic which certainly does permit questions to have truth values.
In the meantime, since (17) extends the performative analysis to all sentence
types, including assertions, the assignment of truth values to performative
sentences becomes a general issue.
Some linguists, including Cresswell (1973), have objected that it is nor-
mally inappropriate to respond to a question, imperative, or statement by
confirming or denying that the speaker is asking a question, giving a
command, or making a statement. Certainly, it seems inappropriate to respond
to (19) with (20) when the latter is taken to mean (20a).

(19) The earth is flat.

(20) That's true.

a. It is true that you just made that claim.

Lakoff's reply is to claim that truth values are assigned, under normal
circumstances, not to the sentence as a whole, including the performative
clause, but to the propositional content allowed for in (17). This is, surely,
correct. An utterance of (19) is certainly not an assertion that a claim is
being made and to evaluate it as if it were would be most unnatural. In
Lakoff's view, the performative clause itself is neither true nor false. Rather,
it is satisfied by meeting the appropriate felicity conditions.
However, if (19) is not an assertion about a claim, what motivation can
there be for introducing the performative into its logical representation? Such
motivation might be found in a general theory of language in use. In such a
theory, it is clearly necessary to draw distinctions between the various speech
acts of promising, threatening, assuring, etc. The fact that such acts are as
frequently performed through implicit as through explicit performatives has
to be acknowledged in pragmatic theory and to do so at the level of semantic
representation seems quite natural.
Lakoff (1975), reminiscent of a treatment by Karttunen (1974), justifies
the performative analysis on more general grounds. He points out that, if
satisfaction is extended to apply to appropriateness as well as to truth in
a model, then, treating declaratives as performatives allows for a unified
semantics. In Lakoff's (1975) system, for instance, it is not necessary to
have one theory to account for:

(21) I name this ship Sandrock.

and another for:

(22) I named this ship Sandrock.

However, since a subtheory of satisfaction by happiness and another for
satisfaction through truth in a model are still required, the overall gain may
not be as impressive as it at first appears. Lewis's own suggestion (1970)
that explicit performatives like (21) be assigned the value 1 if and only if the
addressee is, indeed, performing the act at the time of utterance would, in any
case, allow for a unified treatment of declaratives and explicit performatives.

To insist on treating all sentence types, even declaratives, as performatives
at the underlying semantic level does, of course, create difficulties. For
instance, Montague's (1968) sentence:

(23) I am hungry.

must be taken as the complement of a performative such as:

(23) a. I declare that...

If (23a) were, itself, a surface expression, it would not seem reasonable to
take it as embedded in yet another performative. A more elaborate demon-
stration of this kind of problem is provided in Isard (1975), who points to
the implausibility of analysing a sentence like (24) as (24a):

(24) That fool Cuthbert turned up at the party.

a. I refer to Cuthbert as a fool in the course of asserting to you that he
turned up at the party (which, by the way, I also mention).

Given that such problems do not outweigh the advantages of the gener-
alised performative analysis, it will be assumed here. In the interests of brevity,
however, I shall not make explicit provision for performative clauses in the
underlying representations of declaratives whose surface structures lack ex-
plicit performative verbs.
I shall, moreover, adopt the broadened view of satisfaction referred to
above. Thus, a propositional function or an open proposition may be satisfied
if, its argument places being filled, the resulting proposition meets either
truth or felicity conditions. In the next section, I shall effectively liberalise
the process of satisfaction even further by extending the notion of felicity
to accommodate the appropriate use of certain nonperformative expressions
which also lie beyond the confines of classical logic.
As part of his development of "natural logic", Lakoff (1975) shows how
(25) can be viewed as entailing (26).

(25) Jack was sincere in saying that Sally sang well.

(26) Jack believed that Sally sang well.

Such entailments are established on the basis of a set of "conversational
postulates". The postulate relevant to (25, 26) is:

(27) (Sincere(x, state(x, y, P)) → believe(x, P)) [If x is sincere in
stating P to y, then x believes P].

As Lakoff says, if (25) does entail (26), then the following will be a
contradiction:

(28) Jack was sincere in stating that Sally sang well, but he didn't believe
that she did sing well.

Parallel considerations account for the contradictory status of a large array
of conjunctions, such as:

(29) I promise to pay you back, but I don't intend doing so.

(30) I name this ship Sandrock, but I don't know what her name will be.

The entailments implicit in performatives whose failure leads to such con-
tradictions as (29) and (30) depend upon sets of felicity conditions. For an
utterance of a performative sentence to be satisfied, it must, as a sufficient
condition, meet some set of felicity conditions. Lakoff (1975) uses this no-
tion of broad satisfaction to explain a number of semantic problems which
he calls "performative antinomies". One of his examples is:

(31) Don't read this sentence.

The conditions which an utterance of an imperative must meet in order to
constitute a command will, obviously, include the preparatory condition that
the addressee be able to carry out the action concerned. Clearly, in order to
comply with (31), one must read it, but to do so is not to comply with it.
This paradox is probably more familiar in its "liar" form:

(32) Everything I say is a lie.

A major challenge in performative analysis is to establish in a principled
way just which conditions count as sufficient to satisfiability in individual
cases. Searle (1969) provides a detailed account of the conditions on many
verbs, such as promise and threaten, and it is obvious that their number is
very considerable and their interrelations complex. For example, (33) could
surely never count as a promise, but the case is not so transparent in (34).

(33) I promise that the day after today will be tomorrow.

(34) I promise not to hurt myself.

Evidently, for a promise to count as such, what is promised must not be
totally expected to happen anyway, but is that condition to figure in the logical
structure of promises? Presumably, it is not. Similarly, an order may very
well be given which cannot be carried out simply because the addressor is
unaware of its unperformability. Even so, it may still be viewed as an order.

It may be that to say that a performative is unsatisfiable is the pragmatic
counterpart of assigning it a value neither true nor false, namely #. I return
to this value shortly.

6.5 Fuzziness

The extended use of satisfaction adopted in the last section goes a long way
to meeting the demands of natural language analysis. However, as I indicated
in chapter 1, we must also be able to evaluate propositions which involve
fuzzy concepts. Such concepts are typically denoted by gradables, such as
beautiful/many/often. These items are not subject to the narrow, logical inter-
pretation of truth conditions since they are used subjectively and are context
dependent. Thus, a sentence like (35) cannot be strictly judged true or false.

(35) Many people are clever.

Since a semantic theory which concerns itself with natural languages would
be quite inadequate if it ignored such propositions, I shall broaden the con-
cept of satisfaction even further and say that formulae or open propositions
employing gradables may, also, be said to be satisfied if they meet felicity
conditions which suffice to make their use appropriate.
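One familiar way of making the context dependence of a gradable like many concrete is to treat it as a degree expression evaluated against a contextually supplied threshold. The sketch below is illustrative only; the threshold values are invented stand-ins for contextual standards.

```python
def many(count, total, threshold=0.5):
    """Toy degree semantics for 'many': true relative to a context that
    fixes what proportion counts as 'many'. The default threshold is an
    arbitrary stand-in for a contextual standard."""
    return (count / total) >= threshold

# The same facts can satisfy 'Many people are clever' under a lax
# standard and fail under a strict one:
lax = many(6, 10, threshold=0.5)
strict = many(6, 10, threshold=0.8)
```

That a single state of affairs verifies the sentence under one standard and falsifies it under another is precisely why such sentences resist a strict, context-free truth valuation.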
Combined with the extension of satisfaction already provided for, the
effect of this liberality is to free mathematical semantics from the narrow
preoccupations of truth-functional studies.

6.6 Presuppositions

Of course, many of the conditions on the felicitous use of performatives
are presuppositional in character. Thus, for example, the familiar (36) pre-
supposes that the speaker is in a position of authority with respect to the
addressee.

(36) Fetch the luggage.

As already declared, the presupposition which makes (36) satisfiable is
not one which makes it true. Rather, (36) is satisfied in being performed. For
this reason, such presuppositions are usually called "pragmatic", e.g. Keenan
(1972), McCawley (1981).

Another case of pragmatic presupposition is provided by:

(37) Someone has left her case behind.

Since, in English, she is not used neutrally, (37) presupposes that someone
refers to a female. However, if this presupposition turns out to be false, the
truth of (37) is unaffected. Presuppositional failure in (37) might make its
utterance inappropriate, but it does not make it untrue.
Consider now what is presupposed in the following:

(38) Because Percy has read Darwin, he is well informed.

In (38), the initial, causative clause presupposes - or is highly likely to
presuppose - knowledge or belief which the addressor takes as shared by
himself and the addressee at the time of utterance. This presupposition is old
information and the result clause is likely to be new information which is
not presupposed but asserted. If the clauses are exchanged, then their roles
are also reversed, as is made apparent by contrasting (38) with:

(38) a. Percy is well informed because he has read Darwin.

Such cases suggest another type of pragmatic presupposition, namely, shared
knowledge or knowledge assumed to be shared at a given point in a discourse.
In this sense, pragmatic presupposition has to do with text structure and
information processing. As with the other cases, if the presupposition fails,
the utterance may be inappropriate, but its truth value is not affected.
In contrast to pragmatic presuppositions, which are not based on con-
siderations of truth values, we also have semantic presuppositions, in which
questions of truth value are vital. Typical are the cases of definite descriptions
already referred to, such as:

(39) The tower of London is old.

(39) presupposes (39a):

(39) a. There exists one and only one individual, a, such that (tower-of-
London(a/x)) is true.

If (39) is true, then (39a) must be true. In addition, if (39) is false, then
(39a) must again be true. We may thus give a partial definition of semantic
presupposition as:

(D.1) p semantically presupposes q if, when p is true or -p is true, q is
true.

Note that this definition, though partial, is alone sufficient to distinguish
semantic presupposition and entailment. In the latter, if p entails q, -p certainly
does not.
What makes semantic presupposition of considerable interest, however, is
the case in which the presupposed proposition, q, is false. If (39) is uttered
and (39a) is false, then, in the opinion of many scholars, (39) is neither true
nor false. In such a case, (39) is simply pointless. Thus, a fuller definition of
semantic presupposition is:

(D.2) p semantically presupposes q if, when q is true, either p or -p is
true, but when q is false, p and -p are vacuous.

It will again be observed that logical entailment and semantic presuppo-
sition are very different in respect of negation. In the case of entailment, if
q is false, then so is p.
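The contrast between (D.2) and entailment can be displayed in a small three-valued sketch, with "#" as the value of a vacuous (presupposition-failing) proposition. The encoding below is illustrative, not drawn from any particular formal system in the text.

```python
TRUE, FALSE, UNDEF = 1, 0, "#"

def presupposing(p_classical, q):
    """(D.2): p receives its classical value only when its presupposition
    q is true; when q is false, p is vacuous (#)."""
    return p_classical if q == TRUE else UNDEF

def neg(v):
    """Negation preserves presupposition failure: -p is as vacuous as p
    when the presupposition fails."""
    if v == UNDEF:
        return UNDEF
    return TRUE if v == FALSE else FALSE
```

Where q is false, both the evaluated p and its negation come out #; with entailment, by contrast, a false q would force p itself to be false.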
The definition of semantic presupposition allows, of course, for an infinity
of trivial presuppositions since, by it, any tautology is presupposed by any
proposition whatever. Such cases are of no interest and will be ignored.
As this chapter is concerned with pragmatics, it might seem that semantic
presupposition should be treated elsewhere. However, the distinction between
the two types is not always clear cut and it is convenient to discuss both types
under one rubric. In addition, some problems surrounding pragmatic presup-
positions are best approached in the light of solutions to others involving
semantic presuppositions. I shall, therefore, concentrate upon the latter first.

6.7 Types of semantic presupposition

As a preliminary to the discussion of semantic presupposition, it is important
to realise that there are several different types to be considered. In this section,
I shall briefly review some of these types, deferring discussion of the difficult
question of the assignment of truth values in a presuppositional system for
the moment.
One of the most important types of semantic presupposition is represented
by the example cited above and repeated here as:

(40) The tower of London is old.

This sentence depends for its evaluation on an existential presupposition. It
presupposes the existence of a particular individual and, hence, the truth of
the relevant existential proposition. Given the truth of such a proposition,
(40) asserts of the individual that it has the property of being old.
In chapter 3, I referred to Montague's treatment of definite descriptions,
observing that his stance is essentially Russellian. Accordingly, for Montague,
(40) could be true only if uttered in respect of a world and moment of time in
which there actually existed a unique individual named the tower of London.
If such an individual did not exist, (40) would be false.
Montague's logical representation of definite noun phrases is very com-
plicated. A much simpler notation which is often employed involves the use
of an operator, ι, which is similar to the lambda operator, but is restricted
to the creation of nominals. Thus, the definite description in (40) could be
represented as:

(40) a. (ι,x (tower-of-London(x))).

An expression containing this abstract may, then, be satisfied only if there is
just one possible value for the iota variable in the universe of discourse.
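The satisfaction condition on the iota abstract can be sketched directly. The universe below is invented; returning None when there is no unique satisfier mirrors the outcome, discussed below, in which no proposition results at all.

```python
def iota(universe, predicate):
    """(iota,x (F(x))): denotes the unique individual satisfying F in the
    universe of discourse; undefined (None) if there are zero or many."""
    satisfiers = [e for e in universe if predicate(e)]
    return satisfiers[0] if len(satisfiers) == 1 else None

universe = {"tower-of-London", "Percy", "Sally"}
the_tower = iota(universe, lambda e: e == "tower-of-London")
no_king = iota(universe, lambda e: e == "king-of-France")
```

A predicate with several satisfiers fails in the same way as an empty one, which is the uniqueness half of the existential presupposition.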
The definite description in (40) is, of course, of a special kind which
we might, following Cresswell (1973), call "complex proper names". Given
the usual semantics of proper names, it is perfectly easy to see the force of
Russell's (1905) treatment. Consider, now, a sentence like:

(41) The cathedral is on fire.

In order to interpret such a sentence in Russell's theory, we must say that the
universe of discourse has been restricted so as to include one and only one
cathedral and it is that individual to which the definite description refers.
McCawley (1981) discusses a number of problems arising from the Rus-
sellian approach. One is illustrated by the following:

(42) The cathedral is bigger than all other cathedrals.

As McCawley observes, if the universe of discourse is restricted so as to
accommodate the Russellian theory, then (42) would have to mean the same
thing as:

(43) The cathedral is bigger than itself.

This is so because, in such a restricted universe, the one and only cathedral
is all cathedrals. However, such an equation would, of course, amount to
gibberish. Hence, we are forced to say that the cathedral cannot pick out its
referent by virtue of a restricted universe of discourse.

Similar difficulties arise in the case of plural definite descriptions, as in:

(44) The parrots hate all other parrots.

In order to treat these cases in a Russellian manner, we must obviously
take the unique individual concerned to be a set containing more than one
member. That seems to be trivial enough, but, again, it is implausible to take
a restricted universe of discourse as guaranteeing the set's uniqueness.
Working in a framework of discourse analysis, McCawley proposes that
we regard the referent of a definite description as being given by a contextual
domain. A contextual domain is that set of entities which, at a given point
in a discourse, are regarded as uniquely identified and, for the purposes of
the discourse, as existing. Thus, the cathedral and the parrots in the above
sentences are interpreted as having unique denotations according to some
contextual domain. Since the contextual domain is independent of the domain
over which the variables of such quantifiers as all/some range, there is no
contradiction in claiming that a given individual is unique and also that there
are others of the same class.
This approach is, in fact, much like that adopted by Zeno Vendler (1971),
who treats all definite descriptions as either deriving from or as implying re-
stricted relative clauses which limit the referential domain of the description.
A similar, though looser, treatment is also at the heart of the traditional view
that the definite article introduces old information.
It is to be noted that the notion of a contextual domain extends beyond
what is actually said in a given discourse. It is not always necessary that a
particular individual be introduced as a topic in a discourse before it can be
referred to by a definite description. As is well-known, the definite article is
very frequently justified by shared knowledge, situational context, convention
and so on. Thus, we speak of the government on the assumption that the
addressee is able to identify the body of people in question. When we refer
to the sun, we do not normally expect the addressee to ask which one we are
thinking of. A phrase like the wastepaper basket often occurs in conversation
without prior mention because there is only one such object in the physical
environment and, when we speak of someone as having the 'flu, we do so
by convention, just as we do when using the definite article with names of
rivers, public houses, mountain ranges, etc.
In his own treatment, Cresswell (1973) discusses Russell's the king of
France example and concludes that, if it is analysed in terms of open propo-
sitions, then we may say that the open individual which is the denotation of
the definite description must have at most one individual in its domain in
order for the whole to attain the status of a proposition and, thus, to have
a truth value. If there is no such individual, then the open proposition sim-
ply represents a partial function which is associated neither with truth nor
falsehood.
Cresswell's position is, in many respects, reminiscent of the approach to
definite descriptions adopted by Strawson (1950) which, essentially, claims
that when the open individual cannot receive a value, the assignment of
a truth value to the sentence involved is inappropriate. In a later account,
Strawson (1964) modified this position very considerably by granting that
a proposition with an unsatisfiable definite description could also be false.
McCawley (1981) cites examples like (45), which certainly seems to be more
than vacuous if there is no funfair.

(45) Sally is at the funfair.

In his own discussion, McCawley suggests that the falsehood of cases like
(45) might have something to do with the notion of topic - I return to
that subject later. In the meantime, it is important to recognise that though
Cresswell's approach may be seen as similar to Strawson's, it amounts to
more than saying just that some propositions go unevaluated. For Cresswell,
the failure to satisfy an open individual results not in a proposition which is
unevaluated, but in no proposition at all.
Whether a Russellian or a Strawsonian view is adopted, the mere use of a
definite description does not involve a commitment to the actual existence of
individuals capable of satisfying it. Thus, since a proposition is true or false
in a possible world, it is not necessary, for instance, that the pope of Wales
should actually have some individual in its domain. It is necessary merely
that such an assignment be possible.
A different kind of presupposition is implied by the following:

(46) It is angry.

In such a case, it is obviously presupposed that the value of it has the property
of being animate. Such a presupposition, a sortal presupposition, differs from
an existential one in that the existence of the individual is given. What is
presupposed is its being of the appropriate sort. Thus, a proposition like that
denoted by (46) is true if and only if some proposition of the form F(x) is
true. Such a predicative proposition being presupposed, (46) asserts of the
individual concerned that at the time of utterance, that individual is angry.
Although the conditions for the truth of a sentence like (46) are clear
enough, it is not, I believe, so simple to decide whether failure of sortal
presuppositions leads to falsehood. If (46) is asserted, for example, of a
Types of semantic presupposition 143

building, it is probably better to say that it is vacuous rather than false - the
adjective angry simply does not appropriately apply in such a case.
Lakoff (1971b) showed how sortal presuppositions may often be signalled
only poorly in natural languages. Thus, for example, while the relative pro-
noun who characteristically marks its referent as being human, this rule is
frequently infringed, especially when the individual concerned occupies a
specially favoured place in the life of an addressor. While (47) is perfectly
grammatical, (48) is merely acceptable. (49), by contrast, is barely acceptable,
if at all.

(47) My sister, who keeps sheep, is very intelligent.

(48) My parrot, who thinks he's a vulture, won't speak to me.

(49) ?My car, who thinks I'm a millionaire, is very greedy.

While we most readily think of sortal presuppositions in terms of predicates
like angry, which denote properties of individuals directly, or in terms
of such pronouns as who where the presupposition is indirect, there is often a
subtle relationship between tenses and sortal presuppositions. A particularly
interesting case of this sort is (50), originally discussed by McCawley (1971):

(50) Did you see the exhibition?

As McCawley points out, (50) presupposes that it is no longer possible to
see the exhibition, while, in (51), the presupposition is that that opportunity
is still available at the time of utterance, i.e. that the exhibition is currently
running.

(51) Have you seen the exhibition?

This difference in sortal presupposition is frequently signalled, in English,
as in (50) and (51), by aspectual contrasts. The basis for such contrasts is, of
course, the present-relevance of present perfect beside the unmarked status
of the simple past. In a similar fashion, Columbus in (52) is presupposed to
be still living - and hence capable, in principle, of repeating the action -
while in (53) that presupposition is not present.

(52) Columbus has reached America.

(53) Columbus reached America.

The relationship between presuppositions and aspect is, of course, more
subtle than the above remarks suggest. It is, for example, important to
distinguish assertions involving intensional objects - open individuals - and

individuals proper. Thus, to repeat an example from chapter 4, present aspect
in (54) does not necessarily presuppose that the king of France is still living
at the time of utterance:

(54) The king of France has always been bald.

Consider next what is presupposed by the truth of the following:

(55) Sally regrets that she failed the test.

The presuppositions involved in such cases are known as "factive" since the
predicate of the main clause presupposes the factual status of its complement.
In (55), assuming the coreference of Sally and she, Sally may legitimately
regret what she takes to be the fact of having failed a test. Thus, (55) is true
if and only if (55a) is true and is vacuous otherwise:

(55) a. Sally failed the test.

The situation asserted to hold by the following is not, however, possible -
ignoring strained interpretations, such as a conspiratorial report:

(56) *Sally, who knows that Jack has not left, regrets that he has left.

Clearly, the reason why (56) is anomalous is that it presupposes two proposi-
tions which are mutually contradictory. In fact, since the relative and factive
clauses in (56) cancel each other out, the whole actually makes no predication
at all about Sally.
The classic study of so-called "factive verbs" like know, regret, be sur-
prised is Kiparsky - Kiparsky (1970) and they have been discussed exten-
sively in the context of formal semantics by Keenan (1972), Keenan-Faltz
(1985), McCawley (1981) and many others.
Whereas a verb like discover is factive, one like hope is nonfactive. Thus,
while (57) is normal, (58) is not.

(57) Sally hopes that Jack has left.

(58) * Sally, who discovered that Jack has left, hopes that he has left.

Nonfactive verbs do not presuppose the truth of their complements and thus
a sentence like (57) will be true or false depending solely on the truth of
the main clause. The case of (58) is, however, anomalous, stricto sensu,
because the factive and nonfactive clauses are contradictory. The nonfactive
hope presupposes the negation of a proposition implied by one asserting
discovered. If Hopes(x, y) is true, then Knows(x, y) must be false. However,
discovered clearly implies knows.

The contradictory situation reflected in (58) is not, of course, always
characteristic of sentences containing both factive and nonfactive clauses. Thus,
for example, know presupposes believe, so that, while the main clause of
(59) is uninformative, (59) is not contradictory.

(59) Sally, who knows that Jack has left, believes that he has left.

Further, although believe does not presuppose know, the following is, again,
not a contradiction:

(60) Sally, who believes that Jack has left, knows that he has left.

Rather, the relative in (60) is simply pointless.


The reason for the discrepancy in the relations between know and hope and
know and believe is to be found in the details of the lexical meaning of the
nonfactive verbs. Thus, while hope simply presupposes not know and asserts
want, believe has a number of meanings, only one of which is compatible
with know. It would be strange to agree to the proposition (if Knows(x, y),
then Believes(x, y)) and, at the same time, deny Believes(x, y) while asserting
Knows(x, y). Evidently, the belief which is presupposed by knowledge is of
a different sort to that which does not presuppose knowledge. If this were not
so, then to believe a proposition would be to know it and that is manifestly
absurd.
Consider, next, the presupposition involved in the following:

(61) Sally succeeded in breaking the egg.

(62) Sally happened to break the egg.

In (61), the verb succeed presupposes that an obstacle had to be overcome
in the breaking of the egg. In (62), happen presupposes that the breaking of
the egg was a chance event.
We may summarise McCawley's (1981) discussion of the kind of presup-
position involved as:

(63) p presupposes q and -q presupposes -p, but -p presupposes neither
q nor -q.

Thus, if (61) or (62) is true, so is:

(61/62) a. Sally broke the egg.

However, if (61) or (62) is false, it does not follow that (61/62a) must be
false.

Verbs which have this presuppositional property are called "implicative"
and the problems which they present in the assigning of truth values to
sentences are considerable (Karttunen 1971). As an intuitive justification of
the above summary of McCawley's general argument, consider the situation
depicted in (61). Suppose that Sally, after some difficulty, broke the egg. In
that case, (61) is indeed true and so is the presupposed proposition that she
broke the egg: p presupposes q. Of course, if the egg was not broken, then
Sally did not succeed in breaking it: -q presupposes -p. However, if Sally
broke the egg without effort, (61) is not true, even though the egg is broken.
Finally, if, in spite of her efforts, Sally failed to break the egg, then (61) is
again false but, now, the egg is unbroken. These last two cases prove that -p
does not presuppose q or -q.
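The schema in (63) can be checked mechanically. The sketch below models succeed, purely for illustration, as "made an effortful attempt and the egg broke", with q the bare proposition that the egg broke; the encoding is mine, not McCawley's:

```python
from itertools import product

# Each scenario records: did Sally make an effortful attempt, did the egg break?
scenarios = list(product((False, True), repeat=2))

def p(effort, broke):   # (61): "Sally succeeded in breaking the egg"
    return effort and broke

def q(effort, broke):   # "Sally broke the egg"
    return broke

# p presupposes q: every scenario making (61) true makes q true.
assert all(q(e, b) for (e, b) in scenarios if p(e, b))

# -q presupposes -p: whenever the egg is unbroken, (61) is false.
assert all(not p(e, b) for (e, b) in scenarios if not q(e, b))

# -p presupposes neither q nor -q: (61) can be false with the egg
# broken (no effortful attempt) or unbroken (a failed attempt).
assert {q(e, b) for (e, b) in scenarios if not p(e, b)} == {False, True}
```

The last assertion captures the two cases discussed above: effortless breaking and unsuccessful effort both falsify (61), one with the egg broken and one without.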
One striking feature of this kind of presupposition is that it does not obey
the usual criterion for semantic presupposition, namely, that in the event of q
being false, ρ is simply vacuous. Since it is unreasonable to claim that such
presuppositions are pragmatic rather than semantic - truth values are, after
all, central - it would appear that the definition of semantic presupposition
given earlier should be weakened so that, when the presupposition is false,
the presupposing sentence may either be false or vacuous. I return to the
assignment of truth values shortly.
Finally, let us consider further the relation between negation and presup-
position. It is well-known that this relation is not always straightforward. A
classic example is provided by the interpretation of verb pairs which contrast
in respect of presupposition and assertion, as in:

(64) Jack didn't blame Sally for cheating him.

(65) Jack didn't criticise Sally for cheating him.

Criticise presupposes that an action was performed and asserts that it was
bad. Blame, on the other hand, presupposes that an action was bad and asserts
that it was performed by a given individual.
The negative in (64) is ambiguous in that it might have either the presup-
position or the assertion in its scope, as is demonstrated by the normality of
the following extensions:

(64) a. Jack didn't blame Sally for cheating him because he knew she hadn't
done so.

b. Jack didn't blame Sally for cheating him because he felt she was
justified in doing so.

In (64a), it is the assertion which is negated, not the presupposition. In (64b),
it is the presupposition, not the assertion, which is denied. Similar considerations
hold, mutatis mutandis, for (65). Thus, such sentences must be disambiguated
before a truth value can be assigned.
Of course, the relations holding between such pairs as (64) and (65) de-
pend, crucially, on the scope of the negation. Thus, if (64a) is what is intended
by (64), then the latter is equivalent to the presuppositional negation of (65).
If, on the other hand, by (64) we intend (64b), then it is equivalent to the
negation of the assertion in (65).
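The blame/criticise contrast can be sketched schematically. The decomposition below, with its labels and the negate helper, is a hypothetical illustration rather than an analysis from the literature:

```python
# Hypothetical decomposition: each verb pairs a presupposition with an assertion.
BLAME     = {"presupposition": "the action was bad",
             "assertion":      "Sally performed the action"}
CRITICISE = {"presupposition": "Sally performed the action",
             "assertion":      "the action was bad"}

def negate(verb, scope):
    """Return the reading of 'Jack didn't V Sally for cheating him' in which
    negation targets the named component, leaving the other untouched."""
    reading = dict(verb)
    reading[scope] = "NOT(" + reading[scope] + ")"
    return reading

# Reading (64a): the assertion of blame is denied - she did not do it.
print(negate(BLAME, "assertion")["assertion"])          # NOT(Sally performed the action)
# Reading (64b): the presupposition is denied - the action was not bad.
print(negate(BLAME, "presupposition")["presupposition"])  # NOT(the action was bad)
```

Note that the component denied in reading (64a) is the very proposition that criticise presupposes, which is why negating the assertion of one verb corresponds to negating the presupposition of the other.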

6.8 Truth-value gaps

When one says of a sentence like (66) that its value is "vacuous" if the
complement is false, it amounts to saying that the proposition it denotes is
neither true nor false. Many scholars use the term "truth-value gap" for this
situation.

(66) Sally was astonished that Jack had left.

The possibility of a truth-value gap existing is not entirely without
consequences. If we insist on a two-valued system, {0,1}, as in Montague (1968,
1970, 1973), or Cresswell (1973, 1985), and others, the most reasonable so-
lution is to leave such expressions unevaluated. This may, as we have seen,
be done by regarding them as open propositions, i.e. as functions whose ar-
gument place is unfilled. The alternative would be to regard them as false,
but that might have unwanted consequences, especially when combinations
using logical constants need to be evaluated.
Another way of treating such gaps would be to regard them as constitut-
ing a third truth value. The Polish logician Łukasiewicz (1920), presented in
English by Rescher (1969), developed a three-value system, {0, I, 1}, where
I represents an "indeterminate" value. Łukasiewicz's logic permits the valu-
ation of sentences like Aristotle's:

(67) There will be a sea battle tomorrow.

which is obviously neither true nor false.


To assign sentences like (66) the value I when they suffer presupposi-
tional failure does not seem, to me, to be appropriate since they differ quite
markedly from cases like (67), where that assignment might be justified.
While the actual proposition which (67) denotes will never have either of the

two classical truth values, the state of affairs which it proclaims relative to
the moment of utterance surely will or will not come into being. (66), on the
other hand, is not merely indeterminate by virtue of temporal contingency, it
relates to no state of affairs at all. Yet (66) does not actually misrepresent a
state of affairs in the way a false proposition does.
I shall not, here, take up the question of truth-value assignments to sen-
tences like (67) beyond repeating that they are true if and only if the cor-
responding present tensed sentence is true at some moment appropriately
related to their time of utterance - which seems, in all honesty, to amount
to saying very little. However, for the intuitive reasons presented above, I
take it that a truth-value gap introduced by the relevant assignment to the
presupposition of (66) ought not to be filled by a third, indeterminate value,
but should be regarded as an open space in a two value system. This space
may be represented, for convenience, in the manner of McCawley (1981), by
the symbol #.
Van Fraassen (1969) developed a system for assigning values to propo-
sitions which include truth-value gaps without departing from the classical
two-value systems. This method, "supervaluation", is also discussed in Mc-
Cawley (1981). The remarks below will be fairly informal.
To understand how supervaluation works, it is necessary to appreciate that
it assigns 0 or 1 to a complex proposition containing truth-value gaps, i.e.
when some component is valuated to #, provided those gaps make no difference
to the final assignment. To illustrate: if the connective is →, then, since
((0 → {0,1}) = 1), it makes no difference whether the consequent is 0 or 1.
Hence ((0 → #) = 1).
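The computation just illustrated can be expressed as a small helper. The strategy of completing each gap both ways is my gloss on the idea, not Van Fraassen's own formulation:

```python
GAP = "#"

def implies(p, q):
    """Material implication lifted to truth-value gaps: complete each '#'
    argument both ways and keep the classical result only if every
    completion agrees; otherwise the gap survives."""
    def completions(x):
        return (0, 1) if x == GAP else (x,)
    results = {int((not a) or b) for a in completions(p) for b in completions(q)}
    return results.pop() if len(results) == 1 else GAP

print(implies(0, GAP))  # 1: with a false antecedent the gap cannot matter
print(implies(1, GAP))  # '#': here the consequent's value would decide
```

A purely local rule of this kind cannot detect tautologies: it assigns # to (# → #) even when antecedent and consequent are the same proposition. Capturing that case requires quantifying over whole valuations, which is what supervaluation proper does.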
Let L be a presuppositional language. Then the vocabulary and syntax of
L define its wellformed sentences and its semantics specifies the mappings
of those sentences into {0,1}.
Van Fraassen's (1969) development of supervaluation relies upon a prior
relation, C, of "classical necessitation" defined as follows, where X is some
set of sentences and A is a sentence:

(D.3) X C A if and only if every classical valuation which assigns 1 to
every member of X [i.e. satisfies X] also assigns 1 to A [i.e. satisfies
A].

To illustrate, using Van Fraassen's own example: let L contain only the
two sentences (68) and (69) and let its logical operators be just disjunction
and negation.

(68) The king of France is wise. = p.



(69) The king of France exists. = q.

Then the set of classical valuations, V, mapping p and q into {0,1} will be
just those satisfying the following.

(70) v(p ∨ q) = 0 iff v(p) = v(q) = 0.

(70) a. v(-A) = 1 iff v(A) = 0.

Thus, as expected, the set, V, of classical valuations permits no truth-value
gaps and any sentence necessitated by a classical valuation is classically
necessitated.
In addition to classical necessitation, C, Van Fraassen postulates a relation,
N, of "nonclassical necessitation", which "exhibits the presuppositions and,
perhaps, other cases of necessitation not reflected in the construction of the
classical valuations." N will, for example, reflect the factive status of verbs
like know, or the existential presupposition which is a part of the semantics
of the. This relation combines with C to define the notion of a "saturated"
set, G, of sentences, as follows:

(D.4) A set G of sentences is saturated if and only if (a) there is a classical
valuation which satisfies G, (b) if X is a subset of G and X C A,
then A is in G, (c) if X is a subset of G and X N A, then A is in G.

Given these definitions, Van Fraassen defines supervaluation as:

(D.5) The supervaluation induced by a set of sentences X is a function s
which (a) assigns 1 to every sentence which is assigned 1 by every
classical valuation satisfying X, (b) assigns 0 to every sentence which
is assigned 0 by every classical valuation satisfying X, (c) is not
defined for any other sentence.

The import of (c) is that A is assigned # when neither (a) nor (b) is met.
An "admissible valuation for a presuppositional language L" is then de-
fined as:

(D.6) A supervaluation induced by a set of saturated sentences.
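Given (D.5), the core of supervaluation can be sketched directly: enumerate the classical valuations satisfying X and compare their verdicts. The atoms and helper names below are my own illustrative choices:

```python
from itertools import product

ATOMS = ("p", "q")   # as in Van Fraassen's two-sentence language L

def classical_valuations(constraints=()):
    """Every classical assignment to ATOMS satisfying each constraint
    (a constraint is a function from a valuation to a truth value)."""
    out = []
    for bits in product((0, 1), repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(c(v) for c in constraints):
            out.append(v)
    return out

def supervaluate(sentence, valuations):
    """(D.5): 1 if every valuation satisfies the sentence, 0 if none does,
    undefined (rendered here as '#') otherwise."""
    verdicts = {int(bool(sentence(v))) for v in valuations}
    if verdicts == {1}: return 1
    if verdicts == {0}: return 0
    return "#"

V = classical_valuations()   # empty X: all four classical valuations
print(supervaluate(lambda v: v["p"], V))                 # '#': p itself is gappy
print(supervaluate(lambda v: v["p"] or not v["p"], V))   # 1: a tautology survives the gap
print(supervaluate(lambda v: v["p"] and not v["p"], V))  # 0: a contradiction is still false
```

On this account a contradiction with gappy conjuncts still comes out 0, which is the source of the 0/# alternation in the conjunction table below: the verdict depends on the sentence as a whole, not just on the values of its parts.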

Naturally, if we include # among the valuations of propositions, the
classical truth tables must be enlarged upon. Drawing upon McCawley (1981), I

(71) Additional rows for supervaluatory truth tables



Table a. Conjunction

p    q    (p&q)
1    #    #
#    1    #
0    #    0
#    0    0
#    #    0/#

Comment: Since supervaluations rely on classical valuations, no conjunction
containing a false proposition can be true; hence rows with 0 yield 0. If the
classical valuations which make every member of X true make p true, but q
evaluates to #, then the set Vx(p&q) yields #. Thus, a sentence like:

a.a. 8 + 7 = 15 and Sally regrets that Jack, who has not left, has left.

has # as its value, not 0.


The last line displays alternatives between 0 and #. Obviously, any contra-
diction, including one with conjuncts evaluated to #, evaluates to 0. However,
if some members of the supervaluation make one conjunct true and the other
false, while others assign opposite values, the whole is #.

Table b. Negation

p    -p
#    #

Comment: Obviously, any classical valuation which makes each member of
X true will assign opposite values to p and -p. Therefore, if some admissible
valuations assign p 0 and others assign p 1, so that p = #, then the assignment
to -p must also be #.

Table c. Inclusive disjunction

p    q    (p ∨ q)
1    #    1
#    1    1
0    #    #
#    0    #
#    #    1/#

Comment: Since inclusive and exclusive disjunction differ only in respect
of the disjunction (1 or 1), they agree in their valuation for #. Since any

disjunction containing at least one true proposition is true, 1 disjoins # to
yield 1. If the classical valuations which make each member of X true make
p false, but some nonclassical ones make q true and others make q false, then,
clearly, some will make (p ∨ q) true and others will make it false. Thus, 0
disjoins # to yield #. The valuation for (# or #) will be 1 in a tautology, but
will otherwise be #.

Table d. Implication

p    q    (p → q)
1    #    #
#    1    1
0    #    1
#    0    #
#    #    1/#

Comment: The two alternative types of implication do not fully agree in
respect of # since they differ in the evaluations of (1 if 0) and (0 if 1). Since
both of these are false in the case of ↔, both will valuate to # when #
combines with 0.
As in the case of disjunction, (# if #) evaluates to 1 or # depending upon
the values of p and q. If the implication constitutes a tautology, then it always
evaluates to 1. An instance, presuming presuppositional failure, is provided
by:

d.a. If Jack regrets that he is Molly's brother, then he regrets that he is
Molly's brother.

It is apparent that the relation, N, in (p N q), is not commutative. If p
is true, then N necessitates the truth of q. However, if q is true, it is not
necessary that p be true. If (72) is true, then Sally must love Jack. However,
if Sally does indeed love him, it does not follow that Jack must know it.

(72) Jack knows that Sally loves him.

Thus, the relation, N, does not necessitate the truth of p whenever q is true.
It follows that, if (p N q) and (-p N q), then whenever q is 0 or #, a truth-value
gap is forced upon us and the whole is #. This is, of course, precisely as the
earlier Strawsonian definition of semantic presupposition requires.
Considering, briefly, the relation between semantic presupposition and
logical entailment, we have already observed that these relations are distinct
since, in the latter, -q implies -p. The case of implicative verbs is worth
restating in this respect. If (73) is true, then Jack managed to find the book.

However, if the book was not found, it does not follow that (73) is false
since Jack may not have tried to find it, in which case, the whole is simply
pointless.

(73) Jack managed to find the book.

The interesting point is, as in the earlier discussion of succeed, that, at first
glance, one would be inclined to say that if the book was not found, then Jack
did not manage to find it. (73) does, in fact, presuppose many propositions,
two of which are:

(73) a. The book was found.

b. Jack tried to find the book.

It is the falsehood of (73b) which renders an utterance of (73) pointless when
(73a) is false. In order for the whole to be truly false, (73b) must be true
while (73a) is false.
As the above remarks suggest, implicative verbs like manage are like
blame and criticise in that they require to be analysed both in terms of
presuppositions and assertions. Manage presupposes try and asserts achieve.
It is in respect of this assertion, not the presupposition, that it contrasts with
fail and similarly for other pairs.

6.9 Primary and secondary presuppositions

Belnap (1969) appeals to Leonard's (1967) proposal that presuppositions
should be regarded as primary or secondary. The latter are implied by the
former variety. Thus, since existential propositions are presuppositionless,
existential presuppositions must be secondary. This notion has obvious ap-
plication in the analysis of the sets of presuppositions which are involved in
given sentences. A typical illustration is provided by:

(74) My wife is the one who made a match between two people living in
Paris.

Evidently, such a sentence involves at least the following primary and sec-
ondary presuppositions:

(74) a. The speaker of (74) is a male. = secondary to:

b. The speaker of (74) had a wife at the time of utterance. = primary.



c. Two people, x and y, were in a position to marry. = secondary to:

d. x and y got married. = primary.

There are a number of issues which obviously arise from the contemplation
of such a set of presuppositions.
First, it is difficult to know exactly how many presuppositions are present.
In part, this problem involves questions of lexical decomposition. Is it nec-
essary, for instance, to spell out all of the sortal presuppositions involved
in the appropriate use of wife? In part, it bears on questions of background
knowledge. For example, what does it involve for someone to make a match
over and above facilitating a marriage? Another sort of problem arises from
the pervasive ambiguity of such auxiliary notions as tense. Is it the case, for
example, that the two people referred to in (74) now live in Paris? Clearly,
such an issue has a bearing on the truth value assigned to the conjunction of
the presuppositions and, hence, to the value ultimately assigned to (74) itself.
Another problem, flowing in part from that just mentioned, is to know
just which presuppositions are crucial to the semantic description of a sen-
tence and which are marginal. McCawley (1981) quotes Donnellan's (1966)
example to illustrate this point:

(75) The man in the corner with a martini in his hand has just been hired
by Stamford.

If it turns out that the man in question was not drinking a martini, it is
surely not reasonable, if the description is adequate to pick out the correct
individual, to claim that (75) is false.
It would seem that, at least in part, these problems should be approached
within a theory of context. Before discussing McCawley's (1981) proposals
in that regard, I shall turn to the related question of the presuppositional
nature of questions.

6.10 Presuppositions and questions

I referred in the previous section to the work of Leonard (1967), who was
responsible, among other achievements, for the development of so-called
"erotetic" logic, which enables the study of the logic of questions. Among
the issues which receive particular attention within erotetic logic is the role
of presuppositions in the use and interpretation of questions. I shall, in what
follows, take the discussion of Belnap (1969) as my starting point.

Quoting Leonard, Belnap gives the following definition of an erotetic
presupposition:

(D.7) Any proposition whose truth is necessary to the validity of the ques-
tion.

where, by a valid question, is understood one which has a correct answer.
From this, Belnap arrives at the following definition of the relevant
presuppositional relation:

(D.8) A question q presupposes a sentence A if and only if the truth of A
is a logically necessary condition for there being some true answer
to q.

Given the famous/infamous example (76), the direct answer - the state-
ment which is directly and precisely responsive - must be chosen from (76a).

(76) Has Jack stopped beating his wife?

a. Jack has/has not stopped beating his wife. i.e. Yes/no.

From this, it follows that:

(D.9) A is a presupposition of q if and only if every direct answer to q
logically implies A.

Thus, the most important secondary presupposition of (76) - the one which
is not merely necessary for its direct answers to be true, but is necessary and
sufficient - is:

(76) b. Jack, at least prior to the time of the appropriate utterance, beat his
wife.
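Definition (D.9) can be illustrated with a toy possible-worlds model of (76); the world encoding and all names below are hypothetical:

```python
# Each world records (ever_beat, still_beats); a world where Jack still beats
# his wife but never beat her is excluded as incoherent.
WORLDS = [(ever, still)
          for ever in (False, True)
          for still in (False, True)
          if ever or not still]

# The two direct answers to (76), each modelled as a set of worlds.
YES = {w for w in WORLDS if w == (True, False)}   # he has stopped
NO  = {w for w in WORLDS if w == (True, True)}    # he has not stopped

def presupposed(prop):
    """(D.9): prop is a presupposition of (76) iff every direct answer
    logically implies it, i.e. it holds throughout each answer set."""
    return all(all(prop(w) for w in answer) for answer in (YES, NO))

print(presupposed(lambda w: w[0]))   # True: (76b) "Jack beat his wife"
print(presupposed(lambda w: w[1]))   # False: the answers disagree on this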

Of course, as with statements, questions have a variety of secondary
presuppositions which, while they are necessary to the truth of the question, are
not, in themselves, sufficient. Thus, for example, (76b) presupposes:

(76) c. Jack had a wife.

Since most questions are put on the basis of substantive presuppositions,
it is reasonable to say that they may be falsely put as well as truly so. For
example, the following is self-evidently a false question:
example, the following is self-evidently a false question:

(77) Do four-sided figures have three sides?



(77) is a false question because its direct answer presupposes a contradiction.


Thus, as Belnap claims, it is appropriate to speak of questions as having truth
values.
While most questions presuppose propositions, there are, as with existen-
tial statements, those which are presuppositionless. An instance is provided
by the existential question:

(78) Are there dragons?

Such questions are, obviously, either true or false depending on the value
assigned to the sentence which is their direct answer. In particular, they
cannot have the value #.
In Belnap's own treatment, a gapless, Russellian, view of presuppositions
is adopted over the "gappy", Strawsonian one. However, I can see no tech-
nical reason for preferring the former and, consequently, assume that the
definitions for erotetic presupposition above are Strawsonian. Certainly, the
fact of presuppositionless questions is not a good reason for rejecting this
treatment.
It follows that, as with their declarative counterparts, questions may some-
times be assigned the vacuous value #. A candidate would be the following:

(79) Does Sally regret that Jack has left?

Presuppositional failure of the direct answer to (79) would require that the
whole be valuated to #. Another case is provided by:

(80) Is the present king of France wise?

6.11 Pragmatic presuppositions

What has been said in the last three sections essentially has to do with
truth value assignments and, hence, with semantic presuppositions, in the
conventional sense. I now turn to the question of pragmatic presuppositions
which do not, in any normal sense, rest upon satisfaction in terms of truth-
valuation.
It will be recalled that some preparatory conditions for the satisfiability
of an imperative are presuppositional in character. If an addressor utters a
command when it is clear that she/he is not in a position to enforce it, then the
command may not be satisfied, but that failure has no bearing on whether it
is regarded as true or false. Such a failure is contextual in that the conditions

necessary for the performance of the speech act concerned are not met and,
hence, the utterance itself is inappropriate or infelicitous rather than false.
A different kind of pragmatic presuppositional failure is given by infelic-
itous utterances of sentences like (81) which rely upon a presupposition of
gender:

(81) Someone has left her case behind.

As observed earlier, if the value of someone turns out to be male, the utterance
becomes inappropriate because it incorrectly draws upon a particular sortal
presupposition. Even so, if it is indeed true that a case has been left behind by
its owner, the proposition itself cannot reasonably be said to be false. Such
instances are, of course, similar in kind to Donnellan's cases of partially
incorrect definite descriptions referred to earlier, which confirms that the
divide between semantic and pragmatic presuppositions is not always a clear
one.
Of particular interest is the kind of pragmatic presupposition which Van
Dijk (1977) defines as: "Any proposition expressed by [a sentence] S which
the speaker assumes to be known to the hearer."
The distinction between presupposition and assertion - roughly parallel to
that between topic and comment/focus - has already figured in this chapter,
section 6.6, and it is this dichotomy which lies at the heart of this kind of
pragmatic presupposition.
The distinction is clearly illustrated by the well-known fact that, in some
languages, including English, what is presumed to be old information, the
topic, tends to appear in actual utterances to the left of what is regarded as
new information, the comment. Thus, as I said in connection with an example
provided earlier and repeated as:

(82) Because he has read Darwin, Percy is well-informed.

the fact of Percy's having read Darwin is presupposed to be shared knowledge
or belief between the participants in the discourse. The assertion, given
this presupposition, is that as a consequence, Percy is well-informed. It is
apparent, of course, that such inferences presume a normal arrangement of
information, a presumption which may be questionable.
In like manner, the subject in (83) is usually taken to provide the topic of
the utterance and the predicate the comment:

(83) Percy is a student.

Here, the truth of the existential proposition:



(83) a. Percy exists.

is taken for granted and Percy's being a student is asserted to be true. As
indicated earlier, parallel considerations explain the use of the definite article
in:

(84) The girl you met is very clever.

If what is presupposed as old information turns out not to enjoy that
status, then the utterance may be said to be inappropriate. Thus, nonreferring
pronouns often give rise to appeals for identification, as in the following
sequence:

(85) (a): He's left,


(b): Who's left?

Similarly, if the presupposition in (82) that Percy has read Darwin does not,
in fact, reflect shared knowledge between the participants, the statement is
likely to lead to some question to that effect, such as:

(82) a. Has Percy read Darwin?

Van Dijk (1977) shows that pragmatic presuppositions may be established
and/or altered at different points in a discourse - McCawley's (1981) "contextual
domains". Since, in (82), the presupposed proposition is stated in the
form of a subordinate clause, it may, but need not, have been established prior
to the discourse in which the sentence occurs. In contrast, the co-ordinating
conjunction and may set up a presuppositional background by establishing a
context in which some fact is to be interpreted. In this "world-determining"
function, the conjunction links two facts presumed to be new information to
the addressee, but the first, once conveyed, supports the second, as in:

(86) Percy went to the races and won a great deal of money.

In such cases, it would appear that the question of appropriateness will turn
on such factors as general background knowledge. If the addressee is familiar
with the general context of situation presupposed by the context-providing
first clause, he or she will, or will not, regard the assertion made in the second
as probable or improbable and judge the conjunction as appropriate or not
so.
I am not convinced that, because and can be used to link a world-
determining clause with an assertion, we ought to regard the conjunction
itself as having a non-logical, pragmatic function. I shall not, therefore, treat
it as such in my semantic rules, chapter 8. However, such conjunctions seem

to provide a basis for understanding - or beginning to understand - some


of the semantic presuppositional problems surrounding an array of sentences
discussed by McCawley (1981), drawing on the work of Karttunen (1971,
1974).
Consider what is presupposed by:

(87) Thatcher is Welsh and Bush regrets being Welsh.

Clearly, the only presuppositions of the first conjunct are existential ones.
The second conjunct, in addition to the existential presupposition involving
Bush, also presupposes the truth of its factive complement, namely:

(87)a. Bush is Welsh.

In fact, of course, (87a) is false, as is the first conjunct in (87). Now consider:

(88) Thatcher is Welsh and regrets being so.

In this case, the first conjunct is identical with the factive presupposition of
the second, namely:

(88) a. Thatcher is Welsh.

Karttunen concluded that pairs like (87) and (88) are to be assigned dif-
ferent truth values, presumably, 0 in the case of (87) and # in that of (88).
McCawley, himself, was unsure of Karttunen's claim. However, to me, it
seems very plausible in light of the pragmatic differences between the two
cases and the earlier discussion of truth-value gaps.
In (87), and conjoins two assertions which are introduced as new informa-
tion. In the case of (88), on the other hand, and conjoins a world-determining
clause with an assertion.
In (87), the first conjunct is false and the second is clearly #, so that the
whole is 0, in accordance with the usual value assignments for &. In (88), the
first conjunct is identical with the proposition expressed by the complement
of the verb regrets in the second and the whole is, therefore, equivalent to:

(88) b. Thatcher regrets being Welsh.

Under the accepted definition of semantic presupposition, (88b) is # and so


it follows that (88) must also be #.
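The value assignments appealed to here can be made concrete in a short sketch. The following Python fragment (my own illustrative encoding, with the gap # rendered as the string '#') implements a strong-Kleene-style table for & on which a false conjunct makes the whole false and a gap otherwise propagates:

```python
# Three truth values: 0 (false), 1 (true), '#' (truth-value gap).
# A false conjunct makes the whole false; otherwise a gap propagates.
# (My own encoding of the value assignments described in the text.)
def conj(p, q):
    if p == 0 or q == 0:
        return 0
    if p == '#' or q == '#':
        return '#'
    return 1

conj(0, '#')   # (87): first conjunct false, second #, whole 0
conj(1, '#')   # a true first conjunct with a gapped second yields '#'
```

Note that (88) comes out # not by this table alone but through its equivalence to (88b), as argued above.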
What makes cases like (88) special is, of course, the relation between the
first conjunct and the presupposition (88a). It is not, however, necessary that
that relation be one of identity. Mere entailment is sufficient to bestow the
same pragmatic status on a conjunction, as the following shows:

(89) A number of farmers went bankrupt and it was sad that farmers went
bankrupt.

Here, the relevant presupposition is, of course:

(89) a. There were farmers who went bankrupt.

and the first clause is world-determining.


The acceptability of sentences like (89) makes it clear that we can have
conjunctions in which the first conjunct is false, but the whole #, even when
the weaker relation of entailment is involved, as the following demonstrates:

(90) Everyone is talking about Bush and Bush is delighted that people are
talking about him.

In this example, as in one discussed by McCawley (1981), it is possible that


the presupposition of the complement clause:

(90) a. People are talking about Bush.

is false and, if so, the second conjunct is #. However, if (90a) is false, then so
is the first conjunct of (90). Thus, since (90a) is entailed by the first conjunct
of (90), the sentence as a whole entails:

(90) b. Bush is delighted that people are talking about him.

Thus, it follows that (90) is # if (90b) is #.


As mentioned earlier, in my remarks on because, the presuppositions of a
sentence frequently depend not upon explicit assertion, but rather on beliefs
or opinions which are taken to be shared by the participants in a discourse
as part of their general cultural background. Thus, for example, (91) may
correctly presuppose (91a) and (91b) for a large number of speakers.

(91) Bishop Jenkins' chaplain is Irish and the bishop is glad to have a
good speaker in his administration.

a. A bishop's chaplain is part of his administration.

b. Irish people are good speakers.

For those for whom such presuppositions hold, the first conjunct of (91)
implies:

(91) c. Bishop Jenkins' chaplain is a good speaker.



Thus, for such people, the first conjunct of (91) involves the presupposition
of the factive complement of the second. Hence, if the first conjunct is false
and the second is #, the whole is #.
Obviously, any conjunction must presuppose the presuppositions of its
first conjunct, as is easily seen by:

(92) Percy enjoys being a student and is determined to become an engi-


neer.

which necessarily presupposes:

(92) a. Percy is a student.

Given this fact, it seems reasonable to say that a conjunction of two sentences
presupposes the presuppositions of the component atomic sentences accord-
ing to a clear pattern. For me, the situation is summarised in the following
condition.

(D.10) A conjunction (p & q) presupposes a proposition r if p presupposes
r, or q presupposes r and r is implied by p.

Thus, under this definition, (88) is # in the conditions stated.
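Condition (D.10) is mechanical enough to be stated as a procedure. In the following Python sketch (an illustrative encoding of my own, with propositions as mere labels and the presupposition and implication relations supplied explicitly), the condition is checked directly:

```python
# (D.10) as a procedure: a conjunction (p & q) presupposes r iff
# p presupposes r, or q presupposes r and r is implied by p.
# The relations here are invented for illustration.
def conj_presupposes(p, q, r, presup, implies):
    return (r in presup.get(p, set()) or
            (r in presup.get(q, set()) and (p, r) in implies))

# Example (88): p = 'Thatcher is Welsh', q = 'Thatcher regrets being
# Welsh', r = the factive presupposition of q, here identical with p,
# so p trivially implies r.
presup = {'q': {'r'}}
implies = {('p', 'r')}
conj_presupposes('p', 'q', 'r', presup, implies)   # True: (p & q) presupposes r
```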


I take the same condition to hold for strict implication as for conjunction.
Thus, the truth of (93) presupposes (93a), but (94) does not presuppose (94a)
since it may be #.

(93) If Percy drinks too much, then he regrets it.

a. Percy drinks too much.

(94) If Percy drinks too much, then Jack regrets doing so.

a. Jack drinks too much.

The situation in the case of disjunction seems to be very complicated, as


McCawley's (1981) discussion suggests.
Van Fraassen's rules establish a very simple assignment of 0, 1 and # for
or which makes it a matter of no technical consequence whether or not the
whole shares the presuppositions of its atomic parts. Any presuppositional
failure will lead to an assignment of # and if at least one disjunct is #, the
whole will be, at worst, #.
Let us take, as an example, the following, rather absurd case.

(95) Percy drinks too much or Sally regrets it.

For (95) to be true, (95a) must be true.



(95) a. Percy drinks too much.

If (95a) is false, then the second disjunct of (95) is # and the whole is #. In
fact, under no conditions at all is (95) false. It is, therefore,
equivalent to:

(96) Sally regrets that Percy drinks too much.

I describe (95) as "absurd". Its absurdity resides, of course, in the fact that,
apart from text books on logic, or linguistics, no context of use could possibly
accommodate it. (95) is, in a very straightforward sense, pragmatically point-
less. It is totally uninformative and is, therefore, reducible to the semantically
empty tautology:

(97) Percy drinks too much or he doesn't.
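The behaviour of (95) can be checked by brute enumeration. The sketch below (my own encoding, with '#' for the gap) implements the disjunction table described above and verifies that (95) is never assigned 0:

```python
# Three-valued disjunction: truth of either disjunct suffices;
# otherwise a gap propagates.  (Illustrative encoding.)
def disj(p, q):
    if p == 1 or q == 1:
        return 1
    if p == '#' or q == '#':
        return '#'
    return 0

# (95): the second disjunct presupposes the first, so it is # whenever
# 'Percy drinks too much' is false.  Enumerating the cases shows that
# (95) is never assigned 0.
cases = []
for percy in (0, 1):
    for sally_regrets in (0, 1):
        second = '#' if percy == 0 else sally_regrets
        cases.append(disj(percy, second))
assert 0 not in cases   # (95) cannot be false
```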

My discussion of pragmatic presuppositions has been extremely brief and,


I think, straightforward. The treatment in McCawley (1981), upon which I
have drawn, is considerably more detailed. He - and Karttunen - consider
problems of great subtlety. Superficial though my treatment is, however, it
should suffice to demonstrate the central role which the theory of presuppo-
sitions must play in pragmatics.
Chapter 7
Categorial grammar

7.1 Categorial grammar

In this chapter, I develop in greater detail the framework of the categorial


grammar alluded to in an informal way in chapter 3. My treatment, though
strongly influenced by Montague (1973), is eclectic. In particular, I draw
upon the extensive discussions of categorial grammar in Cresswell (1973,
1985).
As stated in the opening chapter, although what I have to say is, I hope,
relevant to natural language in general, being a native speaker of English
only, I centre the discussion entirely around that language.
As a preliminary, even though particular syntactic categories are closely
associated with particular parts of speech, the notion of Category is not iden-
tical with that of Part of speech. This point needs to be made in order to avoid
unnecessary complications. In my usage, "category" is a syntactic term equiv-
alent to the linguist's "syntactic function", abbreviated in such node labels
as "NP" and "VP".
Thus, when we claim that a given lexical item belongs to such and such a
category, we claim that it has such and such a syntactic function in a given
sentence, e.g. as a nominal, a verb phrase modifier, a noun modifier, etc.
Clearly, items from the same part of speech, such as adjectives, can belong to
different categories since they can function in different ways. Thus, we would
not wish to say that empty belongs to two different parts of speech because
it can appear attributively as well as predicatively. We would, however, want
to say that it can function in two different categories which correspond to
these two different uses. Similarly, when we say that furniture is a common
noun, we do not claim that it can only occur as the head of a noun phrase
and cannot, for example, modify another noun.
What I have just said is not, I believe, in conflict with Cresswell (1973,
1985) who insists that no expression can belong to more than one category.
Clearly, in any given logical representation, no expression can belong to
more than one category - the representation would otherwise be ambiguous.

My claim is merely that, in isolation, expressions may have the potential to


function in several syntactic ways.
In view of the fact that the plane of expression must be brought, as far as
possible, into at least a homomorphic relation with the plane of meaning, the
syntactic function of particular items should be reflected in their semantic
behaviour. I shall, therefore, usually support the syntactic assignments by
appeal to considerations of meaning. In many cases, such appeal will be
explicitly to the level of logical form. In keeping with the earlier discussion, I
shall, as in Cresswell (1973, 1985), employ lambda abstracts to link the syntax
and semantics. In the interests of brevity, however, the relevant formulae will
not always be written out in detail and will, sometimes, merely be assumed.
As well as seeking support for syntactic classifications from meaning,
I shall often follow the common practice of calling upon native speaker
intuitions regarding grammaticality.

7.2 A severely limited grammar

Let us suppose that we have a severely limited linguistic corpus in mind at the
outset which is, nonetheless, representative of important constructions in the
language to be described. It is usual to call a grammar designed to describe
such a sample of data a "sample grammar". As the data is increased and the
analysis progresses, the sample grammar will be expanded until, ultimately,
it is sufficiently powerful to describe the language in its entirety - I do not,
of course, aspire to such a development here. What I present below must be
regarded as a sample only.
If the language to be described consisted only of atomic sentences oc-
curring either alone or in combination through the operations of functions,
as in the propositional calculus, the categorial grammar would contain only
two categories. One, the category of sentence, would be basic. The other, the
category of functor, would be derived.
Thus, if t represents the category of sentence, the functor and is of cat-
egory (t/(t,t)). That is, given an unordered pair of sentences as argument,
the function denoted by and yields a sentence as value. Another way of
analysing conjunctions would be to say that they combine with a sentence to
form something that, in its turn, combines with another sentence to form a
sentence. Looked at that way, and is of category ((t/t)/t). I do not think there
is any advantage in adopting this latter treatment.

Of course, in a natural language, and conjoins two full sentences rather


infrequently. While (1) is perfectly formed, (1a) is more likely to occur.

(1) Percy swims and Sally swims.

a. Percy and Sally swim.

Since (1) and (1a) appear to have the same meaning, it is clear that they
should have equivalent logical form representations. This involves assuming
that, at that level, and does, in fact, join two full sentences. This situation is
easily accommodated by lambda abstraction - I return to the inclusion of the
lambda operator in the syntactic rules below. In the meantime, the underlying
structure for (1a) would be (1b), where Vb is a verb phrase variable:

(1) b. ((λ,Vb ((Percy, Vb) and (Sally, Vb))) swim).

The corresponding structure for (1) would be (1c).

(1) c. ((Percy(λ,x(swims,x))) and (Sally(λ,y(swims,y)))).

Since, disregarding logical signs, (1b) and (1c) differ only in the number of
occurrences of swim, i.e. the set of swimmers is named twice in (1c), we
may assume that the two representations are equivalent.
It will be observed that such an analysis does not take account of the
peculiar use of and to express + in informal statements of arithmetic, as in:

(2) 7 and 7 make 14.

Clearly, in such cases, and is not the natural-language equivalent of math-


ematical multiplication. In such examples, it is unreasonable to argue that, at
any level of analysis, and joins two sentences. It obviously connects two unit
constituents, i.e. numbers. I shall not consider this arithmetical use further in
this study, though I shall briefly discuss the semantics of compound nominals
later, 8.2.
In contrast to functors like and, a one-place functor like not would be of
category (t/t).
If the language is more elaborate, as in the case of the predicate calculus,
then there will have to be as many categories as the task of describing the
complexities of internal sentence structure demand.
Term variables, {x, y, z, . . . }, function as names in the predicate calculus
and may be assigned to the category e. A one-place predicate will then be of
category (t/e).
Transferring this analysis to natural-language expressions, we may as-
sign intransitive verbs, such as walk, to the category (t/e). If we allow such

verbs to appear in lambda abstracts, then the term variables which form their
arguments and are bound by λ, are of category e. Thus, the logical form rep-
resentation of an intransitive verb phrase like walks will be (λ,x (walks,x)).
It might appear that the proper nouns of natural language should also, as
in chapter 3, be allotted to the basic category, e. However, since we employ
lambda abstracts to construct logical form, it proves more convenient to
think of them - as do Montague (1973) and Cresswell (1973, 1985) - as
one kind of nominal. Nominals, as we shall see shortly, take intransitive
verbs as arguments. If intransitive verbs are of category (t/e), it follows that
nominals, including proper nouns, are of category (t/(t/e)). This categorisation
is exploited in (1c), where the lambda abstracts are of category (t/e).
Cresswell (1985) provides the following elegant formation rules which
generalise categorial grammar.

(R.1) t and e are categories.

(R.2) If τ and σ1, σ2, ..., σn are categories, then (τ/(σ1,σ2,...,σn)) is a
category.

Rules (R.1) and (R.2) permit the construction of complex expressions out


of simple ones, basic or derived. The generalised rule for assigning syntactic
status to a complex expression is as follows.

(R.3) If δ ∈ (τ/(σ1,σ2,...,σn)), and α1, α2, ..., αn ∈ σ1, σ2, ..., σn,
respectively, then (δ, α1, α2, ..., αn) is an expression of category τ.

To illustrate: if α1 and α2 are sentences, i.e. members of category t, and δ is
a two-place functor of category (t/(t,t)), e.g. and, then the sequence (δ, α1, α2)
is a sentence. Again, if α1 is a nominal, member of category (t/(t/e)), and δ
is an intransitive verb, member of category (t/e), then the sequence (δ, α1) is
a sentence.
I return to rules which spell out these concatenations below (section 7.5).
In the meantime, using English, the illustrations just given might be clarified
by:

(3) (and(Sentence)(Sentence)).


(4) (runs(Percy)).
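Rules (R.1)-(R.3) can be simulated directly. In the Python sketch below (my own encoding, not the book's notation), a derived category (τ/(σ1,...,σn)) is a pair of a result category and a tuple of argument categories, and rule (R.3) becomes a type-checking application:

```python
# Categories per (R.1)-(R.3): 't' and 'e' are basic; a derived
# category (tau/(sigma1,...,sigman)) is encoded as the pair
# (tau, (sigma1, ..., sigman)).  Illustrative encoding only.
T, E = 't', 'e'

AND_CAT = (T, (T, T))        # and : (t/(t,t))
IV_CAT = (T, (E,))           # intransitive verb : (t/e)
NOM_CAT = (T, (IV_CAT,))     # nominal : (t/(t/e))

def apply_functor(functor_cat, arg_cats):
    """Rule (R.3): a functor applied to arguments of the expected
    categories yields an expression of the result category."""
    result, expected = functor_cat
    if tuple(arg_cats) != expected:
        raise TypeError('category mismatch')
    return result

apply_functor(AND_CAT, [T, T])    # two sentences joined by and -> t
apply_functor(NOM_CAT, [IV_CAT])  # nominal + intransitive verb -> t
```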

As observed, the categorial language is enriched by the addition of lambda


abstraction, necessitating the use of the abstraction operator λ and an infinite
set of variables of each category, including category e. The addition of lambda
abstraction enormously increases the power of the grammar since it facilitates
the creation of an infinite variety of complex predicates and, in principle, the

formation of complex expressions of any category whatever - the abstract


in (1b) above, for example, is a nominal. Moreover, as Cresswell (1973)
demonstrates, the use of lambdas enables representations in logical form to
approximate closely to the word order of surface structures.
For convenience, I repeat Cresswell's (1985) formation rule which governs
the creation of wellformed lambda expressions:
(R.4) If x is a variable in category σ and α is an expression in category
τ, then (λ,x α) is an expression in category (τ/σ).

The operation of this rule is exemplified in many formulae in chapter 3.


As a further instance, the representation (1b) consists of two propositional
functions, (Percy,Vb) and (Sally,Vb), which are conjoined by and to yield
an expression of category t, namely:
(1) d. ((Percy, Vb) and (Sally, Vb)).
This open expression becomes the scope of a lambda operator, yielding a
nominal of category (t/(t/e)), namely:

(1) e. (λ,Vb ((Percy, Vb) and (Sally, Vb)))

This abstract is provided with a value for its free variable, Vb, namely, swim,
yielding the representation (1b). (1b) converts into the surface (1a) simply by
deleting the lambda and free variables and removing brackets. Let us, with
Cresswell (1973), call this "logical deletion".
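The passage from (1b) to the surface (1a) by logical deletion can be mimicked on a toy encoding of logical forms as nested tuples. The sketch below (my own encoding, not Cresswell's formalism) erases lambdas, bound-variable occurrences and brackets:

```python
# Logical forms as nested tuples; a lambda abstract is encoded as
# ('lam', variable, body).  "Logical deletion" erases lambdas, bound
# variable occurrences and brackets, leaving a surface-like string.
def logical_deletion(term, bound=frozenset()):
    if isinstance(term, tuple):
        if term and term[0] == 'lam':
            return logical_deletion(term[2], bound | {term[1]})
        words = []
        for part in term:
            words += logical_deletion(part, bound)
        return words
    return [] if term in bound else [term]

# (1b): ((lambda,Vb ((Percy,Vb) and (Sally,Vb))) swim)
lf_1b = (('lam', 'Vb', (('Percy', 'Vb'), 'and', ('Sally', 'Vb'))), 'swim')
surface = ' '.join(logical_deletion(lf_1b))   # 'Percy and Sally swim'
```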

7.3 Some category assignments

The examples given so far are uninterestingly straightforward. The need for
the system to analyse sentences so as to reflect the compositional nature of
meaning and provide for their interpretation through a system of semantic
rules forces us to face many difficulties which might not otherwise be promi-
nent. Thus, while it is obvious that words like Percy function as names, it is
not so readily apparent that a word like somebody does not also function as
a name. While it is evident that a word like runs belongs to category (t/e),
it is unclear whether an item like loves should be thought of as combining
with a nominal to form an intransitive verb or with two nominals to form a
sentence. The question whether to take that as part of a verb of propositional
attitude or as a separate complementiser - perhaps with several functions -
was seen in chapter 4 to be a far from simple matter.

In this section, I shall discuss some important categories in English.

7.3.1 Nominals and intransitive verbs


Consider, first, the status of a quantifier word like somebody. I have referred
already to the fact that we can conveniently think of all nominals, in the sense
of Cresswell (1973), as of category (t/(t/e)). The exempla of this category
have, so far, been proper nouns, like Percy.
Quantifier words like somebody and proper nouns are syntactically alike
in many respects. They both function, for example, as subject to a verb or
as direct object to a transitive verb; they do not take plural morphemes, etc.
Thus, since proper nouns are of category (t/(t/e)), so are quantifier words.
While proper nouns are basic, however, quantifier words are derived, e.g.
some + one.
Reflecting the basic/derived distinction, proper nouns and quantifier words
do not have precisely the same semantic function. While the former denote
uniquely valued functions (chapter 1), equivalently, refer to individuals di-
rectly, the latter denote higher order functions on one-place predicates. Thus,
whereas proper nouns denote arguments to functions, quantifier words denote
functions which take arguments. (5) is true iff the individual denoted by Sally
is in the set denoted by runs. By contrast, (6) is true iff the property denoted
by runs has the property of being a property of some individual.

(5) Sally runs.

(6) Somebody runs.

A well-known and convincing demonstration of the incorrectness of view-


ing proper nouns and quantifier words as denoting the same semantical object
is provided by their behaviour under negation. To negate a sentence like (5)
is to assert of an individual that it is false that that individual runs. To assert
(7), however, is not to deny of each individual that that individual runs.

(7) Everyone does not run.

In its usual interpretation, (7) is true just in case there is at least one
individual who fails to run.
Such considerations suggest that quantifier words are not only members
of the syntactic category (t/(t/e)), but also denote semantical objects of the
same type.
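The semantic contrast just drawn can be sketched directly: both a lifted proper noun and a quantifier word map a one-place predicate to a truth value, but only the quantifier word's function is irreducibly higher-order. In the Python sketch below, the domain and the denotation of runs are invented for illustration:

```python
# Proper nouns vs quantifier words as category (t/(t/e)) denotations.
# Domain and the set denoted by 'runs' are illustrative assumptions.
domain = {'sally', 'percy', 'jack'}
runs = {'sally'}

def in_runs(x):
    return x in runs

def sally(pred):                 # a "lifted" individual
    return pred('sally')

def somebody(pred):
    return any(pred(x) for x in domain)

def everyone(pred):
    return all(pred(x) for x in domain)

sally(in_runs)       # (5): True iff sally is in the set of runners
somebody(in_runs)    # (6): True iff the property holds of some individual

# The negation contrast in (7): wide-scope negation differs from
# denying the predicate of each individual.
wide = not everyone(in_runs)                  # True on the usual reading
narrow = all(not in_runs(x) for x in domain)  # False: sally runs
```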

7.3.2 Nominals and transitive verbs


The assignment of proper nouns and quantifier words to the syntactic category
(t/(t/e)) is straightforward in cases like those discussed above, where the verb
is intransitive, but an important problem arises in cases where they appear as
object to a transitive verb, as in:

(8) Sally admires Jack/someone.

If the object noun phrase is of category (t/(t/e)), it must combine with a


verb to form a sentence. However, in (8), the noun phrases concerned are not
in this relation to the verb. Obviously, it would be ridiculous to claim that
the phrase Sally admires is a verb and even if we were to insist that it was,
we would still be faced with the problem of accounting for the role of Sally
in its internal structure. It appears, therefore, that either we must abandon
the view that quantifier words and names are of syntactic category (t/(t/e)),
or we must find a way of analysing the rest of the sentence so as to provide
them with a suitable argument.
Montague (1973) treats transitive verbs in category ((t/e)/(t/(t/e))). This
allows them to combine with members of category (t/(t/e)), such as quantifier
words or proper nouns, so as to form intransitive verbs. Thus, in his system,
a verb like admire combines with its object to form an intransitive verb,
say admires Jack/everyone and this derived expression is, as expected, of
category (t/e). This complex intransitive verb is then able to combine with a
subject to form a sentence.
Cresswell (1973, 1985), introducing, as he does, lambda binding directly
into the categorial grammar, uses this enrichment to provide verbs with the
arguments they would normally be assigned in a logical language, namely,
members of category e. Many illustrations of this approach have been given
already. In the case of (8), one of the alternative strings could be analysed
as (8a).

(8) a. (Sally(λ,x((λ,y (admires,x,y))someone))).

Cresswell's treatment has a number of advantages over that proposed by


Montague. First, it allows for the simple classification of transitive verbs as
members of category (t/(e,e)) - the arguments in question being the variables
in the propositional function. Second, it implies a straightforward treatment of
verbs of higher degree, such as give in terms of multiples of places. Thirdly,
and most importantly, it interlaces the syntactic and semantic representations.
The formula (admires,x,y) in (8a) is created by the syntax. In context with

the lambdas, also outputs of the syntax, that formula can be interpreted by the
semantics in accord with the principles of satisfaction described in chapter 3.
As we saw in chapter 3, section 7, Cresswell's treatment also has the
advantage that it permits a simple and clear representation of the logical form
of sentences involving scope ambiguities as in the now well-worn example
provided earlier and repeated here as:

(9) Everyone loves somebody.

As Cresswell demonstrates (1973), provided the correct order of the log-


ical signs relative to their arguments is maintained, considerable licence is
permitted in the ordering of the nonlogical symbols. Thus, the surface order
of a sentence like (9) can be maintained in its logical form representations,
as in (9a) and (9b) - almost precise repetitions of (65c, d) in 3.7.

(9) a. (Everyone(λ,x((λ,y (loves,x,y))someone))).

b. ((λ,y(Everyone(λ,x(loves,x,y)))) someone).
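On a small invented model, the two representations correspond to quantifier scopings with different truth conditions, which the following sketch makes explicit:

```python
# Two scopings of (9) on a small invented model: each of a and b
# loves the other, but no single person is loved by everyone.
people = ['a', 'b']
loves = {('a', 'b'), ('b', 'a')}

# (9a): everyone has wide scope -- for every x there is some y
# such that x loves y.
reading_a = all(any((x, y) in loves for y in people) for x in people)

# (9b): someone has wide scope -- there is some y whom every x loves.
reading_b = any(all((x, y) in loves for x in people) for y in people)

# reading_a is True, reading_b is False on this model.
```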

As noted earlier, 7.2, these deep structures can be converted into shallow
structures simply by applying logical deletion. Although Cresswell seems
now (1985) to have weakened this transformational-like position, it has im-
mense appeal because of its simplicity if nothing else. It is, moreover, close,
as far as erasure is concerned, to Montague's own treatment in which disam-
biguated representations, "analysis trees" are related to their natural-language
counterparts by the deletion of subnodes and rule numbers.
An attractive feature of Montague's treatment of transitive verbs is that it
brings the phrase structure syntax of the disambiguated language directly into
line with the usual linguistic analysis of a sentence into a noun phrase and a
verb phrase - with "intransitive verb" corresponding to "verb phrase". Thus,
in his system, the options in (8) would be represented in a tree equivalent,
in phrase structure, to (8b).

(8) b. [S [NP Sally] [IV admires Jack/someone]].

However, this pleasing property is also present, albeit less obviously, in our
lambda representations which, as I have said, have the additional advantage
of being directly interpretable.

7.3.3 Some verb types


The decision to treat transitive verbs as of category (t/(e,e)) appears to conflict
with the fact that, in surface structure, many verbs can occur without an
explicit object, as in:

(10) Jack eats.


In traditional transformational grammar, eat would be specified in its lex-
ical entry as tolerating object-deletion, unlike a verb such as hit which is
resistant to that transformation.
Assuming that the operation of logical deletion removes any unevaluated
variable, the representation of (10) will be (10a) in which the verb is correctly
shown as taking two arguments.
(10) a. (Jack (λ,x (λ,y (eats,x,y)))).
By contrast, (11) will have the lambda structure (11a) since the progressive
aspect implies that Jack is eating something specific.

(11) Jack is eating.


a. (λ,y (Jack (λ,x (eat,x,y)))).
Neither (10a) nor (11a) is, of course, an adequate representation of the
meaning of the relevant sentence. As indicated earlier, chapter 4, it is essential
that the domain of the object variable be suitably restricted. The most obvious
way in which this is to be accomplished is in the semantic rule for the verb
through the inclusion of a clause such as [eatable(y)]. I ignore semantic rules
for the present, see chapter 8.
Although a verb like eat is always transitive, there is a large number which
may function either in category (t/e) or (t/(e,e)). This is illustrated by burn
in the following pair.

(12) Jack burned the letter.


(13) The letter burned.
Several problems concerning the categorisation of verbs remain. One of
the most important is posed by the relation between particles and adverbs
and verbs in such structures as:
(14) Jack ran off/away/back.
(15) Sally looked for John.
(16) Jack turned off the television.

(17) Jack turned off the highway.

The varying treatments which these structures receive in scores of refer-


ence books and linguistic studies of English bear testimony to the consider-
able difficulty which they present. I shall restrict myself here to particles and
leave the question of adverbs until later.
The main difficulty is, of course, to identify and separate particles from
true prepositions which combine with nominals to form adverbs. Particles are
always to be counted as integral to the verb. Adverb-forming Prepositions,
on the other hand, should be categorised separately.
To deal with the easiest case first: it is standard practice to recognise a
subcategory of transitive verbs known as "phraseal verbs". Such verbs are
discontinuous in that they consist morphologically of a verb and a particle.
For example, the expression turn off in (16) is a phraseal verb, while turn
off in (17) is an intransitive verb plus a true preposition.
To show that this is so, we may appeal to the fact that, in English, pronom-
inal objects of phraseal verbs enforce postponing of the particle, a transfor-
mation which is optional when the object is not a pronoun. In the case of
non-phraseals, however, postponing is not permitted. Thus, (18) can mean
the same thing as (16) but not (17).

(18) Jack turned it off.

Conversely, (19) can have the meaning of (17) but not of (16).

(19) Jack turned off it.

Given this distinction between the particle off in (16, 18) and the prepo-
sition off in (17, 19), we can categorise the former as combining with a
transitive verb to form a transitive verb, i.e. of category ((t/(e,e))/(t/(e,e))).
The transitive verb which results may, then, be treated just like any other,
presuming, of course, some rule for handling the positional facts of postpon-
ing.
The need for such a postponing rule is, probably, the best reason for
treating the particle as a distinct category. It would, otherwise, be simpler
and more in concert with the view that the resulting structures are complete
verbs to treat the particle as part of the verb and thus distinguish between:
turn off, turn on, turn up, turn over, turn out, etc.
More difficult are cases like (15). Montague's (1973) treatment would
categorise for as a true preposition and, thus, would analyse for John as an
adverb. Such a treatment, however, is not plausible from a semantic point of
view. Look for seems to operate semantically as a whole unit equivalent to

seek. Further, as Cresswell (1973) says, Montague's analysis does not allow
for a straightforward representation of ambiguities of scope, as in the two
readings of (20).

(20) Everyone is looking for someone.

The semantic implausibility of treating for John as an adverb in (15) is


further suggested by the fact that adverbials are readily subject to certain
stylistic movement rules, but not so phraseals or cases like for John. Thus,
while (23) is normal, (22) is barely so and (21) is certainly not acceptable.

(21) * Off the television turned Jack.

(22) ?For John looked Sally.

(23) For ten years, Jack lived in Paris.

Since postponing is prohibited for words like for/from/with, the syntax


does not require us to separate the verb and its particle. We may, therefore,
regard the particle in look for and similar transitives as part of the verb.
One considerable advantage of treating for as part of the verb, in such
cases, is the simplicity with which ambiguous cases like (20) can be treated.
If looks for is analysed as a transitive verb of category (t/(e,e)), cases like
(20) are represented as (20a) or (20b), exactly paralleling the alternative
representations of (9) in 7.3.2.

(20) a. (Everyone(λ,x((λ,y (looks for,x,y))someone))).

b. ((λ,y(Everyone(λ,x(looks for,x,y))))someone).


Obviously, we may have discontinuous intransitives as well as transitive
verbs, as (14) demonstrates. Since the particle, in such cases, is not subject
to postponing - though it can be moved stylistically - , there is no need to
think of it as belonging to a distinct category. Accordingly, I shall take such
cases as ran off/away as basic.
In (5.4), I discussed some problems presented by attitudinal verbs. Among
such verbs, some, like want, take an infinitival complement, as in:

(24) Sally wants to photograph the leopard.

In Montague's treatment (1973), the preposition to is taken to be part of


the verb which is then categorised as a member of ((t/e)//(t/e)) - the double
slash distinguishing, for example, try to from an adverb. One difficulty with
this approach is that it does not allow for a uniform treatment of infinitives
since, for example, it does not apply to structures like (25):
Some category assignments 173

(25) To cycle is relaxing.

As a very simple solution to cases like (24) - though not (25) - one might
regard to as something which combines with an intransitive verb to form
something which combines with an intransitive verb to form an intransitive
verb, i.e. of category ({(t/e)/(t/e)}/(t/e)).
However, since we want to be able to incorporate the empty categories
of Binding theory, including Pro, into logical representations, this simple
approach would not be justified. Pro is of category e and we must, therefore,
in keeping with the remarks in (5.4), regard the relevant clause as denoting
an open proposition, not an intransitive verb. Thus, under this solution, to, in
infinitival constructions like that in (24), would be something which combines
with a member of category, t, to form something which combines with an
intransitive verb to form an intransitive verb. That is, its category would be
({(t/e)/(t/e)}/t).
A more attractive view of infinitival clauses, adopted, for example, by
Cresswell (1973), is that they constitute nominals and are, thus, of category
(t/(t/e)). In this approach, wants in (24) is a transitive. In his own treatment,
Cresswell regards the word to as a surface structure device with no seman-
tic force which merely marks the infinitival status of the nominal, allowing,
among other things, for the morphology of the verb. I shall follow this nom-
inal approach and, in doing so, shall postulate an abstract complementiser inf
which takes a sentence and yields a nominal and is, therefore, of category
((t/(t/e))/t).
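The category bookkeeping involved here can be stated mechanically. The following Python sketch (my own illustration, not part of the text) encodes categories as nested pairs and checks that inf combines with a sentence to give a nominal, which in turn combines with an intransitive verb to give a sentence:

```python
# Categories as nested tuples: "t" and "e" are basic; a functor category
# (A/B) is the pair (A, B). The / vs // distinction is purely syntactic
# and is ignored here.

T, E = "t", "e"
IV = (T, E)        # intransitive verb, (t/e)
NOM = (T, IV)      # nominal, (t/(t/e))
INF = (NOM, T)     # abstract complementiser inf, ((t/(t/e))/t)

def combine(functor, argument):
    """Functional application: (A/B) + B -> A."""
    result, arg = functor
    if arg != argument:
        raise TypeError("category mismatch")
    return result

assert combine(INF, T) == NOM     # inf + sentence -> nominal
assert combine(NOM, IV) == T      # nominal + intransitive verb -> sentence
print("ok")
```

The same two-line check reproduces the derivation of (26): inf applied to the complement sentence yields a nominal, which then serves as the object of the transitive wants.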
A considerable advantage of this alternative over Montague's treatment is
that it simplifies the syntactic analysis of infinitives with specified subjects,
as in:

(26) Sally wants Jack to photograph the leopard.

A logical representation of (26) would then be, ignoring the internal struc-
ture of the complement:

(26) a. (Sally(λ,x((λ,y(wants,x,y))(inf(Jack to photograph the leopard))))).

This nominal approach also has the advantage of generality. Under it, a
straightforward treatment of cases like (25) is possible. While the internal
structure of the infinitive itself may be highly complex, its nominal status
allows, in principle, for an analysis of no greater complexity than:

(27) ([to cycle](λ,x(is relaxing,x))).



In chapter 4, I discussed the status of the complementiser that which


Montague (1973) treated as an integral part of a verb of propositional attitude.
In his analysis, believe that is a single item and is of category ((t/e)/t).
However, as the earlier discussion indicated, it is better to treat that as a
sentential complementiser, in which case, it is an expression of the same
category as the abstract infinitival complementiser inf, i.e. ((t/(t/e))/t).
Just as there are verbs like burn which can be transitive or intransitive, so
there are verbs which can take either an infinitival or a sentential complement.
Thus, seem and appear may occur in a variety of structures, as the following
show.

(28) Jack seems/appears to be honest.

(29) It seems/appears that Jack is honest.

I shall not here discuss the syntactic complications which surround verbs
like seem - for a recent discussion, see Koster (1986). I merely note the
fact that, since both infinitive and sentential complements are treated alike,
as nominals, the ability of such verbs to take both types of complement need
not be reflected in our categorisation as it must in Montague's.
A very important group of verbs, in English, is the auxiliary system.
Auxiliary verbs are, semantically, sentence modifiers in that they provide
tense, aspect, mood, etc. in terms of which propositions are to be interpreted.
They are, therefore, to be treated as of category (t//t). The double slash is
required to ensure that auxiliary verbs can undergo the usual range of syntactic
operations, such as, modals excluded, number concord, which do not apply
to words like possibly. Under this analysis, (30) has the representation (30a).

(30) Percy will disappear.

a. (Percy(λ,x(will(disappear,x)))).

In contrast to modals, such as will, have and be may function as auxiliaries


or as full verbs. As auxiliaries, they conspire with the affixes -ed and -ing
which are, of course, affixed to the first and second verb, full or auxiliary, to
their right.

7.3.4 Wh-words
As we saw in chapter 5, it is reasonable to treat wh-words almost as if they
were quantifiers of a special kind which bind variables. Accordingly, words
like who and what will have scopes in the formulae of lambda expressions.

This will, of course, mean that wh-variables are of category, e, just like the
ordinary term variables falling within the scopes of quantifiers.
The simplest case is one in which the wh-word acts as subject of the main
verb, as in:

(31) Who drew the cartoon?

Such cases are straightforward in that - unless we adhere to an old trans-


formational model - no movement is involved. Wh-words which can occur in
such structures will be of category (t/(t/e)), so that a possible representation
of (31) is (31a).

(31) a. (Whoq(λ,xq((λ,y(drew,xq,y)) the cartoon)))?

In this representation, whoq is an item distinct from the who of relative


clauses, as the subscript indicates. Clearly, if it is felt necessary to include an
explicit clause restricting the domain of the wh-word in such representations,
the relevant clause must be marked as logical so that logical deletion can
apply. This can easily be done by adopting the usual set-theoretic notation
used in chapter 5.
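On this quantifier-like treatment, whoq, of category (t/(t/e)), binds a variable of category e in an open formula. A Python sketch of my own (toy domain and drew relation invented), in which who returns the set of satisfiers of the open formula rather than a truth value:

```python
# "who" as a binder of category (t/(t/e)): applied to an open formula,
# it collects the individuals that satisfy it. The domain and the drew
# relation are invented for illustration.

domain = {"jack", "sally", "percy"}
drew = {("sally", "the cartoon")}

def who(pred):
    return {x for x in domain if pred(x)}

# (31a): Who drew the cartoon?
answer = who(lambda x: (x, "the cartoon") in drew)
print(sorted(answer))
```

Returning the satisfier set rather than a truth value is one simple way of modelling the interrogative force of the wh-word while keeping the binding mechanism identical to that of ordinary quantifiers.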
Consider, now, yes/no questions such as:

(32) Did/will/has Jack run?

Since, as the options show, do, in such cases, is not the full, transitive do,
but an auxiliary, it is in category (t//t). The representation corresponding to
the first option in (32) is, therefore, (32a).

(32) a. (Did(Jack(λ,x(run,x))))?

Since we can give representations for yes/no questions like (32), we are
also in a position to represent the more complicated structures of wh-questions
like (33) in which the wh-variable occupies object position in the open for-
mula, as in (33a):

(33) What did Jack draw?

a. (Whatq(λ,yq(did(Jack(λ,x(draw,x,yq))))))?

Unlike who, the wh-words which and what, however, may form complex
nominals, as in which cartoon. For the moment, let us ignore the internal
structure of such nominals and assume them to be of category (t/(t/e)). The
logical form representation for (34) is (34a).

(34) Which cartoon did Jack draw?



a. (Which cartoonq(λ,yq(did(Jack(λ,x(draw,x,yq))))))?

7.3.5 Common nouns, quantifiers and of


It is standard practice to treat common, count nouns as of category (t//e). That
is to say, they have the same semantical, i.e. logical, status as intransitive
verbs. However, as the double slash indicates, they function differently from a
purely syntactic point of view. This treatment accords well with our common-
sense feeling that, for example, to be a musician is to have the property of
musicianship. Treating common nouns in natural language as intransitive
verbs conforms with their normal logical translations, as in:

(35) Percy is a student.

a. (Student,Percy).

If common, count nouns are of category (t//e), they obviously cannot


combine with intransitive verbs directly to form sentences. Instead, as (35)
illustrates, common nouns form heads of nominals and it is these latter con-
stituents which combine with the verb phrase.
There are several such complex nominals to be considered. Some are
without complement and have a common noun as head, e.g. each student.
Some have a measure word as head with mass noun as complement, e.g. a
gallon of milk. Some have a collective noun as head with plural count noun
as complement, e.g. a team of horses. Finally, some consist of quantifier-
head with complex complement, as in all of the students. I shall consider the
variety without complement first.
Let us, for convenience, call complex nominals with common nouns as
their heads "quantifier phrases". Such phrases may feature logical quantifiers,
as in: every man, some man and all men. Alternatively, the quantifier may
be proportional, as in: many men and few men.
Exactly the same considerations which prompt us to regard quantifier
words like everyone as higher order functions apply also to quantifier noun
phrases. It follows, therefore, that, in such constructions, quantifiers should
combine with members of (t//e) to form members of (t/(t/e)). Thus, on this
simple view, their category symbol should be ((t/(t/e))/(t//e)).
It will be recalled from chapter 4, however, that many quantifiers can
function as unitary nominals without overt restriction, as in:

(36) All is lost.



In such cases, it might seem that their category should be (t/(t/e)). A better
approach would be to envisage the presence, in logical form, of an empty
category which acts as the head of the noun phrase. Such a head, H, would
be of category (t//e). A suggestion not unlike this was made by Partee in
an early paper (1962) and I discuss an important construction involving it
below.
As suggested earlier, it is possible to regard the definite and indefinite
articles, the and a/an, as quantifiers and, indeed, this is Montague's practice.
There are difficulties with this approach. For example, since they cannot
function as unitary nominals, articles cannot freely interchange with ordinary
quantifiers in all contexts, e.g. in the phrase all of us, and this restriction needs
to be included in their lexical entries, e.g. by giving them the feature [-H]. To
treat them as quantifiers, however, seems semantically very plausible. This is
especially so if we adopt a Russellian view of definite descriptions in which
the leopard denotes a set consisting of one unique individual.
Treating quantifiers as of category ((t/(t/e))/(t//e)) is, however, an oversim-
plification. Many scholars, including Cresswell (1973) and Montague (1973),
view them as functions of two arguments of category (t/e). From a syntactic
point of view, given two such arguments, they yield a sentence as value. On
this sophisticated analysis, a word like every is of category (t/((t//e),(t/e))).
Although I do not wish to discuss, in detail, semantic rules in this chap-
ter, the above assignment seems, at first, so odd that it deserves justification
forthwith. I shall, therefore, give a simplified restatement of Cresswell's se-
mantic rules for every, a/an and the - he does not, of course, claim that they
really represent the full meaning of the words involved.

(R.5) The semantic value of every is a function of category (t/((t//e),(t/e))),


such that if its first argument is true of any individual, then its second
argument is true of that individual.

To illustrate: if we claim:

(37) Every man is mortal.

then, we claim that, for each individual of whom it is true to say that that
individual is a man, it is also true that he has the property of being mortal.

(R.6) The semantic value of a/an is a function of category (t/((t//e),(t/e)))


such that, if its first argument is true of at least one individual, then
its second argument is true of the same individual.

(R.7) The semantic value of the is a function of category (t/((t//e),(t/e)))


such that there is exactly one individual of whom both its first and
second arguments are simultaneously true.
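These three rules translate directly into higher-order functions. The sketch below is my own encoding over an invented domain, with predicates standing in for the (t//e) and (t/e) arguments; it is not Cresswell's own formulation:

```python
# (R.5)-(R.7) as functions of category (t/((t//e),(t/e))): each takes a
# common-noun predicate and a verb predicate and yields a truth value.
# The domain and the man/mortal predicates are invented for illustration.

domain = {"jack", "percy", "sally"}
man = lambda x: x in {"jack", "percy"}
mortal = lambda x: True

def every(noun, verb):   # (R.5)
    return all(verb(x) for x in domain if noun(x))

def a(noun, verb):       # (R.6)
    return any(noun(x) and verb(x) for x in domain)

def the(noun, verb):     # (R.7): exactly one individual satisfies both
    return sum(1 for x in domain if noun(x) and verb(x)) == 1

print(every(man, mortal))   # (37): every man is mortal
print(a(man, mortal))
print(the(man, mortal))     # false here: two men, so no unique individual
```

With two men in the domain, every and a/an come out true while the fails, reflecting the uniqueness condition of (R.7).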

Although quantifiers, in this theory, take two predicate arguments and


yield a sentence as their value, a way must still be found of creating nominals
involving them. This is done through the use of lambda abstraction. Thus,
the quantifier phrase some bird has the following logical form representation:

(38) (λ,xt/e(some bird xt/e)).

Such structures are, as required, of category (t/(t/e)) since they combine


with verbs to yield sentences. Hence, quantifier phrases, including unitary
nominals as in (36), and proper nouns are of the same category.
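The abstraction step in (38) can be made concrete: fixing the noun argument of the two-place quantifier leaves a function from predicates to truth values, i.e. a nominal of category (t/(t/e)). A sketch of my own, with invented toy predicates:

```python
# Lambda abstraction as in (38): fixing the noun argument of the
# two-place quantifier "some" yields a nominal of category (t/(t/e)).
# The domain and predicates are invented for illustration.

domain = {"tweety", "rex", "sally"}
bird = lambda x: x == "tweety"

def some(noun, verb):                        # category (t/((t//e),(t/e)))
    return any(noun(x) and verb(x) for x in domain)

some_bird = lambda verb: some(bird, verb)    # (38), category (t/(t/e))

flies = lambda x: x == "tweety"
print(some_bird(flies))
```

Like a proper noun, some_bird now combines directly with a verb predicate to yield a truth value, which is exactly the sense in which quantifier phrases and proper nouns share a category.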
I remarked above that the wh-words which/what may combine with com-
mon nouns to form complex nominals. Since wh-words seem to function
much like quantifiers, it is reasonable to regard which and what, in such con-
structions as which bird, as having the same category as quantifiers. Thus,
which bird is represented by: (λ,xt/e(which bird xt/e)).
While we are now in a position to describe such sentences as (39), we are
left with a host of problems connected with quantifier noun phrases, such as
those exhibited in (40) and (41).

(39) The student fears everyone.

(40) All students are hard working.

(41) Five/all/some of the students are hard working.

In (40), all, unlike every, requires that the head be plural and this, in turn,
necessitates plural concord on the verb. A similar situation holds in the case
of some, though optionally. Thus, beside such constructions as some man,
we have some men - there are, of course, semantic differences which will
have to be accounted for, perhaps by distinguishing between two underlying
items, somei and somej. Such semantic problems aside for now, these facts
about number concord can be built into the relevant lexical entries.
The problems posed by (41) are more difficult. In (41), each of the op-
tions is a complex quantifier phrase involving two quantifier phrases and the
preposition of. The claim that the first of the expressions flanking of, like its
fellow, is a quantifier phrase, not a simple quantifier is, in part, supported by
the reality of parallel structures such as six members of the committee and
the leaders of the delegation which have specified heads.

Partee (1962) was, I think, the first to suggest that, in such examples as
(41), the first quantifier actually determines a head of its own in deep structure
which is deleted. Thus, five of the students would be derived from five students
of the students. This proposal, however, required that the first head be identical
with the second in order to qualify for deletion. It was also a trifle implausible
in cases like five of the students who enjoyed themselves. Such an expression
would, presumably, require an underlying form five students who enjoyed
themselves of the students who enjoyed themselves . . . , but such repetition of
the restrictive relative would seem to nullify its semantic purpose as modifier
of the second head. For such reasons, therefore, it seems best, as noted earlier,
to envisage the presence in logical form of the unspecified element H which
could serve as head in the first phrase. This phrase would, then, have the
logical form representation: (λ,xt/e(Five H xt/e)).
Further support for the above suggestion comes from the fact that, in
constructions such as those in (41), the first quantifier has to be one which
can appear as a unitary nominal in structures like (36). Thus, we cannot have
*every of the men, or *the of the men.
There are problems surrounding the corestrictions between quantifier phras-
es in the constructions under discussion, but I shall ignore these, here. Instead,
I shall confine my attention to the preposition.
It does not seem plausible to treat the occurrences of of in (41) as be-
longing to the category of vacuous prepositions. Far from being a mere case
marker, of in (41) has a clearly partitive sense. This is indicated by the fact
that such structures can be reformulated, in the manner of Lees (1960), as:

(42) Of the students, five were hardworking.

When of marks oblique case, however, such a stylistic option is scarcely


possible, as the following pair demonstrates.

(43) The cover of the book was torn.

(44) ?Of the book, the cover was torn.

This partitive use must be distinguished from its vacuous counterpart in


the lexicon.
In light of the internal structure of these complex phrases, it seems appro-
priate to assign partitive of to the category ({t/(t/e)}/{(t/(t/e)),(t/(t/e))}). That
is: of, in such constructions, takes, as arguments, two higher order functions
of category (t/(t/e)) to yield a higher order function of the same category.
This reflects the true status of the flanking quantifier phrases and seems to
be close to the partitive use of of. It would, of course, be easy to reformulate

the category name in such a way as to allow the preposition to combine first
with the phrase to its right and the resulting structure to combine with the
initial quantifier phrase.
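The category assigned to partitive of can itself be checked mechanically. The sketch below, my own illustration, extends a tuple encoding of categories to two-argument functors, with a two-place functor written as (result, (arg1, arg2)):

```python
# Checking the category bookkeeping for partitive "of": categories are
# nested tuples, with a two-place functor encoded as (result, (arg1, arg2)).

T, E = "t", "e"
IV = (T, E)               # (t/e)
NOM = (T, IV)             # nominal, (t/(t/e))
OF = (NOM, (NOM, NOM))    # ({t/(t/e)}/{(t/(t/e)),(t/(t/e))})

def combine2(functor, arg1, arg2):
    """Two-place functional application: (A/(B,C)) + B + C -> A."""
    result, (a, b) = functor
    if (a, b) != (arg1, arg2):
        raise TypeError("category mismatch")
    return result

# "five H" and "the students" are both nominals; "of" yields a nominal.
assert combine2(OF, NOM, NOM) == NOM
print("ok")
```

The check confirms only the categorial shape of the construction; the partitive meaning of of would, of course, need a separate semantic rule.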
In contrast to partitive of in structures like (41), the preposition is used
in a genitive sense in:

(45) Five horses of the farmer were lame.

These structures are very difficult. In particular, they appear to be synony-


mous with alternatives such as:

(46) Five of the farmer's horses were lame.

I shall simply assume that of in (45) is a genitive case marker. I shall,


further, assume that it has the same category status as partitive of.
However, as (46) demonstrates, genitive case may also be marked, in
English, by affixing 's to a nominal - that it is a nominal, not a common noun
is demonstrated by cases like Percy's. This suggests that the categorisation
for 's is ((t/(t/e))/{(t/(t/e)),(t//e)}). That is to say, it is a function which takes a
nominal and a common noun as arguments and yields a nominal as its value.
As mentioned above and in chapter 5, one source of vacuous preposi-
tions is case marking in compound nominals. In English, the preposition is,
commonly, of, as in the bottom of the ocean, or the downfall of the Russian
empire, though to is also used, as in a sister to John - I shall not consider
this latter case. Since, of always links two independent noun phrases in such
structures, it should be assigned to the same category as its partitive and
genitive counterparts discussed above.
It is to be noted that it is this vacuous of which appears in complex nom-
inals with measure words and collective nouns. That this is so is suggested
by the fact that it does not behave like partitive of as is witnessed by the
ungrammaticality of both of the following:

(47) *Of the milk, a gallon was sour.

(48) *Of the horses, the team was lame.

Nor does it behave like genitive of as is suggested by the fact that structures
containing it cannot be alternatively expressed with 's, as the following show:

(49) *The milk's gallon was sour.

(50) *The horses' team was lame.



As indicated in chapter 5, the other source of vacuous of is as an oblique


case marker in adjective phrases such as full of water. This construction is
illustrated by:

(51) The jar is full of water.

Such phrases do not occur in attributive position, as can be seen from the
unacceptability of:

(52) *The full of water jar is Italian.

If predicative adjective phrases are assigned, along with predicative ad-


verbs, to category ((t/e)/(t/e)), as in Cresswell (1985), then vacuous of in
expressions such as (51) should be of category: ({(t/e)/(t/e)}/{[(t/e)/(t/e)],
(t/(t/e))}). The import of this monster is to say that of combines, in adjective
phrases, with a verb phrase modifier and a nominal to form a verb phrase
modifier.

7.3.6 Mass and abstract nouns


It will be observed that the complement of of in full of water is a mass
noun. While this is not a necessary feature of such constructions - see, for
instance, full of the books you bought - it is interesting because it reflects the
ability of such nouns to function in category (t/(t/e)). Mass nouns share this
ability with abstract nouns like honesty and, hence, both frequently appear
as unitary nominals in subject or object role, as in:

(53) Water is refreshing.

(54) Sally drinks water.

(55) Honesty is admirable.

(56) Most people admire honesty.

A major problem surrounding mass and abstract nouns is, however, that
they can also appear as the heads of noun phrases, as in:

(57) The water is like crystal.

(58) John displayed some honesty.

They may even appear in complex nominals, as in:

(59) Most of the water was undrinkable.

(60) All of the honesty he displayed was in small things.



Clearly, in such uses, these nouns must be assigned to the category (t//e) of
common nouns. This is also necessary when they occur in adjective phrases
as heads of quantifier phrases, as in full of the water.
A preferable alternative to allowing mass and abstract nouns to function
in category (t/(t/e)) as well as in (t//e), is to have an empty quantifier zero.
This quantifier would optionally combine with mass and abstract nouns of
category (t//e) to form members of (t/(t/e)).
The empty quantifier can also combine with plural count nouns, so justi-
fying their appearance in such sentences as:

(61) Pigs are fond of truffles.
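The effect of the empty quantifier can be sketched as a function from common nouns (t//e) to nominals (t/(t/e)). The existential reading and the toy model below are simplifications of my own, since the text leaves the semantics of zero open:

```python
# "zero" maps a common-noun predicate of category (t//e) to a nominal of
# category (t/(t/e)). Existential closure and the toy model are my own
# simplifications for illustration.

domain = {"w1", "w2", "p1"}
water = lambda x: x.startswith("w")
pig = lambda x: x.startswith("p")

def zero(noun):
    return lambda verb: any(noun(x) and verb(x) for x in domain)

refreshing = lambda x: x in {"w1", "w2"}
fond_of_truffles = lambda x: x == "p1"

print(zero(water)(refreshing))        # (53) Water is refreshing.
print(zero(pig)(fond_of_truffles))    # (61) Pigs are fond of truffles.
```

The single function covers mass nouns, abstract nouns and bare plurals alike, which is the unification the empty-quantifier analysis is meant to secure.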

The notion of an empty quantifier once enjoyed a measure of popularity in


transformational grammar and, as we saw in chapter 5 - and as my proposal
for H implies - there is certainly no hostility towards empty categories in
current linguistic theory. To my mind, it is a considerable advantage of such
an analysis that it unifies cases like those discussed above. We could, of
course, take plurality as a quantifier and so justify cases like (61) without
reference to the empty quantifier, but then the unified treatment is lost.

7.3.7 Adverbs and adjectives


Treating predicative adjectives as of category ((t/e)/(t/e)) allows them to be
united with verb phrase adverbs, which can also be treated as of that cate-
gory - I shall refer briefly below to an alternative analysis of adverbs adopted
by some authors.
Adverbs, as with adjective phrases, may be simple or complex, as in:

(62) Percy runs rapidly.

(63) Percy runs in the park.

In Montague's (1973) treatment, adverbs are divided into two very general
classes, verb phrase modifiers, as illustrated above, and sentential adverbs.
The latter, along with the negator not are members of category (t/t) and
include the modal operators discussed in chapter 3, such as necessarily and
possibly.
As is well known, there are many problems surrounding adverbs which
the above categorisation does not address. At the level of classification, there
are adverbs which operate both as verb phrase and sentential modifiers. Thus,
for example, naturally in (64) modifies the verb phrase, but in (65), its scope
is the entire sentence - these examples are from Quirk - Greenbaum (1973).

(64) Mary walks naturally.

(65) Naturally, Mary walks./ Mary walks, naturally.

As the examples in (65) show, adverbs may appear in a number of sentence


positions at the level of surface structure. The full range of options is exhibited
in (66). It is, however, to be noted that, while clearly is a sentential adverb in
initial position, in other positions, it is a verb phrase modifier in the absence
of punctuation.

(66) {Clearly}, Sally {clearly} will {clearly} speak {clearly} to them {clearly}.

There are many stylistic factors involved in adverb placement. For in-
stance, though initial position is possible for a predicate modifying adverb,
as in:

(67) Slowly she advanced.

it is certainly not favoured. However, since meaning differences do not seem


to be involved in these variations of position, they need not be reflected in
logical form.
In an early treatment, Montague (1970a) classified all adverbs as sentence
modifiers, "adformulas". Obviously, using lambda abstracts, it is perfectly
easy to accommodate this approach and, at the same time, ensure that the
logical form representations are correct. Thus, in the following representation,
quickly modifies a formula and yet remains part of the verb phrase.

(68) (Percy(λ,x((runs,x) quickly))).

To treat all adverbs as of category (t/t) seems, however, counter-intuitive.


Certainly, it makes the task of formulating semantic rules for items like
necessarily and rapidly unnecessarily complicated. I shall, therefore, with
Cresswell (1985), assume Montague's (1973) broad classification and the
category assignments provided.
Within this broad classification, it would seem that prepositions which
figure in complex adverbs should be of category (((t/e)/(t/e))/(t/(t/e))). This
allows both for adverb phrases with proper nouns, as in in Paris and adverb
phrases with quantifier phrases as in around every churchyard. It is obvious, of
course, that the prepositions which figure in these structures are not vacuous.
Thus, around in the example just given, is very different in meaning to in in
the phrase in every churchyard.
Complex adverbs also occur as sentence modifiers, as in:

(69) In the opinion of the vicar, the bishop is a heretic.

In such cases, the head is a noun phrase and the preposition is of category
((t/t)/(t/(t/e))). I shall ignore this use of prepositions here.
These few remarks on the categorisation of adverbs and predicative ad-
jectives provide, of course, a meagre image of the semantic complexities
involved in this type of modification. I have not, for example, even men-
tioned the fact that adverbs fall into different semantic classes, classically
partitioned into: time, e.g. today, frequency, e.g. often, manner, e.g. harshly,
and place, for example, in Paris. I shall return to this classification briefly in
the next chapter.
To categorise predicative adjectives and adverbs as above is, of course,
inappropriate for their attributive use, as in big tower in Paris. When they
modify nouns directly, they may be thought of as members of category
((t//e)/(t//e)). That is to say, they combine with a common noun to form
a common noun. The resulting common nouns are subject to all of the mod-
ificational possibilities of their basic counterparts, including modification by
other adjectives, as in bright, young student, by quantifiers, as in every young
student/every bright, young student from Paris and they may appear in parti-
tive constructions such as six of the bright, young students.
At first sight, it appears that there is considerable redundancy involved in
allowing for two categories of adjective and adverb, attributive and predica-
tive. Traditional, transformational analysis, as mentioned in chapter 4, treated
all attributive cases as originating in predicative position, so that (70) was
related by "relative-be deletion" and "adjective placement" to (71).

(70) A big car is expensive.

(71) A car which is big is expensive.

Similarly, in this analysis, (72) is related transformationally to (73):

(72) The student from Berlin is clever.

(73) The student who is from Berlin is clever.

The possibility which this transformational approach offers of unifying the


treatment of attributive and predicative adjectives and adverbs is very attrac-
tive. However, it is attended by some major difficulties. Most importantly,
there are many adjectives - called "reference modifiers", Bolinger (1967) -
which occur in attributive position only. These adjectives, which include:
same, future, late, Southern and scores of others, seem to have an identify-
ing function and are of high frequency. If they are derived from predicative

position, therefore, their transformation to attributives is obligatory and their


underlying environment must be ungrammatical. By contrast, there are some
adjectives, such as asleep which, since they originate historically in preposi-
tional phrases, cannot occur in attributive position and so must be excluded
from the reduction and placement transformations.
As far as adverbs are concerned, while there seem to be none which occur
in attributive position only, simple adverbs, such as harshly, may appear only
in predicative position.
It is also to be noted that constructions involving noun modifiers cannot
be readily explained as deriving from relative clauses. Thus, for example, the
phrase furniture shop is perfectly grammatical, but the phrase shop which is
furniture is most certainly not. We must, therefore, categorise noun modifiers,
along with attributive adjectives and adverbs, as of category ((t//e)/(t//e)).

7.3.8 Relatives and appositives


The treatment of relative clauses is, as the remarks above suggest, a fairly
complicated matter in English. They are, however, of semantic interest and
cannot be altogether ignored.
The most fundamental distinction to be made among such clauses is that
referred to in chapter 4 between restrictive and nonrestrictive or appositional
clauses, as in:

(74) The hills which surround the city are steep. = restrictive relative.

(75) The hills, which surround the city, are steep. = nonrestrictive clause.

Traditionally, while restrictive relatives are constituents of nominals in


logical form, the appositives are conjuncts. These facts would, in a traditional
grammar, be summarised in the differences between (74a) and (75a).

(74) a. [S [NP The hills such that [S they surround the city]] are steep].

(75) a. [S & [S The hills are steep] [S They surround the city]].

It will be observed that (74a) makes use of the connective such that, in
the manner of Quine (1960). If such regimentations are taken seriously, as in
Montague (1973), it should be observed that (74a) is defective in that it does
not make it clear that such that actually connects the sentence which follows
it not with the preceding nominal as a whole, but with the common noun,
hills, which acts as its head. Thus, such that is of category ((t//e)/((t//e),t)).
Given such a treatment, we would, therefore, distinguish the restrictive
relative connective such that from its sentential counterpart, and, which is of
category (t/(t,t)) and figures only in appositive constructions.
The justification for assigning such that to the category ((t//e)/((t//e),t)) is
semantic and is, again, due to Quine (1960). As Partee (1975) points out,
in an expression like (76), Montague's Russellian view of the requires that
the assertion of uniqueness which the definite article makes should extend
beyond the common noun, fish, to include the property denoted by the relative
clause - there is just one thing which is both a fish and walks. Obviously,
parallel considerations hold for other quantifiers like every.

(76) The fish such that it walks swims.
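Partee's point can be checked in a toy model with two fish, only one of which walks: uniqueness computed over the common noun alone fails, while uniqueness over noun plus relative clause succeeds. The Russellian the below is my own minimal encoding, and the model is invented for illustration:

```python
# Russellian "the": true iff exactly one individual satisfies the noun
# argument, and that individual satisfies the verb argument. The toy
# model (two fish, one walker) is invented for illustration.

domain = {"nemo", "wanda"}
fish = lambda x: True
walks = lambda x: x == "nemo"
swims = lambda x: True

def the(noun, verb):
    satisfiers = [x for x in domain if noun(x)]
    return len(satisfiers) == 1 and verb(satisfiers[0])

# Uniqueness over "fish" alone fails: there are two fish.
print(the(fish, swims))
# (76): uniqueness over "fish such that it walks" succeeds.
print(the(lambda x: fish(x) and walks(x), swims))
```

This is why such that must attach to the common noun hills or fish, inside the scope of the article, rather than to the nominal as a whole.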

Assuming that logical form is formulated in terms of lambda expressions,


the representation of relative and appositive clauses departs somewhat from
the traditional formulations exemplified in (74a, 75a). In particular, the rele-
vant pronouns will be base generated and will appear in surface position so
that movement transformations are unnecessary.
Since the pronoun that can occur in relatives, but not in appositives, we can
employ it as a generalised relative. The logical representations corresponding
to (74a) and (75a) will be (74b) and (75b).

(74) b. ([The (hills such that {that(λ,x((λ,y(surround,x,y)) the city))})]
(λ,x(are steep,x))).

(75) b. ([The hills{which(λ,x((λ,y(surround,x,y)) the city))}]
(λ,x(are steep,x))).

7.3.9 Comparison
An important function of natural language is the expression of degrees of
difference or similarity. The former is accomplished through so-called "com-
parative" constructions and the latter via their "equative" counterparts.
Difference and similarity of degree are expressed in respect of all four
major parts of speech, nouns, verbs, adjectives and adverbs.
Since difference of degree is a property of properties, and since adjectives
and adverbs denote properties directly, they frequently allow, in their mor-
phology for the marking of proportion. Thus, for example, beside the positive
adjective young and adverb fast, we have their comparative and superlative
counterparts, namely, younger/faster and youngest/fastest. In a few cases,
comparison is by suppletion, as in good/better/best.
Some category assignments 187

The proportional quantifiers much/many/little/few also allow for compari-


son. This is usually by suppletion, more/most and less/least, but few has the
regular forms fewer/fewest.
When their morphology does not allow affixation, adjectives and adverbs
are marked for comparative and superlative by the appropriate form of the
proportional quantifiers much for positive comparison and little for negative.
Thus, we have: more/most remarkable and more/most rapidly beside less/least
remarkable/rapidly.
Unlike adjectives and adverbs, nouns and verbs, in English, do not permit
morphological changes to indicate degree. They, therefore, combine with
the appropriate form of the proportional quantifiers. Count nouns combine
with the count quantifiers many/few, as in many/more/most people, beside the
negative constructions few/fewer/fewest rabbits. Mass nouns take appropriate
noncount quantifiers, e.g. much/more/most/little/less money. Verbs are also
regarded as noncount in respect of comparison.
In the remainder of this subsection, I shall concentrate on comparative
structures only, taking up equatives shortly.
Consider, first, the following range of positive comparisons.

(77) Sally is brighter/more intelligent than Percy.

(78) Percy runs faster/more awkwardly than Jack.

(79) Sally has more money/books than John.

a. Sally plays more tennis than she plays squash.

(80) Sally plays tennis more than she plays squash.

(81) John runs more than he walks.

(82) More people like walking than running.

In contrast to these positive assertions, we have such negative comparisons

as:

(83) Sally is less bright/intelligent than Percy.

(84) Percy runs less fast/awkwardly than Jack.

(85) Sally has less money/fewer books than John.

a. Sally plays less tennis than she plays squash.

(86) Sally plays tennis/the piano less than she plays squash.

(87) Jack runs less than he walks.


188 Categorial grammar

(88) Fewer people like running than walking.

The considerable variation in comparative marking suggests the need at


the level of logical form for an abstract category representing the notion of
comparison. In traditional grammar, this category has the positive and nega-
tive forms more and less. However, since these are not fully representative,
failing to reflect the morphology of few, I prefer to employ the neutral Comp
for this role.
Comp, which relies for its semantic force on the relations {>, <}, is, in
some respects, like a proportional quantifier. It is realised as the comparative
inflection and so appears either as the morpheme -er, as in fewer, brighter,
or suppletively, as in more/less/better.
One important difference between Comp and words like much/many is
that Comp qualifies the proportional quantifiers themselves. The resulting
comparative may, in turn, be modified intensively by much, as in much more.
Since comparison of degree is a property of properties, I shall treat Comp
as an affix which combines with an adjective, adverb, or quantifier to form
an adjective, adverb, or quantifier.
As (77-88) show, Comp appears with than. While than is often taken to
be a complementiser, I prefer to think of it as a particle which combines with
Comp to form a correlative conjunction Comp than. The simplest way of
viewing this conjunction is as of category (t/(t,t)). However, since properties
of properties are concerned rather than propositions, I shall take it as of
category ((t/e)/((t/e),t)).
Although Comp is usually realised, in the correlative, as -er when an
adjective or adverb allows for this, utterances with explicit quantification are
fairly commonly employed for purposes of emphasis or contrast, even in
these circumstances, as the following shows:

(89) Joan is bright more than brilliant.

It will be seen from (77-88), that, while more/less normally precede com-
pared adjectives and adverbs, in the case of verb phrase comparisons (80,
81, 86, 87), they are adjacent to than. In the case of common noun object
comparisons (79, 79a, 85, 85a), they determine the first object noun. When
quantities of subjects are compared, (82, 88), the first subject is determined.
Since I take Comp, in structures like (77-89), to be part of the conjunction
Comp than, I shall assume that it is adjacent to than at the level of logical
form and that its surface position depends upon the application of movement
rules.
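The claim that Comp relies for its semantic force on the relations {>, <} can be illustrated with a toy degree semantics. The following sketch is my own illustration, not part of the original analysis; the numerical degree assignment is wholly assumed for the purpose.

```python
# Assumed degree assignment: each individual's degree of a gradable property.
# The numbers are illustrative only.
degrees = {
    ("bright", "Sally"): 8,
    ("bright", "Percy"): 5,
    ("fast",   "Percy"): 7,
    ("fast",   "Jack"):  4,
}

def comp_than(prop, x, y):
    """Positive Comp ... than: x has prop to a strictly greater degree than y."""
    return degrees[(prop, x)] > degrees[(prop, y)]

def less_than(prop, x, y):
    """Negative comparison (less ... than), via the converse relation <."""
    return degrees[(prop, x)] < degrees[(prop, y)]

# (77) 'Sally is brighter than Percy.'
assert comp_than("bright", "Sally", "Percy")
# (78) 'Percy runs faster than Jack.'
assert comp_than("fast", "Percy", "Jack")
# (83) 'Sally is less bright than Percy.' is false on this assignment.
assert not less_than("bright", "Sally", "Percy")
```

The asymmetry of > also reflects the observation below that the conjunction Comp than, unlike and, is not commutative.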

Comp than is a conjoining function which yields an intransitive verb.


Unlike and, which conjoins sentences, it is subordinating. Thus, this kind of
conjunction is not commutative. Further, while ellipses in the first conjunct
are possible, as in:

(90) Jack more than Percy runs.

it is usually only the sentential argument which is subject to reduction. The


following examples are illustrative.

(91) Sally is brighter than Percy {is/is hardworking}.

(92) Percy runs faster than Jack {does} {runs/swims}.

(93) Sally has more money than John {has/has credit}.

(94) Sally is more intelligent than {she is} inspired.

The possibility of allowing Comp to have a status independent of than is


suggested by cases like the following in which comparative adjectives are
attributive.

(95) The bigger pumpkin won the prize.

It seems reasonable, however, to derive attributive comparatives from


predicatives in relative clauses, though, as in the earlier case of positive at-
tributives, there may be good reasons for avoiding this manoeuvre. Certainly,
such derivational histories would be complex for attributive superlatives, as
in:

(96) The biggest pumpkin won the prize.

It seems appropriate to regard the superlative inflection as the surface re-


alisation of an underlying abstract element Super which, in many respects
behaves like Comp. An important difference between the two elements, how-
ever, is that, unlike Comp, Super cannot combine with than to form a con-
junction, as the ungrammaticality of the following shows.

(97) *Mary is biggest than Sally.

As more is derived from much/many plus Comp, so most is a derivative


of much/many plus Super. The same goes, mutatis mutandis, for least. The
count quantifiers many/few cannot, of course, figure in the derivation of the
comparative or superlative forms of adjectives.
Both (95) and (96) have alternative surface structures which explicitly
feature partitive of, namely:
(98) The bigger/biggest of the pumpkins won the prize.

To account for these cases, we need only invoke the abstract head Η referred
to earlier in the treatment of quantifiers. Thus, bigger is attributed to Η and
is, thus, an expression of category ((t//e)/(t//e)). The derived common noun
then combines with the definite article to form, via lambda abstraction, an
expression of category (t/(t/e)), which can, of course, enter into partitive
constructions with of.
We may also employ Η to supply the missing head in reduced partitive
constructions involving comparatives and superlatives, as in:

(99) The bigger/biggest won the prize.

(100) Sally selected the bigger/biggest.

Such constructions always imply a full partitive expression, such as bigger


of the two.
Although the proportional quantifiers do not figure in attributive compar-
atives, they may, of course, provide the bases for determiner quantification,
as (79, 82, 85, 88) above show. The same holds for superlatives, as in the
following.

(101) Most people enjoy music.

(102) Most snow is too thin for skiing.

7.3.10 Intensifiers
Comparable, or gradable, adjectives and adverbs, like tender/ly, unlike their
absolute counterparts, such as correct/ly, may be modified by intensifiers like
very, as in very sultry, quite rapidly.
That it is an error to regard such words as adverbs - a common practice -
is evidenced by various distributional restrictions. Thus, for example, while
adverbs may modify independently, intensifiers cannot, as the ungrammaticality of the following shows:

(103) *Percy runs very.

Intensifiers may combine with simple adverbs and nonabsolute adjectives


in all functions. Thus, in (104) very modifies both predicative and sentential
clearly.

(104) Very clearly, Sally speaks very clearly.



Similarly, in (105), rather modifies both attributive and predicative adjec-


tives:

(105) The rather beautiful church seems rather unsafe.

As noted in the previous section, the word much can also function as an
intensifier. This is particularly common when comparatives involving more
are intensified, as in much more rapid/ly, though other intensifiers may also
be used, as in rather more rapid/ly.
Although comparatives are frequently intensified, superlatives obviously
may not be. Thus, in an expression like the very highest mountain, very is to
be taken, in its original use, as an adjective meaning true/absolute.
From these facts, it follows that intensifiers combine with adjectives and
adverbs to yield any of the three modifier categories:

a. ((t/t)/(t/t)): e.g. quite possibly;

b. ({(t/e)/(t/e)}/{(t/e)/(t/e)}): e.g. very rapidly/very young;

c. ({(t//e)/(t//e)}/{(t//e)/(t//e)}): e.g. very young.

Of course, any one of these derived modifiers can, itself, be modified, so


that we have strings of iterated intensifiers such as very, very young/rapidly.
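The iterability of intensifiers follows directly from the category assignment: an intensifier maps a modifier to a modifier of the same category, so its output is always a valid input. A minimal sketch, in an encoding of my own, makes the point:

```python
def intensify(intensifier, expr, cat):
    """Category ({M}/{M}): combining an intensifier with a modifier of
    category M yields an expression of the same category M, so the
    output is always a valid further input and intensifiers iterate freely."""
    return (intensifier + " " + expr, cat)

# 'rapidly' as a predicate-adverb modifier of category (IV/IV):
expr, cat = "rapidly", "(IV/IV)"
expr, cat = intensify("very", expr, cat)
expr, cat = intensify("very", expr, cat)   # 'very, very rapidly'
assert expr == "very very rapidly" and cat == "(IV/IV)"
```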

7.3.11 Equatives
Just as we can analyse comparatives involving than in terms of a conjunc-
tive function, so also can we treat equative constructions as combinations of
a sentence with an intransitive verb, resulting in an intransitive verb. The
conjunction employed for this purpose is correlative as as. Thus, like Comp
than, the equative conjunction is of category ((t/e)/((t/e),t)).
Obviously, the treatment of discontinuous elements is not entirely straight-
forward for a categorial grammar, as my discussion of comparative conjunc-
tion implied. If they are assigned to categories on the basis of their surface
representations, the first component has to combine with some constituent
to its right so as to produce something which can link up with something
to its right to form the whole. Alternatively, the second combines with the
constituent to its left in preparation for the final combination. Finally, the
elements may be combined sequentially either from left to right or right to
left. Such procedures are entirely practical, but not always in accord with our
intuitions. Thus, for example, to say that as much money, or much money as
in (108) below is an entire unit which combines with as to conjoin the two
sentences is hardly convincing or simple.

I shall, therefore, continue to assume that, at the level of logical form,


components of discontinuous elements such as as as, either or, . . . , are ad-
jacent and that their surface separation is the result of movement rules.
Equative constructions, like their comparative counterparts, can be positive
or negative and may be formulated in terms of adjectives, adverbs, nominals
or verbs. Unlike Comp than, as as is commutative. However, as with com-
paratives, while ellipsis is rarely found in the first conjunct, the sentential
argument is open to a variety of reductions, a few of which are reflected in
the following examples.

(106) Joan is as musical as Sally {is/is logical}.

(107) Joan runs as fast as Jack {does/runs/swims}.

(108) Sally has as much/little money/as many/few books as Percy {does/


has/has socks}.

a. Sally plays as much/little tennis as {Jack} {she/Jack plays squash}.

(109) Sally likes tennis/Jack as much/little as ...

(110) Jack runs as much/little as Percy {does/runs/swims}.

It will be observed that, as in comparative constructions, the proportional


quantifiers much, many, little, few appear explicitly in the nominal and verbal
equations. Further, as with comparatives, the quantifier is moved into the
nominal in cases like (108, 108a), in contrast to equations involving unitary
nominals or entire verb phrases, as in (109, 110).
These facts suggest the possibility of regarding the equative conjunction
as incorporating an abstract element which is the equative counterpart of
Comp and Super. Such an element could be represented by Positive which
relies for its semantic force on the relation =. However, since it would never
be realised in surface structure, and since the semantic rules for positive
proportional quantifiers should, anyway, involve the relation denoted by =,
I shall continue to treat the conjunction simply as as as.
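The semantic force of the equative, resting as it does on the relation =, can be sketched in the same illustrative style as the comparative above (again my own construction, with an assumed degree assignment):

```python
# Assumed, purely illustrative degree assignment.
degrees = {("musical", "Joan"): 6, ("musical", "Sally"): 6,
           ("fast", "Joan"): 5, ("fast", "Jack"): 7}

def as_as(prop, x, y):
    """Equative 'as ... as': x possesses prop to the same degree as y."""
    return degrees[(prop, x)] == degrees[(prop, y)]

# (106) 'Joan is as musical as Sally.'
assert as_as("musical", "Joan", "Sally")
# Unlike Comp than, as as is commutative:
assert as_as("musical", "Sally", "Joan")
assert not as_as("fast", "Joan", "Jack")
```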
As with comparatives, a quantifier is sometimes used with adjective or
adverbial equations for special effects, as in:

(111) Sally is as much annoyed as surprised.

(112) Jack is as little interested in frogs as he is in horses.

Equality of degree is also expressible with the aid of equally/just, as in:

(113) Percy is equally/just as fat as he is lazy.



A particularly common and interesting equative construction involves the


modal operator possible, as in:

(114) Jack runs as fast/eats as many apples as possible.

In such cases, it is asserted that a particular property is possessed in the


highest degree relative to some notion of ability.
Finally, it is worth noting that equative constructions may incorporate
comparative ones but the reverse is scarcely acceptable, as the following,
rather clumsy structures indicate.

(115) Jack is as much taller than John as he is than Percy.

(116) Jack is bigger than Sally as much as he is than Percy.

(117) ?Jack runs faster as much as Sally than Percy.

7.3.12 Degree complements


Three other constructions in which adjectives, adverbs, nominals and verbs
are used relatively rather than absolutely are degree complements. These
constructions involve either the particle too and an infinitive, so as plus an
infinitive, or so and a sentential complement. As in the case of all other
structures involving proportion, much/many and their negative counterparts
little/few denote degree when the property involved is expressed as a nominal
or verb.
The following cases are typical.

(118) Percy is too lazy to work.

(119) Jack runs too slowly to be an athlete.

(120) Sally has too much/little money/too many/few books to be ignored.

(121) Jack laughs too much/little to be trusted.

(122) Jack was so kind as to feed the pigeons.

(123) Sally is so rich that she need not work.

(124) Jack works so hard that he has become a bore.

(125) Sally has so much/little money/so many/few books that she can't
move.

(126) Jack laughs so much/little that nobody trusts him.



Considering, first, distributional questions, it seems that too and so as are


restricted to infinitive complements. Thus, for example, we cannot have:

(127) *Jack plays too much/so much as that he can study.

So, on the other hand, takes sentential complements only, as the ungrammaticality of the following demonstrates:

(128) *Sally has so many books to be a student.
The restriction of too and so as to infinitive complements has to do with
their semantic function and the nature of infinitives - it is not merely a matter
of use. Presumably, these elements indicate that a property is possessed to a
degree which precludes the possession of some other property. The futuristic,
unfulfilled connotations of infinitives noted by Bolinger (1968) are relevant
here.
Semantically, the most plausible approach would be to assume that too
and so as combine with to and so with that to form correlative conjunctions
which, like Comp than and as as, are of category ((t/e)/((t/e),t)).
The surface differences would then be represented at logical form by the
fact that the complement sentence of too and so as has Pro in subject position.

7.4 Abbreviations

The remarks in 7.3 have been somewhat discursive. However, they have been
sufficient to provide for the construction of a fairly rich sample categorial
grammar.
Before proceeding, it would be well to introduce some notational abbrevi-
ations since it would be tedious in the extreme constantly to employ the full
ideographic names for the various categories. In the following, the abbrevia-
tions sometimes coincide with the category they name. There is, moreover, a
measure of redundancy in writing out some categories which have identical
ideographs, but this is tolerated for ease of reference and completeness.

Abbreviations for category names

Table 1. Logical and nonlogical connectives

Category Ideograph Abbreviation


Sentence t t
Negator (t/t) Neg
Sentence adverb (t/t) (t/t)
Conjunction (t/(t,t)) Conj
Disjunction (t/(t,t)) Disj
Correlative conjunction ((t/e)/((t/e),t)) Correl
Relative conjunction ((t//e)/((t//e),t)) Such That

Table 2. Nominals
Category Ideograph Abbreviation
Term variable e Vt
Wh-variable e Vq
Pronoun e Pron
Pro e Pro
Proper noun (t/(t/e)) T
Common noun (t//e) CN
Quantifier (t/((t//e),(t/e))) Quant
Nominal (t/(t/e)) NP
Partitive of (NP/(NP,NP)) Of
Genitive of (NP/(NP,NP)) Of
Genitive 's (NP/(NP,CN)) 's
Vacuous of (NP/(NP,NP)) Of

Table 3. Verbals
Category Ideograph Abbreviation
Intransitive verb (t/e) IV
Transitive verb (t/(e,e)) TV
Auxiliary verb (t//t) Aux
Complementiser ((t/(t/e))/t) {That, inf}
Particle ((t/(e,e))/(t/(e,e))) Particle

Table 4. Nonsentential adverbs and adjectives

Category Ideograph Abbreviation


Pred-adv. ((t/e)/(t/e)) (IV/IV)
Pred-adj. ((t/e)/(t/e)) (IV/IV)
vacuous of ((IV/IV)/ Of
((IV/IV),(NP)))
Attrib-adv. ((t//e)/(t//e)) (CN/CN)
Attrib-adj. ((t//e)/(t//e)) (CN/CN)
Noun-mod. ((t//e)/(t//e)) (CN/CN)
Comp {((IV/IV)/(IV/IV)), Comp
((CN/CN)/(CN/CN)),
((t/t)/(t/t))}
Super {((IV/IV)/(IV/IV)), Super
((CN/CN)/(CN/CN)),
((t/t)/(t/t))}
Intensifier {((IV/IV)/(IV/IV)), Intensif
((CN/CN)/(CN/CN)),
((t/t)/(t/t))}
Preposition {((IV/IV)/NP), Prep
((t/t)/NP)}

7.5 Spelling-out rules

In section 7.2, I gave Cresswell's (1973) general rules for the formation of
wellformed expressions in a lambda categorial language - rules (R.l, R.2,
R.3) and (R.4). In this section, I shall consider some examples of rules which
"spell out" the operation of concatenation in respect of particular structures.
As usual, angles indicate that the enclosed elements are ordered.
The rules which I present are somewhat like Montague's (1973) "rules
of functional application", though they differ in detail and scope. The rules
assume the use of lambda abstracts. Under that assumption, the structures
they describe will be below the surface.
The simplest spelling out rules are those concerned with sentence modifiers
and connectives.

(129) Rules for sentential modifiers and connectives

(R.8) If δ e (t/t) and α e t, then (δ, α) e t.



Comment: As mentioned above, adverbs, including members of (t/t), such as


possibly, may appear in several surface positions besides the one indicated.
Hence, the use of parentheses.
Since the negative operator is also a member of (t/t), it too is covered by
(R.8). Clearly, the rule oversimplifies negative placement greatly. In many
languages, including English, negatives may appear intrasententially. I have
adopted the oversimplification since a discussion of the surface syntax of
negation in English would be out of place in this study. The classic account
of that subject remains Klima (1964).
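The general shape which spelling-out rules of this kind share, a functor of category (x/y) concatenating with an argument of category y to yield an expression of category x, can be made concrete in a small illustrative type-checker. The encoding is mine, not the author's, and it deliberately ignores the double slash of common-noun categories:

```python
def apply_functor(functor_cat, arg_cat):
    """If δ e (x/y) and α e y, then (δ, α) e x.
    Categories are strings: 't', '(t/t)', '(t/e)', ...
    Returns the result category, or None if the combination is ill-formed.
    (The common-noun slash '//' is not treated in this sketch.)"""
    if not (functor_cat.startswith("(") and functor_cat.endswith(")")):
        return None
    inner = functor_cat[1:-1]
    # Split at the top-level '/': the '/' with balanced parentheses before it.
    depth = 0
    for i, ch in enumerate(inner):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "/" and depth == 0:
            result, wanted = inner[:i], inner[i + 1:]
            return result if wanted == arg_cat else None
    return None

# (R.8): a sentence adverb of category (t/t) applied to a sentence t yields t.
assert apply_functor("(t/t)", "t") == "t"
# An intransitive verb (t/e) applied to an expression of category e yields t.
assert apply_functor("(t/e)", "e") == "t"
# Ill-formed: (t/e) cannot take a sentential argument.
assert apply_functor("(t/e)", "t") is None
```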

(R.9) If δn e Aux and α e t, then < δn, α' > e t: where δn is an ordered set
and α' is just like α save for any morphological changes determined
by the choice in δn.

Comment: At the level of logical form, auxiliary verbs always prefix sen-
tences, which will be propositional functions. In English, multiple modifica-
tion by auxiliaries is strictly ordered, hence the need for δn to be an ordered
set. The changes in α' will be in respect of concord if δn is a single modal.
If δn contains aspectual auxiliaries, they will introduce the relevant affixes.

(R.10) If δ e {Conj, Disj} and α1, α2 e t, then (α{1,2}, δ, α{2,1}) and
< α1, δ, α2 > and < δ, α2, α1 > e t.

Comment: (R.10) spells out concatenation for unreduced conjuncts/disjuncts


only. The options reflect the differences in behaviour between co-ordinating
and subordinating conjunction and disjunction. In the former type, the con-
junction/disjunction is flanked by the conjuncts/disjuncts and their order
is, strictly, irrelevant. In the latter type, the connective, logical, e.g. al-
though/unless, or nonlogical, e.g. because/before, may appear in initial posi-
tion. In the subordinating cases, ordering is fixed.
The remainder of the rules are concerned with the internal structure of
sentences.

(130) Rules for verb phrases

(R.11) If δ e IV and α e {e, Pro, Pron, NP}, then < α, δ' > e t: where δ' is
just like δ except for any language-specific morphological changes
to the first and second verb in δ required by the choice in α.

Comment: Since, at the level of logical form, verbs occur in formulae within
lambda expressions, the first verb in δ may, in fact, be a member of IV, TV,
or Aux. If the first verb is in Aux, then the second verb will undergo the
changes allowed for in (R.9).

(R.12) If δ e TV and α1, α2, ... αn are variables of category e, then
< δ, α1, α2, ... αn > e t.
Comment: In this rule, the subscripts on α allow us to count any verbs of
higher degree than one as transitive.
Since the lambda expressions which involve transitive verbs are of cate-
gory IV, it is unnecessary to allow for morphological changes in their case
since that is already provided for in (R.11).

(R.13) If δ e {That, inf} and α e t: (A) if β e {IV, TV}, < β, δ, α > e IV;
(B) if β e IV, < δ, α, β > e t.
Comment: (A) of (R.13) allows for the creation of intransitive verbs by sup-
plying the relevant intransitives, e.g. is obvious, is easy, and their transitive
counterparts, e.g. believe, want, with their complements. (B) permits the for-
mation of sentences by supplying intransitives, e.g. is surprising, worries
Percy, with sentential or infinitival subject nominals.
As with the other rules, (R.13) will permit the creation of innumerable
ungrammatical strings. To construct the filters by which to remove them is,
however, the business of a grammar devoted to the description of English
as such. The restrictions on linear order might, however, be taken as partial
filters.

(R.14) If δ e Correl and α, β e IV and t respectively, < α, δ, β > e IV.

(131) Rules for deriving nominals


(R.15) If δ e Such That and α and β e CN and t respectively, then
< α, δ, β > e CN.
Comment: The output of this rule may become the a argument of its input.
In principle, therefore, there is no restriction on the density of relative clause
embedding.
(R.16) If δ e Quant and α e CN, then < δ, α' > e NP: where α' is just like
α except for any morphological changes imposed by the choice in
δ.
Comment: Since the empty categories zero and Η have the statuses of a
quantifier and a common noun respectively, unitary nominals like all and
water/honesty/books enjoy NP status under this rule as well as phrase nom-
inals like the child. The rule will also permit the unacceptable NP which
consists of no surface elements whatever, i.e. the combination of zero and
H. Any output of (R.15) is, of course, a valid input to this rule.

(R.17) If δ e Of and α1, α2 e NP, then < α1, δ, α2 > e NP.

(R.18) If δ e 's and α1 and α2 e NP and CN respectively, then
< < α1, δ >, α2 > e NP.

Comment: The choice in δ is, of course, dependent on that in α1.

(132) Rules for adverbs and adjectives

(R.19) If δ e IV/IV and α e IV, then (α, δ) e IV.

(R.20) If δ1, δ2 e CN/CN and α e CN, then < δ1, α > and < α, δ2 > e
CN.

Comment: If δ1 contains only adjectives and δ2 only adverbs, then rule (R.20)
ensures that adverbs do not premodify and, at the same time, allows for those
adjectives which postmodify. Strictly, these precautions are an indulgence
since they properly belong to a detailed account of English.

(R.21) If δ e Intensif and α e IV/IV, then < δ, α > e IV/IV and similarly
where α e CN/CN, or t/t.

(R.22) If δ e {Comp, Super} and α e IV/IV, then < α + δ > e IV/IV, and
similarly where α e CN/CN, t/t, or Quant.

(R.23) If δ e Of and α and β e IV/IV and NP respectively, then < α, δ, β >
e IV/IV.
Comment: In this rule, IV/IV is, of course, a predicative adjective. The con-
struction involved is exemplified by full of water.

(R.24) If δ e Prep and α e NP, then < δ, α > e {IV/IV, t/t}.

7.6 The lexicon


To complete a categorial grammar, it is necessary to construct a lexicon in
which all basic expressions in the sample are assigned to their respective
categories. I shall not attempt this enterprise here, but shall merely provide
a representative list for the sample developed in this chapter. In constructing
the list, I shall not try to represent the dictionary definitions of the various
items. I also ignore nonsemantic data, such as phonological representations.
One important point to be noted is that all derived categories must have
basic expressions. Thus, there must be basic intransitive verbs, basic common
nouns, etc. Clearly, however, there is no basic expression belonging to the


basic category of sentence.

(133) A sample lexicon

Let Ba be the set of basic expressions of category a. The sample lexicon


is as follows.

Table 5. A sample lexicon


Basic category — Item
Bt = {}
Bt/t = {not, possibly, clearly}
BConj = {and, although, because}
BDisj = {or, unless}
BCorrel = {as as, Comp than, either or}
Be = {Pron, Pro, Vt, Vq}
BNP = {Percy, Sally, Paris}
BIV = {run, burn, run off}
BAux = {will, have-en, be-ing, do}
BTV = {play, have, give, look for, try, believe}
BComplementiser = {that, to}
BCN = {leopard, water, honesty}
BQuant = {all, the, a, some, much, many, little, few, which, what}
BIV/IV = {tall, rapidly}
BCN/CN = {southerly, young}
BIntensif = {very, quite}
BPrep = {in, about, off}
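Purely by way of illustration (the encoding and function names are my own, not the author's), such a lexicon can be represented as a mapping from category abbreviations to sets of basic expressions, with category membership recovered by reverse lookup:

```python
# A fragment of the sample lexicon, keyed by category abbreviation.
lexicon = {
    "t/t":   {"not", "possibly", "clearly"},
    "Conj":  {"and", "although", "because"},
    "Disj":  {"or", "unless"},
    "NP":    {"Percy", "Sally", "Paris"},
    "IV":    {"run", "burn", "run off"},
    "TV":    {"play", "have", "give", "look for", "try", "believe"},
    "CN":    {"leopard", "water", "honesty"},
    "Quant": {"all", "the", "a", "some", "much", "many", "little", "few",
              "which", "what"},
}

def categories_of(item):
    """Reverse lookup: every basic category an expression belongs to."""
    return {cat for cat, items in lexicon.items() if item in items}

assert categories_of("run") == {"IV"}
assert categories_of("although") == {"Conj"}
# The basic category t has no basic expressions (Bt = {}),
# so no lexical item maps to it.
assert categories_of("It is raining") == set()
```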
Chapter 8
Semantic rules

8.1 Semantic rules

Slightly reformulating the discussion of chapter 1, the theory of meaning


which provides the background for this study assumes that the values of
derived expressions are determined by the denotations of basic expressions.
The denotations of proper nouns are constant functions and the denotations
of sentences are open propositions, functions from contextual properties to
propositions, or propositions. The values of propositions are, in turn, truth-
values or pragmatic values.
The domain of individuals or things, De, contains literally anything we can
possibly refer to, including individual concepts or "intensional objects" and
even propositions. For convenience, I shall take propositions as consisting of
possible worlds rather than of heavens.
Since we claim that the meaning of a sentence is arrived at by summing
the meanings of its parts in accordance with the compositional principle, it is
clearly necessary to say what the meanings of the parts are in terms of rules
which are based on their syntactic behaviour.
In chapter 3, I presented and discussed Montague's (1973) system of
semantic rules which determine the values of expressions under a given in-
terpretation. Montague's treatment is very elegant, but it is not at all explicit.
In this chapter, I shall provide rules for particular items representing the parts
of speech discussed in the last chapter.
To specify precisely what is to be the meaning of a given word would,
however, go far beyond the aims of this study. My rules will, therefore, not
be sufficiently detailed to constitute actual dictionary definitions. Thus, for
example, in proposing a semantic rule for a given word, say, quickly, I shall
not attempt to supply the details which would explain exactly what is its use
compared with, say, slowly. My rules will not constitute definitions in the
conventional, lexicographical sense.
The principal purpose of the rules will be to display general outlines of
the conditions which determine the satisfiability of complex expressions. In
some instances, for example, natural-language equivalents of logical constants, these conditions will be truth conditions and will be stated simply
as such. In others, such as nonlogical connectives like because, they are
nontruth-functional and in yet others, such as thin/many, they involve con-
ditions of appropriateness. In stating the rules, "met" will indicate that a
condition of appropriateness is in question.
Cresswell's (1973) idea of considering propositions as sets of possible
worlds allows him to present rules in a very general but precise way. I shall
adopt a looser format. However, since my rules are broadly modelled on
Cresswell's, it will be useful to display and comment on one of his examples.
If a proposition, p, is true in a world, w e W, then we may write (w e p)
to mean 'p is true in w'. Similarly, (w ∉ p) stands for 'p is false in w'.
Let V be a meaning assignment and "α e Dx" mean that α is a member
of the domain x. Further, let "α e Fx" mean that α is a function of category
x. Then the rule for negation will be:

(R.1) V(Neg) is that function FNeg such that, for any α e Dt and w e W,
w e F(α) if and only if w ∉ α.

Verbalised: Neg is that function which, for any sentence and world, combines
with that sentence to form a sentence true in that world if and only if the
positive form of the same sentence is false in that world. That is to say: a
negated sentence is true if and only if the worlds which constitute its propo-
sitional content do not form the propositional content of the corresponding
affirmative.
In general, my own definitions will be simple versions of rules like (R.1).
The major simplification will consist in the omission of explicit reference to
the fact that members of W constitute a proposition. I shall, therefore, simply
say, in place of (R.1):

(R.1) a. V(Neg) is that function FNeg such that, for any α e Dt, F(α) is 1 if
and only if α is 0.

In this formulation, quantification over sets of worlds is understood. If explicit


quantification were thought necessary, it could be made explicit, as in the
following rough definition of inclusive or .

(R.2) V(or) is that function FDisj such that, for any α, β e Dt and w e W,
F(α, β) is 1 iff α and β are both 1, or either α or β is 1.
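Read set-theoretically, with propositions as sets of possible worlds in Cresswell's manner, (R.1) makes V(Neg) complementation relative to W, and the inclusive or of (R.2) union of the two propositions. The following sketch is my illustration only, over an assumed finite set of worlds:

```python
# Propositions modelled as sets of possible worlds (w in p means 'p is true in w').
W = {"w1", "w2", "w3", "w4"}   # an illustrative, finite set of worlds

def neg(p):
    """(R.1): w is in F(α) iff w is not in α -- complementation relative to W."""
    return W - p

def disj(p, q):
    """(R.2), inclusive or: true in exactly the worlds where either
    (or both) of the disjuncts is true -- set union."""
    return p | q

p, q = {"w1", "w2"}, {"w2", "w3"}
assert neg(p) == {"w3", "w4"}
assert neg(neg(p)) == p              # double negation recovers the proposition
assert disj(p, q) == {"w1", "w2", "w3"}
assert disj(p, neg(p)) == W          # 'α or not-α' is true in every world
```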

8.2 Logical and nonlogical connectives

As in the account of categories in chapter 7, it seems best to begin with the


simplest situation, namely, one in which the fragment consists only of atomic
propositions and connectives.
We may start by observing that the semantic rules for the logical con-
nectives: and, not, or and if, given the discussion of implication, are largely
summed up in the usual truth tables displayed in chapter 3. That being so,
I shall not needlessly repeat that information here. I shall comment on the
supposed temporal use of and later.
The truth tables in themselves, however, are not sufficient bases for the
semantic analysis of all sentential connectives. They are not detailed enough
to allow for important differences in meaning between connectives which
may, from a purely logical point of view, substitute for each other. Just as
important, they have nothing to say about non-truth-functional connectives
like because.
To commence, it is very well known that English, in common with many
other languages, makes use of several connectives which, from a logical point
of view, are equivalent to the usual logical constants. Thus, for example, but,
in its conjunctive use, and although have precisely the same truth-functional
status as co-ordinating and, while unless behaves like or. However, it is
apparent that, from a purely linguistic perspective, these equivalences mask
important differences.
While but is a co-ordinating conjunction, although is subordinating. In
spite of this difference, however, both but and although share roughly the
same semantic structure in addition to their truth-functional meaning. Prob-
ably their least complicated use is exemplified by conjunctions like (1):

(1) 5 is odd but 6 is even./Although 5 is odd, 6 is even.

Clearly, the details of the use of these items are fine-grained. However, what ap-
pears to be common to them both in (1) is the element of contrast. This
suggests the following as a first approximation to a semantic rule for but/
although. "Met" is employed here because both truth and pragmatic condi-
tions are involved.

(R.3) V({but, although}) is that function FConj such that, for any α, β ∈
Dt, F(α, β) is met if and only if (α & β) is 1 and contrast(α, β).

In this rule, it is not claimed that contrast(α, β) must be true in w since that
would be to make that a truth-condition. Considered in this light, the condition
of contrast is necessary for a compound proposition using the connectives
but/although to be pragmatically satisfiable. Thus, (2) is unsatisfiable:

(2) Jack is fat but/although he isn't thin.
However, the notion expressed by "A contrasts with B" is by no means
sufficient to provide the semantics of but and although. These connectives are
frequently employed to suggest the unexpected or surprising. Thus, for exam-
ple, (3) seems to imply that the one conjunct is an unexpected or surprising
sequel to the condition denoted by the other, while in (4), the simultaneous
possession of the two contrasting properties is regarded as surprising.

(3) Percy married Sally, but he didn't like her./ Although he married
Sally, Percy didn't like her.
(4) Percy is young, but wise./ Although he is young, Percy is wise.
While I shall not attempt to discuss the additional semantic conditions
involved in these cases fully - Russell (1950) and Van Dijk (1977) provide
insightful discussions of the kinds of psychological issues involved - it is
obvious that (R.3) is in need of some expansion.
One fact about (R.3) invites particular comment. When two propositions
are claimed to contrast, an important ingredient is likely to be the posi-
tive/negative opposition. As Russell (1950) demonstrates, it is not an easy
matter to say what is meant by a contrast like wise/not wise. However, at an
intuitive level, the notion of contrast involves the negative/positive opposi-
tion and this opposition may, but need not, be unexpected. When it is not
unexpected, but/although are used in their simple sense as in (1), reflected
in (R.3). When, however, a conjunction involving contrast is unexpected, it
leads to surprise.
In terms of the positive/negative polarities, if (p & q) is unexpected, it is
probably true that (p & -q) is expected. Thus, if it is surprising that, being
young, Percy is wise, it is expected that, being young, Percy is not wise. It
would seem, however, that mere opposition is not enough. The content of
the two propositions which are in contrast must have something in common.
Thus, the sense of but/although is not reflected in:

(5) Percy is young but inflation is not rising this summer.


In the case of predicates like wise, this relation seems to involve the
notion of relevance. This notion is traditionally defined in terms of logical
implication - Belnap (1969) - as follows:
(6) (p is relevant to q) if either p or -p implies either q or -q.

Thus, for a relation to be expected it is, presumably, necessary, though not
sufficient, that it be judged to be relevant. When the relation is unexpected,
it must still be relevant if it is to make sense. Admittedly, this seems some-
what trite. One might, after all, point out that the co-operative principle in
itself will usually ensure that the addressee make some effort to discover the
basis for the relevance relation and, thus, the conjunction of almost any two
propositions can, given sufficient good will, be made to make sense, even a
case like (5).
Another point to be noted, in connection with the negative/positive oppo-
sition, is that although the content of the two propositions must be, in some
sense, mutually relevant, the two must not be identical. In natural language, it
is common to find pseudocontradictions like (7). Such conjunctions, however,
are used to express the mean of a gradable property. They do not suggest
that that mean is surprising or unexpected.

(7) Percy is tall and he is not tall.

Whereas, in a formal system, (7) would be necessarily false, in a natural
language, it may be either true or false depending upon whether Percy can
be truly judged to possess the property of height in an average degree.
The above considerations suggest the following expansion of (R.3) as a
semantic rule for but/although, where the relevance relation is assumed:

(R.3) a. V(but/although) is that function FConj such that, for any α, β ∈ Dt,
F(α, β) is met if and only if (I) (α & β) is 1 and contrast(α, β) or
(II) (I) holds and, in addition, for some individual a, surprise((α &
β), a) and utters(a, (α & β)).
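Read procedurally, (R.3)a could be sketched as follows. The predicates contrast and surprise are assumed to be supplied by the pragmatic component; the function names are my own illustration, not the author's notation.

```python
# Sketch of (R.3)a: but/although conjoin truth-functionally, with
# pragmatic conditions layered on top. 'contrast' and 'surprise' are
# stand-ins for pragmatically supplied predicates.
def but_although_met(a, b, contrast, surprise=None, utterer=None):
    """Clause (I): truth of the conjunction plus contrast.
    Clause (II): (I) holds and the conjunction surprises its utterer."""
    basic = (a and b) and contrast(a, b)
    if surprise is None:
        return basic
    return basic and surprise((a, b), utterer)
```

On this sketch, falsity of either conjunct makes the whole unsatisfiable before any pragmatic condition is consulted, mirroring the rule's ordering of conditions.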

In this revision, I ignore the fact that the element of surprise may be restricted
to a point of time coincident with that of the utterance. The chief motivation
for including the utterer is, of course, that what is surprising to an addressor
may well not be so to her/his addressee.
There is, of course, a quite different use of but to denote the complemen-
tary status of a subset - derived from the original sense of butan 'outside'.
This use is exemplified in (8).

(8) Everyone left early but Jack.

I shall not treat this usage here.


As noted earlier, the word unless can be substituted by or in English and
is, thus, frequently counted among the logical constants - for instance, Quine
(1941). While the interchangeability of the two connectives seems sometimes
to be complete, as in (9, 10), consideration of examples (11, 12) shows that, as
in the cases of but/although, unless may possess pragmatic properties which
its purely logical counterpart does not share.

(9) 6 is an even number unless it is odd.

(10) 6 is an even number or it is odd.

(11) There will be a revolution unless the government resigns.

(12) There will be a revolution or the government will resign.

What seems to make unless in (11) different to or in (12) is that it presup-
poses a causal relation between the events referred to in the two propositions -
I return, briefly, to causation below in my discussion of because. While or
is truth-functional in (12), unless in (11) suggests that the cause of the rev-
olution's taking place or failing to do so will be the government's failure to
resign or its doing so. It is, of course, the complete neutrality of or in this
respect which makes its use in (12) appear rather artificial.
In its truth-functional sense, as in (9), unless can be provided with the
same semantic rule as exclusive or.
More uncertain is the ability of unless to function inclusively, though (13)
may provide an instance:

(13) Birds fly unless they swim.

I should stress that I have rather little faith in my own intuitions regarding
examples like (13). Certainly, it does not seem possible to underline an in-
clusive sense through the extension or both which is available in the case of
or. However, that may be unhelpful since its exclusive counterpart but not
both is also unavailable with unless.
The causative unless clearly requires to be treated separately from its neu-
tral counterpart. The important point is that this unless is not truth-functional.
Thus, reversing the order of the two propositions in (11) changes the mean-
ing of the whole. According to (11), the revolution will be caused by the
government's not resigning. If the disjuncts are reversed, the cause of the
government's resigning will be the revolution's failing to take place. Let us
use unlesscause to denote this particular connective. A possible semantic rule
for it would be the following:

(R.4) V(unlesscause) is that function FDisj such that, for any α, β ∈ Dt,
F(α, β) is 1 if either α is 1 or β is 1 and cause(-(β), α).

Rules for disjunctions like (R.4) may not, however, be entirely satisfactory.
Superficially, (R.4) is circular in that it makes use of the disjunction either
or in defining a disjunction. This is, however, only a superficial circularity
which arises from the use of English as a meta- and an object language in the
same text and could be disposed of by employing the logical symbol "ν".
More striking, (R.4) gives no idea of the pragmatic conditions on unless apart
from the reference to the relation of causality.
In accord with Russell's (1950) discussion, we may analyse or in terms
of the relation of "uncertainty". When the disjunction or is used in natural
language, the motivation for doing so is the speaker's uncertainty as to the
rightness of one of two choices over the other. While the notion, rightness,
must obviously be interpreted very broadly in this claim - the choice may be
dependent on context of use involving such notions as desirability, propriety,
etc. - it may be advisable to incorporate the notion of uncertainty into the
relevant semantic rules both for natural or and unless. This inclusion would,
of course, distance the semantic rules of a natural language from those of an
artificial language by extending the pragmatic component to the connective
or as well as unless.
Considering only the rule for unlesscause, with this addition, (R.4) could
be expanded to:

(R.4) a. V(unlesscause) is that function FDisj such that, for any α, β ∈ Dt,
F(α, β) is met if and only if either α is 1 or β is 1 and cause(-
(β), α) and uncertain(α & β) at time j for some individual a such
that utters(a, F(α, β)) at j.

It is to be noted that, in this rule, the predicate "uncertain" takes a conjunction
as its argument rather than a disjunction. The reason is that both alternatives
must be uncertain, not just one disjunct. To see the reasonableness of this, it is
only necessary to consider what could be meant by the assertion "uncertain(p
or q)".
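A procedural sketch of (R.4)a in the same vein; cause and uncertain are again external, hypothetical predicates, and, as the rule requires, the argument of uncertain is the pair of alternatives rather than one disjunct.

```python
# Sketch of (R.4)a: causal 'unless' is disjunction plus a causal link
# and the utterer's uncertainty over both alternatives.
def unless_cause_met(a, b, cause, uncertain):
    disj = a or b                    # either alpha is 1 or beta is 1
    causal = cause(not b, a)         # cause(-(beta), alpha)
    return disj and causal and uncertain((a, b))
```

Note that the sketch is not commutative: swapping a and b changes which state of affairs is claimed to be the cause, just as reversing the clauses of (11) changes its meaning.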
The semantic rule for negation, as given in the previous section, is straight-
forward. However, as is well known, its application to different cases is not
always quite so simple. The negation of a disjunction provides a typical exam-
ple of the care which must be taken when strictly logical relations are in
question. Consider (14) which is equivalent to (14a).

(14) Jack doesn't like milk or honey.

a. Jack likes neither milk nor honey.

In (14), the syntax of English obscures the fact that the negator has both
disjuncts as its scope - a reality which is emphasised in the morphosyntax
of (14a) in which each disjunct is explicitly negated through the negative
disjunction neither nor. Hence, the logical structure of (14) is not (14b), but
(14c).

(14) b. (-p ν q).

c. -(p ν q).

(14c) is equivalent, by de Morgan's law to:

(15) (-p & -q).

This means that an alternative encoding of the proposition denoted by (14)
would be:

(15) a. Jack doesn't like milk and he doesn't like honey.
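The equivalence of (14c) and (15), and its non-equivalence with the narrow-scope reading (14b), can be checked mechanically over the four truth assignments:

```python
# Exhaustive check of De Morgan's law: -(p v q) is (-p & -q),
# and differs from the narrow-scope reading (-p v q).
def equivalent(f, g):
    return all(f(p, q) == g(p, q)
               for p in (True, False) for q in (True, False))

wide_scope = lambda p, q: not (p or q)            # (14c)
conj_of_negs = lambda p, q: (not p) and (not q)   # (15)
narrow_scope = lambda p, q: (not p) or q          # (14b)
```

Here equivalent(wide_scope, conj_of_negs) holds, while equivalent(wide_scope, narrow_scope) fails, confirming that (14) has the structure (14c) rather than (14b).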

Facts like these are not at all surprising given the logical status of or and
not. They provide, among other things, for the semantic rule for neither nor.
This disjunction is probably best regarded as derived from a historical neither
nor yet in which yet means in addition. The rule is:

(R.5) V(neither nor) is that function FDisj such that, for any α, β ∈ Dt,
F(α, β) is 1 if and only if (-α & -β) is 1.

In this rule, the uncertainty condition is, of course, absent - there can be no
uncertainty in the assertion of a conjunction of two negations.
Consider next a case like:

(16) Jack doesn't like milk and honey.

At first sight, the rules of logic suggest that (16) is equivalent to (16a).

(16) a. Jack doesn't like milk or he doesn't like honey.

However, it is clear that this is not necessarily so since, in English, and may
conjoin nominals to form a compound nominal which is not derived from
two underlying sentences. Thus, the phrase milk and honey must be treated
as a single unit and the negation in (16) asserts that it is false to say that Jack
likes that unit, not that he dislikes either milk or honey in isolation. Thus, the
logical structure of (16) is not (-p ν -q) but simply -p. Parallel considerations
must also be taken into account in the interpretation of ambiguous cases like:

(17) Jack and Sally don't play tennis.

More complex are cases like (18) which, according to the established tests
(Klima 1964) are instances of sentential negation:

(18) Jack seldom/rarely/hardly ever drinks milk.



If words like seldom were ordinary sentential negatives, then (18) would be
equivalent to (19), but that is obviously not so.

(19) Jack doesn't drink milk.

Evidently, the negative element of such adverbs applies not to the entire
sentence but to some adverb of frequency like often. This is so in spite of
the strictly logical fact that (not often p) is always true when (not p) is true.
Thus, if (18) is true, then so is (20) and, by implication, (21). Further, if (21)
is true, then so is its entailment (22).

(20) Jack does not drink milk often.

(21) Jack drinks milk sometimes.

(22) Jack drinks milk.

These facts suggest that the semantic rule for negative adverbs like seldom
should be along the lines of:

(R.6) V(seldom) is that function FNeg such that, for any α ∈ Dt, F(α) is
met if and only if α is 1 and (V(often)(α)) is not met.

(R.6) is rough in the absence of a rule for often - I shall provide such
a rule below, section 7 - however, the general import is obvious. It is also
obvious that notions like that expressed by often are essentially fuzzy. What
is regarded as often performed in one case may very well be thought to be
seldom in another. Thus, words like often/seldom have no place in formal
systems and their semantic rules are stated in terms of satisfiability in its
broad sense.
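A sketch of (R.6), with the fuzzy predicate often modelled crudely as a context-supplied frequency threshold; the threshold is my invention, purely for illustration of the rule's shape.

```python
# Sketch of (R.6): seldom(p) is met iff p is 1 and often(p) is not met.
def seldom_met(p_true, frequency, often_threshold=0.5):
    often_met = frequency >= often_threshold   # stand-in for V(often)
    return p_true and not often_met
```

On this model, seldom requires the truth of the bare proposition, mirroring the entailment chain from (18) through (20)-(22), while the often condition remains context-dependent.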
(R.6) classifies seldom as of category Neg and, hence, a member of (t/t).
Clearly, there are difficulties with this view. While it seems plausible to say
that seldom negates an entire sentence in which the IV-modifying adverb
often figures, the fact that, when such negatives as seldom/rarely appear
in sentence-initial position, they attract aux - tense and the first of any
auxiliary verbs - just like wh-words, etc., e.g. seldom does ..., scarcely
had . . . , suggests that they are not straightforward negators in spite of their
behaviour in respect of tag-questions, appositive tags and so on. The (t/t)
classification in (R.6) is, therefore, not wholly satisfactory.
As a further example of the kind of problem which can arise in the inter-
pretation of negation in English, consider the following sentence involving
word negation:

(23) Percy is unhappy.



This sentence is indisputably false if (24) is true:

(24) Percy is happy.

However, equally indisputably, if (24) is false, it is not necessarily the case
that (23) is true. The phenomenon at the root of these facts is, of course,
the gradability of adjectives like happy, referred to in chapter 4 - see also
Lyons (1968). Like all gradables, happy implies the negation of its opposite
{unhappy/sad}, but the negation of happy does not imply the assertion of its
opposite. Paraphrasing Lyons, (x → -y) but -(-x → y).
These considerations are also basic to our understanding of cases like the
following in which sentence and word negation appear in a single clause:

(25) Percy is not unhappy.

(25) is true just in case (26) is false:

(26) Percy is unhappy.

However, if (26) is false, then (27) should be true:

(27) Percy is happy.

This is not so because, evidently, unhappy, being itself gradable, implies the
negation of its opposite, happy, but its negation does not imply its opposite.
From this, we conclude that the usual rule of double negation:

(28) --p = p.

does not extend to cases in which a sentential and word negation appear in
the same clause.
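This failure of double negation can be modelled by placing gradable antonyms on a degree scale with a middle zone satisfying neither predicate; the scale and cut-off points below are invented for illustration.

```python
# Gradable antonyms on a degree scale in [0, 1]: the middle zone
# satisfies neither predicate. Cut-off points are invented.
def happy(degree): return degree > 0.6
def unhappy(degree): return degree < 0.4

def not_unhappy(degree):
    # sentential negation of the word-negated predicate
    return not unhappy(degree)
```

Here happy(0.8) implies not unhappy(0.8), but not_unhappy(0.5) holds while happy(0.5) fails: sentential negation of the word-negated predicate does not restore the positive, so (28) cannot be applied across the two kinds of negation.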
My treatment of unlesscause and seldom bestows upon them the status of
pragmatic operator rather than truth-functional connective in the strict sense.
Better known as pragmatic operators are words like because and before. For
such connectives, the classical truth tables do not provide suitable models
for the construction of semantic rules. Unlike and and other truth-functional
connectives, these items not only connect two propositions but claim that
their contents are related causally or temporally in certain ways.
Consider first, the relation asserted by because in:

(29) Percy broke an ankle because he fell on the stairs.


Since because is non-truth-functional, it is not commutative. Thus, (29) may
be true when (30) is false:

(30) Percy fell on the stairs because he broke an ankle.


Obviously, the causal relation asserted by because is fundamental to the
meaning of the whole and helps to determine its truth value. In cases like
(29, 30), it is not sufficient to know whether the atomic propositions are true
or false in order to know the value of the entire assertion.
It is not my intention to attempt a review of the philosophical discussion
of causation. However, it is obvious that, in (29, 30), because asserts a rather
straightforward relation of direct cause and effect. The position is a little
more complex in:

(31) 6 is even because it is divisible by 2.

Here, because seems to assert a relation of premise and inference. It is a
necessary and sufficient condition for an integer's being even that it be divis-
ible by 2. 6 is divisible by 2 and so we infer that it is even. This example is
especially striking since, in it, because is commutative. It has that property
because the condition is sufficient as well as necessary. Yet another variety
is:

(32) One of the dogs has chewed the rug, it must be Spot because Rover
is in the garden.

In this instance, because links an inference with its justification, but because
is not here commutative.
Apart from a brief comment below, I shall ignore these differences and use
cause to denote a general condition with the Humean property of necessary
connection. The semantic rule for because is, accordingly, as follows:

(R.7) V(because) is that function FConj such that, for any α, β ∈ Dt, F(α, β)
is 1 if and only if α and β are both 1 and cause(α, β) is 1.
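Rule (R.7) can be sketched procedurally; cause is an external predicate standing in for the Humean condition, a stand-in of mine rather than a theory of causation.

```python
# Sketch of (R.7): because(a, b) is 1 iff a, b and cause(a, b) are all 1.
# Unlike 'and', the rule is not commutative unless 'cause' happens to be.
def because_true(a, b, cause):
    return a and b and cause(a, b)
```

With a false conjunct, as in (33)-(35) below, the whole comes out false whatever the causal claim.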

Rule (R.7) requires that both conjuncts must be true in a compound propo-
sition featuring because. Thus, I take it that the following are false in the
actual world:

(33) The Japanese won the war because they bombed Pearl Harbor.

(34) Russell is famous because he walked on the moon.

(35) The president of China is a woman because China is a matriarchy.

It could be argued, however, that this requirement is too strong. In cases
where because links an inference and a justification, it might be that, if the
inference is true but the justification false, the whole could be regarded as
true. An example which seems to support this objection to (R.7) is:

(36) Someone conquered England. It must have been Caesar because he
was a Greek.
Such a departure from (R.7) does not, however, seem possible if the inference
is false and its justification true, as in:

(37) Someone conquered England. It must have been Hitler because he
attacked England.

The semantics of the temporal connectives is fairly straightforward. Take
before as illustrative: then (38) is true, but (39) is false.
(38) Russell wrote On Denoting before he was forty.
(39) *Russell was forty before he wrote On Denoting.
Without taking into account the tenses of the atomic clauses, the semantic
rule for before is:

(R.8) V(before) is that function FConj such that, for any α, β ∈ Dt, F(α, β)
is 1 if and only if α is 1 at some moment j and β is 1 at j', where j precedes j'.
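(R.8) can be sketched by modelling each proposition as the set of moments at which it is true; this encoding, with integer moments standing in for the time line, is mine.

```python
# Sketch of (R.8): 'a before b' holds iff a is 1 at some moment j and
# b is 1 at some moment j', with j earlier than j'.
def before_true(a_moments, b_moments):
    return any(j < j2 for j in a_moments for j2 in b_moments)

def after_true(a_moments, b_moments):
    # 'after' simply reverses the linear relation between j and j'.
    return any(j > j2 for j in a_moments for j2 in b_moments)
```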
If the tenses of the atomic clauses are taken into consideration, stating
the semantic rule for before is more difficult. The condition that j precede
j' obviously rules out any combination in which both clauses refer to actual
present since, under such circumstances, j and j' would be identical. Thus,
(40) is not acceptable:

(40) *Sally is writing a letter before she is going to the circus.


This restriction does not, of course, bear on the English practice of using
grammatical present to indicate future, so that (41) is not in conflict with
(R.8).
(41) Sally is writing a letter before she goes to the circus.
Indeed, it is the semantic behaviour of before which fixes the actual time
reference of the second conjunct in (41) as future. If before is substituted
by and in that example, the time-reference of the first is taken to be actual
present and the second is interpreted as expressing habitual aspect, resulting
in semantic anomaly.
When the tenses of the atomic clauses are past, j and j' must obviously
both precede the time of utterance and for future, they must both follow it.
Further, as in the anomalous case just mentioned, both clauses must agree in
tense. This condition rules out combinations with time references on either
side of the time of utterance, as in:

(42) *Percy went fishing before he will leave for India.

Relation to the time of utterance also inhibits combinations in which the first
event is expressed in present perfect since the clause is then relevant to the
actual present and cannot precede another event prior to the present. This is
illustrated by:

(43) *Percy has been fishing before he went to China.

(43) would be possible only if before were regarded as elliptical for before
that - a rather forced interpretation!
The semantic rules for after and while are like (R.8) but with the obvious
adjustments to the linear relations between j and j'. The remarks concerning
tenses of the component propositions also, of course, require adaptation for
these connectives.
As a final case, it is to be noted that and frequently appears to be used
in a non-truth-functional manner to mean and then. Thus, the truth of (44)
depends not just on the values of the component propositions but upon the
assertion that the event described by the first preceded that described by the
second.

(44) The man went to the window and jumped out.

Clearly, in such cases, the conjuncts may not be exchanged. The question is
whether or not to account for this by ascribing a non-truth-functional use to
and or by assuming the presence at deep level of the temporal connective
then.
The fact that then does often occur in surface structures of the form (p
and then q) favours the view that and in examples like (44) is employed in
its usual logical sense as a simple conjunction and the failure of commutation
is to be attributed to a syntactic property of then by which it must follow the
clause which describes the earlier of two events. Presumably, the conjunction
then will have a semantic rule like that for before, though its syntax is rather
more complex.
An alternative approach to the problem of sentences like (44) would be to
presume that, at the level of logical form, each conjunct is explicitly marked
for time reference. If the first is true at j and the second at j ' , it is not
necessary to treat and as a temporal connective. The fact that the surface
order is inflexible reflects the well-known practice, in English, of mirroring
temporal in linear order.
There are, of course, other important connectives such as the correlatives,
Comp than and as ... as. Since these involve properties of nominals, adjec-
tives and adverbs, I shall discuss them opportunistically below as they most
naturally arise.

8.3 Nominals

8.3.1 Unitary nominals


In chapter 7, proper nouns were assigned to the same syntactic category as
other nominals, namely, (t/(t/e)) = NP. In chapter 1, I suggested that proper
nouns denote constant functions with unique individuals as values and that
is little different from saying that they denote such individuals directly, i.e.
are of semantic type, < e >. To justify this approach would require a review
of the extensive philosophical literature on the semantics of proper names.
I shall say, merely, that there seems to be good reason for regarding proper
nouns as lacking sense, unlike ordinary nouns. Hence, while it is always
possible in principle to identify the referent or extension of a given proper
noun, say Scott, it is not strictly possible to determine its sense.
The fact that the personal pronouns and anaphors function as natural-
language variables would seem to point to the reasonableness of treating them
also as of type < e > and as having, in consequence, a simple semantics.
There are, of course, questions of number, case and gender which would,
in many languages, figure in their semantic rules, but these are of limited
interest and I shall not set them out here. As chapter 5 demonstrated, the
most important semantic facts surrounding pronouns such as he or words
like himself have to do with coreference and the description of those facts
obviously belongs to the subtheory of Binding rather than the semantics of
the individual items.
While quantifier words like someone/everyone behave like proper nouns
from a syntactic point of view - they are derived members of category NP -
they cannot be made to denote individuals directly for the reasons discussed
in chapter 7. Thus, for example, someone denotes a property of properties
and this must be reflected in its semantic rule. Below, I give the outlines of
rules for three of the most important quantifier words, someone, everyone and
no-one. It should be stressed that these rules are outlines only. For example,
the later treatment of some ultimately has relevance for someone.

(R.9) V(someone) is that function FNP such that, for any α ∈ FIV, F(α) is
1 if and only if there exists at least one individual, a, such that a/x
satisfies the formulae human(x) and α(x).

(R.10) V(everyone) is that function FNP such that, for any α ∈ FIV, F(α)
is 1 if and only if for any individual, a, such that a/x satisfies the
formula human(x), a/x satisfies α(x).

(R.11) V(no-one) is that function FNP such that, for any α ∈ FIV, F(α) is 1
if and only if for any individual, a, such that a/x satisfies the formula
human(x), a/x fails to satisfy α(x).
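Over a toy model, (R.9)-(R.11) come out as quantification restricted to the set of humans; the domain and names below are illustrative only.

```python
# Sketch of (R.9)-(R.11): quantifier words quantify over the set of
# individuals satisfying human(x). Toy domain for illustration.
HUMANS = {"percy", "sally", "jack"}

def someone(pred):   # (R.9): at least one human satisfies pred
    return any(pred(a) for a in HUMANS)

def everyone(pred):  # (R.10): every human satisfies pred
    return all(pred(a) for a in HUMANS)

def no_one(pred):    # (R.11): every human fails to satisfy pred
    return not any(pred(a) for a in HUMANS)
```

Each quantifier word thus takes a predicate (an IV denotation) and returns a truth value, matching their status as derived members of category NP.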

In accord with the principle of compositionality, the denotation of a nom-
inal is arrived at through the denotations of its parts. Thus, it is necessary,
for the sake of complex nominals, to formulate semantic rules for nominal
constituents as described in chapter 7.

8.3.2 Common nouns and intransitive verbs


Since common nouns and intransitive verbs are logically of a kind, it follows
that their semantic rules should be similar. Ignoring intensions for the mo-
ment, both are of a category which has as its type a function with domain in
the set of individuals. The values of such functions will either be truth values
or broad satisfaction.
The general pattern for the relevant semantic rules can be stated as (R.12).

(R.12) If α ∈ {IV, CN}, then V(α) is that function F{IV,CN} such that, for
some set E in De, for any individual, a, a/x satisfies the formula
α(x) if and only if a ∈ E.

(R.12) is impoverished in that it makes the domains of common nouns
and intransitive verbs extensions rather than intensions. Since De includes
individual concepts of type < s, e >, this simplification is not harmful pro-
vided it is allowed for in those cases where an intensional interpretation is
required.
While (R.12) says rather little, it is to be noted that it establishes the
domains of IV and CN as sets of individuals. Thus, for example, frog(a)
is true if and only if the individual, a, is in the set, E, of frogs. Similarly,
walks(a) is true if and only if individual, a, walks. In like manner, humbug(a)
is met, i.e. appropriate, if there is an individual, a, who is considered by some
to be such, and overreacts(a) is met if at least some think that a has that
property.
Of course, the semantic rules for individual items must not be ambiguous.
Thus, if a given word, say bank, is used in the sample in different senses,
these uses must be reflected in separate semantic rules.

The domain of α in (R.12) is generalised to a set, E. Particular rules should,
obviously, be more specific so as to provide conditions on appropriate use.
Thus, for instance, laugh needs to be restricted to agents which have the
property of being human. Failure to include such a selectional restriction
would make the interpretation of (45) problematical.

(45) She laughed.

Similarly, it is necessary to mark the patient of resound as inanimate in view
of such aberrant strings as (46).

(46) *The tiger resounded.

While it might be argued that restrictions of this kind are not relevant to
truth conditions, they are indisputably relevant if the semantic rules are to
perform as filters, disallowing some expressions as semantically ill-formed.
For brevity, I shall not, however, set out selectional restrictions in detail.
Rule (R.12) applies equally to mass and abstract nouns as to their ordi-
nary counterparts with countable referents. Of course, the rules for specific
items, e.g. vapour/virtue, would require clauses specifying the mereological
characteristics of those elements of De in their domains. Such information
would be needed, for instance, for quantification. Thus, in (47), vapour must
be understood as part of all vapour there is in spite of the noncount status of
its referent, but in (48) the reference is to the universal body. On the other
hand, in (49) virtue refers to a distinct instance of virtue, while in (50) all
instances of virtue are intended.

(47) Vapour filled the room.

(48) Vapour is liquid.

(49) Percy displayed great virtue.

(50) Virtue is difficult to describe.

Finally, it is obvious that if α is the empty category H, introduced in
chapter 7, then the domain of α is the universal set.
Given these general remarks, the following are sample rules for tiger and
walk respectively.

(R.13) V(tiger) is that function FCN such that, for some set E, a/x satisfies
the formula tiger(x) if and only if a ∈ E.

(R.14) V(walk) is that function FIV such that, for some set E, a/x satisfies
the formula walks(x) if and only if a ∈ E.
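(R.13) and (R.14) amount to making tiger and walk characteristic functions of sets; a toy rendering, with the sets E invented for illustration:

```python
# Sketch of (R.13)-(R.14): a CN or IV denotes the characteristic
# function of some set E of individuals.
TIGER_SET = {"shere_khan", "hobbes"}     # illustrative E for 'tiger'
WALK_SET = {"percy", "hobbes"}           # illustrative E for 'walk'

def tiger(a): return a in TIGER_SET
def walks(a): return a in WALK_SET
```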

As observed above, these rules are not lexicographically very informative.
In particular, it seems circular to say that α(x) is satisfied by a if and only
if a is in the set E denoted by α. The intention is that α is a constant
defined according to the kind of detail set out in Katz-Fodor (1963) in
which selectional rules are displayed. Hence, for example, walks can be true
of animate beings only if they possess legs.

8.3.3 Logical quantifiers


In chapter 7, the term "quantifier" was employed very generally to include
any part of speech which the linguist would call a determiner. In Montague
(1973), only logical quantifiers were considered, namely, every, the and a/an.
Since such quantifiers are, in a sense, more basic than their proportional
counterparts, I shall discuss them first.
It will be recalled from chapter 7 that quantifiers are in (t/((t//e),(t/e))),
abbreviated Quant. That is, they are functions from common noun and in-
transitive verb functions to truth-values. In section 7.3.5, I gave somewhat
simplified versions of semantic rules for every, a/an and the. These may be
restated as follows - I shall, henceforth, frequently tie the reference of ex-
pressions to individuals rather than to sets, but that is for convenience merely.
Moreover, I shall assume the substitution (β/b) in the open sentence of the
lambda expression used to create a nominal.

(R.15) V(every) is that function FQuant such that, for any α ∈ FCN, F(α) ∈
FNP such that, for any β ∈ FIV, (F(α), β) is 1 if and only if for each
individual, a, such that a/x satisfies the formula α(x), a/x satisfies
β(x).

(R.16) V(a/an) is that function FQuant such that, for any α ∈ FCN, F(α) ∈
FNP such that, for any β ∈ FIV, (F(α), β) is 1 if and only if there
exists at least one individual, a, such that a/x satisfies the formulae
α(x) and β(x); and new information(α(a)).

(R.17) V(the) is that function F_Quant such that, for any α ∈ F_CN, F(α) ∈ F_NP
such that, for any β ∈ F_IV, (F(α), β) is 1 if and only if there is a
unique individual, a, such that a/x satisfies the formulae α(x) and
β(x); and old information(α(a)). If there exists no individual, a, such
that a/x satisfies α(x), (F(α), β) is #. If a/x satisfies α(x) but fails to
satisfy β(x), (F(α), β) is 0.
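The extensional core of (R.15)-(R.17) can be sketched computationally as functions from a common-noun extension and an intransitive-verb extension to a truth-value. The Python below is an illustrative model only: '#' stands in for the truth-value gap, and the sample sets are invented for the occasion.

```python
# Illustrative model of (R.15)-(R.17): quantifier denotations as functions
# from a CN extension and an IV extension to {1, 0, '#'}. Names and data
# are assumptions for the sketch, not part of the rules themselves.

def v_every(cn, iv):
    # 1 iff every individual satisfying alpha(x) also satisfies beta(x)
    return 1 if all(a in iv for a in cn) else 0

def v_a(cn, iv):
    # 1 iff at least one individual satisfies both alpha(x) and beta(x)
    return 1 if any(a in iv for a in cn) else 0

def v_the(cn, iv):
    # '#' if existence or uniqueness fails, as in the final clauses of (R.17)
    if len(cn) != 1:
        return '#'
    return 1 if next(iter(cn)) in iv else 0

tigers = {'t1', 't2'}
walkers = {'t1', 't2', 'percy'}
```

On this model, v_every(tigers, walkers) is 1, while v_the(tigers, walkers) is '#', since the uniqueness presupposition fails.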

Clearly, these rules are not sufficient, in themselves, to provide fully for
the meanings of these quantifiers. In the case of every, it would be appropriate
to add a clause stating that a, in the domain of α, must be a member of a
subset of D_e of cardinality ≥ 3. This is so because of the unacceptability of:

(51) *Percy has a book in every hand.


In this, every differs from each which may be used of sets containing two
or more members. Obviously, these numerical restrictions filter out occur-
rences of every and each with mass nouns. The quantifier all is insensitive
to countability.
More seriously, in the cases of a/an and the, the rules reflect their par-
ticular usage only. This is not a trivial matter, as the discussion in chapter
7 indicated. It is, in particular, not quite clear how to incorporate Chafe's
(1970) distinction between bounded and unbounded sets which appears to
be involved in the difference between universal the and a/an in spite of its
intuitive appeal.
The rules also fail to mention the specific/nonspecific readings of the in-
definite article which may be semantically significant, for example, in respect
of alternative intensional and extensional readings. I shall, however, not write
the rules in greater detail here in the interests of economy.
There are several universal quantifiers in English in addition to every, each
and all. Two of the most interesting are both and either. These quantifiers
have the peculiar property of being restricted to sets of exactly two members.
Thus, in (52), two and only two directions are involved:
(52) Sally looked both/either way(s) before crossing the river.

It is, in fact, rather difficult to say precisely what is the difference between
these words. Certainly, in (52), itself, no possible change in truth-value or ap-
propriateness is involved in the choice between them. It does seem, however,
that either implies that the individuals in its scope are discrete, whereas both
is ambiguous in this respect. This difference, though apparently nebulous,
must be allowed for in their semantic rules since the truth-value of a given
sentence may, in some cases, depend on it. Thus, (53) is true just in case
Percy was holding two books, whereas (54) may be true if only one book
was involved.

(53) Percy held a book in either hand.


(54) Percy held a book in both hands.
This suggests the following approximation to a semantic rule for either.

(R.18) V(either) is that function F_Quant such that, for any α ∈ F_CN, F(α) ∈
F_NP such that, for any β ∈ F_IV, (F(α), β) is 1 if and only if, for any
duple δ ⊆ D_e, if δ/x satisfies the formula α(x), δ/x satisfies β(x) and,
for any a, a' ∈ δ, a, a' are discrete.

Doubtless, the discretive feature of the quantifier either is related to the
same property in the exclusive disjunction either or. By contrast, although
both is ambiguous in a case like (54), it frequently implies a continuity
among the members of the set in its domain. This continuous feature of both
is probably at the heart of its use in the compound conjunction both and
which contrasts with either or, as in:

(55) Percy both/either swims and/or runs.

Intuitions of this sort seem to support Cresswell (1973), who treats either
and both in such conjunctions as shallow structure devices for marking the
scope of the logical operators or and and. A parallel explanation seems apt
for the appearance of then in the compound if then.
We may also attribute the use of both to disambiguate or, in the phrases
or both and but not both to this same scope marking function. The relevant
structures have roughly the following forms:

(56) ((p ∨ q) but not {(p & q), both}).

(57) ((p ∨ q) or {(p & q), both}).

An important property of both as a quantifier is that it signals old
information. Thus, for example, (58) is inappropriate if the men in question are
not already in the contextual domain:

(58) Both men were looking under the bonnet.

In this respect, both is like the. It seems likely that it also shares the's
presupposition of uniqueness. Thus, (59) would seem to be vacuous rather
than strictly false if Percy either has only one daughter or more than two:

(59) Percy loves both of his daughters.

A possible semantic rule for both would be:

(R.19) V(both) is that function F_Quant such that, for any α ∈ F_CN, F(α) ∈
F_NP such that, for any β ∈ F_IV, (F(α), β) is 1 if and only if there is
a unique duple δ ⊆ D_e, such that if δ/x satisfies the formula α(x),
it satisfies β(x), and for any a, a' ∈ δ, a, a' are continuous; and old
information(α(δ)). If there exists no duple, δ, such that δ/x satisfies
α(x), (F(α), β) is #. If δ/x satisfies α(x) but fails to satisfy β(x),
(F(α), β) is 0.
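Read set-theoretically, (R.18) and (R.19) can be sketched as below. The discreteness predicate is passed in as a parameter, since the rules leave its content open, and the continuity clause of (R.19) is omitted for brevity; all data are illustrative.

```python
# Sketch of (R.18)/(R.19) over duples (two-member sets). The 'discrete'
# predicate is supplied by the caller; names and data are illustrative.

def v_either(duple, iv, discrete):
    # 1 iff both members of the duple satisfy beta(x) and are discrete
    a, b = tuple(duple)
    return 1 if a in iv and b in iv and discrete(a, b) else 0

def v_both(duples, iv):
    # '#' unless the CN picks out a unique duple (cf. the vacuity of (59));
    # the continuity clause of (R.19) is not modelled here
    if len(duples) != 1:
        return '#'
    return 1 if all(a in iv for a in duples[0]) else 0

hands = {'left-hand', 'right-hand'}
holding_a_book = {'left-hand', 'right-hand'}
```

With these data, both (53) and (54) come out true; the uniqueness clause of v_both returns '#' when no duple, or more than one, satisfies the common noun.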

In this outline, I have not mentioned the possibility of a and a' being discrete
since that use of both requires its own semantic rule.
In spite of its status as the paradigm instance of an existential quantifier,
some is semantically quite complex.
In chapter 4, I discussed some examples of scope differences, including
some involving negation and every or any. Reference was there made to
Quine's observation (1960) that every has narrow scope while the scope of
any is wide. These scope differences appear again in alternative negations of
certain existential statements, as in:

(60) Sally wrote some letters.

a. Sally didn't write some letters/every letter.

b. Sally didn't write any letters.

The reason for the options in the negation of (60) is to be found in the
polysemy of some, which can either have its original specific sense 'certain' -
Old English sum gelaerned munuc 'a certain learned monk' - or the later
nonspecific meaning. While on both readings, some signals new information,
on the specific interpretation, the negation of (60) is either of the options in
(60a), i.e. an O-type statement symbolisable, very roughly, as:

(60) c. (∃,x) (Letter(x) & ¬wrote(Sally, x)).

In the nonspecific reading, (60) is negated as (60b), i.e. an E-statement,
symbolised:

(60) d. (∀,x) (Letter(x) → ¬wrote(Sally, x)).

(60a, c) will be true if and only if there are some specific letters such
that Sally didn't write them - I do not mean to suggest that the letters need
actually exist, of course - , while (60b, d) will be true just in case Sally did
not engage in letter-writing.
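The difference between (60c) and (60d) is purely one of quantifier scope, which a small extensional model makes vivid. The domain and the wrote relation below are invented for illustration.

```python
# Toy model of the two negations of (60): O-type (60c) vs E-type (60d).
# Domain and relation are illustrative assumptions.

letters = {'l1', 'l2', 'l3'}
wrote = {('sally', 'l1')}        # Sally wrote letter l1 only

def o_negation(agent):
    # (60c): there is some letter x such that not wrote(agent, x)
    return any((agent, x) not in wrote for x in letters)

def e_negation(agent):
    # (60d): for every letter x, not wrote(agent, x)
    return all((agent, x) not in wrote for x in letters)
```

Here o_negation('sally') holds, since l2 and l3 went unwritten, while e_negation('sally') fails, since Sally did write l1.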
The set of possible denotations of letter is, of course, made up of countable
things. Some is also used with mass nouns, as in:

(61) Sally drank some water.

Although the difference between count and noncount might, at first, appear
to have no bearing on truth-assignments, it can be crucial, as in:

(62) Jack put some chicken in the oven.

Clearly, (62) is ambiguous between count and noncount readings - reflected
in the alternative phonetic realisations of the quantifier. On the first, the whole
is true if and only if there exists at least one chicken such that Jack put it into
the oven. On the mass reading, (62) is true just in case Jack put a quantity
of chicken-meat into the oven.
Perhaps the most important point about some in natural English - though a
point usually thought too obvious to merit dwelling upon - is that, used with
count nouns, it somewhat rarely means 'there exists at least one' but, rather,
it normally means 'there exist at least two'. Thus, while (63) is perfectly
acceptable, (64) represents a more common usage.

(63) Some man was sitting under the trees.

(64) Some men were sitting under the trees.

Obviously, (63) - ultimately attributable to the Old English usage referred
to above and denoted Some_1 in chapter 7 - will be true if there was at least
one man sitting under the trees. (64), on the other hand, will be true only if
the number of men concerned was greater than one.
These remarks are not exhaustive. However, they suggest at least the
following outline of a disjunctive semantic rule for some.

(R.20) V(some) is that function F_Quant such that, for any α ∈ F_CN, F(α)
∈ F_NP such that, for any β ∈ F_IV, (F(α), β) is 1 if: either (I) there
exists at least one individual, a, such that a/x satisfies the formulae
α(x) and β(x) and countable(a) and new information(α(a)); or (II)
(I) holds save that a is noncountable; or (III) either (I) or (II) holds
and specific(a); or (IV) either (I) or (II) holds and nonspecific(a);
or (V) there exist at least two individuals, a, a', such that a/x and
a'/x satisfy the formulae α(x) and β(x) and countable(a, a') and new
information(α(a, a')) and either a and a' are specific or nonspecific.
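Clauses (I) and (V) of (R.20) - the 'at least one' reading of (63) and the 'at least two' reading of (64) - reduce to cardinality conditions on the intersection of the two extensions. A minimal sketch, with the information-status and (non)specificity clauses left out:

```python
# Minimal sketch of clauses (I) and (V) of (R.20); the new-information and
# (non)specificity conditions are not modelled.

def v_some_sg(cn, iv):
    # clause (I), the Some_1 of (63): at least one individual satisfies both
    return 1 if len(cn & iv) >= 1 else 0

def v_some_pl(cn, iv):
    # clause (V), as in (64): at least two individuals satisfy both
    return 1 if len(cn & iv) >= 2 else 0
```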

This disjunctive format is, of course, prompted by the demands of economy.


We ought, properly, to distinguish between several quantifiers called some as
suggested in chapter 7.
In rule (R.20), I have not attempted to reflect the fact that, in English,
some is used in affirmative statements, but is usually substituted by any in
negative and interrogative contexts. To include this information would not
be difficult, but it is superfluous in view of the fact that the alternation is not
actually obligatory.
In chapter 7, I proposed that unitary nominals, including mass and abstract
nouns as well as undetermined plurals, should be regarded as quantified by
the abstract item zero. When the noun is mass, or count and plural, such
unitary uses may be either existential or universal as the following mass
cases demonstrate:

(65) Water flowed down the street.

(66) Water is fluid.

It would appear, therefore, that zero should have a semantic rule which al-
lows for either interpretation. In stating such a rule, it is obviously necessary
to provide for the exclusion of count singular nouns, but, otherwise, the spec-
ifications seem uninteresting and I shall not, therefore, discuss them further.

8.3.4 Proportional quantifiers


While the ability to express relative proportions is restricted in most formal
systems to the comparison of quantities which can be accurately measured,
natural languages, as the discussion in chapter 7 demonstrates, have the addi-
tional virtue of being able to express proportions where precise measurement
is impossible or irrelevant.
Thus, while formal systems, such as arithmetic, use the relations > and
< to express measurable differences in size or quantity, natural languages
employ, in addition, subsystems of proportional quantifiers which provide for
statements concerning states of affairs which are not at all precise. Whereas,
in arithmetic, > and < may only be predicated of two sets if the one is literally
greater or less than the other, in a natural language, as well as their logical
use, the items more and less may be used impressionistically of properties
like intelligence which defy accurate, objective measurement. Further, while
arithmetic may make reference to sets of any size whatever, a natural language
is blessed with the additional capacity to refer to given numbers or spatial
dimensions as large or small with innumerable intermediate degrees.
Apart from the logical uses of more/most, fewer/fewest and less/least, the
meanings of proportional quantifiers cannot be reduced to precise semantic
rules. Words like many/few are excluded from classical logics because they
reduce to existential quantification. Thus, (67) is true, strictly speaking, just
in case there exists at least one individual who is both a girl and plays tennis.

(67) Many/Few girls play tennis.

Whatever else is conveyed by (67) is a matter of opinion and, therefore, has
nothing to do with logic. It is this fact which motivated the remark made in the
previous section that logical quantifiers are more basic than their proportional
counterparts.

People's opinions on what does or does not constitute a large or a small set
are not, of course, entirely idiosyncratic. Thus, as Sapir (1944) pointed out,
there is more-or-less universal agreement that the word many applied to errors
on a page of typing signifies a quantity which is considerably smaller than
that conveyed by many applied to visible stars on a clear night. Thus, while
assertions of proportion may often be approximate and thus not demonstrably
true or false, they are commonly to be regarded as reasonable or unreasonable,
i.e. appropriate/inappropriate.
In light of these considerations, the semantic rules for proportional
quantifiers are only precise in the cases of more/most, fewer/fewest and less/least and
then only when the proportions are accurately measurable. In all other cases,
the rules do not lay down the conditions for a statement's being demonstrably
true or false, but, merely, for its being appropriate or satisfiable in the broad,
pragmatic sense of that term. They are, moreover, question-begging. To say
that many indicates a large quantity is informative only in being positive
rather than negative; for the rest, it relies upon the good will of the reader
for its sense.
As a preliminary, let us assume that Comp has combined with many and
much to yield more. In its precise use, more is restricted to the quantifi-
cation of things very broadly conceived as including both countables and
noncountables. The semantic rule for precise more has the following rough
formulation:

(R.21) V(more) is that function F_Quant such that, for any α ∈ F_CN, F(α) ∈
F_NP such that, for any β ∈ F_IV, (F(α), β) is 1 if and only if, for any
set E ⊆ D_e, if E/x satisfies the formulae α(x) and β(x), then for any
set E' ⊆ D_e distinct from E, such that E'/x satisfies α(x) and fails to
satisfy β(x), E > E'.

The rules for fewer/less would be as above, but with < for > .
(R.21) assumes nothing regarding the sizes of the sets concerned beyond
their relative proportions.
The rule for most could be simply stated by assuming an additional set E''
and asserting that E > E' relative to E''. This assumption is warranted by the
fact that, although most is logically equivalent to more when two quantities
only are compared, it is used in natural English when three or more sets are
involved. When we claim:

(68) Most people love tennis.

we assert not merely that the set of tennis-lovers is greater than that of
nonlovers of tennis, but that the size of the former more nearly approaches the
totality of the set of all people than does the latter. This additional assertion
is implicit in more; in the case of most it is explicit.
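In its precise use, the set-comparison at work in (R.21), and the three-set account of most just sketched, can be modelled as follows; the sets are assumed finite, so that > may be read as a comparison of cardinalities.

```python
# Sketch of precise 'more' (R.21) and three-set 'most'. Finite sets only,
# so E > E' is modelled as a comparison of cardinalities.

def v_more(e, e_prime):
    # E satisfies alpha and beta; E' satisfies alpha but fails beta
    return 1 if len(e) > len(e_prime) else 0

def v_most(cn, iv):
    # the beta-satisfiers outnumber the non-satisfiers within the CN set,
    # i.e. they more nearly approach the totality, as with (68)
    return 1 if len(cn & iv) > len(cn - iv) else 0
```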
The superlative counterpart of rule (R.21) is appropriate for the abstract
entity Super - with < for > where necessary. However, (R.21) is too crude
for Comp generally. This is so because conjunctions with Comp than do not
always contrast positive and negative values for the same property. Thus,
while (69) is commonplace, so is (70).

(69) More people like walking than do not.

(70) More people like walking than like running.

In (70), no assertion whatever is made either about the number of people who
do not like walking or those who do not like running - although it is entailed
that the set of those who dislike running is greater than that of those who
dislike walking. For example, (70) would be true if every single person liked
walking and all but one liked running. It may also be true if more people
dislike walking than take pleasure in that exercise.
Thus, Comp than has a semantic rule which expands on (R.21) with clauses
along the following lines - squares enclose rough examples:

(R.22) V(Comp than) is that function F_Correl such that, for any γ ∈ D, and
β, β' ∈ F_IV, F(β, γ) ∈ F_IV such that, for any sets E, E' ⊆ D_e, E/x
satisfies the formula (F(β, γ))(x) if:
either (I) γ = β(E') [runs,x - runs,y];
or (II) γ = β(E, E') [has,x,y - has,x,z];
or (III) γ = β'(E) [runs,x - walks,x];
or (IV) γ = β'(E') [runs,x - walks,y]; and E >, or <, E'.

There are several simplifications in (R.22). First, since Comp than struc-
tures are as common in subjective comparisons as in logical ones, the satis-
faction of the formula is not necessarily in terms of truth. Second, pertaining
to the first point, the relations > and < are completely vague when subjective
comparisons are involved. Third, the sets Ε and E' are fuzzy when properties
such as beauty are compared.
It is to be noted that a rule along the lines of (R.22) is applicable no matter
what dimension is in comparison. Thus, (71) is true if and only if the set of
points which constitute line A is greater than the set which is line B:

(71) Line A is longer than line B.

Similarly, with other accurately measurable properties such as speed, as in:



(72) Sally runs faster than Jack.

or mass:

(73) Sally weighs less than Jack.

(R.22) also outlines the foundation for the evaluation of comparisons involv-
ing sets of events, as in:

(74) Lions growl more than they roar.

The rule for equatives will be along the same lines as that for Comp than.
The significant difference will, of course, be that it involves the relation
= rather than > and < . This is important since it reflects the fact that the
equative correlative as as is commutative whereas its comparative counterpart
Comp than is not.
Turning to non-logical proportional quantifiers, we find that they fall into two
natural classes, those which assert that a given proportion is large and those which
assert that it is small. Each of these classes is made up of quantifiers which
modify count nouns and others which modify mass nouns. Thus, many is in
the first group and takes count nouns, while much is in the same group and
takes mass nouns. Few is in the second class and takes count nouns, whereas
little is in the same class but takes mass nouns.
The semantic rule for many will crucially involve satisfaction in its broad,
pragmatic sense in addition to truth, and may be roughly outlined as follows:

(R.23) V(many) is that function F_Quant such that, for any α ∈ F_CN, F(α) ∈
F_NP such that, for any β ∈ F_IV, (F(α), β) is met if and only if, for
some set E such that E/x satisfies the formulae α(x) and β(x), E is
countable and the cardinality of E = n and large(n).

As confessed above, a rule like (R.23) is question-begging and imprecise.
Obviously, what is large for one person may well be small for another. The
size of n is also, as stated, contextually determined.
The choice between claiming that many requires that n be large or that it be
not small is, however, not arbitrary. As remarked earlier, gradables, including
proportional quantifiers, have the property that their assertion implies the
negation of their opposites, but their denial does not imply the assertion of
the opposites. Thus, if ¬small were to replace large in (R.23), many would
be glossable as of average size and the negation of many would incorrectly
claim that n was small, i.e. ¬many would equal few.
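The contextual character of (R.23) can be made explicit by treating large as a contextually supplied predicate on n, in the spirit of the Sapir contrast noted above; the thresholds below are, of course, arbitrary assumptions.

```python
# (R.23) with 'large' as a contextual parameter: the same cardinality can
# satisfy 'many' in one context and fail it in another. Thresholds are
# arbitrary illustrative assumptions.

def v_many(cn, iv, large):
    n = len(cn & iv)          # the cardinality of the satisfying set
    return 1 if large(n) else 0

typing_errors_context = lambda n: n >= 5      # errors on a page of typing
visible_stars_context = lambda n: n >= 1000   # stars on a clear night
```

With seven satisfying individuals, v_many is met under the typing-errors context but not under the visible-stars context.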
It is to be noted that the appearance of = in (R.23) allows positive many to
occur in equative constructions and automatically excludes it from comparative
conjunctions with Comp than. Thus, (75) is acceptable, but (76) must
be filtered out:

(75) Sally has as many books as Percy.

(76) *Sally has many books than Percy.

This is not, of course, to claim that (77) is anomalous:

(77) Sally has many more books than Percy.

In such structures, the conjunction is still Comp than and the function of
many is akin to that of an intensifier. In these constructions, the quantifier
is, as usual, sensitive to the count status of the head noun, as is seen by
comparing (77) with:

(78) Sally has much more money than Percy.

The rule for much is just like that for many but replacing "countable"
by "noncountable". Similar rules are also appropriate for quantifiers in the
second group like few and little, again with the obvious adjustments.
Before leaving the rules for proportional quantifiers, it is important to
acknowledge that their use is not necessarily constant. An interesting and
well-known illustration of the way in which the meaning of a proportional
quantifier can be modified under the influence of other items is provided by
few when it occurs within the scope of a, as in:

(79) Sally has a few books.

If the indefinite article is removed from (79), the whole amounts to a negative
assertion to the effect that the number of books concerned is not great. In
(79) as it stands, the assertion may be interpreted as positive in that it claims
that Sally has some books. If the nominal is further expanded into quite a
few books, then the number concerned is even asserted to be large.

8.3.5 Partitives and genitives


As noted in chapter 7, both partitive and genitive constructions form important
subgroups of nominals. Their semantic rules are roughly as follows.

(R.24) V(of_part) is that function F_of such that, for any α, β ∈ F_NP, F(α, β) ∈
F_NP such that, for any γ ∈ F_IV, (γ(F(α, β))) is {1, met} if and only
if, for any sets E and E' such that E/x and E'/y satisfy the formulae
α(x) and β(y) respectively, E/x satisfies the formula γ(x) and E ⊆
E'.

This rule allows for partitive constructions of any kind, including those
in which a has a numerical or proportional determiner, with or without the
empty category Η as head, e.g. six/many/most of the priests or six members
of the committee. The possibility of universal quantification over E, as in
all/each of the books requires the use of ⊆ in the statement of the rule.
The condition that 7(x) be satisfied by Ε reflects the fact that in partitive
constructions intransitive verbs are predicated of the part rather than the
whole - analogous considerations apply to genitives.

(R.25) V(of_gen) is that function F_of such that, for any α, β ∈ F_NP, F(α, β) ∈
F_NP such that, for any γ ∈ F_IV, (γ(F(α, β))) is {1, met} if and only
if, for any sets E and E' such that E/x and E'/y satisfy the formulae
α(x) and β(y) respectively, E/x satisfies γ(x) and belong-to(E, E').

The referents of a and 0 are specified as sets rather than just individuals in
light of such expressions as the wishes of the people. It will further be noted
that the predicate belong-to is far from explicit. In the rule, belong-to is
intended to cover all types of possession, including alienable and inalienable.
I take of in such phrases as the murder of Smith to be vacuous.
The rule for 's is similar to that for genitive of save that β must be
specified as a common noun rather than a nominal and that E/y satisfies β(y)
and E'/x α(x), so that γ takes the correct argument.
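The set-theoretic skeletons of (R.24) and (R.25) can be sketched as below. The belong-to predicate is passed in by the caller, since the rule deliberately leaves the mode of possession open, and all data are illustrative.

```python
# Skeletons of (R.24)/(R.25): the part must be included in the whole, and
# the intransitive verb is predicated of the part. 'belong_to' is supplied
# by the caller; names and data are illustrative assumptions.

def v_of_part(e, e_prime, iv):
    # gamma(x) holds of E, and E is (possibly improperly) included in E'
    return 1 if e <= e_prime and all(a in iv for a in e) else 0

def v_of_gen(e, e_prime, iv, belong_to):
    # gamma(x) holds of E, and E stands in belong-to to E'
    return 1 if belong_to(e, e_prime) and all(a in iv for a in e) else 0

priests = {'p1', 'p2', 'p3', 'p4', 'p5', 'p6'}
```

So six of the priests sang comes out true just in case the six form a subset of the priests and each of them sang.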

8.4 Some verb types

In the above remarks, the verbal element of any sentence has been treated
merely as intransitive. That was reasonable since, with the aid of lambda
abstraction, all alternative structures involving transitive verbs ultimately be-
come intransitive. However, it is clearly necessary to provide semantic rules
for basic transitives. In this section, I shall give rules for a selected few,
largely avoiding some classes and topics such as performatives and presup-
positions discussed in chapter 6.
As observed in chapter 7, it is a considerable virtue of Cresswell's (1973)
treatment that transitive verbs like marry are analysed simply as taking two
arguments of category ve, i.e. two term variables. We may, therefore, state
the semantic rule for a two-place transitive verb like date as:

(R.26) V(date) is that function F_TV such that, for any x, y ∈ V_e and
individuals, a and a', a'/y satisfies the formula in (λ,y(F(x,y))) and a/x
the formula in (λ,x(F(x,a'))) if and only if a and a' are human and
not intensional and form the pair <a, a'> such that a dates a'.

Thus, a sentence like:

(80) Percy dated Sally.

is true if and only if the appropriate ordered pair consisting of the denotata
of Percy and Sally satisfies, in the strict sense, the formula dated(x,y). Of
course, if either a or a' is not human, the relation date fails and the relevant
formula is not satisfied.
As in the above rule and as observed earlier, appropriate selectional re-
strictions require to be built into the rules for most lexical words, including
verbs, common nouns and adjectives. Thus, the patient of eat must be marked
as eatable and its agent as animate. In the case of a verb like murder, both
agent and patient must be human and, in addition, they must be distinct. The
decision how much information of this sort is to be included in any particular
case must be taken ad hoc, depending on the aims of the analysis and sample
size. The finer the filtering, the more specific the selectional restrictions.
Of course, the verb date is typical only of a certain type of two-place verb.
Equally important are verbs like seek which are intensional. As previous dis-
cussion has indicated, the treatment of such verbs can be quite complicated.
However, since De includes individual concepts, the rules for such inten-
sional verbs may still be stated on the pattern of the rule for date with the
clause disallowing intensional objects suitably adjusted. This is broadly in
line with Cresswell's (1973) treatment and is, in my view, what Montague's
(1973) meaning postulates ultimately amount to. Where he decrees that the
intensional operator, ^, be substituted by its extensional counterpart, ˇ, in the
rules for specific verbs, the above suggestion is that ^ be allowed in the rules
for some verbs and disallowed for others. The only advantage of Montague's
treatment is that it is uniformly intensional.
Verbs of degree higher than 2 will, of course, need to have the number of
their places precisely specified. I shall not take that matter further here beyond
noting that it is not always easy to say how many places are involved. Thus,
for example, kill is 3-place since the action it denotes necessarily involves
an instrument in addition to the agent and patient. By contrast, the position
is not so clear with a verb like see since arguments for and against regarding
the eyes as instrumental may be put forward.
As observed in chapter 7, there are many cases in which the same lexical
item may represent either a transitive or an intransitive verb. Thus, burn
will require two distinct semantic rules corresponding to alternative lexical
entries.
In chapter 7, I also discussed the difference between phrasal verbs like
turn off and verbs like turn which may be accompanied by prepositional
adverbs. Since, in the case of the former, the particle is semantically part of
the verb, the complex expression merits its own semantic rule, as in:

(R.27) V(turn off) is that function F_TV such that, for any x, y ∈ V_e and
individuals, a, a', a'/y satisfies the formula in (λ,y(F(x,y))) and a/x
the formula in (λ,x(F(x,a'))) if and only if a is animate and a' is
¬animate and neither a nor a' is intensional and a turns a' off.

Of course, other phrasal transitives, such as look for, may require different
selectional restrictions, e.g. in respect of the animate vs inanimate contrast,
or the distinction between intension and extension, but otherwise, they will
have similar rules to turn off.
Verbs which figure prominently in philosophical and linguistic discussion
are those which take sentential and those with infinitival complements. Such
verbs as believe and want were discussed earlier in the context of de dicto/de
re readings, chapter 4, section 4, and I shall not belabour those points here.
Obviously, however, it is necessary now to consider some of the issues which
arise in respect of the relevant semantic rules.
It will be recalled that Cresswell (1985) distinguished between two
underlying that-complementisers, that_o with domain in the set, P, of
propositions, and that_s whose domain is the references of the parts of
sentences, i.e. the senses of propositions denoted by complement clauses.
These different complementisers provide a straightforward basis for explaining the ambiguity of
a sentence like:

(81) Percy believes that 5 + 7 = 12.

which on the reading with that_o entails (82), but on that with that_s does not.

(82) Percy believes that 8 + 4 = 12.

In chapter 7, that was assigned to a category which takes a member of
category t as argument and yields a nominal as value. Since we distinguish
between two distinct that-complementisers, it is necessary to provide two
semantic rules. The following approximations are modelled on Cresswell's
(1973) rule. They differ from his in that the first allows for an option in the
choice of main verb - Cresswell allows only for intransitives. I also specify
the object domain of the main verb as the set of propositions, P, or senses
of propositions, P_s. Finally, I include the subject condition for the entire
sentence.

(R.28) V(that_o) is that function F_that such that, for any γ, F(γ) ∈ F_NP such
that: (I) for any β ∈ F_IV, (F(γ))/x, = a'/x, satisfies the formula in
(λ,x(β,x)), = (λ,x(x,β)), if and only if γ ∈ P; or (II) for any β' ∈
F_TV, (F(γ))/y, = a'/y, satisfies the formula in (λ,y(β'(x,y))) if and
only if γ ∈ P and, for any individual, a, a/x satisfies the formula in
(λ,x(β'(x,a'))) only if a is human.

a. V(that_s) is that function F_that such that, for any γ, F(γ) ∈ F_NP such
that, for any β ∈ F_TV, (F(γ))/y, = a'/y, satisfies the formula in
(λ,y(β(x,y))) if and only if γ ∈ P_s, and for any individual, a, a/x
satisfies the formula in (λ,x(β(x,a'))) only if a is human.

(R.28, 28a) ensure that the two complementisers are assigned their correct
arguments. In addition, (I) of (R.28) guarantees that, for any appropriate
intransitive, say is true, and complement clause, say Jack runs:

(83) (That Jack runs is true) = (It is true that Jack runs).

The exemplification (83) is, of course, simplified. I shall not attempt to
show the derivation of the dummy subject it, but assume it to be a
consequence of extraposition of the complement clause. Further, I presume
that that_o alone has the choice of transitive or intransitive main verb - a
presumption based on intuitions arising from the discussion of the truth-predicates is
true/is false in chapter 4.
As Cresswell (1985) demonstrates, one considerable advantage in treating
that clauses as nominals is that the semantics of verbs like entails can then
be similar to other transitives. Thus, the deep structure for (84) is (84a).

(84) That Sally cooks well entails that Sally cooks.

a. ((that(Sally cooks well))(λ,x((λ,y(entails,x,y))(that(Sally cooks))))).

In keeping with the discussion of entailment in chapter 3, the semantic
rule for entails will then be:

(R.29) V(entails) is that function F_TV such that, for any x, y ∈ V_e, q/y satisfies
the formula in (λ,y(F(x,y))) and p/x the formula in (λ,x(F(x,q)))
if and only if p and q are propositions and (p ⊨ q).

The semantic rule for presupposes will be along the lines of that for entails
save that the relation needs to be changed: the whole is # if q is false, and
q is true if p is either true or false.
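With propositions modelled, as is usual in possible-worlds semantics, as sets of worlds, the rules for entails and presupposes can be sketched as follows; the world labels are arbitrary, and '#' again marks a truth-value gap.

```python
# Entailment and presupposition over propositions-as-sets-of-worlds.
# World labels are arbitrary illustrative assumptions.

def v_entails(p, q):
    # p entails q iff every p-world is a q-world
    return 1 if p <= q else 0

def v_presupposes_at(p, q, world):
    # the whole is '#' at a world where the presupposition q fails;
    # otherwise it takes the truth-value of p at that world
    if world not in q:
        return '#'
    return 1 if world in p else 0
```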
If the complement verb is factive, the complement must be true in order
for the whole to be true. A much abbreviated outline of a rule for factive
know is the following:

(R.30) V(know) is that function F_TV such that, for any x, y ∈ V_e and γ ∈ P
such that (that,γ) ∈ F_NP, (that,γ)/y, = a'/y, satisfies the formula in
(λ,y(F(x,y))) if and only if γ is 1, and a/x satisfies the formula in
(λ,x(F(x,a'))) only if a is human.

The rules for verbs like believe will differ from that for know in that the
complement, 7, is required to be true not in fact, but in the belief-world of
the individual, a, who is the value of the subject NP. This is in line with
Hintikka's treatment (1962).
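The contrast between factive know and believe can be sketched in the same worlds model: know requires truth at the actual world, believe only truth throughout the subject's belief-worlds. The belief-sets and the choice of actual world are illustrative assumptions.

```python
# Factive 'know' vs 'believe' in a worlds model, after (R.30) and the
# Hintikka-style remark on belief. All data are illustrative assumptions.

ACTUAL = 1
belief_worlds = {'percy': {1, 2}}   # worlds compatible with Percy's beliefs

def v_believe(agent, gamma):
    # true iff gamma holds throughout the agent's belief-worlds
    return 1 if belief_worlds[agent] <= gamma else 0

def v_know(agent, gamma):
    # belief plus factivity: gamma must also hold at the actual world
    return 1 if ACTUAL in gamma and belief_worlds[agent] <= gamma else 0
```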
Cresswell (1973) provides a detailed treatment of the semantic rule for
want. I shall not summarise his discussion here. However, it should be ob-
vious that, unlike propositional attitude verbs such as believe, which take
sentential complements, want may take either an infinitival clause or ordi-
nary NP as its complement, as in:

(85) Sally wants to take a taxi/Sally wants a taxi.

In the infinitival case, the subject of the complement may be Pro, as in (85), in which case it might be advisable to regard the NP alternative as the result of some reduction transformation on the underlying infinitive; alternatively, the subjects of the main and complement clauses may be disjoint, as in (86). In that case, any reduction is unthinkable. Thus, the alternative (87) clearly does not derive from (86).

(86) Sally wants Jack to take a taxi.

(87) Sally wants Jack.

In addition, it is important that, when the complement of want is an infinitive, the proposition expressed by that clause is necessarily nonfactive. It is central to the meaning of want that the objectivum desideratum should not be known to be a fact - the fundamental role played by the verb's original sense 'to lack' is obvious.
The construction of a semantic rule for want is far from straightforward.
One approach would be to distinguish between different verbs called want.
One such verb would take infinitive complements and the other would have
ordinary NPs as complement. Another approach would be to assume that the
verb always has an infinitive complement in deep structure and to derive
the NP structures transformationally under the conditions alluded to above.

Either alternative has both advantages and disadvantages. If separate verbs are distinguished, the obvious synonymy between sentences like those in (85) is not attributable to mere deletions. If only one verb is recognised, specifications of the transformational deletions may become difficult to credit, as in the case of (87).
Since no item may be ambiguous, I adopt the first alternative, sketched in
the following disjunctive rule.

(R.31) V(want) is that function F_TV such that, for any x,y ∈ V_e: for any individuals, a, a', such that a'/y satisfies the formula in (λ,y(F(x,y))) and a/x satisfies (λ,x(F(x,a'))): either (i) a' ∉ P and a ≠ a'; or (ii) a' = (Pro,to,IV); or (iii) a' = (a'',to,IV) and a'' ≠ a.

In chapter 6, I discussed implicative verbs such as succeed/manage. As that discussion demonstrated, the interesting property of such verbs is that the propositions in which they appear always presuppose others involving some such predicate as try. An abbreviated semantic rule for manage is the following:

(R.32) V(manage) is that function F_TV such that, for any x,y ∈ V_e, at time, j, a'/y satisfies the formula in (λ,y(F(x,y))) and a/x satisfies (λ,x(F(x,a'))) if and only if a' = (Pro,to,IV) and, at some time, j', prior to j, a/x satisfied the formula in (λ,x(try to achieve(x,a'))).

Rule (R.32) specifies the subject of the infinitival complement as Pro since
the subject of manage in such constructions must always be identical with
that of the complement. Of course, to draft rules for other implicatives would
require taking syntactic differences into account. Thus, for instance, succeed
takes a gerundive in doing x as complement, not a straightforward infinitive.
It is also obvious that the subjects of implicative verbs must be restricted to
animate beings.
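
The temporal presupposition built into (R.32) can be caricatured as follows. Times are integers and the record of past attempts a simple list; both, like the flattening of presupposition failure into plain falsity, are assumptions of the sketch.

```python
# Toy implicative 'manage' after (R.32): succeeding at time j
# presupposes a try at some earlier time j'. Presupposition failure is
# collapsed into plain falsity here, a deliberate simplification.

def managed(agent, act, time_j, try_record, achieved):
    # a prior try by the same agent at the same act must be on record
    tried_before = any(a == agent and b == act and t < time_j
                       for (a, b, t) in try_record)
    return achieved and tried_before
```

Without the earlier try on record, the sketch refuses to count even a genuine achievement as managing, which is the implicative pattern in miniature.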
In the previous chapter, I referred to the syntactic complexity of the verbs
seems/appears. Fortunately, their semantic rules are relatively straightfor-
ward - ignoring the sense of appear which may be glossed as 'come into
sight'. In (88) and (89), seems has the function of a modal operator like probably.

(88) Jack seems to be famished.

(89) It seems that Jack is famished.

Let us assume, for simplicity, that because of its verbal origins, seems
cannot be affixed directly to a sentence, in the manner of probably. It is,
thus, either provided with a dummy subject, as in (89), in which case its

scope is in the form of a sentential complement, or it takes the subject of its complement as a quasi subject, as in (88), in which case its scope is infinitival.
A major difficulty in formulating the relevant semantic rule is posed by
the specification of the modal operator. Given that seems is equivalent not to
possibly but to probably, it is necessary to indicate that the degree to which
the proposition in its scope is regarded as plausible - a purely subjective
judgment - is high. The semantic rule for probably may be intuitively stated
as:

(R.33) V(probably) is that function F_t/t such that, for any α ∈ P, F(α) is met if and only if (¬L¬(α)) is 1 and ((α = 1) more likely than (α = 0)) is met.

Armed with this rough definition, the rule for seems can be sketched as
follows.

(R.34) V(seems) is that function F_t/t such that, for any α ∈ P, F(α) is met if and only if probably(α) is met.
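
The intuition behind (R.33) and (R.34) can be caricatured numerically: attach a subjective plausibility to a proposition, require that it be possible and that its truth be judged more likely than its falsity, and let seems defer to probably. The 0-to-1 scale and the 0.5 threshold are assumptions of the sketch.

```python
# Numeric caricature of (R.33)/(R.34). A plausibility in [0, 1] stands
# in for the subjective judgement; 0.5 is an assumed threshold.

def probably(plausibility):
    # the proposition must be possible (plausibility > 0) and its
    # truth judged more likely than its falsity
    return plausibility > 0.0 and plausibility > 0.5

def seems(plausibility):
    # (R.34): seems(a) is met if and only if probably(a) is met
    return probably(plausibility)
```

The two clauses of probably mirror the two conjuncts of (R.33): possibility, then comparative likelihood.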

Turning to modal verbs, the complexity of their semantics in English is notorious. In the following remarks, I shall be somewhat cavalier in my disregard for the nice distinctions which can be made in the uses of the verbs I analyse. I shall, moreover, largely ignore the temporal meanings of words like will/shall, having referred to that aspect of their usage in several places, including chapters 3 and 4.
In the previous chapter, modals were treated, along with the other auxiliary
verbs, as of category (t/t). Thus, for example, may denotes a function which
operates on a proposition to mark it as subjunctive rather than indicative.
This distinction permits the use of may in the following, the second of which has two readings, one carrying the additional meaning of permission.

(90) It may rain this evening.

(91) Jack may leave the meeting early.

A disjunctive rule for may, based on (90) and (91), would be:

(R.35) V(may) is that function F_Aux such that, for any γ ∈ P, F(γ) ∈ P such that F(γ) is 1 if: (i) (¬L¬(γ)) is 1; or (ii) for some β ∈ F_IV such that, if a/x satisfies the formula in (λ,x(β,x)), γ = (β,α), there exists at least one individual, a', such that a' permits that a/x satisfy (β,x).

Any event which is merely possible is subjunctive. In sense (ii), the strictly deontic operator, Permission, clearly requires to be analysed in terms of the predicate permits.
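
The disjunction in (R.35) can be modelled in miniature: either the proposition holds in some accessible world, or some individual grants the subject permission to perform the act. The world and permission structures are assumptions of the sketch.

```python
# Miniature of the disjunctive rule (R.35) for 'may'.

def may_epistemic(proposition, accessible_worlds):
    # sense (i): the proposition is true in at least one accessible
    # world, i.e. it is possible
    return any(proposition(w) for w in accessible_worlds)

def may_deontic(permissions, agent, act):
    # sense (ii): some individual permits the agent to perform the act
    return any(a == agent and b == act for (_, a, b) in permissions)

def may(proposition, worlds, permissions, agent, act):
    # the disjunction: either sense makes may(...) true
    return may_epistemic(proposition, worlds) or may_deontic(permissions, agent, act)
```

On this sketch, (90) is true under the epistemic clause alone, while the permission reading of (91) can hold even in the absence of any possibility-supplying world.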
The rules for the other modals will, time reference aside, be much like that
for may, though each will require its own peculiar specifications. Thus, for
example, can must be provided with several rules, including one based on
Ability. Must needs to be specified as denoting the modality of Inevitability
and also that of Compulsion. To state the rules for can and must would, as
observed earlier, be complicated by the interaction between these auxiliaries
and negation.
Semantic rules for the morphological pasts would, should, etc. are greatly
complicated by pragmatic considerations. Thus, for example, the notion of
Remoteness associated with past forms like would gives rise to its use in
polite requests and so forth.

8.5 Wh-words

Given the account of binding in chapter 5 and their syntactic description in chapter 7, the semantic rules for wh-words will either be those for anaphors or will have much in common with quantifiers. I shall not repeat the account of anaphora.
Apart from literal reference to quantity, the important features in which
wh-words differ from quantifiers proper are in respect of domain restriction
and selectional rules.
As far as domain restriction is concerned, there are wh-words which are
restricted to individuals conceptualised as things, such as who/which. The
items when and where are, by contrast, restricted to moments of time and to
places respectively, while how is the interrogative of manner.
As far as selectional rules are concerned, given the presuppositional reser-
vations mentioned in chapter 6, section 6, who, along with its related forms, is
confined to animate, human individuals. In contrast, which and what are usu-
ally regarded, unless explicitly restricted as in which/what man, as confined
to nonhuman referents which may or may not be animate.
The following abbreviated rule for who establishes the pattern for wh-
words with values in the domain of individuals conceptualised as things.

(R.36) V(who) is that function F_Quant such that, for any variable v_q in a formula, p, if there exists an individual, a, such that a/v_q satisfies p, then a is human and a/x satisfies any formula, p', just like p save in having x for v_q, which is a direct answer to F(p).
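
The effect of (R.36) can be illustrated with a toy domain: the direct answers to a who-question are the human individuals which satisfy the formula. The domain and its human flag are assumptions of the sketch.

```python
# Sketch of (R.36): the direct answers to who(p) are obtained by
# substituting individuals for the bound variable; only human
# individuals qualify. Domain and attributes are illustrative.

def direct_answers(formula, domain):
    # individuals that are human and satisfy the formula
    return [ind["name"] for ind in domain
            if ind["human"] and formula(ind)]

domain = [
    {"name": "Sally", "human": True,  "runs": True},
    {"name": "Jack",  "human": True,  "runs": False},
    {"name": "Rex",   "human": False, "runs": True},
]
# Who runs? - only Sally: Rex satisfies the formula but is not human
```

The humanness filter is what distinguishes who from which in this treatment; dropping it would model the latter.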

The temporal and locative interrogatives may be treated in a manner analogous to who. Of course, their respective rules must specify their domains as moments of time and locations. In chapter 6, I did not discuss these particular items, but it is evident that the satisfaction of the formulae containing the relevant variables involves denotata of adverbs, such as last summer/in Mexico. Similarly, the manner interrogative how involves adverbials like by train/quickly.

8.6 Adjectives

Obviously, there are many profound metaphysical problems surrounding our understanding of the nature of properties, including, especially, the validity of
the classical distinction between those which are essential and those which
are to be considered merely accidental. Such problems have been discussed
by many philosophers, including Russell (1912), and to attempt their review
here would, clearly, be beyond the legitimate scope of a linguistic study.
From the viewpoint of linguistics, the most important subclassification of
adjectives is into those which denote absolute properties, such as correct, and
those whose referents are properties which are gradable, for example strong.
Let us assume that absolute properties, such as Correctness, are unique and,
hence, well defined. Under this assumption, such properties seem concep-
tually more straightforward than their gradable counterparts and, therefore,
they provide the simplest exempla. This is not, however, to claim that abso-
lute properties enjoy prior status over their gradable counterparts. Cresswell
(1973) argues that the reverse holds, but I can see no compelling reason for
deciding either way.
Adjectives denote functions from properties and individuals into truth val-
ues. Taking correct as prototypical of an absolute adjective, the following
semantic rule may serve as a broad pattern. I, here, ignore the semantic
effects, mentioned in chapter 4, of the predicative and attributive uses of
adjectives, e.g. poor violinist/... is poor. I also ignore the distributional facts
outlined in chapter 7. The rule arbitrarily takes CN/CN as the representative
function-type.

(R.37) V(correct) is that function F_CN/CN such that, for any α ∈ F_CN, F(α) ∈ F_CN such that, for any set, E, such that a/x satisfies the formula α(x) if and only if a ∈ E, there exists a set E_i ⊆ E, such that a/x satisfies correct(x) if and only if a ∈ E_i; otherwise, a/x satisfies incorrect(x).

As in the earlier generalised rule for common nouns and intransitive verbs,
the above rule for correct is extensional, assuming the referents of common
nouns to be sets. There are, of course, intensional adjectives, such as imaginary. However, as before, such cases are allowed for by the presence of intensional objects in D_e.
The point of stipulating that the individuals which are the values of the correct-function be a subset of the set which is the value of the unmodified common noun is that adjectives which denote accidental properties frequently restrict the reference of common nouns. Thus, for example, correct answer denotes a subset of all answers. That E_i may equal E itself is obvious from A-type statements such as:

(92) All axioms are correct.

If the adjective denotes an essential property, then the set denoted by the
unmodified common noun is precisely that denoted by the adjective-noun
complex. Thus, men and mortal men have exactly the same referents.
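
The subset treatment of absolute adjectives can be made concrete with sets: an adjective maps the set denoted by a common noun to a subset of it, and an essential property restricts nothing. The particular sets are assumptions of the sketch.

```python
# Extensional sketch of (R.37): an absolute adjective is a CN/CN
# function taking the set denoted by a common noun to a subset of it.
# The sets used here are illustrative.

def adjective(property_set):
    # returns a function from noun denotations to their restriction
    return lambda noun_set: noun_set & property_set

answers = {"a1", "a2", "a3"}
correct = adjective({"a1", "a3", "b7"})
# 'correct answer' denotes a subset of the answers
assert correct(answers) <= answers

# an essential property restricts nothing: men and mortal men coincide
men = {"jack", "bill"}
mortal = adjective({"jack", "bill", "sally"})
assert mortal(men) == men
```

Intersection guarantees both facts at once: the accidental case yields a proper subset, the essential case the whole set.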
The semantic rule for correct requires that the property in question be
absolute. There are, of course, numerous doubtful cases such as the referent
of just which may or may not be absolute, depending on context. Thus, (93)
is not usually judged to be false in spite of the gradability implicit in the
comparative:

(93) It is more just to forgive than to punish.

An important requirement of the semantic rule for any gradable adjective


like thin is that it allow for the noncontradictory status of sentences like (94).

(94) Jack is thin and he isn't thin.

As noted earlier, such expressions are to be interpreted in natural languages as asserting that the property in question is possessed to a degree between two norms. Thus, (94) is roughly glossable as:

(94) a. Compared to some, Jack is thin; compared to others, he isn't - he is sort of thin.

The semantic rule must, however, allow for straightforward assertions such
as (95) in which Jack might be judged to be among the very least thin
individuals - to possess that property in its smallest degree - or, alternatively,
to be at the other extreme:

(95) Jack is thin.

In addition, of course, since gradables rely upon subjective judgement, assertions containing them are broadly met rather than literally true or false.
Continuing to simplify, the semantic rule for thin will, therefore, differ
from that for correct in respect of the following clauses:

(R.38) V(thin) . . . F(α) ∈ F_CN such that, for any set E such that a/x satisfies α(x) if and only if a ∈ E, there exist at least three sets, E_i, E_j and E_k ⊆ E, such that a/x meets thin(x) if and only if a ∈ E_j, and a ∈ E_j if and only if a is thin to degree n, n > i or n < k.
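
The three-zone analysis behind (R.38) can be caricatured with numeric degrees: below a lower norm thin clearly fails, above an upper norm it clearly holds, and the zone between the two norms licenses (94). The numeric norms are assumptions of the sketch.

```python
# Toy version of the gradable-adjective semantics in (R.38): degrees of
# thinness fall into three zones. The norms are arbitrary here.

LOWER_NORM, UPPER_NORM = 3, 7   # the two norms

def clearly_not_thin(degree):
    return degree <= LOWER_NORM

def clearly_thin(degree):
    return degree >= UPPER_NORM

def sort_of_thin(degree):
    # the middle zone: 'Jack is thin and he isn't thin'
    return LOWER_NORM < degree < UPPER_NORM
```

A degree in the middle zone satisfies neither clear predicate, which is why (94) is not a contradiction on this analysis.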

It is apparent that rules like those for correct and thin are woefully meagre
in the context of the semantics of adjectives generally. Quite apart from the
wider philosophical issues alluded to, there are many well-known problems
which would need to be addressed in a more language-specific treatment.
One such is, of course, the identification of appropriate sortal restrictions.
Thus, for example, rusty must be restricted to individuals capable of rusting,
intellectual to human individuals, delicious to edible things, etc.
A celebrated instance of a referential problem involving adjectives is pro-
vided by fake. Does the common noun fake signature refer to signatures or to
fakes? In either case, the reference is contradictory. In his (1973) fragment,
Montague includes alleged and Lewis's discussion (1970) is centred on this
term. In the next section, I shall attempt to draft the outlines of a semantic
rule for the adverb allegedly, but I will not be primarily concerned with the
specific issue of category ambiguity.
Another issue of considerable interest is adjective ordering. As Vendler
(1971) demonstrated, ordering in multiple modification is by no means arbi-
trary in English - and, presumably, all other languages. Thus, big, white pig
is normal, but white, big pig is not.
Since degree complements very commonly involve adjectives, this seems
an appropriate point at which to refer to them. The construction of semantic
rules for the relevant correlatives, such as too to, requires, of course, that the
property figuring in the main clause be gradable. To state the rule in all its
detail proves rather complex and the following sketch for too to is rough
indeed.

(R.39) V(too to) is that function F_Correl such that, for any α ∈ F_IV and β ∈ D_t, F(α,β) ∈ F_IV such that a/x satisfies (F(α,β))(x) if and only if possible(β) is {1, met} if and only if a/x satisfies α(x) up to degree n, and a/x satisfies α(x) at degree n', n' > n.

In this rule, I assume that the complement sentence β may either be assigned
truth or pragmatic satisfaction in view of contrasting pairs such as:

(96) Jack was too slow to win the race.

(97) Sally is too thin to be attractive.

The rule would, obviously, have to be expanded to cover adverbs and nom-
inals.

8.7 Adverbs

Some of the adverbs which function as sentence modifiers have figured al-
ready in this chapter, sections 2 and 4. I open this section with semantic
rules for some more sentential adverbs before proceeding to the discussion of
manner, time and place adverbs regardless of their sentence vs. IV-modifying
status.
In section 4, I proposed a semantic rule for probably which, like many
other adverbs, is not truth-functional since it involves subjective assessment.
The subjectivity of probably resides in judgement of plausibility. A somewhat
different case is offered by naturally as in (98).

(98) Naturally, Sally likes walking.

In (98), naturally expresses a comment, or point of view, regarding the degree to which the propositional content in its scope may be expected to be true, given the way things are.
Using a subscript to distinguish sentential naturally from its IV-modifying
counterpart, we might draft its semantic rule as follows:

(R.40) V(naturally_t/t) is that function F_t/t such that, for any α ∈ P and w ∈ W, F(α) is met if and only if α is 1 and, for some individual, a, such that a utters naturally(α), [(α = 1) follows from w].

Rule (R.40) is very rough. In particular, the relation "follows from w" is
impressionistic. However, in the absence of a detailed discussion of causation,
the shorthand seems justified by its intuitive transparency.
Rather more complex is the semantic rule for a sentence-modifying adverb
like allegedly. The fundamental fact about this adverb is that the propositional
content of its scope is not asserted to be true or probable, but merely to have
been alleged to be so. Thus, (99) may be true even if Jack is a philanthropist.

(99) Allegedly, Jack is a miser.

However, to say that (99) is true if the proposition that Jack is a miser has been alleged is not sufficient. This is so because allegedly presupposes that
the situation denoted by the proposition in its scope is judged to be bad.
Thus, for instance, under normal circumstances, (100) is inappropriate.

(100) Allegedly, Jack is recovered from his sickness.

Of course, it is not clear whether the negative presupposition is shared by the original speaker as well as the reporter, but that is not crucial since it is the attitude of the latter only which is central to the appropriateness or otherwise of the adverb's use. These facts suggest the following semantic rule for allegedly - the form is colloquial to avoid being overly cumbersome.

(R.41) V(allegedly) is that function F_t/t such that, for any α ∈ P, F(α) is met if and only if there exists an individual, a, such that a asserts α' at time j and some other individual, a', utters allegedly(α) at time j' later than j and α either is α' or is equivalent to α' and a' presupposes that the state of affairs expressed by α is bad.

In this rule it is not claimed that individual a' attributes the assertion of α to a specific individual, a. Rather, it is the case that α is not attributed to a or a' directly. This point is central to the appropriate use of allegedly. Since allegedly modifies reported speech, it is necessary that its rule provide for de re as well as de dicto interpretations.
In general, IV-modifying adverbs are less complex than their sentential
counterparts, though there seems to be a wider semantic range and such
issues as ordering are intricate.
A typical instance of a manner adverb is provided by quickly in:

(101) Sally walks quickly.

Since we regard intransitive verbs - basic or derived - as denoting properties, quickly in (101) denotes a property of the property of walking. It, thus, functions in a way somewhat like that of a quantifier like much, being both gradable and subjective.
It is tempting to say that quickly in (101) takes the property, walking,
and the referent of Sally and returns a proposition. The proposition would
amount to the claim that the property of walking has the property of being
instantiated by Sally in a certain manner. Such a treatment would, however,
distance the semantics from the syntax and I shall, therefore, assume that,

in (101), quickly simply takes walks as its argument to yield the property
denoted by walks quickly.
In light of this approach to IV-adverbs, a semantic rule for quickly would
have the following outline:

(R.42) V(quickly) is that function F_IV/IV such that, for any α ∈ F_IV, F(α) ∈ F_IV such that, for any set, E, such that a/x satisfies α(x) if and only if a ∈ E, there exists a set E_i ⊆ E, such that a/x meets (quickly,α)(x) if and only if a ∈ E_i.

In section 2, I proposed a semantic rule for the negative adverb seldom. I now turn to the analysis of often, upon which the earlier rule largely depends. Like seldom, often is, of course, non-truth-functional. Unlike seldom, however, often is IV-modifying, not sentential. Its semantic rule will, therefore, parallel that for quickly, namely:

(R.43) V(often) is that function F_IV/IV such that, for any α ∈ F_IV, F(α) ∈ F_IV such that, for any set, E, such that a/x satisfies α(x) if and only if a ∈ E, there exists a set E_i ⊆ E, such that a/x meets (often,α)(x) if and only if a ∈ E_i.

Manner and frequency adverbs like quickly/often, apart from their subjec-
tive satisfiability, are semantically simple. More complex are time adverbs
such as yesterday. In keeping with the earlier discussion, (102) is true if and
only if (103) is true at a time exactly one day earlier than that on which (102)
is uttered.

(102) The zoo was open yesterday.

(103) The zoo is open today.

At first glance, it might seem that what is required is a semantic rule for
the time adverbial now since that item fixes the time of an utterance as actual
present. However, while now may be used to refer to a span of time, as in
(104):

(104) Russia is now a democracy.

it is also employed to denote a point or moment. The reference of today, on the other hand, is to a span containing many points, so that a sentence like (103) may well be true even though (105) is false.

(105) The zoo is now open.



Evidently, while now(p) always implies today(p), not now(p) does not imply
not today(p). I conclude that the rule for now, (R.44), does not provide a
basis for a rule for today.

(R.44) V(now) is that function F_t/t such that, for any α ∈ D_t, F(α) is 1 if and only if, for any individual, a, such that a utters now(α) at a time, j, α is 1 at j.

The rule for today will be as follows:

(R.45) V(today) is that function F_t/t such that, for any α ∈ D_t, F(α) is 1 if and only if, for any individual, a, such that a utters today(α) during a time-span, j, = 24 hours, α is 1 in j.

Given a rule like (R.45) for today, the formulation of rules for yester-
day/tomorrow is straightforward. The rule for yesterday will be as in (R.45)
save for the following abbreviated clauses:

(R.46) V(yesterday) is that function F_t/t . . . a utters yesterday(α) during j and today(α) is 1 in j' and j is the temporal successor of j'.
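
The rules (R.44) to (R.46) can be modelled with ordinary datetimes: now(p) is evaluated at the moment of utterance, today(p) over the utterance day, and yesterday(p) over the day whose successor is the utterance day. The zoo-opening predicate is an assumption of the sketch.

```python
# Indexical time adverbs, sketched with datetimes: now(p) holds at the
# utterance moment, today(p) at some point in the utterance day,
# yesterday(p) at some point in the preceding day.

from datetime import datetime, timedelta

def now_true(p, utterance_time):
    # p must hold at the very moment of utterance
    return p(utterance_time)

def today_true(p, utterance_time):
    # p must hold at some point during the utterance day (24-hour span)
    day_start = datetime.combine(utterance_time.date(),
                                 datetime.min.time())
    return any(p(day_start + timedelta(hours=h)) for h in range(24))

def yesterday_true(p, utterance_time):
    # evaluate today(p) on the day whose successor is the utterance day
    return today_true(p, utterance_time - timedelta(days=1))
```

On this model, (103) can be true at 8 p.m. even though (105) is false, since today quantifies over the whole day while now picks out the utterance moment.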

Time adverbs like today are, of course, truth-functional. Problems of a different order are presented by such items as soon. A sentence like (106) is met, appropriate, if and only if, at some point in time closely following that at which (106) is uttered, (107) is true.

(106) It will snow soon.

(107) It is snowing.

A very much abbreviated rule for soon would be as follows:

(R.47) V(soon) is that function F_t/t ... F(α) is met if and only if ... a utters soon(α) at a time, j, and α is 1 at a time, j', later than j, and, for a, close-to(j',j).

Of course, time adverbials are frequently not unitary adverbs like soon but prepositional phrases such as on Friday/in ten minutes, etc. The denotations of such phrases depend on the meanings of their parts and it is, therefore, necessary to provide rules both for the preposition and the NP concerned.
It is well known that, in English, there are strict semantic constraints on
the choice of prepositions in prepositional phrases. Thus, at co-occurs with
NPs denoting points of time, as in at six o'clock, while on combines with NPs
which denote days of the week and in takes NPs denoting divisions of the
day, weeks, months and seasons. There are parallel restrictions for locatives.

Accepting these restrictions, let us take on as representative, and let us distinguish temporal on from locative on by a subscript. The following is an outline of the rule concerned.

(R.48) V(on_t) is that function F_Prep such that, for any α ∈ F_NP, F(α) ∈ F_IV/IV such that, for any j, j/x satisfies F(α,x) if and only if j is a day.
We could formulate the rule for temporal at as follows:

(R.49) V(at_t) is that function F_Prep such that, for any α ∈ F_NP, F(α) ∈ F_IV/IV such that, for any j, j/x satisfies F(α,x) if and only if j is a moment of time.

Finally, consider what is involved in the specification of rules for place adverbials. In English, three such adverbials, here/there/yonder, are basic members of the category of adverbs. The others are prepositional phrases such as in London/by the river. The basic adverbs are not used truth-functionally, depending, as they do, on contextual features, while the derived cases may be dependent on context or independent according to the NP concerned, e.g. in that house, compared with in London - I ignore combinations of preposition and adverb like over here/there.
Clearly, here requires the specification of both an addressor and a contextualised location. The relevant outline is as follows:

(R.50) V(here) is that function F_IV/IV such that, for any β ∈ F_IV and α ∈ F_NP such that (β,α) is 1, ((F(β)),α) is 1 if and only if there exists a context, C, and a place, i, such that C(here) = i, and, for any individual, a, such that a utters here(β,α) at i, (β,α) is 1 at i.

The rule for there can be formulated using two distinct contextualised loca-
tions, i and i'.

(R.51) V(there) is that function F_IV/IV such that, for any β ∈ F_IV and α ∈ F_NP such that (β,α) is 1, ((F(β)),α) is 1 if and only if there exists a context, C, and places, i, i', such that C(there) = i', such that, for any individual, a, such that a utters there(β,α) at i, (β,α) is 1 at i'.
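
The context-dependence of (R.50) and (R.51) can be sketched with a context that assigns here the place of utterance and there a distinct salient place. The context structure and the particular places are assumptions of the sketch.

```python
# Context-dependent place adverbs: a context assigns 'here' the place
# of utterance and 'there' some distinct contextually salient place.

def here_true(p, context):
    # here(p): p must hold at the place of utterance itself
    return p(context["utterance_place"])

def there_true(p, context):
    # there(p): p holds at the contextually assigned distinct place
    place = context["salient_place"]
    assert place != context["utterance_place"]
    return p(place)

ctx = {"utterance_place": "Johannesburg", "salient_place": "London"}
raining_in = lambda place: place == "London"
```

Since C(there) must differ from the place of utterance, the predicate is evaluated at i' rather than at i, which is the whole difference between the two rules.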

As in the case of time adverbials, prepositional phrases denoting locations are subject to various co-occurrence restrictions; for example, in London is acceptable, but at London is not, save with special verbs like aim a missile. Apart from such details, these adverbs have semantic rules which, mutatis mutandis, are like those for their temporal counterparts and I shall not discuss them further.
Bibliography

Ajdukiewicz, Kazimierz
1935 "Syntactic connexion", in: S. McCall (ed.), 207-231. (Translated by B.
Bruckman, 1967.)
Allwood, Jens - Lars-Gunnar Andersson - Östen Dahl
1977 Logic in linguistics. Cambridge: Cambridge University Press.
Austin, John Langshaw
1962 How to do things with words. Cambridge, Mass: Harvard University Press.
Bach, Emmon - Robert T. Harms (eds.)
1968 Universals in linguistic theory. New York: Holt, Rinehart and Winston.
Basson, A.H. - D.J. O'Connor
1953 Introduction to symbolic logic. London: University Tutorial Press.
Belnap, Nuel D.
1969 "Questions: Their presuppositions, and how they can fail to arise", in: K.
Lambert (ed.), 23-38.
Bolinger, Dwight
1967 "Adjectives in English: Attribution and predication", Lingua 18: 1-34.
1968 "Entailment and the meaning of structures", Glossa 2: 119-127.
Boole, George
1854 An investigation of the laws of thought on which are founded the mathemat-
ical theories of logic and probability. London: Macmillan.
Cantor, Georg
1932 Gesammelte Abhandlungen mathematischen und philosophischen Inhalts. (E. Zermelo, ed.) Berlin: Springer.
Carnap, Rudolf
1947 Meaning and necessity. Chicago: University of Chicago Press.
1961 Introduction to semantics and formalisation of logic. Cambridge, Mass: Har-
vard University Press.

Chafe, Wallace L.
1970 Meaning and the structure of language. Chicago: University of Chicago
Press.
Charniak, Eugene - Drew McDermott
1985 Introduction to artificial intelligence. Reading, Mass: Addison-Wesley.

Chomsky, Noam
1957 Syntactic structures. The Hague: Mouton.
1965 Aspects of the theory of syntax. Cambridge, Mass: MIT Press.
1972 Studies on semantics in generative grammar. The Hague: Mouton.
1980 Rules and representations. New York: Columbia University Press.
1986 Language and problems of knowledge. Cambridge, Mass: MIT Press.
Church, Alonzo
1941 The calculi of lambda conversion. Princeton: Princeton University Press.
1951 "The need for abstract entities in semantic analysis", Proceedings of the
American Academy of Arts and Sciences 80, no. 1: 100-112. [Reprinted in:
J.A. Fodor-J.J. Katz (eds.), 437-445.]
Cole, Peter - Jerry Morgan (eds.)
1975 Speech acts. New York: Academic Press.
Copi, Irving M.
1971 The theory of logical types. London: Routledge and Kegan Paul.
Cresswell, Max J.
1973 Logics and languages. London: Methuen.
1985 Structured meanings: The semantics of propositional attitudes. Cambridge,
Mass: MIT Press.
Curry, Haskell B.
1977 Foundations of mathematical logic. New York: Dover Publications.
Davidson, Donald - Gilbert Harman (eds.)
1972 Semantics of natural language. Dordrecht: Reidel.
Dirven, René - Günter Radden (eds.)
1982 Issues in the theory of universal grammar. Tübingen: Gunter Narr Verlag.
Donnellan, Keith
1966 "Reference and definite descriptions", Philosophical Review 75: 281-304.
[Reprinted in: D.D. Steinberg - L. A. Jakobovits (eds.), 100-114.]
Dowty, David R.
1979 Word meaning and Montague grammar. Dordrecht: Reidel.
Dowty, David R. - Robert E. Wall - Stanley Peters
1981 Introduction to Montague semantics. Dordrecht: Reidel.
Fillmore, Charles J.
1968 "The case for case", in: E. Bach - R.T. Harms (eds.), 1-88.
Fillmore, Charles J.-Terence D. Langendoen (eds.)
1971 Studies in linguistic semantics. New York: Holt, Rinehart and Winston.
Fodor, Jerry A. - Jerrold J. Katz (eds.)
1964 The structure of language: Readings in the philosophy of language. Engle-
wood Cliffs, New Jersey: Prentice-Hall.

Fodor, Janet Dean
1977 Semantics: Theories of meaning in generative grammar. Brighton: Harvester Press.
George, Alexander (ed.)
1989 Reflections on Chomsky. Oxford: Basil Blackwell.
Gladkij, Aleksej V. - Igor A. Mel'cuk
1969 Elements of mathematical linguistics. Berlin: Mouton.
Grice, Paul
1975 "Logic and conversation", in: P. C o l e - J . Morgan (eds.), 41-58.
Higginbotham, James
1983 "Logical form, binding, and nominals", Linguistic Inquiry 14: 395-420.
Hintikka, Jaakko
1962 Knowledge and belief: An introduction to the logic of the two notions. Ithaca,
New York: Cornell University Press.
1983 The game of language. Dordrecht: Reidel.
1989 "Logical form and linguistic theory", in: A. George (ed.), 41-57.
Hintikka, Jaakko - Julius Moravcsik - Patrick Suppes (eds.)
1973 Approaches to natural language: Proceedings of the 1970 Stanford workshop
on grammar and semantics. Dordrecht: Reidel.
Hughes, George - Max J. Cresswell
1968 Introduction to modal logic. London: Methuen.
Isard, Stephen
1975 "Changing the context", in: E. Keenan (ed.), 287-296.
Jackendoff, Ray S.
1972 Semantic interpretation in generative grammar. Cambridge, Mass: MIT
Press.
Jacobs, Roderick - Peter S. Rosenbaum (eds.)
1970 Readings in English transformational grammar. Boston: Ginn.
Katz, Jerrold J.
1964 "Analyticity and contradiction in natural language", in: J. A. Fodor-J.J.
Katz (eds.), 519-544.
1966 The Philosophy of language. New York: Harper.
Karttunen, Lauri
1971 "Implicative verbs". Language 47: 340-58.
1974 "Presupposition and linguistic context", Theoretical Linguistics 1: 182-94.
Keenan, Edward L.
1972 "On semantically based grammar", Linguistic Inquiry 3.4: 413-462.
1975 (ed.) Formal semantics of natural language. Cambridge: Cambridge Uni-
versity Press.

Keenan, Edward L. - Leonard M. Faltz
1985 Boolean semantics for natural language. Dordrecht: Reidel.
Kiparsky, Paul-Carol Kiparsky
1970 "Fact", in: D.D. Steinberg-L.A. Jakobovits (eds.), 345-369.
Kittay, Eva Feder
1987 Metaphor: Its cognitive and linguistic structure. Oxford: Clarendon Press.
Klima, Edward S.
1964 "Negation in English", in: J.A. Fodor-J.J. Katz (eds.), 246-323.
Kneale, William-Martha Kneale
1962 The development of logic. Oxford: Oxford University Press.
Koster, Jan
1986 Domains and dynasties: The radical autonomy of syntax. Dordrecht: Foris
Publications.
Kripke, Saul A.
1959 "A completeness theorem in modal logic", Journal of Symbolic Logic 24:
1-14.
1963 "Semantical considerations on modal logics", Acta Philosophica Fennica
16: 83-94.
Lakoff, George
1971a "On generative semantics", in: D.D. Steinberg-L.A. Jakobovits (eds.), 232-
296.
1971b "Presuppositions and relative well-formedness", in: D.D. Steinberg-L.A.
Jakobovits (eds.), 329-340.
1975 "Pragmatics in natural logic", in: E. Keenan (ed.), 283-286.
Lambert, Karel (ed.)
1969 The logical way of doing things. New Haven: Yale University Press.
1970 Philosophical problems in logic. Dordrecht: Reidel.
Langendoen, D. Terence
1969 The study of syntax. New York: Holt, Rinehart and Winston.
1970 Essentials of English grammar. New York: Holt, Rinehart and Winston.
Langer, Susanne K.
1953 An introduction to symbolic logic. New York: Dover Publications.
Lees, Robert B.
1960 The grammar of English nominalizations. (Supplement to International Jour-
nal of American Linguistics 26.) Bloomington: Indiana University Press.
Leonard, Henry S.
1967 Principles of reasoning. New York: Dover Publications.
Lewis, David
1970 "General semantics", Synthese 22: 18-67. [Reprinted in: D. Davidson-G.
Harman (eds.), 169-218.]
1973 Counterfactuals. Cambridge, Mass: Harvard University Press.
Lukasiewicz, Jan
1920 "On 3-valued logic". (Translated in: S. McCall (ed.), 15-115.)
Lyons, John
1968 Introduction to theoretical linguistics. Cambridge: Cambridge University
Press.
Mandelbaum, David G. (ed.)
1949 Selected writings of Edward Sapir in language, culture and personality.
Cambridge: Cambridge University Press.
McCall, Storrs (ed.)
1967 Polish logic 1920-1939. Oxford: Clarendon Press.
McCawley, James D.
1971 "Tense and time reference in English", in: C.J. Fillmore-T. Langendoen
(eds.), 96-113.
1981 Everything that linguists have always wanted to know about logic. Oxford:
Basil Blackwell.
Montague, Richard
1968 "Pragmatics", in: R. Klibansky (ed.) Contemporary philosophy: A survey.
Florence, La Nuova Italia Editrice, 102-122. [Reprinted in: R. Thomason
(ed.), 95-118.]
1970a "English as a formal language", in: B. Visentini et al. (eds.) Linguaggi nella
Società e nella Tecnica. Milan, 189-224. [Reprinted in: R. Thomason (ed.),
188-221.]
1970b "Universal grammar", Theoria 36: 373-398. [Reprinted in: R. Thomason
(ed.), 222-246.]
1973 "The proper treatment of quantification in ordinary English", in: J. Hintikka
et al. (eds.). [Reprinted in: R. Thomason (ed.), 247-270.]
Montague, Richard-Donald Kalish
1959 "That", Philosophical Studies 10: 54-61. [Reprinted in: R. Thomason (ed.),
84-91.]
Moore, J.T.
1960 Fundamental principles of mathematics. New York: Holt, Rinehart and
Winston.
1962 Elements of abstract algebra. New York: Macmillan.
Partee, Barbara Hall
1962 "All about predeterminers", [Unpublished paper, MIT.]
1975 "Montague grammar and transformational grammar", Linguistic Inquiry 6.2:
203-300.
1976 (ed.) Montague grammar. New York: Academic Press.
Prior, Arthur Norman
1968 Papers on time and tense. Oxford: Oxford University Press.
Quine, Willard van Orman
1941 Elementary logic. Boston: Ginn.
1960 Word and object. Cambridge, Mass: MIT Press.
1962 Methods of logic. London: Routledge and Kegan Paul.
Quirk, Randolph-Sidney Greenbaum
1973 A university grammar of English. London: Longman.
Reichenbach, Hans
1947 Elements of symbolic logic. London: Macmillan.
Rescher, Nicholas
1969 Many-valued logic. New York: McGraw-Hill.
Ross, John Robert
1970 "On declarative sentences", in: R. Jacobs-P.S. Rosenbaum (eds.), 222-272.
Russell, Bertrand
1905 "On denoting", Mind 14: 479-93.
1912 The problems of philosophy. Oxford: Oxford University Press.
1919 Introduction to mathematical philosophy. London: George Allen and Unwin.
1946 History of western philosophy. London: George Allen and Unwin.
1950 An enquiry into meaning and truth. London: Unwin Hyman.
1956 Logic and knowledge. London: Unwin Hyman.
Sapir, Edward
1944 "Grading: A study in semantics", Philosophy of Science 11: 93-116. [Reprinted
in: D.G. Mandelbaum (ed.), 122-149.]
Saussure, Ferdinand de
1915 Cours de linguistique générale. Paris: Payot.
Scott, Dana S.
1970 "Advice on modal logic", in: K. Lambert (ed.), 143-173.
Searle, John R.
1969 Speech acts. Cambridge: Cambridge University Press.
1971 "The problem of proper names", in: D.D. Steinberg-L.A. Jakobovits (eds.),
134-141.
Sgall, Petr
1975 "Conditions of the use of sentences and of semantic representation of topic
and focus", in: E. Keenan (ed.), 297-312.
Steinberg, Danny D.-Leon A. Jakobovits (eds.)
1971 Semantics: An interdisciplinary reader in philosophy, linguistics and psy-
chology. Cambridge: Cambridge University Press.
Strawson, Peter F.
1950 "On referring", Mind 59: 320-44.
1964 "Identifying reference and truth values", Theoria 30: 96-118.
Sweet, Henry
1891 A new English grammar, logical and historical. [Reprinted 1960.] Oxford:
Clarendon Press.
Tarski, Alfred
1941 Introduction to logic and to the methodology of deductive sciences. (Trans-
lated by O. Helmer.) Oxford: Oxford University Press.
1956 Logic, semantics and metamathematics. (Translated by J.H. Woodger.) Ox-
ford: Oxford University Press.
Taylor, John
1989 Linguistic categorisation: Prototypes in linguistic theory. Oxford:
Clarendon Press.
ter Meulen, Alice G.B. (ed.)
1983 Studies in model theoretic semantics. Dordrecht: Foris Publications.
Thomason, Richmond H.
1974 Formal philosophy: Selected papers of Richard Montague. New Haven,
Conn: Yale University Press.
Van Dijk, Teun A.
1977 Text and context. Explorations in the semantics and pragmatics of discourse.
London: Longman.
Van Fraassen, Bas C.
1969 "Presuppositions, supervaluations, and free logic", in: K. Lambert (ed.),
67-91.
Van Heijenoort, Jean (ed.)
1967 From Frege to Gödel: A source book in mathematical logic. Cambridge,
Mass: Harvard University Press.
Van Riemsdijk, Henk-Edwin Williams
1986 Introduction to the theory of grammar. Cambridge, Mass: MIT Press.
Vendler, Zeno
1967 Linguistics in philosophy. Ithaca: Cornell University Press.
1971 "Singular terms", in: D.D. Steinberg-L.A. Jakobovits (eds.), 115-133.
Wall, Robert
1972 Introduction to mathematical linguistics. Englewood Cliffs: Prentice-Hall.
Whitehead, Alfred North-Bertrand Russell
1910 Principia mathematica. Cambridge: Cambridge University Press.
Zermelo, Ernst
1908 "Investigations in the foundations of set theory", in: J. Van Heijenoort (ed.),
199-215.
Index

absolute 190 ambiguous middle term 97


— adjective 235 analysis tree 169
— property 235, 236 analyticity 10
— value 26 appositional clause 185
abstract category 188 appositive clause 89, 186
abstract complementiser 173, 174 appropriateness 14, 126, 134, 143,
abstract element 189, 192 157, 202
abstract noun 181, 182, 216, 221 Aristotle 43, 147
accessibility 55, 57, 58 article 177
accidental property 11, 235, 236 aspect 143
accusative 120, 122, 123 assertion 146, 147, 152, 156
active 86 assignment function 48, 64, 72
actual present 212, 213, 240 association 30
addition 22, 23, 31 attitude verb 123
adformula 183 attitudinal verb 172
adjective 50, 81, 121, 124, 162, 182, attributive 181 184, 185, 189-191,
186, 190, 192, 193, 199, 235
210, 214, 228, 235-237 Austin 14
— phrase 181, 182 auxiliary verb 101, 174, 175, 209
adjunction 40 axiom 30, 40, 41, 49, 57, 58
admissible element 34, 35
admissible valuation 149 barbara 45, 47
adverb 50, 127, 170-172, 182, 186, base 34, 130, 186
190, 192, 193, 199, 209, basic category 62, 163, 165, 200
214, 235, 237-242 basic expression 63, 165, 167, 172,
— phrase 183 184, 199-201
— placement 183 belief-world 57
adverbial 89 Belnap 152-155, 204
affirmative 202, 221 binding theory 173
agentive noun 81 blocked derivation 35
Ajdukiewicz 63 Bolinger 184, 194
algebra 29 Boole 30
algorithm 34, 35 Boolean algebra 30, 31, 40
alienable possession 122 bound 110
Allwood 54, 92 — alphabetic variance 60
ambiguity 7, 16, 17, 34, 39, 45, 59, — anaphor 111, 113-115, 122, 124
65, 76, 104, 112, 114, 119, — anaphor constraint 111
122, 124, 125, 127, 146, — definition of 110
153, 169, 172, 208 — pronoun 109, 112
— variable 47 co-ordinate 62, 127, 128
bounded set 218 co-ordinating 197, 203
cognitive linguistics 19
classical necessitation 148 coindex 106, 111-113, 119
c-command 112, 113, 115, 119 command definition 110
— definition 110 comment 156
calculus of classes 43, 50 common noun 70, 176, 180,
calculus of functions 43 184-186, 190, 200, 215,
calculus of propositions 37, 42, 43, 217, 227, 228, 236, 237
50 commutation 30, 151, 189, 210, 211,
Cantor 20, 23 213
cardinal equivalence 24 comparable 190
cardinal number 23, 24 comparative 186, 190-193
cardinality 23, 24 compatibility 55
Carnap 3, 4, 12, 13, 23, 62, 129
complement 21, 31, 144, 172, 174
Cartesian set 24, 25
complementiser 92, 94, 166, 174, 188
case 115, 118, 129, 130, 179, 180
complete 31, 40
— assignment 121, 122
complex adverb 183
— saving 121
complex expression 58, 85, 165, 166
categorial grammar 58, 62, 63, 162,
complex function 27
165
complex nominal 175, 176, 180, 215
categorial language 60
complex predicate 165
categorial syllogism 45
complex proper name 140
category 58, 162, 203
causal 15, 206 complex proper noun 99
— relation 211 compositionality 4, 8, 11, 12, 16, 63,
causality 39, 207 104, 125, 166, 201
causation 206, 211 compound nominal 164
causative 206 conceptual impossibility 53
cause 211 conceptual necessity 51, 54
Chafe 84 conclusion 95, 97
chain 122 concord 121
characteristic property 27 conjunction 22, 30, 31, 37, 39, 52,
Charniak 90 160, 163, 191, 192, 197,
Chomsky 90, 104, 105, 107, 109, 203, 205
110, 115, 118, 120-122, — ambiguity 89
124 — reduction 89
Church 18, 60 connective 196
class inclusive be 82, 83 connotation 14
classical necessitation 148, 149 consistency 57
classical valuation 149 consistent 31, 40, 41
classical world 55 constant 64, 72
closed formula 47 — function 5, 214
co-operative principle 205 constraint 110
context 13, 127, 153 degree 29, 186-188, 192-194, 198


— dependency 126 — complement 193
— grammar 107 deictic 127
— of situation 82, 157 deletion 90, 105, 113, 179, 184, 232
— of use 69, 126, 128, 130 denotation of types 68
free grammar 62 deontic logic 54, 58
contextual dependence 128 deontic operator 234
contextual domain 141, 157 deontic structure 101
contextual index 130, 131 derivation 34, 58, 105, 107, 189
contextual property 129, 201 derived category 62, 63, 163, 199
contingent condition 3, 4 derived expression 165, 167, 168,
contingent truth 10 190, 191, 201
contradiction 3, 4, 31, 41, 57, 136, detachment 40
141, 144, 145, 150, 155 direct answer 154, 155
contrast 188, 203, 204 disambiguated language 169
conversational postulate 135 disambiguation 147
coreference 16, 85, 105, 106, 108, discontinuous element 191, 192
109, 114, 115, 117, 118, discourse analysis 141
120, 125, 127, 188, 191 disjoint 23, 30
correlative 194, 213 — reference 108, 112, 118, 119
count noun 176, 182, 187, 221, 225 disjunction 22, 30, 31, 37, 160
counterfactual 11 disjunctive syllogism 45
Cresswell 2, 8, 13, 18, 20, 41, 42, 51, distinct 23
52, 55-58, 60, 65, 67, 68, distributed term 46
70, 86, 87, 92-99, 104, distribution 30
107, 124, 126, 128-133, distributive law 22
140-142, 147, 162, 163, domain 26, 27, 48, 104, 112
165-169, 172, 173, 177, — definition 110
183, 196, 202, 219, — restriction 79
227-231, 235 dominate 110, 112
cross-world identity 11, 55 Donnellan 153, 156
Curry 34, 35 double negation 210
Dowty 62, 69, 72, 74
de dicto 76, 91, 229, 239
de re 76, 91, 229, 239 echo question 116
de Saussure 2 effective procedure 34
declarative 14, 134, 155 ellipsis 189, 192
deep structure 2, 90, 104, 169, 179, emphasis 188
213, 230, 231 empty category 106, 108, 115, 122,
defeat 42 124, 173, 177, 182, 198
definite article 84, 141, 157, 177, empty quantifier 182
186, 190 empty set 21, 23
definite description 61, 69, 83, 138, entailment 15, 52, 53, 135, 139, 151,
140-142, 156, 177 158, 159, 209, 230
epistemic 54 focus 156
— necessity 54 Fodor 9, 12, 14
equality 28, 52, 73, 74, 96, 192 formal grammar 2
— symbol 94 formal necessity 51
equative 186, 191 formation rule 37, 47, 66, 165
equivalence 22, 28, 32, 37, 40, 53, formula 47
56, 57, 60, 83, 94, 95 free 110, 117
— relation 28 — definition of 110
erotetic logic 133, 153 — pronoun 109, 111, 112
erotetic presupposition 154, 155 — variable 47, 49, 60, 166
essential property 11, 99, 100, 235, Frege 4, 7, 10, 52, 61, 63, 65, 78, 82,
236 83, 93, 95
essential syntactic property 114 function 3-8, 24-27, 29, 34, 40, 43,
etymology 81 48, 50, 55, 62, 63, 70, 93,
excluded middle 30 98, 129, 131, 163, 167,
exclusive disjunction 22, 37, 39, 150, 177, 179, 180, 201
219 functional application 63
existential be 82, 84 functor 58, 163, 164
existential presupposition 139, 149, future 68, 99, 212
152, 158 fuzzy 20, 137, 209, 224
existential proposition 152, 156
existential quantification 99, 222 Gallin 69
extension 4, 6-8, 10, 23, 48, 50, 55, game-theoretic semantics 84
61, 72, 83, 92, 93, 214, gender 156
215, 229 generality 81
extensional operator 228 generative semantics 2, 14
extensional type 64 generator 34
extensional verb 6, 8 genitive 122, 180, 226, 227
extensionality 6, 61 gerundive 90, 232
Government and binding theory 14,
fact 10 16, 105
factive presupposition 144, 158 gradability 210
factive verb 144, 145, 149 gradable 13, 41, 137, 190, 225
factive 158, 231 — adjective 210, 236
false question 154 — property 205, 235, 237
Faltz 15, 53, 144 grammatical present 212
felicity 14, 126, 135 Greenbaum 182
— condition 14, 134, 136, 137 Grice 14
field 26 grouping ambiguity 88, 89
Fillmore 120, 130
filter 66, 76, 105-108, 112, 117, 198, habitual aspect 100, 212
218, 226, 228 happiness 134
finite verb 121 heaven 42, 55-57, 201
first order calculus 49 Higginbotham 118
higher order calculus 49, 50 informativity 53
higher order function 176, 179 initial element 34
Hintikka 57, 82-84, 114, 115, 231 intensifier 190
homomorphism 12, 33 intension 3, 4, 6-8, 10, 61-65, 72,
homonymy 81, 91 intensional ambiguity 17
Hughes 52 intensional adjective 236
Humean property 211 intensional ambiguity 17
Husserl 63 intensional function 96
intensional isomorphism 12
hypothetical syllogism 45 intensional logic 18, 37, 61, 104
intensional model 72
identical 21
intensional object 65, 98, 99, 131,
identifying be 74, 82, 96
143, 201, 228, 236
identity 16, 56, 57, 83, 94-96, 158
intensional operator 228
— function 94
intensional preposition 64
— of indiscernibles 96
intensional type 64
ideographic name 194
intensional verb 228
imperative 126, 127, 132, 155
intensionality 52, 61
implication 37, 39, 41, 151
intensive 188
implicative verb 146, 151, 152, 232
interpretation 41, 48, 50, 61, 72, 128,
implicature 14
201
impossibility 54
intersection 22
inalienable possession 122
intransitive verb 5, 6, 58, 59, 64, 124,
inclusive disjunction 22, 39, 150
164-166, 168, 172-174,
inclusive or 202
176, 189, 191, 198, 199,
indefinite article 26, 177
215, 217, 227, 236, 239
indefinite quantifier 79
indeterminate value 147, 148 invalid 41
index 62, 105, 106, 108, 127 invalidity 53
indexical 13, 127-129 iota operator 140
— model 132 irreflexive relation 28
indexing rule 108 Isard 135
indicative 233 isomorphism 12, 32
individual 5, 6, 11, 47-49, 68, 99,
Jackendoff 115
127, 139, 142-144, 167,
joint 22
177, 201, 214, 215, 217,
justification 211, 212
218, 227, 231, 234-236
— concept 70, 98, 201, 215, 228
Kalish 16, 95
infelicity 156
Karttunen 134, 146, 158
inference 40, 47, 211, 212
Katz 10, 15
infinite 23
Keenan 15, 53, 137, 144
— verb 121
Kepler paradox 96
infinitive 98, 106, 108, 111, 121, 123,
Kiparsky 144
124, 172, 173, 193, 194,
Kittay 14
231, 232
Klima 197, 208 logical product 22
Kneale-Kneale 96 logical quantifier 176, 217, 222
Koster 104, 109, 174 logical representation 90, 162
Kripke 55 logical sum 22
logical validity 51
Lakoff 4, 86, 126, 133, 134, 136, 143 logical word 55
lambda abstraction 18, 61, 120, 132, logically true proposition 4
164, 165, 178, 190, 227 Lukasiewicz 147
lambda binding 168 Lyons 81, 210
lambda categorial language 196
lambda conversion 60, 71 major premise 95
lambda operator 59, 140, 164 major term 45
Langer 30 many-to-many relation 29
language acquisition 18 many-to-one relation 29
law of substitution 7 mapping 34, 148
Lees 179 mass noun 176, 181, 182, 187, 216,
leftness condition 119, 120 218, 220, 221, 225
Leibniz 7, 10, 16, 95 material implication 52, 53
—' law 52 mathematical linguistics 1
Leonard 152-154 McCawley 38, 48, 51, 53, 58, 98,
Lesniewski 63 137, 140-146, 148, 149,
Lewis 3, 11, 58, 62, 126-128, 134, 153, 157-159
237 meaning postulate 76, 228
lexical ambiguity 16, 17 meaningful expression 66
lexical content 78, 79 meet 22
lexical decomposition 153 meso- 29
lexical noun 108, 111 metalanguage 207
lexicalism 107 metaphor 14
lexicon 63, 199 middle term 45
linguistic-knowledge 3 minor premise 95, 97
logical connective 203 minor term 45
logical constant 38, 40, 51, 84, 147, modal 50
202, 203, 205 — calculus 104
logical deletion 170 — logic 18, 39, 42, 54, 61, 68, 72
logical form 2, 104, 105, 116, 130, — operator 63, 182, 193
133, 163-166, 169, 175, — verb 54, 174
177, 179, 183, 185, 186, modality 42, 50, 74, 101
188, 192, 194 model 13
logical form representation 18, modus ponens 40
107-109, 111, 112, monotonicity 42
116-118, 120, 127 Montague 1, 2, 10-13, 15-20, 29, 37,
logical implication 9 42, 45, 47, 50, 58, 61-66,
logical impossibility 53 68-72, 74, 76, 82, 87, 92,
logical invalidity 51 93, 95, 96, 98, 109,
126-129, 131, 132, 140, nonrestrictive relative clause 89, 185
147, 162, 165, 168, 169, nonspecific 86-88, 218, 220
171-174, 177, 182, 183, nonsymmetrical relation 28
185, 186, 196, 201, 217, noun phrase 6, 16, 43, 83, 87,
228, 237 noun phrase 6, 16, 43, 83, 87,
— grammar 1, 68, 105 105-107, 109, 114, 122,
mood 45, 47 181
Moore 33 -— conjunction 90
morphology 4, 173, 186-188 np-movement 113
movement 106, 119, 175, 188, 192 null set 21-23
multiplication 22, 31, 164
object deletion 170
narrow scope 86, 89, 220 obligation 54, 101
natural logic 135 oblique 120-123
natural number 21, 23 — context 64, 65
necessary condition 3, 4, 57 old information 107, 138, 141, 156,
necessary connection 211 157, 219
necessary proposition 51, 52 one-to-many relation 29
necessary truth 10 one-to-one relation 29, 32, 33
necessity 4, 7, 10, 41, 50, 51, 54 opacity condition 111, 112, 115, 117,
— operator 52, 53, 101 119
negation 21, 30, 31, 37, 85, 87, 88, opaque context 52, 64, 87, 88, 91
101, 139, 146, 167, 202, open formula 47
207-210, 220 open individual 131, 141-143
negative comparison 187 open proposition 124, 129, 130, 141,
negative 192 147, 173, 201
negator 182, 209 ordered pair 6, 8, 24-26, 93
new information 138, 156-158, 220 ordered set 23
nominal 58, 90, 125, 140, 162, ordering 120, 130
165-167, 171, 173, 175, — function 132
176, 178, 180, 185, 192,
193, 208, 213-215, 217, paradox 65, 94, 96
221, 226, 227, 229, 230, part of speech 79, 162, 186
238 Partee 177, 179, 186
nominalisation 90 partial function 99, 142
nominative 120-122 particle 170-172, 188, 193
nonclassical necessitation 149 particular affirmative 43
nonclassical world 55 particular negative 43
nonessential property 11 particular quantification 43, 44, 48,
nonfactive clause 145 82
nonfactive verb 144, 145 partitive 121, 179, 180, 184, 189,
nonlogical 202 190, 226, 227
— necessity 10 passive 86, 122
nonmonotonic 54 past tense 75, 98, 99
performative antinomy 136 Prior 42, 68
performative 126, 133, 134, 137 probability grammar 1
permission 54 problem of knowledge 3
phrasal verb 171, 229 product set 24, 25, 72
plane of expression 2 progressive aspect 170
plural 84 prohibition 54
plurality 182 projection principle 114
point of reference 127 pronoun 13, 107-113, 115, 117-119,
polysemy 80, 81, 101, 220 127, 157, 171
positive 186, 189, 192 proper noun 5, 11, 46-48, 62, 83, 97,
possibility 50, 54, 57, 74, 101 117, 165, 167, 168, 178,
— operator 53 183, 201, 214
possible world 3-6, 8, 11, 41, 42, 47, proper subset 21
50, 51, 53, 55-57, 61, 62, property 5, 6, 41, 47, 49, 70, 71, 96,
68, 127, 130, 142, 201, 202 99, 114, 128, 129, 142,
possibly true proposition 51, 53, 131 167, 176, 177, 186, 186,
postpone 171, 172 188, 193, 194, 205, 215,
postulate 30, 31, 64 216, 224, 236, 237, 239
— of properties 214, 239
power set 24
proportional quantifier 176, 187, 188,
pragmatic 79, 105, 125, 201, 204,
190, 192, 217, 222
206, 207, 234, 238
proposition 3-6, 8-10, 12-15, 27, 30,
— operator 39, 210
31, 41-43, 47, 50-53,
— presupposition 137-139, 155
55-57, 70, 71, 81, 92-94,
pragmatics 13, 126
123, 124, 129, 139, 141,
predicate calculus 18, 43, 58, 61, 73,
142, 148, 155, 156, 201,
94, 104, 106, 116, 164
202, 204, 205, 208, 210,
predicative 181, 182, 184, 189, 190,
213, 229, 233, 239
199
propositional attitude 8, 54, 83, 87,
— be 74, 82, 83, 96
91-93, 95, 97, 98, 123,
preposition 121, 122, 130, 171, 172,
124, 166, 174, 231
178, 179, 183, 184, 241,
propositional calculus 18, 40, 41, 45,
242
47, 163
prepositional adverb 229 propositional content 9, 12, 134, 202,
prepositional phrase 185, 241, 242 238
present perfect 143, 213 propositional equivalence 56
present 68, 213 propositional function 5, 17, 47-49,
— tense 75, 100, 148 58, 59, 76, 124
presupposition 15, 52, 126, 148, 152, propositional identity 56
153, 156-159, 219, 239 protoproposition 56, 57
presuppositional failure 69, 138, 147, prototype theory 78
155, 156 pseudocontradiction 205
presuppositional language 148
primary occurrence 92 quantification 43, 88, 116
quantifier 49, 85, 88, 106, 107, 116, Russell 15, 23, 24, 37, 40, 47, 69, 82,
167, 177, 178, 184, 186, 84, 92, 140, 141, 155, 177,
192 186, 204, 207, 235
— interpretation 106, 107, 119
— order 86 Sapir 223
— phrase 71, 87, 106, 176, 178, 179, satisfaction 5, 14, 17, 25, 48, 49, 59,
182, 183 73, 84, 130, 134, 135, 137,
— scope 17, 44, 45, 104, 106, 119, 140, 149, 209, 215, 224,
175 235, 238
— word 88, 167, 168, 176 scope 60, 85, 87, 105, 106, 120, 207
question 133, 153, 154 — ambiguity 85, 86, 88, 89
Quine 16, 79-81, 85-88, 102, 185, — of negation 147
186, 205, 220 Scott 127
Quirk 182 Searle 84, 136
second order calculus 49
range 6, 26 secondary occurrence 92
reconstruction 118 secondary presupposition 154
reduction 189, 192 selectional restriction 116
reference modifier 184 selectional restriction 216, 217, 228,
reflexive pronoun 106, 111, 114, 115, 229, 234
117, 123 semantic presupposition 138, 139,
reflexive relation 28 146, 151, 158
reflexivity 57 semantic redundancy 94
regimentation 102, 185 semantic representation 2, 18, 39, 59,
Reichenbach 22, 24, 27, 29, 50 61, 76, 104, 125, 130, 132
relation 5, 6, 15, 16, 22, 23, 25-29, sense 6, 62, 63, 72, 83, 93, 95, 96,
41, 48, 52, 55-57, 60, 71, 10, 129, 214, 229
94, 95, 107, 108, 111, 148, sentence grammar 107
149, 151, 158, 188, 192, sentence modifier 50, 183, 196
206, 207, 211 sentential adverb 63, 64, 182, 190
relative clause 90, 110, 112, 113, sentential complement 87, 91, 92,
141, 144, 175, 179, 185, 193, 194
186, 189, 198 sentential connective 203
relative pronoun 111, 143, 186 set 20, 28, 29, 31, 48, 141
relevance 14, 49, 204, 205 — theory 20
remote subject 124 Sgall 130
reported speech 91 shallow structure 2, 105, 120, 169,
Rescher 69, 147 219
restrictive relative clause 89, 185 similarity relation 23
rightness 207 simple past 143
rigid designator 11 simple present 98, 100
rule of derivation 32 sortal presupposition 142, 143, 153,
rule of inference 32 156
rules of functional application 196 specific 7, 86-88, 170, 218, 220
speech act 134, 156 — binding 107
speech-act theory 14 transformation 25, 34, 86, 107,
state of affairs 3, 10, 47, 48, 50, 148 169-171, 185, 231, 232
statistical linguistics 1 transformational 14, 16, 184
Strawson 69, 131, 142, 151, 155 — grammar 86, 89, 105, 170, 182
strict implication 9, 53, 160 transitive relation 28
structural ambiguity 16 transitive verb 6, 7, 122, 167-172,
structure dependence 109 174, 198, 227, 228, 2 3 0
subjunctive 123, 233 transitivity 28, 58
subordinate 189 truth 3, 4, 8, 9, 21, 30, 31, 39, 65,
subordinating 197, 203 79, 80, 105, 126, 127, 134
subset 21, 24, 28 — condition 3, 9, 202
substantive verb 74, 82, 82, 97 — predicate 9 4
substitution 5, 7, 8, 40, 51, 59, 94, 97 — table 38, 40, 149, 203
— by definition 40 — value 3, 8, 9, 14, 38
superlative 186, 189-191 functionality 51
supervaluation 148-150 value gap 147
suppletion 79, 186-188 type 63, 64, 167
surface form 93
surface structure 2, 104, 107, 116,
uncertainty 207, 208
118, 125, 127, 166, 170,
underlying item 93, 94, 178
173, 183, 189, 213
understand 129
Sweet 78
understanding 3
syllogism 45, 95, 97
uniform substitution 4 0
symmetrical 58
union 22
— relation 28
unique individual 16, 47, 99, 140,
synonymy 9, 10, 12, 49, 180
141, 177, 214
syntactic function 5, 162, 163
unique value 5, 26, 27, 48, 131
tag-question 209 unit set 23
Tarski 9, 73, 94 unitary nominal 176, 177, 179, 181,
tautology 3, 7, 31, 41, 139, 151 192, 198
Taylor 78 universal affirmative 43
temporal connective 38, 212, 213 universal grammar 18, 66, 79, 109,
temporal quantifier 79, 80, 99 111, 113, 120
tense 68, 121, 130, 131, 143, 153, universal negative 43
209, 212, 213, 68 universal quantification 43, 44, 48,
— logic 18, 68 49, 74, 82, 222
term of a syllogism 45 universal set 21, 31
term variable 164, 165, 175 universal statement 45
thing 5, 47, 234 universe of discourse 140, 141
Thomason 1, 27, 74 unordered pair 25
topic 142, 156 utterance 75, 126, 128, 130, 131,
trace 106, 107, 113, 116, 118, 119 132, 240
vacuous 143, 144, 146, 147, 155 wellformed formula 37
— preposition 121, 179-181, 183 wellformedness condition 105-107
valid question 154 wellformedness constraint 109
validity 46, 51, 55, 56 wh-interpretation 118-119
value 26 wh-movement 107, 119, 122
Van Dijk 156, 157, 204 wh-question 116, 119, 120, 75
wh-trace 117, 118
Van Fraassen 148, 149, 160
wh-word 116, 118, 133, 174, 209,
Van Riemsdijk 113
234
Van Riemsdijk-Williams 104, 105,
Whitehead 37, 40
110, 111, 114, 115,
wide scope 85, 86, 89, 220
117-119
world-heaven 57
Vendler 141, 237
world-knowledge 3
verb 27, 43, 50, 70, 122, 192, 193,
228 yes/no question 175
— phrase adverb 182
— phrase modifier 182 Zermelo 21
void set 23, 31