

ISBN 963 85504 5 7
Imre Ruzsa, 1997.
Printed in Hungary
Copies of this book are available at
Aron Publishers
L. Eotvos University, Department of Symbolic Logic
Nyomdai CP STUDIO, Budapest
In memoriam my wife (1942-1996)
The material of the present monograph originates from a series of lectures held by the
author at the Department of Symbolic Logic of L. Eotvos University, Budapest. The
questions and the critical remarks of my students and colleagues gave me very
valuable help in developing my investigations. My sincere thanks are due to all of them.
Special thanks are due to PROFESSOR ISTVAN NEMETI and Ms AGNES KURUCZ
who read the first version of the manuscript and made very important critical remarks.
In preparing and printing this monograph, I got substantial help from my son
DR. FERENC RUZSA as well as from my daughter AGNES RUZSA.
* * *
This work was partly supported by the Hungarian Scientific Research Foundation
(OTKA II3, 2258) and by the Hungarian Ministry of Culture and Education (MKM
Imre Ruzsa
Budapest, June 1996.
CONTENTS

Chapter 1 Introduction
1.1. The subject matter of metalogic
1.2. Basic postulates on languages
1.3. Speaking about languages
1.4. Syntax and semantics
Chapter 2 Instruments of metalogic
2.1. Grammatical means
2.2. Variables and quantifiers
2.3. Logical means
2.4. Definitions
2.5. Class notation
Chapter 3 Language radices
3.1. Definition and postulates
3.2. The simplest alphabets
Chapter 4 Inductive classes
4.1. Inductive definitions
4.2. Canonical calculi
4.3. Some logical languages
4.4. Hypercalculi
4.5. Enumerability and decidability
Chapter 5 Normal algorithms
5.1. What is an algorithm?
5.2. Definition of normal algorithms
5.3. Deciding algorithms
5.4. Definite classes
Chapter 6 The first-order calculus (QC)
6.1. What is a logical calculus?
6.2. First-order languages
6.3. The calculus QC
6.4. Metatheorems on QC
6.5. Consistency. First-order theories
Chapter 7 The formal theory of canonical calculi (CC*)
7.1. Approaching intuitively
7.2. The canonical calculus Σ*
7.3. Truth assignment
7.4. Undecidability: Church's Theorem
Chapter 8 Completeness with respect to negation
8.1. The formal theory CC
8.2. Diagonalization
8.3. Extensions and discussions
Chapter 9 Consistency unprovable
9.1. Preparatory work
9.2. The proof of the unprovability of Cons
Chapter 10 Set theory
10.1. Sets and classes
10.2. Relations and functions
10.3. Ordinal, natural, and cardinal numbers
10.4. Applications
References
Index
List of symbols
APPENDIX (Lecture Notes): Type-Theoretical Extensional and Intensional Logic
Part 1: Extensional Logic
Part 2: Montague's Intensional Logic
References
1.1 The Subject Matter of Metalogic
Modern logic is not a single theory. It consists of a considerable (and ever growing)
number of logical systems (often called - regrettably - logics). Metalogic is the
science of logical systems. Its theorems are statements either on a particular logical system
or about some interrelations between certain logical systems. In fact, every system of
logic has its own metalogic that, among others, describes the construction of the sys-
tem, investigates the structure of proofs in the system, and so on. Many theorems
usually known as "laws of logic" are, in fact, metalogical statements. For example, the
statement saying that modus ponens is a valid rule of inference - say, in classical first-
order logic - is a metalogical theorem about a system of logic. A deeper
metalogical theorem about the classical first-order logic tells us that a certain logical
calculus is sound and complete with respect to the set-theoretical semantics of this
system of logic.
Remark. It is assumed here that this essay is not the first encounter of the reader with logic (nor
with mathematics), so that the examples above, and some similar ones later on, are intelligible. However,
this does not mean that the understanding of this book is dependent on some previous knowledge of logic
or mathematics.
Another very important task of metalogic is to answer the problem: How is logic
possible? To give some insight into the seriousness of this question, let me refer to the
well-known fact that modern logic uses mathematical means intensively, whereas
modern mathematical theories are based on some system(s) of logic. Is there a way out from
this - seemingly - vicious circle? This is the foundational problem of logic, and its
solution is the task of the introductory part of metalogic. The greater part of this essay
is devoted to this foundational problem.
The device we shall apply in the course consists in dealing alternatively with
mathematical and logical knowledge, without drawing a sharp borderline between
mathematical and logical means. No knowledge of a logical or a mathematical theory
will be presupposed. The only presupposition we shall exploit will be that the reader
knows the elements of the grammar of some natural language (e.g., his/her mother
tongue), can read, write and count, and is able to follow so-called formal expositions.
(Of course, the ability last mentioned assumes - tacitly - some skills which can be
best mastered in mathematics.)
The introductory part of metalogic is similar to the discipline created by David
Hilbert, called metamathematics. (See, e.g., HILBERT 1926.) The aim of
metamathematics was to find a solid foundation for some mathematical systems
(e.g., number theory, set theory) by using only so-called finite means. In Hilbert's
view, finite mathematics is sufficient for proving the consistency of transfinite
(non-finite, infinite) mathematical theories. In a sense, the foundation of a logical
calculus (which is sufficient for mathematical proofs in most cases) was included in
Hilbert's programme. (Perhaps this is the reason that scientists who think that
modern logic is just a branch of mathematics - called mathematical logic - often
say metamathematics instead of metalogic.)
Metalogic is not particularly interested in the foundation of mathematical theories.
It is interested in the foundation of logical systems. In its beginning, metalogic will use
very simple, elementary means which could be qualified as finite ones. However, the
author does not dare to draw a borderline between the finite and the transfinite. We
shall proceed starting with simple means and using them to construct more complex ones.
Every system of modem logic is based on a formal language. As a consequence,
our investigation will start with the problem: How is it possible to construct a
language? Some of our results may turn out to be applicable not only to formal
languages but to natural languages as well.
This essay is almost self-contained. Two exceptions where most proofs will be
omitted are the first-order calculus of logic (called here QC, Chapter 6) and set theory
(Chapter 10). The author assumes that the detailed study of these disciplines is the task
of other courses in logic.
Technical remarks. Detailed information about the structure of this book is to be
found in the Table of Contents. At the end of the book, the Index and the List of
Symbols help the reader to find the definitions of the notions and symbols referred
to. In internal references, the abbreviations 'Ch.', 'Sect.', 'Def.', and 'Th.' are used
for 'Chapter', 'Section', 'Definition', and 'Theorem', respectively. References to the
literature are given, as usual, by the (last) name of the author(s) and the year of
publication (e.g., 'HILBERT 1926'); the full data are to be found in the References
(at the end of the book). No technical term or symbol will be used without
definition in this book. A bullet '•' marks the end of a longer proof or definition. A
considerable part of the material in this book is borrowed from a work by the
author written in Hungarian (RUZSA 1988).
1.2 Basic postulates on languages
On the basis of experiences gained from natural languages, we can formulate our first
postulate concerning languages:
(L1) Each language is based on a finite supply of primitive objects. This supply is
called the alphabet of the language.
In the case of formal languages, this postulate will serve as a normative rule.
In spoken languages, the objects of the alphabet are called phonemes, in written
languages letters (or characters). The phonemes are (at least theoretically) perceivable
as sound events, and the letters as visible (written or printed) paint marks on the
surface of some material (e.g., paper) - this is the difference between spoken and written
languages. We shall be interested only in written languages.
In using a language, we form finite strings from the members of the alphabet,
allowing the repetition (repeated occurrence) of the members of the alphabet. In the case
of spoken languages, the members of such a string are ordered by their temporal con-
secution. In the case of written languages, the order is regulated by certain conventions
of writing. (The author assumes that no further details are necessary on this point.)
Finite strings formed from the members of the alphabet are called expressions - or,
briefly, words - of that language. The reader may comment here that not all possible
strings of letters (or of phonemes) are used in a (natural) language. Only a part of the
totality of possible expressions is useful; this part is called the totality of well-formed or
meaningful expressions. (But a counterexample may occur in a formal language!) Be
that as it may, to define the well-formed expressions, we certainly must refer to the totality of
all expressions. Thus, our notion of expressions (words) is not superfluous.
Note that one-member strings are not excluded from the totality of expressions.
Hence, the alphabet of a language is always a part of the totality of words of that
language. Moreover, for technical reasons, it is useful - although not indispensable -
to include the empty string, called the empty word, amongst the words of a language.
(We shall return to this problem later on.)
Our second postulate is again based on experiences with natural languages:
(L2) If we know the alphabet of a language, we know the totality of its words
(expressions). In other words: The alphabet of a language uniquely determines the
totality of its words.
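Postulate (L2) can be made vivid by a small mechanical sketch (in Python, purely as an illustration; nothing in it belongs to the text's formal apparatus): given the alphabet, the totality of words is generated by listing all finite strings in order of increasing length, starting with the empty word.

```python
from itertools import count, islice, product

def words(alphabet):
    """Generate every finite string over the given alphabet,
    in order of increasing length, starting with the empty word."""
    for n in count(0):                          # word length: 0, 1, 2, ...
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

# The first seven words over the two-letter alphabet {'a', 'b'}:
print(list(islice(words("ab"), 7)))
# ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Note that the generator never terminates: the totality of words over a non-empty alphabet is infinite, yet each word appears at some finite stage, which is exactly what (L2) requires of the alphabet.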
To avoid philosophical and logical difficulties we need the third postulate:
(L3) The expressions ofany language are ideal objects which are sensibly realiz-
able or representable (in any copies) by physical objects (paint marks) or events
(sound events or others).
This assumption is no graver than the view that natural numbers are ideal objects.
And it makes intelligible the use of a language in communication.
Thus, in speaking of an expression (of a language) we speak of an ideal object
rather than of a perceivable object, i.e., a concrete representation of that ideal object. In
other words: Statements on an expression refer to all of its possible realizations, not
only to some particular representation of the expression. (Reference to a concrete copy
of an expression must be indicated explicitly.)
1.3 Speaking about languages
Speaking (or writing) about a language, we must use a language. In this case, the lan-
guage we want to speak about is called the object language, and the language we use
is called the language of communication or, briefly, the used language. The object
language and the used language may be the same.
The used language is, in most cases, some natural language, even when the object
language is a formal one. However, the used language is, in most cases, not simply the
everyday language, but it is supplemented with some technical terms (borrowed from
the language of some scientific discipline) and perhaps with some special symbols
which might be useful in the systematic treatment of the object language. (If the object
language is a natural one, then the used everyday language is to be supplemented, ob-
viously, with terms of linguistics. In the case of formal languages, the additional terms
and symbols are borrowed from mathematics, as we shall see later on.)
The fragment of the used language that is necessary and sufficient for the descrip-
tion and the examination of an object language is called usually the metalanguage of
the object language in question. In fact, this metalanguage is relativized to the used
language. (Changing the language of communication, the metalanguage of an object
language will change, too.) If the object language is a formal one, there might be a
possibility to formulate its metalanguage as a formal language. (Theoretically, this
possibility cannot be excluded even for natural object languages.) However, in such a
case we need a meta-metalanguage for explaining - to make intelligible - the formal-
ized metalanguage. This device can be iterated as often as one wishes, but in the end
we cannot avoid the use of a natural language, provided one does not want to play an
unintelligible game.
When speaking about an object language, it may occur that we have to formulate
some statements about some words of that language. If we want to speak about a word,
we must use a name of the word. Some words (but not too many) of a language may
have a proper name (e.g., epsilon is the proper name of the Greek letter e), others can
be named via descriptions (e.g., 'the definite article of English'). A universal method
in a written used language (to be used in this essay as well) consists in putting the
expression to be named in between simple quotation marks (inverted commas); e.g.,
'Alea iacta est' is a Latin sentence which is translated
into English by 'The die is cast' .
(Note that in a written natural language the space between words counts as a letter.)
The omission of quotation marks can lead to ambiguity, and, hence, it is a source
of language humour. An example:
- What word becomes shorter by the addition of a syllable?
- The word 'short'.
Surely, it becomes 'shorter' but not shorter.
1.4 Syntax and semantics
The science dealing with symbols (or signs) and systems of symbols is called
semiotics. Languages, as systems of symbols, belong, obviously, under the authority
of semiotics. Semiotics is, in general, divided into three main parts: syntax,
semantics, and pragmatics. (See, e.g., MORRIS 1938, CARNAP 1961.)
The syntax of a language is a part of the description (or investigation) of the lan-
guage dealing exclusively with the words of the language, independently of their
meaning and use. Its main task is to define the well-formed (meaningful) expressions
of the language and to classify them.
The part of linguistic investigations that deals with the meaning of words and with
the interrelations between language and the outer world but is indifferent with respect
to the circumstances of using the language belongs to the sphere of semantics.
Finally, if the linguistic investigation is interested even in the circumstances of
language use then it belongs to the sphere of pragmatics.
No rigid borderlines exist between these regions of semiotics. In natural languages,
most parts of syntactic investigations cannot be separated from the study of the
communicative function of the language, and hence investigations in the three regions
become intertwined strongly. Of course, in the systematization of the results of studies,
it is possible to omit the semantical and the pragmatic aspects; this makes possible
pure syntax as a relatively independent area of language investigation.
In the case of formal languages, the situation is somewhat different. A formal
language is not (or, at least, rarely) used for communication. It is an artificial product
aiming at the theoretical systematization of a scientific discipline (e.g., a system of
logic, or a mathematical theory). Its syntax (grammar) and semantics are not discovered
by empirical investigations, rather, they are created, constituted. Thus, seemingly, here
we have a possibility for the rigid separation of syntax and semantics. However, if our
formal language is not destined to be a l'art pour l'art game, its syntax must be suit-
able for some scientific purposes; at least a part of its expressions must be interpret-
able, translatable into a natural language. Consequently, the syntax and the semantics
of a formal language created for some scientific purpose must be intertwined: it is im-
possible to outline, create the former without taking into consideration the latter. After
the outline of the language, of course, the description of its syntax is possible inde-
pendently from its semantics. In this case, the role and function of the syntactic notions
and relations will be intelligible only after the study of the semantics.
In this essay, the following strategy will be applied: Syntax and semantics (of a
formal language) will be treated separately, but in the description of the syntax, we
shall give preliminary hints with respect to the semantics. By this, the reader will get an
intuitive picture about the function of the syntactic notion. However, our main subject
matter belongs to the realm of syntax.
Formal languages are "used" as well, even if not for communicative purposes, but
in some scientific discipline (e.g., in logic). Applications of a formal language can be
assumed as belonging to the sphere of pragmatics - if somebody likes to think so.
However, this viewpoint is not applied in the literature of logic.
After these introductory explanations, we should like to turn to our first problem:
How is it possible to construct the syntax of a language? However, we shall deal first
with the means used in the metalanguages. This is the subject of the following chapter.
2.1 Grammatical Means
The basic grammatical instruments of communication are the declarative sen-
tences. This holds true for any metalanguage - and even for our present hyper-
metalanguage used for the description of instruments of metalanguages. We formulate
definitions, postulates, and theorems by means of declarative sentences. In what
follows, we shall speak - for the sake of brevity - of sentences instead of declarative
sentences. Thus, sentence is the basic grammatical category of metalanguages.
Another important grammatical category is that of individual names - in what
follows, briefly, names. Names may occur in sentences, and they refer to (or denomi-
nate) individual objects, of course, in our case, grammatical objects (letters, words,
expressions). In the simplest case, they are proper names introduced by convention.
Compound names will be mentioned later on.
The most frequent components of sentences will be called functors. In the first
approximation, functors are incomplete expressions (in contrast to sentences and
names, which are complete ones insofar as their role in communication is fixed)
containing one or more empty places, called argument places, which can be filled in
by some complete expressions (names or sentences), whereby one gets a complete
expression (name or sentence).
Remark. There exist functors of which some empty place is to be filled in by another functor. In our
investigations, we shall not meet with such a functor. Hence, the above explanation of functors - although
defective - will suffice for our purposes.
Functors can be classified into several types. The type of a functor can be fixed by
determining (a) the category of words permitted to fill in its argument places (for each
argument place), and (b) the category of the compound expression resulting from
filling in all its empty places.
According to the number of empty places of a functor, we speak of one-place or
monadic, two-place or dyadic, three-place or triadic, ..., multi-place or polyadic functors.
A functor can be considered as an automaton whose inputs are the words filled in
its empty places, and whose output is the compound expression resulting from filling
in its empty places. Using this terminology, we can say that the type of a functor is
determined by the categories of its possible inputs and the category of its output.
A functor is said to be homogeneous if all its inputs belong to the same category,
i.e., if all its empty places are to be filled in with words of a single category. We shall
deal only with homogeneous functors.
In metalanguages, we shall be interested in the following three types of
(homogeneous) functors.
(1) Sentence functors forming compound sentences from sentences. Their inputs
and outputs are sentences.
(2) Name functors forming compound names from names. Their inputs and out-
puts belong to the category of names.
(3) Predicates forming sentences from names. Their inputs are names, and their
outputs are sentences.
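The three types can be mimicked by typed functions, modelling sentences as booleans and names as strings. The following Python fragment is only an analogy; the particular functors chosen (and their names) are our own invented examples, not taken from the text:

```python
def neither_nor(p: bool, q: bool) -> bool:
    """A dyadic sentence functor: sentences in, a sentence out."""
    return (not p) and (not q)

def plural(noun: str) -> str:
    """A monadic name functor: a name in, a name out."""
    return noun + "s"

def is_short(word: str) -> bool:
    """A monadic predicate: a name in, a sentence out."""
    return len(word) <= 4

print(neither_nor(False, False))  # a true compound sentence
print(plural("cow"))              # the compound name 'cows'
print(is_short("cow"))            # a true sentence about the name's bearer
```

In this analogy, the "type" of a functor is precisely the function's signature: the input categories are the parameter types, and the output category is the return type.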
Another cross-classification of functors consists in distinguishing logical and non-
logical functors. Logical functors have the same fixed meaning in all metalanguages.
All the sentence functors we shall use are logical ones. We shall meet them in Sect. 2.3.
Among the predicates, there is a single one that counts as a logical one: this is the dy-
adic predicate of identity. All the other functors we shall deal with are non-logical ones.
Name functors express operations on individual objects in order to get (new) ob-
jects. Well-known examples are the dyadic arithmetical operations: addition, multipli-
cation, etc. Thus, the symbol of addition, '+', is a dyadic name functor. The expression
'5 + 3'
is a compound name (of a number) formed from the names '5' and '3'. Here the input
places surround the functor (we can illustrate the empty places by writing '... + ---');
this is the general convention with respect to using dyadic functors (so-called infix
notation).
In metalanguages, the most important dyadic name functor we shall use is called
concatenation. It expresses the operation by which we form a new word from two
given words, linking one of them after the other. For example, from the (English)
words 'cow' and 'slip' we get by concatenation the word 'cowslip' (or even, in the
reversed order, 'slipcow'). This operation will be expressed as
'cow' ∩ 'slip'
where '∩' is the concatenation functor. Now, one sees that (in any language) the
words consisting of more than one letter are composed from letters by (iterated)
applications of concatenation. We shall deal with this functor in more detail in Sect. 3.1.
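In Python, for instance, concatenation of strings is the built-in '+' operation, and a many-letter word is indeed obtained from one-letter words by iterating it (an illustration only, not part of the text's notation):

```python
from functools import reduce

print("cow" + "slip")   # cowslip
print("slip" + "cow")   # slipcow

# 'cow' built from the one-letter words 'c', 'o', 'w'
# by iterated (left-to-right) concatenation:
print(reduce(lambda u, v: u + v, ["c", "o", "w"]))   # cow
```

The empty word is the identity of this operation: concatenating it with any word, on either side, gives back that word unchanged, which is one technical reason for admitting it.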
Monadic predicates are used to express properties of individual objects. If the ar-
gument place of such a predicate is filled in by a name, we get a sentence stating that
the object denominated by the name (if any) bears the property expressed by the predi-
cate. An arithmetical sentence can serve as an illustration:
Eleven is an uneven number.
Here the property is expressed by the predicate '... is an uneven number'; and its
argument place is filled in by the name 'eleven'.
Multi-place (or polyadic) predicates express relations between individual objects.
In arithmetic, the symbol '<' is an abbreviation of the dyadic predicate '... is smaller
than ---'; thus, e.g., '9 < 7' expresses a (false) arithmetical statement.
Among the dyadic predicates, there is the logical predicate of identity. Its
well-known symbol, '=', can be expressed by (English) words as '... is identical with
---'. Putting names into the empty places we get a sentence stating that the two
names denote the same object; e.g.,
(1) 8 + 5 = 6 + 7,
(2) 9 × 7 = 61.
Obviously, sentence (1) is true (for '8 + 5' denotes the same number as '6 + 7' does),
but (2) is false (since '9 × 7' and '61' denote different numbers). We can express the
denial of (2) by
9 × 7 ≠ 61.
In general, let us agree in using the symbol
'... ≠ ---'
for expressing '... is not identical with ---', or, in other words, '... differs from ---'.
No doubt, identity is a logical predicate in the sense that its meaning is uniquely
fixed by the stipulation that its output is a true sentence if and only if its inputs
denote the same object. As a consequence, a sentence of the form
a = a
where the letter 'a' is replaced by any name (assuming only that this name refers to a
unique object) is always true (is a logical truth), although it conveys no information.
Remark. Unfortunately, in the literature of mathematics and logic, the term 'equality' is mostly used
instead of ' identity' (and one finds 'equals' instead of ' is identical with'). This is regrettable, for identity
and equality can be clearly distinguished. For example, a triangle may have two equal sides (or angles)
which are not identical. Or, in the eyes of the law, we are (supposedly) all equal but, certainly, not identical
with each other.
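Programming languages make the very same distinction. In Python, for example, '==' tests equality while 'is' tests identity of objects (a side illustration only):

```python
a = [1, 2, 3]
b = [1, 2, 3]

print(a == b)   # True:  a and b are equal (they have the same members)
print(a is b)   # False: a and b are not identical (two distinct objects)
print(a is a)   # True:  every object is identical with itself
```

The two equal but distinct lists play the role of the triangle's two equal but non-identical sides.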
The grammatical categories reviewed in this section are very important ones in
metalanguages. However, they are not all that we need. We cannot dispense with the
use of further means treated in the next section.
2.2 Variables and Quantifiers
Let us consider the following sentence speaking about the grammar of a certain
natural language:
(1) Each substantive noun is a noun phrase.
In this sentence no (individual) name occurs. We see in it the predicate 'is a noun phrase',
and we suspect that the expression 'substantive noun' refers somehow to the predicate 'is a
substantive noun'. And we realize that the grammatical categories treated in the previous
section are insufficient for parsing meaningful metalanguage sentences.
Let us re-formulate (1) as follows:
(2) Be it any word, if it is a substantive noun then it is a noun phrase.
Here the two predicates are directly present, but their argument places are filled in by
the pronoun 'it' instead of a name. The core of (2), namely
(3) if it is a substantive noun then it is a noun phrase
seems to be a correct sentence of English, although its information content is unclear as
long as the reference of the pronoun is not given. Now the prefix 'Be it any word' tells
us that the reference of 'it' may be any word. Sentences containing pronouns in the
place of names - such as (3) - may be called open sentences. By applying a suitable
prefix - like ' be it any word', or, more generally, 'be it anything' - to an open sentence
we can get a closed sentence having an unambiguous information content.
In formal languages, it is customary to use special symbols called variables instead
of pronouns. Thus, variables are artificial pronouns in formal languages. They have
been consciously and regularly used in mathematics in modern times. It proves to be
useful to introduce them in metalanguages as well. (The parsing of our example (1),
perhaps, does not give convincing evidence for the use of variables, but our later
examples will be sufficient.) In the formal object languages, variables may occur in
several grammatical categories. However, in metalanguages, we only need so-called
individual variables referring to individual objects (in the case of syntax, to words of
a certain language). Thus, variables may occur in every place where names may
occur. We shall use mainly single italicised letters as variables (Roman and Greek
letters, both upper- and lower-case), but sometimes groups of letters, or letters with
sub- or superscripts, will be used.
Using the letter 'x' as a variable, we can re-formulate the open sentence (3) as follows:
(3') if x is a substantive noun then x is a noun phrase.
And for the closed sentence (2), we introduce as its regular form:
(2') For all x (if x is a substantive noun then x is a noun phrase).
Here the prefix 'For all x' is to be called the universal quantification of the variable x,
and the open sentence between parentheses following the prefix is to be called the
scope of the quantification. The parentheses serve to stress the limits of the scope
(which might be important if the sentence occurs in a longer text).
Now we shall introduce another device in order to get closed sentences from open
ones, called existential quantification. Let us consider the sentence:
(4) Some adverbs end in '-ly'.
We shall re-formulate this as follows:
(4') For some x (x is an adverb, and x ends in '-ly').
This is to be understood as stating that from the open sentence
x is an adverb, and x ends in '-ly'
we can get a true sentence in at least one case by putting a name (of a word) in place
of the variable x. That is, (4') says that there is at least one adverb ending in '-ly'.
Thus, the plural in (4) - which suggests that there are several such adverbs - is ne-
glected in this reconstruction.
The prefix 'For some x' in (4') is to be called the existential quantification of the
variable x, and the open sentence between parentheses after the prefix is to be called
again the scope of the quantification.
For the abbreviation of the prefixes used in the universal and the existential quan-
tification we shall use
'∧x' instead of 'For all x', and
'∨x' instead of 'For some x'.
The symbols '∧' and '∨' are called universal and existential quantifiers,
respectively. These will be used exclusively in metalanguages only. (In object languages, we
shall use other symbols for the quantifiers.) The adjectives 'universal' and 'existential'
are understandable, but the terms 'quantifier' and 'quantification' are somewhat
misleading: they came from the logic of the Middle Ages.
Now the final parsing of our examples is as follows:
(2") ∧x(if x is a substantive noun then x is a noun phrase).
(4") ∨x(x is an adverb, and x ends in '-ly').
(In fact, these are not the final results. The expressions 'if... then' and 'and' will be re-
considered in the next section as logical sentence functors.)
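Over a finite vocabulary, the two quantifications can be checked mechanically; the following Python sketch does so with all() and any() (the toy lexicon and its category tags are invented for the illustration):

```python
# A toy lexicon: each word is tagged with invented grammatical categories.
lexicon = {
    "cow":     {"substantive noun", "noun phrase"},
    "slip":    {"substantive noun", "noun phrase"},
    "quickly": {"adverb"},
    "run":     {"verb"},
}

# (2''): for all x, if x is a substantive noun then x is a noun phrase.
universal = all("noun phrase" in cats
                for cats in lexicon.values()
                if "substantive noun" in cats)

# (4''): for some x, x is an adverb and x ends in '-ly'.
existential = any("adverb" in cats and word.endswith("ly")
                  for word, cats in lexicon.items())

print(universal)    # True
print(existential)  # True
```

The restriction to a finite domain is essential here: all() and any() decide the sentences by inspecting every word, which is exactly what cannot be done over the infinite totality of words of a language.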
The quantifiers are not functors, at least not in the sense treated in the previous
section. They belong to the category of variable binding operators not occurring in
natural languages.
In order to introduce some fine distinctions, we have to take into consideration that
a variable may occur more than once in a sentence. Thus, we need to speak sometimes
about certain occurrences (e.g., the first, the second, ... , etc. occurrence) of a variable
in a sentence. Now, we say that a certain occurrence of a variable, say 'x', in a given
sentence is a bound one if it falls within a subsentence of form
'∧x(...)' or '∨x(...)'
(here the dotting represents the scope of the quantifier), and we say that the quantifier
binds the variable following it throughout its scope. Occurrences of a variable in a
sentence that are not bound ones will be called free occurrences of that variable in the
sentence spoken of.
Thus, a variable may have both free and bound occurrences in the same sentence.
However, in the metalanguages, we shall try to avoid sentences in which the same vari-
able occurs both free and bound.
A sentence is said to be an open one if some variable has some free occurrence in
it, and in the contrary case it is said to be a closed sentence.
An application of a quantifier may be called effective if its variable (i.e., the
variable following the quantifier immediately) has some free occurrences in its scope. In
the contrary case, the quantification may be said to be vacuous or ineffective. In formal
languages, vacuous quantification is permitted. In metalanguages, we shall avoid this as
far as it is possible.
Applying an (effective) quantification to an open sentence, the number of (distinct)
free variables in the resulting sentence will be diminished by 1, compared to the
sentence in its scope.
A name may again be open or closed. A name is open if it involves some variables
(think of a name functor whose empty places - or some of them - are filled in by
variables); thus, a variable alone counts as an open name, too. A name is closed if no
variables occur in it. We do not introduce an operator that forms a closed name from
an open one. Thus, variables of a name can be bound only by quantifiers applied to a
sentence involving the name in question.
Substitution of free variables in a sentence. Given a sentence involving free
occurrences of a variable, we can get another sentence by substituting each free
occurrence of the variable by the same closed name. Substituting open names for a
variable is also permitted, under the condition that no variable of the name will be
bound in the resulting sentence. In more detail: If x is the variable to be substituted
by a name involving the variable y, and the sentence in question involves a
subsentence of form '∧y(...)' or '∨y(...)', then x must not occur freely in this
subsentence - for, in the contrary case, the quantifier with y would bind a free
variable (namely y) of the name. To see the importance of this condition, let us
consider the following example:
(5) If x is a word then Vy(y is longer than x).
The universal quantification (with respect to x) of this sentence seems to be true, if applied to a language. Thus, one can think that the variable x can be substituted by any name without risking the meaningfulness of the result. However, by substituting y for x, we get:
If y is a word then Vy(y is longer than y).
I suppose, no comment is needed.
Bound variables are used in order to show clearly the inner structures of quantified sentences. They cannot be substituted by closed or compound names. However, a bound variable can be substituted by another one. It is unimportant what letter is used as a variable in a quantifier as long as the scope remains intact. Thus, we can substitute a bound variable, say 'x', by another variable, say 'y', provided y has no free occurrences in the scope of the quantification. Such a substitution is to be understood as replacing 'y' for the occurrence of 'x' following the quantifier and for all free occurrences of 'x' in the scope of the quantifier. For example, in (5), the bound variable 'y' can be substituted by 'z' - but not by 'x' -:
If x is a word then Vz(z is longer than x).
The substitution of bound variables is often called re-naming of bound variables. In metalanguages, we shall rarely apply this device. (In formal object languages, it is an
By a universal and an existential sentence let us mean a sentence of form 'Λx(...)' and 'Vx(...)', respectively, where 'x' is any variable. Given such a sentence, let us omit the initial quantifier (and the variable following it), and let us substitute the variable x by a closed name in the remaining sentence. The result will be called an instance or an instantiation of the original (universal or existential) sentence. Taking into consideration the meaning of the quantifiers, it is obvious that we can correctly infer
(a) from a universal sentence to any of its instantiations, and
(b) from any instantiation of an existential sentence to the existential sentence.
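The two inference directions (a) and (b) can be mirrored in a small sketch, with sentences modelled as Python predicates over a finite domain of words (the domain and the open sentence φ are invented for this illustration):

```python
# Hypothetical illustration: over a finite domain of words, a universal
# sentence "Λx φ(x)" is true iff every instance is, and an existential
# sentence "Vx φ(x)" is true iff some instance is.
domain = ["pro", "con", "of"]          # assumed sample domain of words

def universal(phi):
    return all(phi(x) for x in domain)

def existential(phi):
    return any(phi(x) for x in domain)

phi = lambda x: len(x) <= 3            # an assumed open sentence φ(x)

# (a) from a universal sentence we may infer any instantiation:
if universal(phi):
    assert phi("pro")
# (b) from any instantiation we may infer the existential sentence:
if phi("of"):
    assert existential(phi)
```
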
In metalanguages, we are compelled to use variablessystematically. In most cases, our
variables refer to the words of a certain language. In the contrary case, we shall give a
declaration about the permitted reference of the variables. Without such a declaration,
the meaning of quantification would be unclear. (What does it mean 'For all x' if we do not know to what sort of objects the variable 'x' refers?)
Tacit quantification. Let us introduce the convention that if a "completed" metalanguage sentence (i.e., one which is not a part of a longer sentence) involves free variables then it is to be understood as if its free variables were bound by universal quantifiers standing before the sentence and having as their scope the whole sentence.
This is a usual device applied both in mathematics and metalogic, and is called tacit
universal quantification. Of course, we can never omit an existential quantifier, or a
universal one applied only to a subsentence (a clause).
Naming open expressions. In a metalanguage, sometimes we must refer to ex-
pressions involving free variables. Imagine, e.g., a grammatical rule saying that if A
and B are sentences then
(6) if A then B
is a (compound) sentence as well. Here 'A' and 'B' are used as variables (referring to the words of some language), and, moreover, the rule seems to be a universal one, that
is, these variables are tacitly quantified. How can we name the expression standing in
line (6)? Including it by quotation marks would be wrong, for we do not want to say
that the expression 'if A then B' is a sentence. Instead, we should like to say that we get
a sentence from the schema (6) whenever we put sentences in the place of 'A' and 'B'.
Probably, a long and complicated circumscription would be possible, but it is simpler and more economical to introduce some new boundary marks in order to name schemata involving variables (such as (6)). We shall use double quotation marks
(double inverted commas) for this purpose. Then, the grammatical rule mentioned
above can be expressed by the following sentence:
(7) ΛAΛB (if A is a sentence and B is a sentence then "if A then B" is a sentence).
Expressions bordered by double inverted commas - such as "if A then B" above - will
be called quasi-quotations. We agree that quantifiers - explicit or suppressed (tacit)
ones - are effective with respect to variables occurring in a quasi-quotation (in contrast
to variables occurring in a simple quotation). Now an instantiation of a universal sen-
tence involving a quasi-quotation is to be formed as follows: Occurrences of variables
within the quasi-quotations are to be substituted by words - not by names of words -;
and the signs of the quasi-quotation (i.e., the double inverted commas) are to be re-
placed by simple quotation marks (i.e., by simple inverted commas); and the other occurrences of variables (outside of the quasi-quotation) are to be substituted - as usual
- by names of words. Thus, an instance of (7) is as follows:
if 'pro' is a sentence and 'con' is a sentence
then 'if pro then con' is a sentence.
Provided, of course, that the words 'pro' and 'con' are possible values of the variables 'A' and 'B' occurring in (7).
Let us realize that an occurrence of a variable can be considered as an open name, and, hence, it could be included between double inverted commas. For example, a part of (7) could be written as
if "A" is a sentence and "B" is a sentence ...
According to our convention above, an instantiation of such a sentence would give just the correct result; e.g.,
if 'pro' is a sentence and 'con' is a sentence ...
Thus, considering occurrences of variables as quasi-quotations would lead to no confusion. However, this treatment would be superfluous, and, hence, we shall avoid its use. Let us agree that we shall only use quasi-quotations in naming complex expressions involving (free) variables. Also, quasi-quotations within a quasi-quotation must be avoided.
The means introduced in this section will be used intensively in the next section.
Remark. If the reader is familiar with first-order logic then most of this section seems to be well-known for him/her. Note, however, the special use of variables and quantifiers in metalogic. In fact, our instruments in metalogic find room in the frame of classical first-order logic, but we do not refer here to any formal system of logic.
2.3 Logical means
Our first topic in this section is the investigation of sentence functors (mentioned in Sect. 2.1 already) used in metalanguages.
The single monadic sentence functor we shall use is called negation. It serves to express the denial of a sentence. In the case of simple sentences, it can be expressed by inserting a negative particle ('not', or 'does not') into the sentence. (We saw in 2.1 how the denial of an identity sentence can be expressed.) In the general case, negation can be expressed by prefixing the sentence with the phrase 'it is not the case that'.
Our dyadic sentence functors are called conjunction, alternation, conditional, and biconditional. Conjunction and alternation are expressed by inserting 'and' and 'or', respectively, between two sentences. The form of a conditional is
(1) if A, (then) B
provided 'A' and 'B' refer to sentences. Here A is called the antecedent and B is called the consequent of the conditional sentence (1). The word 'then' is between parentheses, for sometimes it is omitted. Finally, the form of a biconditional is
(2) A if and only if B,
and it is used as an abbreviation of a more complex sentence of form
(2') if A then B, and if B then A.
The artificial expression 'if and only if' comes from mathematics but its use became general nowadays in scientific literature. It is often abbreviated by 'iff'. In what follows, we shall use this abbreviation systematically.
We shall introduce symbols for expressing these functors, according to the following conventions where the variables 'A' and 'B' refer to (closed or open) sentences:
(3) "-A" for "it is not the case that A";
(4) "A & B" for "A and B";
(5) "A v B" for "A or B";
(6) "A ⇒ B" for "if A, (then) B";
(7) "A ⇔ B" for "A if and only if B".
The meanings of our sentence functors will be fixed by the following truth conditions based on the assumption that sentences are either true or false.
(a) If A is false then "-A" is true, otherwise it is false.
(b) If both A and B are true then "A & B" is true, otherwise it is false.
(c) If both A and B are false then "A v B" is false, otherwise it is true. (This shows that our use of 'or' corresponds to that of 'and/or' rather than to that of 'either ... or'.)
(d) If A is true and B is false then "A ⇒ B" is false, otherwise it is true.
(e) If both A and B are true, or if both are false, then "A ⇔ B" is true, otherwise it is false. (Taking into consideration that according to (2'), "A ⇔ B" is an abbreviation of "(A ⇒ B) & (B ⇒ A)", this condition follows from (b) and (d) above.)
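The truth conditions (a)-(e) can be checked mechanically; the following sketch models sentences simply as the truth values True and False (an assumption adequate for the bivalence postulated above):

```python
# A sketch of the truth conditions (a)-(e), with sentences modelled as
# the truth values True/False.
def neg(a):        return not a            # (a) negation
def conj(a, b):    return a and b          # (b) conjunction
def alt(a, b):     return a or b           # (c) alternation ('and/or')
def cond(a, b):    return (not a) or b     # (d) conditional
def bicond(a, b):  return cond(a, b) and cond(b, a)   # (e) via (b) and (d)

# (e) follows from (b) and (d): "A ⇔ B" agrees with "(A ⇒ B) & (B ⇒ A)",
# and it is true exactly when A and B have the same truth value.
for a in (True, False):
    for b in (True, False):
        assert bicond(a, b) == (a == b)
```
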
Remarks. 1. On the basis of everyday experiences, the reader may doubt the assumption that sentences are either true or false. However, this doubt is not justified in metalanguages where truth is created by fiat, i.e., some basic sentences are true as postulates or as definitions, and other sentences are inferred from these. Perhaps, the reader may accept that in some limited domains, the true-false dichotomy of sentences is acceptable, especially if we are speaking about ideal objects like in mathematics or in metalogic.
2. The truth conditions (a) ... (e) are more or less in accordance with the everyday use of the words expressing our functors. Rule (d) seems to be most remote from the everyday use of 'if ... then', but this is just the sentence functor very useful in forming metalanguage sentences. In most cases, the symbol '⇒' will occur between open sentences standing in the scope of (tacit or explicit) universal quantification(s). Examples of this use of 'if ... then' are the sentences occurring just in the rules (a) ... (e) above.
3. The symbols introduced in this section and in the preceding one will be used sometimes in the following explanations whenever their use is motivated by the aims of exactness and/or conciseness. However, the everyday expressions of these symbols ('and', 'or', 'if ... then', 'iff', 'for all' etc.) will be used frequently as well.
Given the meaning of our sentence functors, it is clear that they are logical func-
tors. They - and their symbols - are often called (sentence) connectives in the literature
of logic. Let us note that in mathematical logic, the terms disjunction, implication, and
equivalence are used instead of our alternation, conditional, and biconditional, re-
spectively. These are not apt phrases, for they can suggest misleading interpretations.
If we apply more than one of our sentence functors in a compound sentence then
the order of their applications can be indicated unambiguously by using parentheses.
However, some parentheses can be omitted if we take into consideration the properties
of our functors.
First, it follows from the truth conditions above that conjunction and alternation are
commutative and associative, and hence, we can write, e.g.,
"A & B & C" and "A v B v C"
(where the variables 'A', 'B', and 'C' refer to sentences) without using parentheses.
Further, we can see easily that the truth conditions of
"(A & B) ⇒ C" and "A ⇒ (B ⇒ C)"
are the same. That is, if the consequent of a conditional is a conditional then the ante-
cedent of the latter can be transported by conjunction to the antecedent of the main
conditional. This suggests the convention to omit the parentheses surrounding the
conditional being the consequent of a conditional, i.e., to write
"A ⇒ B ⇒ C" instead of "A ⇒ (B ⇒ C)".
Of course, this convention can be applied repeatedly.
Finally, we can realize that the biconditional is
(a) reflexive, in the sense that "A ⇔ A" is always true,
(b) symmetrical, in the sense that from "A ⇔ B" we can infer "B ⇔ A", and
(c) transitive, in the sense that from "A ⇔ B" and "B ⇔ C" we can infer "A ⇔ C".
Identity bears the same remarkable properties - the fact which legitimates the use of
chains of identities of form
a=b=c= ...
(where the variables 'a', 'b', 'c' refer to names), practised from the beginning of pri-
mary school. Then, chains of biconditionals of form
A ⇔ B ⇔ C ⇔ ...
will be used sometimes in the course of metalogical investigations.
As illustrations of our new symbols, let us re-formulate the more detailed logical structure of some sentences used as examples:
(2")* Λx(x is a substantive noun ⇒ x is a noun phrase).
(4")* Vx(x is an adverb & x ends in '-ly').
(5)* x is a word ⇒ Vy(y is longer than x).
(7)* ΛAΛB ((A is a sentence & B is a sentence) ⇒ "if A then B" is a sentence).
Inferences. In the metalogical investigations, we draw some inferences from our
starting postulates and definitions (definitions will be treated in the next section) on the
basis of the meaning of our logical words - i.e., quantifiers, identity, sentence functors,
be they expressed by symbols or by words of a natural language. The meaning of these
words or symbols was exactly fixed by their truth conditions in the present and the pre-
ceding section. No formal system of logic will be used here as a basis legitimating our
inferences - at least not before Chapter 6 that treats of a system of logic.
However, on the basis of the mentioned truth conditions, a list of important inference patterns could be compiled. In the preceding section, we mentioned, e.g., the inference from a universal sentence to its instantiations. The properties of the sentence functors treated in the present section also contain some hidden inference patterns. Instead of giving a large list of inference patterns, we only stress two important ones:
(A) From a conditional "A ⇒ B" and from its antecedent A we can correctly infer its consequent B. This pattern is called modus ponens [placing mood] (in formal systems, sometimes called detachment).
(B) From a conditional "A ⇒ B" and from the falsity of its consequent, i.e., from "-B", we can correctly infer the falsity of its antecedent, i.e., "-A". This pattern is called modus tollens [depriving mood]. It is the core of the so-called indirect proofs. In such a proof, one shows that accepting the negation of a sentence would lead to a sentence which is known to be false, that is, a conditional of form "-A ⇒ -B" is accepted. From this and from the falsity of "-B" - i.e., from the truth of B -, the falsity of "-A" - i.e., the truth of A - follows by modus tollens.
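Both patterns can be verified against truth condition (d) by brute force over the four truth-value combinations; the sketch below models sentences as truth values, as in the earlier illustration:

```python
# Checking modus ponens and modus tollens against the truth condition (d)
# of the conditional; sentences are modelled as truth values.
def cond(a, b):
    return (not a) or b

for a in (True, False):
    for b in (True, False):
        # modus ponens: if "A ⇒ B" and A are true, then B is true
        if cond(a, b) and a:
            assert b
        # modus tollens: if "A ⇒ B" is true and B is false, then A is false
        if cond(a, b) and not b:
            assert not a
```
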
2.4 Definitions
In metalanguages, we often use definitions, mainly in order to introduce terms or symbols instead of longer expressions. Such a definition consists of three parts: (a) the new term called the definiendum, (b) the expression stating that we are dealing with a definition, and (c) the expression that explains the meaning of the definiendum, called the definiens. In verbal definitions, (b) is indicated by words such as 'we say that', 'is said to be', 'let us understand', etc. Some examples of verbal definitions:
(1) By the square of a number let us mean the number multiplied by itself.
(2) We say that a number x is smaller than the number y iff there is a positive number z such that x + z = y.
Here the definienda are italicized, and the words indicating that the sentence is a definition are printed in bold-face letters. In (2), the definiendum is, in fact, the relation expressed by the dyadic predicate 'is smaller than', but its argument places are filled in by variables. This shows that we cannot avoid the use of free variables in definitions. Definitions involving free variables will be called contextual definitions. In (1), the use of a variable (referring to numbers) is suppressed due to the fact that it defines a very simple monadic name functor (i.e., the operation of squaring numbers). In the canonical forms introduced below, a special symbol standing between the definiendum and the definiens will indicate that the complex expression counts as a definition.
Now, the canonical form of a contextual definition of a predicate has the following shape:
(3) A ⇔df B,
where A indicates the definiendum: an open sentence formed from the predicate to be
defined by filling in its argument places with (different) variables, and B indicates the
definiens: an open sentence involving exactly the same free variables which occur in
the definiendum. Of course, the predicate to be defined must not occur in the definiens
(the prohibition of circularity in definitions). Furthermore, in order that the definition be reasonable, the definiens must contain only functors known already.
For example, the canonical form of (2) - if we use the sign '<' instead of 'is smaller than' - is as follows:
x < y ⇔df Vz(z is positive & x + z = y).
The canonical form of the contextual definition of a name functor has the following shape:
(4) a =df b,
where the definiendum a involves the functor to be defined filled in by different variables on its argument places, and the definiens b is an open name involving exactly the same variables occurring in a. Again, the functor to be defined must not occur in b, and the functors occurring in b are to be known ones.
In (4), we can put a new name for a, and a compound closed name for b, to get an explicit definition of a name. Practically, in this case the new name serves as an abbreviation for the (probably longer) name in place of the definiens.
As an example, let us re-formulate the verbal definition under (1):
square of x =df x·x
The sign of definition in (3) is '⇔df' and in (4) is '=df', taking into consideration that the symbol of biconditional can only occur between sentences, and the symbol of identity can only occur between names.
Contextual definitions are to be understood as valid ones for all permitted values of the free variables occurring in them. That is, contextual definitions are universal sentences with suppressed quantifiers. Consequently, we can infer from such a definition to its instantiations, omitting even the subscript 'df' from the side of '⇔df' or '=df'. Furthermore, we can replace in any sentence an occurrence of a definiens of a definition by its definiendum or vice versa; the resulting sentence counts as a logical consequence of the original one and the definition.
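The two sample definitions above can be mirrored as Python functions over natural numbers (a sketch; the bounded search for z is an implementation assumption that suffices for naturals):

```python
# The contextual definition of '<' and the explicit definition of
# 'square of', mirrored over natural numbers.
def smaller(x, y):
    # x < y  ⇔df  Vz(z is positive & x + z = y)
    return any(x + z == y for z in range(1, y + 1))

def square(x):
    # square of x  =df  x·x
    return x * x

assert smaller(2, 5) and not smaller(5, 5)
assert square(3) == 9
```
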
Another important type of definition is the so-called inductive definition. We shall
meet it in Section 4.1.
Remark. Contextual definitions could be replaced by explicit ones if we used the so-called lambda-operator. However, the author does not see considerable benefit in its introduction in these introductory chapters.
2.5 Class Notation
Instead of saying that 'of' is an (English) preposition, we can say that 'of' is a member of the class of (English) prepositions. In general, instead of saying that a monadic predicate holds for a certain object, we can say, alternatively, that this object is a member of the class of objects of which the predicate in question holds. This class may be called the extension or the truth domain of that predicate. Instead of a monadic predicate, we can apply this way of speech to any open sentence involving a single variable.
This way of speech is sometimes advantageous in metalogical investigations.
Thus, we shall introduce new notational devices for using the terms mentioned above.
Let 'φ(x)' denote an open sentence involving the single variable 'x', and let 'φ(a)' denote the sentence resulting from "φ(x)" via substituting a name a for x. Then, expressions of form
(1) {x: φ(x)}
will be called class abstracts. (Recommended reading: "the class of φ-s".) Intuitively: (1) is a name of the class being the extension or truth domain of the open sentence "φ(x)". The variable x is qualified here as a bound one (i.e., the expression "x:" works as a quantifier) in accordance with the fact that in verbal readings, the mentioning of this variable is often avoidable. For example,
{x: x is a preposition}
can be read as 'the class of prepositions'.
Remark. In the literature, instead of (1), the following notation is used as well:
{x | φ(x)}.
The expression of form
(2) a ∈ {x: φ(x)}
- where a is a name - means, intuitively, that the object denoted by a is a member of the class "{x: φ(x)}". That is, the symbol '∈' expresses the relation 'is a member of', the membership relation. However, the exact meaning of (2) will be determined by the following definition:
(3) a ∈ {x: φ(x)} ⇔df φ(a).
For example:
'of' ∈ {x: x is a preposition} ⇔ 'of' is a preposition.
According to the definition in (3), class abstracts are eliminable; thus, their use does not compel us to accept the ontological hypothesis about the existence of classes as a new sort of abstract entities. Neither does the use of class notation mean an entry into set theory (this will be treated of in Chapter 10).
A class abstract is acceptable just in the case the open sentence occurring in it is
acceptable. To wit: the class of horses is just as exact as the term 'horse' (or the predi-
cate 'is a horse'). Of course, we do not want to speak about the class of horses in our
metalanguages; what we shall deal with will be classes of linguistic entities.
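Definition (3) suggests a simple computational reading: a class abstract "{x: φ(x)}" behaves exactly like the open sentence φ itself, and membership is just application. The sketch below makes this literal, with an invented sample class of linguistic entities:

```python
# A sketch of definition (3): a class abstract "{x: φ(x)}" is modelled
# by the open sentence φ itself, and "a ∈ {x: φ(x)}" by φ(a).
prepositions = {"of", "in", "on"}                  # assumed sample data

def abstract(phi):
    return phi                                     # the abstract is eliminable

def member(a, A):
    return A(a)                                    # a ∈ {x: φ(x)} ⇔ φ(a)

A = abstract(lambda x: x in prepositions)          # {x: x is a preposition}
assert member("of", A)
assert not member("horse", A)
```
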
In the following definitions, we shall use Roman capital letters (A, B, C) as vari-
ables referring to class abstracts. We shall call them, briefly, class variables, although
they refer merely to class abstracts. These variables are not quantifiable, for we do not
assume the existence of the totality of classes, nor even the totality of all class abstracts.
In fact, class abstracts are language dependent (grammatical) entities. Thus, the open
sentences (definitions, statements) involving class variables are to be understood as
follows: Given a language, class variables can be substituted by any legitimate class
abstracts of that language, i.e., by class abstracts of form "{x: tp(x)}" whenever "rp(x)"
is an accepted open sentence (of the language) with a single free variable x. - Further-
more, lower-case Roman letters (a, b, c) will refer to names.
Some abbreviations. The denial of a sentence of form "a ∈ A" will be written as
a ∉ A
(read: "a is not a member of A"). The compound sentence "a ∈ A & b ∈ A" will be abbreviated sometimes as
a, b ∈ A.
This convention may be extended to more than two names.
Assume that the open sentence "φ(x)" has the following particular form:
x = a₁ v x = a₂ v ... v x = aₙ,
that is, it is an n-member alternation of identities of form "x = aᵢ", and, of course, a₁, a₂, ..., aₙ are names. Let us include in this form even the case n = 1. Then, the members of the class "{x: φ(x)}" are just the objects denoted by a₁, a₂, ..., aₙ. This motivates the following abbreviation:
{a₁, a₂, ..., aₙ} =df {x: x = a₁ v x = a₂ v ... v x = aₙ}.
(Did you hear the slogan according to which a class can be defined in two ways: (a) by a property of its members, or (b) by enumerating its members? Now you see that (b) is merely a special case of (a). Note however that enumeration holds only for finite classes the members of which all have names. Try to define by enumeration the class of trees in a big forest!)
Relations between classes. We say that A is a subclass of B, or B is a superclass of A - in symbols: "A ⊆ B" - iff every member of A is a member of B. That is:
(5) A ⊆ B ⇔df Λx(x ∈ A ⇒ x ∈ B).
The expressions 'x ∈ A' and 'x ∈ B' occurring in the definiens are eliminable by (3) whenever A and B are replaced by fixed class abstracts. For example:
{x: φ(x)} ⊆ {x: ψ(x)} ⇔ Λx(φ(x) ⇒ ψ(x)).
If two classes are mutually subclasses of each other then we say that their extensions coincide. Unfortunately, the symbol expressing this coincidence - used generally in the literature - is the sign of identity ('='). Thus, according to this convention, the definition of coincidence of extensions is as follows:
(6) A = B ⇔df (A ⊆ B & B ⊆ A).
Or, by using (5):
A = B ⇔ Λx(x ∈ A ⇔ x ∈ B).
Note that "A = B" may hold even in a case where A and B are defined by different properties (open sentences). To mention an example, in the arithmetic of natural numbers we find
{2} = {x: x is an even prime number}.
Identity between individual objects is a primitive relation characterized by the fact that each object is only identical with itself. (However, an object may bear several different names, and this fact makes identity useful.) On the other hand, the symbol '=' as used between class abstracts bears a meaning introduced by definition (6); thus, "A = B" only means what this definition tells us.
If the extensions of A and B do not coincide, and A is a subclass of B then we say that A is a proper subclass of B; in symbols: "A ⊂ B". That is:
(7) A ⊂ B ⇔df (A ⊆ B & A ≠ B).
It may occur that a quite meaningful predicate defines a class with no members. For example:
{x: x is a prime number & 7 < x < 11}.
The simplest definition of such an empty class is:
{x: x ≠ x}.
The extensions of two empty classes always coincide, that is, empty classes are "identical" with each other (in the sense of (6)). Hence, we can introduce the proper name '∅' for the empty classes:
(8) ∅ =df {x: x ≠ x}.
This definition is to be considered as the concise variant of the following contextual definition:
x ∈ ∅ ⇔df x ≠ x.
Analogously, we can introduce a proper name, say Γ, for any class abstract "{x: φ(x)}" by
Γ =df {x: φ(x)}.
By our definitions in (5) and (8), it is obvious that
∅ ⊆ A, and A ⊆ A.
Operations with classes. We introduce the dyadic class functors of union, intersection, and difference, symbolized by '∪', '∩', and '−', respectively.
(9) A ∪ B =df {x: x ∈ A v x ∈ B},
(10) A ∩ B =df {x: x ∈ A & x ∈ B},
(11) A − B =df {x: x ∈ A & x ∉ B}.
Some properties of these operations:
(12) A ∪ B = B ∪ A;  A ∩ B = B ∩ A;
(13) (A ∪ B) ∪ C = A ∪ (B ∪ C);  (A ∩ B) ∩ C = A ∩ (B ∩ C);
(14) A ∪ A = A;  A ∩ A = A;
(15) A ∪ ∅ = A;  A ∩ ∅ = ∅;
(16) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C);  A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C);
(17) A − (A − B) = A ∩ B;  (A − B) ∪ B = A ∪ B;
(18) A ⊆ B ⇔ (A ∪ B = B) ⇔ (A ∩ B = A);  A − B ⊆ A.
Identities (12), (13), (14) tell that union and intersection are commutative, associative,
and idempotent (self-powering) operations. In (15), the role of the empty class in the
operations is shown. Union and intersection are distributive with respect to each other,
as (16) tells us. In (17), we see important connections between difference and the two
other operations. Finally, in (18), some interrelations between the subclass-relation and
our operations are presented. - The proof of these laws is left to the reader. (Use the
definitions (9), (10), (11), (5), and (8).)
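The laws (12)-(18) can also be spot-checked on small finite classes; the sketch below models classes as Python sets, which is adequate only for finite extensions (an assumption of the illustration):

```python
# Spot-checking the laws (12)-(18) on finite sample classes, with
# classes modelled as Python sets: | is union, & is intersection,
# - is difference, <= is the subclass relation.
A, B, C = {1, 2}, {2, 3}, {3, 4}
empty = set()

assert A | B == B | A                                 # (12) commutativity
assert (A | B) | C == A | (B | C)                     # (13) associativity
assert A | A == A and A & A == A                      # (14) idempotence
assert A | empty == A and A & empty == empty          # (15) empty class
assert A | (B & C) == (A | B) & (A | C)               # (16) distributivity
assert A - (A - B) == A & B                           # (17) difference laws
assert (A <= B) == (A | B == B) == (A & B == A)       # (18)
assert A - B <= A                                     # (18)
```
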
Finally, let us note that two classes are said to be disjoint iff they have no common members, i.e., if their intersection is empty. Thus,
A ∩ B = ∅
expresses that A and B are disjoint classes.
3.1 Definition and Postulates
In what follows, we shall use script capital letters (𝒜, ℬ, 𝒞, etc.) to refer to alphabets. The totality of words formed from the letters of an alphabet 𝒜 will be denoted by "𝒜°", and the members of 𝒜° will be called sometimes "𝒜-words".
According to our postulate (L1) in Sect. 1.2, an alphabet is always a finite supply of objects. Hence, we can use class notation in displaying an alphabet, e.g.,
𝒜 = {a₁, a₂, ..., aₙ},
where a₁, ..., aₙ stand for the letters of 𝒜.
According to the postulate (L2) in Sect. 1.2, if we know 𝒜 then we know 𝒜° as well. By using class notation, we may write:
𝒜° = {x: x is an 𝒜-word}.
Of course, this identity cannot serve as a definition of 𝒜°, for 'is an 𝒜-word' is an undefined predicate.
We mentioned already in Sect. 2.1 that the introduction of the empty word may
be useful for technical reasons (this will be demonstrated in Ch. 4). We shall use the
symbol '0' for the empty word:
0 =df the empty word.
Obviously, this notion is language-independent.
Also, in Sect. 2.1, the name functor concatenation was mentioned; its symbol is '⌢'. We can imagine that the words of an alphabet are "produced" starting from the empty word via iterated concatenation of letters to words given already.
Thus, at the beginning of the description of a language, we have to deal with
four basic notions: the letters of the language, the words of the language, the empty
word, and the name functor concatenation. Up to this point, we have an intuitive
knowledge about these notions. We shall say that these four notions together form a
language radix. Now we shall give a so-called axiomatic treatment of these notions
the significance and the importance of which will be cleared up gradually later on.
DEFINITION. By a language radix let us mean a four-component system of
(*) ⟨𝒜, 𝒜°, 0, ⌢⟩
where 𝒜 and 𝒜° are nonempty classes, 0 is an individual object (the empty word), ⌢ is a dyadic operation between the members of 𝒜°, and the postulates (R1) to (R6) below are satisfied. The members of 𝒜 will be called letters, and the members of 𝒜° will be called words.
Remark. Pointed brackets above under (*) are used to sum up the four components of a language radix into a whole. The reader need not think of the set-theoretic notion of an ordered quadruple - which is undefined up to this point.
(R1) 𝒜 ⊆ 𝒜° and 0 ∈ 𝒜°.
(R2) x, y ∈ 𝒜° ⇒ x ⌢ y ∈ 𝒜°.
(Tacit universal quantification of the variables. Similarly in the following postulates.)
(R3) Concatenation is associative: if x, y, z ∈ 𝒜° then
(x ⌢ y) ⌢ z = x ⌢ (y ⌢ z).
Consequently, parentheses can (and will) be omitted in the case of iterated concatenations. - In what follows, the variables x, y, z will refer to members of 𝒜°.
(R4) x ≠ 0 ⇔ VyVα(α ∈ 𝒜 & x = y ⌢ α).
This tells us that a word is nonempty iff it has a final letter.
(R5) (x ⌢ α = y ⌢ β & α, β ∈ 𝒜) ⇒ α = β.
That is, the last letter of a word is uniquely determined.
(R6) (x ⌢ y = x ⇔ y = 0) & (x ⌢ y = y ⇔ x = 0).
That is, a concatenation is identical with one of its members iff its other member is the empty word.
Some consequences of our postulates:
By (R6), the empty word is ineffective in a concatenation:
(1) x ⌢ 0 = x = 0 ⌢ x.
From "α ∈ 𝒜 & x = y ⌢ α" it follows by (R4) that "x ≠ 0". In other words:
(2) α ∈ 𝒜 ⇒ y ⌢ α ≠ 0.
Particularly, if y = 0 then - with respect to (1) - we get that
(3) α ∈ 𝒜 ⇒ α ≠ 0,
which means that the empty word does not belong to the class of letters. (Note that this was not explicitly stated in our postulates.)
Assume that x, y ∈ 𝒜°, and
(4) x ⌢ y = 0.
Now, if y ≠ 0, then, by (R4), it is of form "z ⌢ α" where α ∈ 𝒜. Thus, in this case, (4) has the following form:
x ⌢ (z ⌢ α) = 0,
or, with respect to (R3),
(x ⌢ z) ⌢ α = 0.
However, this is excluded by (2). Hence, (4) excludes the case y ≠ 0. On the other hand, "x ⌢ 0 = 0" implies "x = 0", by (1). Thus, we have that
(5) x ⌢ y = 0 ⇒ (x = 0 & y = 0).
By (1), this holds conversely, too. Hence, we can replace the "⇒" in (5) by "⇔". In words: The result of a concatenation is empty (i.e., the empty word) iff both its members are empty.
Our postulates and their consequences are in full accordance with our intuitions with respect to the four basic notions of a language radix. Among others, they assure that the empty word can be "erased" everywhere, for it is ineffective in concatenations. What is more, these postulates determine "almost" uniquely the class 𝒜°: the objects which are 𝒜-words according to our intuitions are really (provably) in 𝒜°. However, (R1) ... (R6) do not assure that 𝒜° contains no "foreign" objects, i.e., objects which are not words composed from the letters of 𝒜. This deficiency could be supplied e.g. by the following postulate:
(R7) If B is a class such that
(i) 0 ∈ B, and
(ii) (x ∈ B & α ∈ 𝒜) ⇒ x ⌢ α ∈ B,
then 𝒜° ⊆ B.
In other words: 𝒜° must be a subclass of all classes satisfying (i) and (ii). Another usual formulation: 𝒜° is the smallest class which contains the empty word and the lengthening of every contained word by each letter of 𝒜.
Now we see that (R7) involves a universal quantification over classes, and, hence, it passes the limits of our class notation introduced in Sect. 2.5. On the other hand, 𝒜° is not perfectly determined by the remaining postulates (R1) to (R6). We are compelled to refer to our postulate (L2) introduced already in Sect. 1.2. On the basis of experiences in natural languages, we can accept that the class of 𝒜-words is perfectly determined by the alphabet 𝒜. - However, the notion of a language radix introduced in the present section will be utilized later on (mainly in Ch. 7).
Remark. The systems called language radices above are called free groupoids with a unit element in mathematics when the postulate (R7) is accepted as well. They form a particular family of algebraic systems. Here 𝒜⁰ is the field of the groupoid, ⌢ is the groupoid operation, 0 is the unit element, and
𝒜 is the class of free generators. - Let us note that accepting (R7) makes it possible to weaken some of the
other postulates; e.g., (3) above is sufficient instead of (R4), and (2) instead of (R6).
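The postulates can also be checked against a familiar computational model. The following is a rough sketch (ours, not in the text): ordinary Python strings model a language radix, with "" playing the role of the empty word and string concatenation playing the role of ⌢; the alphabet and sample words are our own choices.

```python
# Spot-check the derived consequences (1), (2), and (5) on sample words,
# taking Python strings as the word class and + as concatenation.

A = {"a", "b", "c"}                  # a sample alphabet
words = ["", "a", "b", "ab", "bca"]  # sample words, including the empty word

for x in words:
    # (1): the empty word is ineffective in a concatenation
    assert x + "" == x == "" + x
    for alpha in A:
        # (2): a word lengthened by a letter is never empty
        assert x + alpha != ""
    for y in words:
        # (5): a concatenation is empty iff both of its members are empty
        assert (x + y == "") == (x == "" and y == "")
```

Strings also satisfy the intent of (R7): they contain nothing but finite sequences of letters, which is exactly the "no foreign objects" property the postulate enforces.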
3.2 The Simplest Alphabets
3.2.1. Notational conventions. (a) It is an obvious device to omit the sign of
concatenation and use simple juxtaposition instead; i.e., writing "xyz" instead of
"x⌢y⌢z".
(b) Displaying an alphabet, we have to enumerate its letters between braces.
The letters are objects; thus, in enumerating them, we have to name them. For example,
the two-letter alphabet whose letters are '0' and '1' is to be displayed as
{'0', '1'}.
Could the quotation marks be omitted here? We can answer this question with YES,
agreeing that inside the braces, the letters stand in an autonymous sense as names of
themselves. However, we can give a deeper "theoretical" argument for doing so.
Namely, if we do not want to use a language for communication, if we are only interested in the structure of that language, then we can totally avoid the use of the letters
and words of the language in question - by introducing metalanguage names for the
letters and words of the object language. (For example, the grammar of Greek or of
Russian could be investigated without using Greek or Cyrillic letters.) Hence, we shall
not use quotation marks in enumerating the letters of an alphabet, but we leave undecided the question - being unimportant - whether the characters used in the enumeration are names in the metalanguage for the letters, or else they stand in an autonymous sense.
(c) In presenting an alphabet, different characters denote different letters. That
is, the alphabet contains just as many letters as are enumerated in it.
The simplest alphabet is, obviously, a one-letter one:
(1) 𝒜₀ = {α}.
However, the words of this minimal alphabet are sufficient for naming the positive integers: the words α, αα, ααα, ... can represent the numbers 1, 2, 3, ... . (Even the
empty word can be considered as 0.) - We shall exploit this interpretation of 𝒜₀ later on.
The two-letter alphabet
(2) 𝒜d = {0, 1}
is used for naming natural numbers in the so-called dyadic (or binary) system. In everyday life, we use the decimal system in writing natural numbers; this system is based
on a ten-letter alphabet. However, the dyadic system is exactly as good as the decimal
one (although the word expressing a given number is usually longer in the dyadic system than in the decimal system). This leads to the idea: Is it possible to replace
any multi-letter alphabet by a two-letter one? The existence of the Morse alphabet suggests the answer YES. In fact, this is a three-letter alphabet {., -, |} where the third
character serves to separate the translations of the letters of (say) the English alphabet. For example, the translation of 'apple' into a Morse-word is:
. - | . - - . | . - - . | . - . . | .
which shows that the Morse alphabet is, in fact, a three-letter one.
Let our "canonical" two-letter alphabet be
(3) 𝒜₁ = {α, β}.
Furthermore, let C be an alphabet with more than two letters, e.g.,
(4) C = {γ₀, γ₁, ..., γₙ}.
We define a universal translation method from C into 𝒜₁.
Let the translation of the letter γᵢ be the 𝒜₁-word beginning with α and continued by i copies of β (for 0 ≤ i ≤ n). This
rule can be displayed in the following table:

the letters of C:    γ₀    γ₁    γ₂    . . .    γₙ
translate into:      α     αβ    αββ   . . .    αβ...β
                                                (n copies of β)
Then, the translation of a C-word is defined, obviously, as the concatenation of the
translations of its letters. Detailed, in a hair-splitting manner: the translation of 0 is
0; and if the translation of a C-word c is c′, and the translation of γᵢ is gᵢ, then the
translation of "cγᵢ" is "c′gᵢ". We avoided here the use of a separating symbol by the
rule that the translation of each letter of C begins with α.
Translations of C-words among the 𝒜₁-words can be uniquely recognized: divide the given 𝒜₁-word into parts by putting a separating sign, e.g., a vertical stroke,
before each occurrence of the letter α. Now if it is the translation of some C-word then
each part must be the translation of a letter of C (it is easy to control whether this holds
or not). Re-translating the C-letters, we get the C-word the translation of which the
given 𝒜₁-word was. - Summing up:
3.2.2. THEOREM. Any language based on a finite alphabet with more than
two letters can be equivalently replaced with a language based on the two-letter alphabet 𝒜₁.
We shall exploit this theorem in Sect. 4.4.

Note that the table above can be continued beyond any great value of n. Thus,
our theorem holds even for a denumerably infinite alphabet - an alphabet that has just
as many letters as there are natural numbers.
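The translation method and its inverse can be sketched computationally. In the illustration below (ours, not from the text), 'a' and 'b' stand for the letters α and β of 𝒜₁, and C is a sample four-letter alphabet; the function names are our own.

```python
# The universal translation of Sect. 3.2: the letter gamma_i of C goes to
# 'a' followed by i copies of 'b'; a C-word is translated letter by letter.

C = ["x", "y", "z", "w"]        # a sample alphabet with more than two letters

def encode(word):
    """Translate a C-word into an A1-word; no separating symbol is needed."""
    return "".join("a" + "b" * C.index(ch) for ch in word)

def decode(code):
    """Split the A1-word before each 'a' and re-translate the parts."""
    parts = code.split("a")[1:]  # each part is the b-tail of one letter
    return "".join(C[len(p)] for p in parts)

assert encode("zxy") == "abbaab"
assert decode("abbaab") == "zxy"
assert encode("") == "" and decode("") == ""   # the empty word is preserved
```

The round trip works precisely because every letter-block begins with 'a', which is the unique-recognizability argument of the text in executable form.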
A language radix itself is not a "full-fledged" language, although it is the radix
of the "full tree" of a language. After fixing the radix of a language, the next task is to
define the well-formed ("meaningful") expressions of the language. In most cases,
these are divided into classes called categories of the language. The problem of defin-
ing such categories will be the main topic of the next chapter.
4.1 Inductive Definitions
In order to solve the problem outlined at the end of the preceding chapter - i.e., the
definition of the categories of a language - we shall introduce a new type of definition
called inductive definition.
4.1.1. Explanation. Let us assume a fixed alphabet 𝒜, and, hence, a language radix
(𝒜, 𝒜⁰, 0, ⌢). Assume, further, that we want to introduce a subclass F of 𝒜⁰
(intended as a category of the language based on 𝒜) by pure syntactic means. The general method realizing this plan is applying an inductive definition that consists of three
parts:
(a) The base of the induction: we give a base class B (of course, B ⊆ 𝒜⁰) by
an explicit definition, stating that F must include B (B ⊆ F).
(b) Inductive rules: we present some rules of form
(1) (a₁ ∈ F & ... & aₖ ∈ F) ⇒ b ∈ F,
where a₁, ..., aₖ and b are words formed from the letters of 𝒜 and, possibly, from
variables referring to 𝒜-words, that is, the shape of these words is
(2) c₀x₁c₁x₂c₂ ... xₙcₙ,
where c₀, c₁, ..., cₙ ∈ 𝒜⁰ (any of these may be 0), and x₁, ..., xₙ are variables referring to 𝒜-words.
(c) Closure: we stipulate that the class F must contain just the words prescribed to be in F by (a) and (b).
Except for some trivial cases, the stipulations (a), (b), (c) cannot be re-written in
the form of an explicit or contextual definition (cf. Sect. 2.4) - at least we have no means
to do so up to the present time. Thus, inductive definition is, in fact, a new type of definition.
4.1.2. Supplements to the explanation of inductive definition.
Ad (a). The base B can be defined explicitly by enumerating its members, e.g.,
B = {b₁, ..., bₙ},
where b₁, ..., bₙ are fixed 𝒜-words (in this case, B is finite), or by using variables as
under (2), e.g.,
B = {x: x ∈ 𝒜⁰ & ∃y(x = "ayb" ∨ x = "ycy")},
where a, b, c are fixed 𝒜-words (in this case, B is infinite). Finally, B can be a class defined earlier. - Note that if B is empty then the class F defined by the induction is
empty, too.
Ad (b). The inductive rules involving free variables are to be understood as universal ones, i.e., with suppressed universal quantification of their free variables (in accordance with our convention on metalanguages). Two examples of inductive rules:
(3) x, y ∈ F ⇒ "axbyc" ∈ F,
(4) (x ∈ F & "axbyc" ∈ F) ⇒ y ∈ F.
In both rules, a, b, c are fixed 𝒜-words (some of them may be 0).
Ad (c). In fact, the closure tells that F must be the smallest subclass of 𝒜⁰
satisfying the conditions in (a) and (b). Thus, the closure contains a suppressed universal quantification over the subclasses of 𝒜⁰. We met a similar situation in the postulate
(R7) (in Sect. 3.1), with the considerable difference that in the present case, the quantification is limited to the subclasses of 𝒜⁰ (not to all classes in the whole world). However, in the sense we use class notation, the notion of the totality of all subclasses of 𝒜⁰ is
undefined; thus, we have no logical justification of the closure. - In what follows, we
shall omit the formulation of the closure whenever we indicate in advance that we want
to give an inductive definition. Clearly, in such a case, it is sufficient to present the base
and the inductive rules.
We saw that inductive definitions are not eliminable by our present means. (The
kernel of the problem lies in the closure.) However, our intuition suggests that a subclass of 𝒜⁰ is clearly determined by an inductive definition. And, what is perhaps
more important, we are unable to determine categories of languages without using this
tool. As a consequence, we accept inductive definitions as a new means of metalogical
investigation.
Remark. In set theory, inductive definitions can be reduced to explicit definitions. However, the
introduction of the language of set theory is impossible without inductive definitions. (More on this topic
see Ch. 10, Sect. 4.1.)
Our next goal is to find some new devices for presenting inductive definitions that will pave the way for some generalizations as well. We approach this goal via
some examples.
4.1.3. The simplest example of an inductive definition is as follows. Given an alphabet 𝒜, let us define the subclass 𝒜* of 𝒜⁰ by induction as follows:
Base: 0 ∈ 𝒜*.
Inductive rule: (x ∈ 𝒜* & α ∈ 𝒜) ⇒ "xα" ∈ 𝒜*.
This rule comprises as many rules as there are letters in 𝒜. Thus, if 𝒜 = {α₀, α₁,
..., αₙ}, then we can enumerate them:
x ∈ 𝒜* ⇒ "xα₀" ∈ 𝒜*.
x ∈ 𝒜* ⇒ "xα₁" ∈ 𝒜*.
. . .
x ∈ 𝒜* ⇒ "xαₙ" ∈ 𝒜*.
The reader sees that 𝒜* = 𝒜⁰. A "theoretical" consequence: 𝒜⁰ is an inductively defined subclass of itself, for any alphabet 𝒜.
We can simplify the notation by omitting the dull occurrences of "∈ 𝒜*" and
the quasi-quotation signs. Furthermore, let us use '→' instead of '⇒'. We get a table:
(5) 0
    x → xα₀
    x → xα₁
    . . .
    x → xαₙ
Any inductive definition can be presented by such a table. The first line represents the
base, and the other lines represent rules. Each rule has an input on the left side of the
arrow, and an output on the right side of the arrow. Even the first line can be considered as a rule without any input. Note that the name of the class to be defined does not
occur here; you can give it a name later on.
The rules mentioned as examples under (3) and (4) contain two inputs each.
Remembering that
"(A & B) ⇒ C" is the same as "A ⇒ (B ⇒ C)"
(cf. Sect. 2.3), we can re-formulate them as follows:
x → y → axbyc,
x → axbyc → y.
Thus, a rule may involve more than one input.
4.1.4. Our next example will be more complicated. Let us consider the dyadic alphabet 𝒜d = {0, 1} introduced in Sect. 3.2. We want to define the class D of dyadic numerals representing natural numbers divisible by three. (We shall call numerals the words
expressing numbers in a certain alphabet.) These numerals are 𝒜d-words such as
(6) 0, 11, 110, 1001, 1100, 1111, 10010, ...
(in decimal notation: 0, 3, 6, 9, 12, 15, 18, ...). Some initial members of this (infinite)
totality D can be included in the base, and the other ones are to be introduced via inductive rules. Our intuitive key to the inductive rules is: by adding three to any member
of the class D, we get another member of D. The problem is to express "adding three"
in the dyadic notation. Let us note that each number greater than three has one of the
following forms in the dyadic notation:
x00, x01, x10, x11,
where x is any dyadic numeral other than 0. It is obvious that by adding three to "x00"
we get "x11". Thus, we can formulate a rule:
(7) x00 → x11.
Assuming that the input is "good" (i.e., represents a number divisible by three), the output is a "good" one as well.
By adding three to "x01" we get "y00" where y is the numeral following x
(i.e., x plus one). Similarly, "x10" plus three gives "y01", and "x11" plus three gives
"y10" where, again, y is the numeral following x. In order to express these facts, let us
introduce the notation "xFy" for "x is followed by y". Then we can formulate the
missing three rules as follows:
(8) xFy → x01 → y00,
(9) xFy → x10 → y01,
(10) xFy → x11 → y10.
Of course, this is incomplete without defining the relation represented by 'F'. This is
simple enough:
(11) x0Fx1,
(12) xFy → x1Fy0.
The first line serves as the base: it says that any word ending with 0 is followed by the
word we get by replacing its final letter by 1. The second line is an inductive rule saying that "x1" is followed by "y0" provided x is followed by y. - Now we see that even a
relation (a dyadic predicate) can be defined via induction.
Putting the empty word for x in (11) we get '0F1', i.e., that 0 is followed by 1.
Putting 0 for x and 1 for y in (12), we get '0F1 → 01F10'. Knowing that the input
holds, we get '01F10'. However, '01' is not accepted as a well-formed dyadic numeral;
thus, we cannot accept this result as saying that 1 is followed by 10. We must add an
extra stipulation:
(13) 1F10.
The continuation is correct: we get '10F11' from (11) by putting 1 for x; then we get
'1F10 → 11F100' from (12), which, using (13), gives '11F100', and so on.
Now, the complete inductive definition of our class D can be compiled by taking the first three members of (6) as input-free rules, and collecting the rules from (7) to
(13). We get the following nice table:
(14) 0
     11
     110
     x0Fx1
     1F10
     xFy → x1Fy0
     x00 → x11
     xFy → x01 → y00
     xFy → x10 → y01
     xFy → x11 → y10
We see that in some cases we shall use three sorts of letters in an inductive definition:
(a) letters of the starting alphabet 𝒜; (b) letters as variables referring to 𝒜-words - of
course, these letters must not occur in 𝒜; (c) subsidiary letters (like F in our example
above) representing the (monadic, dyadic, etc.) predicates necessary in the definition -
again, these letters must differ from those in (a) and (b). Furthermore, the character '→'
occurring in rules must be foreign to all the three mentioned supplies of letters.
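The calculus (14) can be simulated mechanically. The sketch below (our illustration, not from the text) computes the relation F as a function mirroring the rules (11) to (13), closes the base {0, 11, 110} under the addition-of-three rules up to a length bound, and compares the result with ordinary arithmetic; all function names are our own.

```python
def follower(x):
    """The relation xFy as a function: 1F10; x0Fx1; and xFy => x1Fy0."""
    if x == "1":
        return "10"
    if x.endswith("0"):
        return x[:-1] + "1"
    return follower(x[:-1]) + "0"

def class_D(max_len):
    """All numerals derivable in (14) whose length is at most max_len."""
    D = {"0", "11", "110"}           # the three input-free rules
    frontier = list(D)
    while frontier:
        w = frontier.pop()
        x, tail = w[:-2], w[-2:]
        if x in ("", "0"):           # the rules require x to be a numeral
            continue                 # other than 0
        if tail == "00":
            nxt = x + "11"                                   # x00 -> x11
        else:                        # the three rules involving F
            nxt = follower(x) + {"01": "00", "10": "01", "11": "10"}[tail]
        if len(nxt) <= max_len and nxt not in D:
            D.add(nxt)
            frontier.append(nxt)
    return D

# The derivable numerals are exactly the dyadic numerals of the multiples
# of three:
assert class_D(6) == {"0"} | {format(n, "b") for n in range(3, 64, 3)}
```

The final check confirms the intuitive key of the definition: starting from 0, 3, and 6, repeatedly adding three reaches every multiple of three and nothing else.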
Conventions for using subsidiary letters. In our example, the subsidiary letter 'F'
was surrounded by its arguments. If a subsidiary letter, say 'G', represents a monadic
predicate, then its argument will follow this letter, as in "Gx". If the letter 'H' stands for a
triadic predicate, then its arguments x, y, z will be arranged as follows:
xHyHz.
In the case of a tetradic predicate letter 'K', the arrangement of the arguments x, y, z, u
is a similar one:
xKyKzKu.
The convention can be extended to predicate letters with more than four places, but, fortunately, in our praxis we can stop here.
In what follows, tables such as (5) and (14) representing inductive definitions
will be called canonical calculi. In fact, a canonical calculus is a finite supply of rules
(permitting some rules without any input). But the more detailed explanations will follow in the next section, where the connections with inductive definitions will be
cleared up exactly.
4.2 Canonical Calculi
4.2.1. DEFINITION. Let C be an alphabet and I a character not occurring in C.
We define inductively the notion of a c-ruleby the following two stipulations:
(i) Iff f is a C-rule.
(ii) Ifr is a c-rule, and f eC
then r'' is a c-rule.
(Remark. Note that by (i), (0 is a c-rule.)
4.2.2. DEFINITION. Let 𝒜 and 𝒜′ be alphabets such that 𝒜 ⊆ 𝒜′ and '→' does not occur in 𝒜′.
Then, a finite class K of 𝒜′-rules (given by an enumeration of its members) will be
called a canonical calculus over 𝒜. The letters in 𝒜′ − 𝒜 (if any) will be called variables of K, or K-variables. The members of K - i.e., the 𝒜′-rules occurring in K -
will be called rules of K.
Remarks. 1. A canonical calculus K over 𝒜 is a finite class of (𝒜 ∪ 𝒱 ∪ {→})-words where 𝒱 is the class of K-variables. - 2. As a "pathological" case, the empty
class is a canonical calculus over any alphabet. - 3. If K = {r₁, ..., rₙ} and '.' is a
new character (not occurring in 𝒜′ ∪ {→}) then K can be represented as a single
word of form
r₁.r₂. ... .rₙ
This means that canonical calculi can be assumed as mere grammatical entities.
4.2.3. DEFINITION. Let 𝒜 be an alphabet and K be a canonical calculus over 𝒜.
We define inductively the relation "f is derivable in K" - in symbols: "K ⊢ f" - by
the following three stipulations:
(i) If f ∈ K then K ⊢ f.
(ii) Substitution: If K ⊢ f, and f′ is the result of substituting an 𝒜-word for a
K-variable x in f (the same word for all occurrences of x in f), then K ⊢ f′.
(iii) Detachment: If K ⊢ f, K ⊢ "f → g", and the arrow does not occur in f,
then K ⊢ g. (The stipulation for f means that f ∈ (𝒜 ∪ 𝒱)⁰ where 𝒱 is the class of
K-variables.)
4.2.4. DEFINITION. Let 𝒜 be an alphabet. We say that the class of words F is an
inductive subclass of 𝒜⁰ iff there exist ℬ and K such that
(i) ℬ is an alphabet and 𝒜 ⊆ ℬ,
(ii) K is a canonical calculus over ℬ, and
(iii) F = {x: x ∈ 𝒜⁰ & K ⊢ x}.
(Briefly: F is the class of 𝒜-words derivable in K.) Here the members of ℬ − 𝒜 (if
any) may be called the subsidiary letters of the calculus K.
Comments. 1. From now on, inductive definitions can be treated as definitions by means of a canonical calculus. Obviously, the notion of an inductive subclass
(given above) includes (is a generalization of) the notion of an inductively defined
subclass of 𝒜⁰ (as given in the Explanation 4.1.1 in the preceding section). Thus,
the acceptance of inductive definitions can be re-formulated by stating that
whenever 𝒜 and ℬ are alphabets, 𝒜 ⊆ ℬ, and K is a canonical
calculus over ℬ, then the open sentence
"x ∈ 𝒜⁰ & K ⊢ x"
is an accepted definition of a monadic predicate (and, hence, of a subclass of 𝒜⁰).
2. Note that a canonical calculus "in itself" defines nothing; to get a definition
of a class F we must add the stipulation "F = {x: x ∈ 𝒜⁰ & K ⊢ x}". Even in doing so
we may get that F is empty.
3. Using a canonical calculus K in defining a subclass of 𝒜⁰, we use, in fact,
the alphabet 𝒜 ∪ 𝒱 ∪ 𝒮 ∪ {→} where 𝒱 is the class of K-variables and 𝒮 is the class
of subsidiary letters used in K. One of 𝒱 and 𝒮, or both, may be empty. The usefulness
of subsidiary letters was exemplified in the previous section, under (14), in the definition
of the class D.
4. If ℬ ⊆ C, and K is a canonical calculus over ℬ, then, obviously, it is a canonical calculus over C as well.
5. The definitions in the present section are (mostly) verbal inductive definitions. The question arises whether these definitions could be transformed into more
rigorous ones, into definitions by some canonical calculi. We shall get the positive answer to this question in Sect. 4.4.
Conventions. For the sake of brevity, we shall sometimes use the term 'calculus' instead of 'canonical calculus'. This convention will hold till Ch. 6 where we shall
speak about logical calculi; after this, the adjectives 'canonical' and 'logical' cannot
be omitted. - Instead of saying that "F is an inductive subclass of 𝒜⁰" we sometimes
will say that "F is an inductive class". In general, the term 'inductive class' will be
used in this sense.
4.2.5. THEOREM. For all alphabets 𝒜, the empty class and 𝒜⁰ are inductive subclasses
of 𝒜⁰; and if E = {e₁, ..., eₙ} is an enumeration of 𝒜-words, then E is an inductive
subclass of 𝒜⁰.
Proof. The calculus involving the single rule "x → x" (where x is a variable)
defines the empty class (the base of the induction being empty). Concerning 𝒜⁰, the
calculus under (5) of the preceding section does the job. Alternatively, the calculus involving the single input-free rule "x" defines 𝒜⁰, too. For E, the calculus involving just
the input-free rules e₁, ..., eₙ is appropriate.
4.2.6. THEOREM. If 𝒜 is an alphabet, and F and G are inductive subclasses of
𝒜⁰, then so are F ∪ G and F ∩ G.
Proof. Assume that K₁ and K₂ are the calculi defining F and G, respectively.
Choose two new letters, say 'φ' and 'γ'. By prefixing a rule r with φ (or with γ) let
us mean the rule r′ obtained from r by inserting φ (γ, resp.) after each arrow in r (if
any) and before r. Let K₁′ be the calculus obtained from K₁ by prefixing all its rules
with φ. Obviously, if f ∈ 𝒜⁰ then K₁ ⊢ f iff K₁′ ⊢ φf. Similarly, let K₂′ be the calculus obtained from K₂ by prefixing all its rules with γ. Again, if f ∈ 𝒜⁰ then K₂ ⊢ f
iff K₂′ ⊢ γf. Now let us consider the calculus K₁′ ∪ K₂′. By adding to this the rules
φx → x
γx → x
(where x is a K₁- or a K₂-variable), the resulting calculus K₃ defines F ∪ G:
{x: x ∈ 𝒜⁰ & K₃ ⊢ x} = F ∪ G.
On the other hand, by adding the single rule
φx → γx → x
to K₁′ ∪ K₂′, the resulting calculus K₄ defines F ∩ G:
{x: x ∈ 𝒜⁰ & K₄ ⊢ x} = F ∩ G.
Remark. The question may arise whether the difference F − G (of two inductive
classes) is an inductive one. We shall see in Sect. 4.4 that this is not always the case.
4.3 Some Logical Languages
4.3.1. A language of propositional logic. We shall deal with a very simple language
called the language of classical propositional logic. (It is unimportant whether the
reader is familiar with this system of logic.) We want to define the most important
category of this language called the category of formulas.
This category contains an infinite supply of atomic formulas which are to be
meant as representing unanalyzed sentences. Using an "initial" letter 'π' and an
"indexing" letter 'ι', they can be expressed as
π, πι, πιι, πιιι, ...
Compound formulas can be formed from given formulas x and y in the forms "-x" and
"(x ⊃ y)", where '-' and '⊃' are the functors of negation and conditional, respectively.
(Thus, '⊃' corresponds to the metalanguage symbol '⇒'.) No other symbol is needed.
Thus, the alphabet we need is as follows:
𝒜PL = {(, ), π, ι, -, ⊃}.
(The subscript 'PL' refers to propositional logic.) In the calculus defining the class of
formulas, we shall use the subsidiary letters 'I' and 'F' ('I' for index, and 'F' for formula), and the letters 'u' and 'v' as calculus variables. Now our calculus called KPL
consists of the following rules:
KPL: 1. I0
2. Iu → Iuι
3. Iu → Fπu
4. Fu → F-u
5. Fu → Fv → F(u ⊃ v)
5*. Fu → u
(The numbering of rules does not belong to the calculus. We use the ordinals only as a
help in referring to the single rules.)
Comments. Rule 1 tells that the empty word is an index. Let us agree that we
omit 0 in any rule except when it stands alone, i.e., if '0' is a rule. Thus, rule 1 can
be written simply as 'I'. - Rules 1 and 2 together define the class of indices; one sees
that the indices are just the {ι}-words. - Rule 3 defines the atomic formulas, rules 4 and 5
define compound formulas. Finally, rule 5* serves to release the derived words from the
subsidiary letter 'F'. Let us call such rules releasing rules.
Now we can define the class of formulas of classical propositional logic - in
symbols: 'FormPL' - as follows:
FormPL = {x: x ∈ 𝒜PL⁰ & KPL ⊢ x}.
Let us note that FormPL can be defined by a calculus immediately over 𝒜PL, i.e.,
without any subsidiary letters. Namely:
11. π
12. πι
13. uι → uιι
14. u → -u
15. u → v → (u ⊃ v)
By 11 and 12 we can get the atomic formulas 'π' and 'πι'. By 13, a word terminating
in ι can be lengthened by an ι. Thus, rules 11, 12, and 13 together are sufficient for producing the atomic formulas. Rules 14 and 15 need no comment.
However, the preceding calculus seems to be more in accord with our intuitions
concerning the gradual explanation of "what the formulas are". Releasing of subsidiary letters seems to be unimportant here, due to the simplicity of the studied object language.
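The working of such a calculus can be simulated mechanically. The sketch below (our illustration, not from the text) implements the substitution and detachment steps of Definition 4.2.3 in a restricted form: variables are instantiated only by matching the inputs of a rule against already-derived words, which suffices for KPL, where every variable occurs in an input. ASCII stand-ins are used: 'p' for π, 'i' for ι, '>' for ⊃; uppercase letters act as the calculus variables.

```python
# Rules are tuples: all items but the last are inputs, the last is the output.
RULES = [
    ("I",),                  # 1. the empty word is an index
    ("IU", "IUi"),           # 2. lengthen an index by an iota
    ("IU", "FpU"),           # 3. atomic formulas
    ("FU", "F-U"),           # 4. negation
    ("FU", "FV", "F(U>V)"),  # 5. conditional
    ("FU", "U"),             # 5*. releasing rule
]

def match(pattern, word, env):
    """Yield extensions of env under which pattern matches word."""
    if not pattern:
        if not word:
            yield env
        return
    head, rest = pattern[0], pattern[1:]
    if head.isupper() and head in "UV":          # a calculus variable
        if head in env:
            if word.startswith(env[head]):
                yield from match(rest, word[len(env[head]):], env)
        else:
            for i in range(len(word) + 1):       # try every prefix (incl. 0)
                yield from match(rest, word[i:], dict(env, **{head: word[:i]}))
    elif word and word[0] == head:               # any other letter: literal
        yield from match(rest, word[1:], env)

def step(rules, derived):
    """One round: fire every rule on every matching combination of words."""
    def fire(inputs, output, env):
        if not inputs:
            yield "".join(env.get(c, c) for c in output)
            return
        for w in derived:
            for env2 in match(inputs[0], w, env):
                yield from fire(inputs[1:], output, env2)
    new = set(derived)
    for rule in rules:
        new.update(fire(list(rule[:-1]), rule[-1], {}))
    return new

derived = set()
for _ in range(5):
    derived = step(RULES, derived)
formulas = {w for w in derived if "I" not in w and "F" not in w}
assert "p" in formulas and "-p" in formulas and "(p>-p)" in formulas
```

After five rounds the released words include the atomic formulas and the first compound formulas; the subsidiary-lettered words (the I- and F-words) remain in the derived set, exactly as in a derivation by hand.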
We can define a subclass of FormPL called the class of logical truths of
propositional logic. The formulas of this class have the property that they are always
true independently of whether the atomic formulas occurring in them are true or
false, assuming that negation (-) and conditional (⊃) have the same meaning as '-'
and '⇒' in the metalanguage (cf. Sect. 2.3). For this definition, we introduce the calculus KL as an enlargement of KPL above. Our basic alphabet remains 𝒜PL; we use
a new subsidiary letter 'L' and a new variable w. Omit rule 5* from KPL and add the
following new rules:
6. Fu → Fv → L(u ⊃ (v ⊃ u))
7. Fu → Fv → Fw → L((u ⊃ (v ⊃ w)) ⊃ ((u ⊃ v) ⊃ (u ⊃ w)))
8. Fu → Fv → L((-u ⊃ -v) ⊃ (v ⊃ u))
9. Lu → L(u ⊃ v) → Lv
10. Lu → u
The last rule releases the subsidiary letter 'L'. The calculus KL consists of the rules 1
to 5 (taken from KPL) and 6 to 10 given just now. Let us define:
LPL = {x: x ∈ 𝒜PL⁰ & KL ⊢ x}
as the class of logical truths of propositional logic. Referring to the truth conditions of
the negation and the conditional (as given in Sect. 2.3), it is easy to prove that the
members of LPL are, really, logical truths. In fact, LPL contains all logical truths expressible in propositional logic - but we do not prove this statement here. (The proof
belongs to the metatheory of classical propositional logic.)
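That the released L-words are indeed always true can be checked by brute-force truth tables. Here is a rough sketch (ours, not from the text), using the same ASCII stand-ins as before ('p' plus a string of 'i's for an atomic formula, '-' for negation, '>' for the conditional); the sample formulas are instances of the axiom schemes behind rules 6, 7, and 8.

```python
import itertools
import re

def parse(s, i, val):
    """Evaluate the subformula starting at position i under valuation val;
    return (truth value, position just after the subformula)."""
    if s[i] == "-":                       # negation
        v, j = parse(s, i + 1, val)
        return (not v), j
    if s[i] == "(":                       # conditional: (u>v)
        v1, j = parse(s, i + 1, val)
        v2, k = parse(s, j + 1, val)      # s[j] is '>'
        return ((not v1) or v2), k + 1    # s[k] is ')'
    j = i + 1                             # atomic formula: p, pi, pii, ...
    while j < len(s) and s[j] == "i":
        j += 1
    return val[s[i:j]], j

def tautology(s):
    """True iff s is true under every valuation of its atomic formulas."""
    atoms = sorted(set(re.findall(r"pi*", s)))
    return all(parse(s, 0, dict(zip(atoms, bits)))[0]
               for bits in itertools.product([True, False], repeat=len(atoms)))

assert tautology("(p>(pi>p))")                      # scheme of rule 6
assert tautology("((p>(pi>p))>((p>pi)>(p>p)))")     # instance of rule 7
assert tautology("((-p>-pi)>(pi>p))")               # scheme of rule 8
assert not tautology("(p>pi)")                      # not a logical truth
```

Since rule 9 (detachment on L) preserves tautologyhood, every L-word released by rule 10 passes this test, which is the easy half of the claim above.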
4.3.2. A first-order language. Our next topic will be a language of classical first-order
logic. First-order logic uses all the grammatical and logical means used in metalogic (cf.
Sections 2.1, 2.2, 2.3): it applies (individual) variables, names, name functors, predicates,
and quantifiers. (The adjective 'first-order' refers to the fact that only variables referring to
individuals are used.) First-order languages may use different stocks of name functors and
predicates. We shall deal here with the maximal first-order language which has an infinite
supply of name functors for all possible numbers of argument places, and, similarly, an infinite supply of predicates for all possible numbers of argument places (and, of course, an infinite supply of variables). Thus, the alphabet we need must be much richer than 𝒜PL.
We apply as initial letter '𝔵' (gothic eks) for variables; it will be followed by an {ι}-word as an index. For indicating the numbers of empty places of functors, we shall use the Greek
letter 'ο' (omicron); {ο}-words may be called arities. As initial letter for name functors we
shall use the letter 'φ'; it will be followed by an arity and an index. If the arity is empty, we
have a name. For predicates, we shall use the initial letter 'π', followed by an arity and an
index. If the arity is empty, we have an unanalyzed atomic formula. We apply '∀' as the universal quantifier, and the symbols '-', '⊃', '=', and the parentheses will be used as well.
(The missing sentence functors - e.g., conjunction - and the existential quantifier can be introduced via contextual definitions.) - Thus, our alphabet will be:
𝒜MF = {(, ), ι, ο, 𝔵, φ, π, =, -, ⊃, ∀}.
(The subscript 'MF' refers to 'maximal first-order'.) The main category of this language is,
again, that of the formulas. To define it, we need some auxiliary categories and relations
which will be expressed by subsidiary letters. The class of our subsidiary letters is:
S = {I, A, V, N, P, T, F}.
Here I, A, V, T, and F stand for index, arity, variable, term and formula, respectively.
N and P represent dyadic predicates; the intuitive meaning of "xNy" is "y is a name
functor of arity x", and that of "xPy" is "y is a predicate of arity x".
Now we can formulate the following calculus over 𝒜MF ∪ S with variables x, y, z:
1. I
2. Ix → Ixι
3. A
4. Ax → Axο
5. Ix → V𝔵x
6. Ax → Iy → xNφxy
7. Ax → Iy → xPπxy
8. Vx → Tx
9. Nx → Tx
10. Ax → xοNy → Tz → xNyz
11. Ax → xοPy → Tz → xPyz
12. Px → Fx
13. Tx → Ty → F(x = y)
14. Fx → F-x
15. Fx → Fy → F(x ⊃ y)
16. Vx → Fy → F∀xy
By adding the releasing rule
16*. Fx → x
we get the calculus KMF defining the class of formulas, FormMF, of our language.
Other categories of this language can be defined by using a part of this calculus and
other releasing rules; e.g.,
6*. xNy → y
7*. xPy → y
10*. Tx → x
Rules 1 to 4, 6, and 6* define the class of (atomic) name functors; rules 1 to 4, 7, and
7* define the class of (atomic) predicates; rules 1 to 6, 8, 9, 10, and 10* define the
class of terms; and rules 1 to 13 and 16* define the class of atomic formulas.
Comments. Most of our rules are obvious. The sign '0' of the empty word is
omitted everywhere (in 1, 3, 9, and 12). Rule 9 says that a name functor of arity 0 is
a term, and rule 12 states that a predicate of arity 0 is a formula. Rule 10 prescribes
that by adding a term to a name functor of arity xο (i.e., x plus one) we get a name
functor of arity x (i.e., the arity diminishes by one). Rule 11 tells the same for predicates.
Note that these rules do not prescribe including the argument between parentheses
when filling in the empty places of a functor. This is all right, for a functor and
its arguments are uniquely separable, due to the fact that every possible argument begins with φ or with 𝔵. In some first-order languages, parentheses in the same position
cannot be omitted. In rule 13, the parentheses surrounding the identity "x = y" could be
omitted, but we retain them for the sake of easier readability. In rule 15, the parentheses
surrounding the conditional "x ⊃ y" cannot be omitted without a risk of ambiguity. -
Our examples show the technical advantages of the use of the empty word. Without it,
we would need more rules in our calculi.
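The unique-separability remark can be made concrete. Below is a rough sketch (ours, not from the text) of a recognizer for the category of terms in this parenthesis-free notation, with ASCII stand-ins 'f' for φ, 'x' for 𝔵, 'o' for ο, and 'i' for ι; the function names are our own.

```python
# A term is either a variable (x + index) or a name functor (f + arity +
# index) followed by exactly as many terms as there are arity marks.

def parse_term(s, pos=0):
    """Return the position just after one term starting at pos."""
    if pos >= len(s):
        raise ValueError("unexpected end of word")
    if s[pos] == "x":                     # a variable: x + index
        pos += 1
        while pos < len(s) and s[pos] == "i":
            pos += 1
        return pos
    if s[pos] == "f":                     # a name functor: f + arity + index
        pos += 1
        arity = 0
        while pos < len(s) and s[pos] == "o":
            arity += 1
            pos += 1
        while pos < len(s) and s[pos] == "i":
            pos += 1
        for _ in range(arity):            # one argument per arity mark
            pos = parse_term(s, pos)
        return pos
    raise ValueError("a term must begin with f or x")

def is_term(s):
    try:
        return parse_term(s) == len(s)
    except ValueError:
        return False

assert is_term("fi")          # a name: functor of arity 0, index 1
assert is_term("fooxxi")      # a binary functor applied to the terms x, xi
assert not is_term("foo")     # a binary functor lacking its arguments
```

Because every argument begins with 'f' or 'x', the parser never has to guess where one argument ends and the next begins - which is exactly why the object language can do without parentheses here.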
Remark. In the literature of mathematical logic and mathematical linguistics, one finds different
approaches to formalizing inductive definitions by means of some sort of calculus. See, e.g., POST 1943,
POST 1944, SMULLYAN 1961, MARTIN-LÖF 1966, FEFERMAN 1982. - Our 'rules' are sometimes called
'productions', the basic alphabet (of a calculus) is said to be the 'terminal alphabet', and the subsidiary
letters are called 'non-terminal symbols'.
4.4 Hypercalculi
We posed the question in Sect. 4.2: Could the notions concerning canonical calculi be
defined by canonical calculi? We are going to answer this question positively in this
section. The key to our answer lies in Th. 3.2.2, according to which each language
can be replaced by a language based on the two-letter alphabet 𝒜₁ = {α, β}.
The intuitive background is as follows. Imagine a calculus K over an alphabet
ℬ. According to the theorem just cited, ℬ can be replaced by 𝒜₁. Let ξ be a third letter; then the K-variables can be replaced by words such as ξ, ξβ, ξββ, ....
Choose a fourth character, say '≺', in order to substitute the arrows ('→') in the rules
of K. Then each rule of K will be translated as a word of the alphabet
{α, β, ξ, ≺}. If the translations of the rules of K are r₁, r₂, ..., rₙ then, choosing a fifth
character '.', we can form the expression
r₁.r₂. ... .rₙ
as the translation of K into a single word of the five-letter alphabet
𝒜cc = {α, β, ξ, ≺, .}.
That is, any canonical calculus can be expressed - via translation - as an 𝒜cc-word.
4.4.1. The hypercalculus H₁. - We define the hypercalculus H₁ that presents the
class of all calculi over 𝒜₁. Our starting alphabet will be 𝒜cc, but we need the class of
subsidiary letters
S₁ = {I, L, W, V, T, R, K}.
Thus, H₁ will be a calculus over 𝒜cc ∪ S₁. We shall use the letters 'x' and 'y' as H₁-variables.
The intuitive meaning of our subsidiary letters is included in the following glossary: I - index, L - letter, W - word, V - variable, T - term, R - rule, K - calculus.
H₁: 1. I
2. Ix → Ixβ
3. Ix → Lαx
4. Ix → Vξx
5. W
6. Wx → Ly → Wxy
7. T
8. Tx → Ly → Txy
9. Tx → Vy → Txy
10. Tx → Rx
11. Tx → Ry → Rx≺y
12. Rx → Kx
13. Rx → Ky → Kx.y
By rules 1 and 2, "indices" are the {β}-words. By 3, "letters" are the words beginning
with α and continuing with an index. By 4, "variables" begin with ξ and continue with
an index. By 5 and 6, "words" are formed from 0 by adding "letters". By 7, 8, and 9,
"terms" are formed from 0 by adding "letters" and/or "variables". The remaining
rules are, hopefully, obvious.
By adding to H₁ the releasing rule
Kx → x
the resulting calculus H₁′ defines the class of canonical calculi:
CCal =df {k: k ∈ 𝒜cc⁰ & H₁′ ⊢ k}.
The alphabet 𝒜cc ∪ S₁ is representable in the alphabet 𝒜₁ = {α, β} as well. Thus, even
H₁′ can be expressed by a single word h₁, and h₁ ∈ CCal. That is, H₁′ derives itself
(more exactly, its own translation h₁).
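The translation of a calculus into a single 𝒜cc-word can be sketched in a few lines (our illustration, not from the text; 'a', 'b', 'v', '<', '.' stand for α, β, ξ, ≺, and the separating dot). The sample calculus, the variable stock, and the function names are our own choices.

```python
LETTERS = "01"       # a sample object alphabet, already two-lettered
VARS = "XY"          # the calculus variables of the sample calculus

def encode_word(w):
    """Letter c_i -> 'a' + i copies of 'b'; variable X_i -> 'v' + i 'b's."""
    out = []
    for c in w:
        if c in LETTERS:
            out.append("a" + "b" * LETTERS.index(c))
        else:
            out.append("v" + "b" * VARS.index(c))
    return "".join(out)

def encode_rule(rule):
    """A rule (f1, ..., fn, g) becomes f1< ... <fn<g."""
    return "<".join(encode_word(w) for w in rule)

def encode_calculus(rules):
    """Join the translated rules with the separating character '.'."""
    return ".".join(encode_rule(r) for r in rules)

# The calculus {0, X -> X0, X -> X1} generating all dyadic words:
k = encode_calculus([("",), ("X", "X0"), ("X", "X1")])
assert k == ".v<va.v<vab"
```

Every letter-block and variable-block begins with 'a' or 'v', so the encoding is uniquely readable - the same trick as in the translation theorem of Sect. 3.2.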
4.4.2. The hypercalculus H₂. - Our next task is to define the relation
"word f is derivable in calculus k"
by means of a canonical calculus. For this aim, we shall enlarge the hypercalculus H₁
by new rules. We need two new subsidiary letters 'S' and 'D', and the new variables u,
v, w, z.
According to Def. 3 of Sect. 4.2, there are two important types of steps in the
course of deriving a word: (1) substituting a word for a variable, and (2) detaching a
derived word f (containing no arrow) from a derived word "f ≺ g", to get g. The latter is easily expressible by a rule. Not so the former, although, intuitively, substitution
seems to be a very simple notion.
The subsidiary letter 'S' will be used as a four-place predicate. The expression "vSuSySx" is to be understood as "we get v from u by substituting y for x". The conditions are always "Vx" and "Wy", i.e., that x is a "variable" and y is a "word". However, we omit these conditions whenever they are unimportant, i.e., if "uSuSySx" holds, that is, u remains intact with respect to the substitution. In fact, there are such cases. Let us begin with these:
14. Lu → uSuSySx
15. ≺S≺SySx
16. Vx → Iz → xzιSxzιSySx
17. Vx → Iz → xSxSySxzι
These rules tell us that in substituting for a variable, the letters, the character '≺', and the other variables remain intact. Now an "active" rule:

18. Vx → Wy → ySxSySx

(we get y from x by substituting y for x). We finish by formulating that substitution in a compound "uz" means substitution in its parts u and z:

19. vSuSySx → wSzSySx → vwSuzSySx
Now we turn to derivations. We use the subsidiary letter 'D' as a dyadic predicate; "xDy" means that "y is derivable from x".

20. xDx
21. Rx → Ky → y.xDx
22. Rx → Ky → x.yDx
23. Rx → Ky → Kz → y.x.zDx
24. zDu → vSuSySx → zDv
25. xDy → xDy≺z → Ty → xDz
Rule 20 is obvious. Rules 21, 22, and 23 assure that in a calculus, each of its rules is derivable. Rules 24 and 25 express substitution and detachment, respectively.
Let H₂ be the hypercalculus constituted by the rules 1 to 25. We do not add a releasing rule to H₂, for its task is to define the relation D rather than a class. Now,

(H₂ ⊢ Ka) & (H₂ ⊢ Wb) & (H₂ ⊢ aDb)

holds iff a represents a calculus in which b is derivable.
4.4.3. The hypercalculus H₃. In order to get an important piece of information about inductive classes, we shall enlarge our hypercalculus H₂ by some new rules. As new subsidiary letters, we shall use 'F', 'G', and 'A'.
We define first the lexicographic ordering of the 𝒜cc-words. Such an ordering is almost the same as the one used in vocabularies, dictionaries, and encyclopaedias, with the exception that a shorter word always precedes a longer one. It is based on the alphabetical ordering of the letters. In our case, we assume that α is the first letter; it is followed by ι, ι by ξ, ξ by '≺', and '≺' by '.'. Using "xFy" for "x is followed by y", our rules defining the lexicographic ordering of 𝒜cc-words are as follows:

26. Fα
27. xαFxι
28. xιFxξ
29. xξFx≺
30. x≺Fx.
31. xFy → x.Fyα

(Note that by 26, ∅ is followed by α.)
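Rules 26 to 31 amount to computing the "shortlex" successor of a word, which can be sketched as follows (the five ASCII characters stand in for the letters of 𝒜cc; the function name is my own):

```python
ACC = "aixP."          # ASCII stand-ins for the letters α, ι, ξ, ≺, . of A_cc

def follower(x):
    """The word following x in the lexicographic ordering (rules 26-31)."""
    if x == "":
        return ACC[0]                    # rule 26: the empty word is followed by α
    head, last = x[:-1], x[-1]
    i = ACC.index(last)
    if i + 1 < len(ACC):
        return head + ACC[i + 1]         # rules 27-30: bump the final letter
    return follower(head) + ACC[0]       # rule 31: carry into the head

w, seen = "", []
for _ in range(7):
    w = follower(w)
    seen.append(w)
assert seen == ["a", "i", "x", "P", ".", "aa", "ai"]
```

Note how rule 31 appears as the recursive call: when the final letter is the last letter of the alphabet, the successor of the shortened word is taken and the first letter is appended.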
As the next step, we shall introduce numerical codes of 𝒜cc-words. We shall use {α}-words as numerals (cf. the comments on the alphabet 𝒜₀ = {α} in Sect. 3.2), and apply the simple strategy: the code of the empty word let be itself, and if the code of x is y then the code of the word following x let be "yα". Encoding words (of a formal language) by natural numbers was first used by Kurt Gödel (in GÖDEL 1931). By this, we shall call the numeral coding of a word its Gödel numeral. In the following two rules, "xGy" may be read as "the Gödel numeral of x is y".

32. G
33. xFy → xGz → yGzα
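In effect, rules 32 and 33 assign to each word its rank in the lexicographic enumeration, written in unary. Assuming (this is my observation, not the book's) that the shortlex rank is just the bijective base-k value of the word, the coding can be computed directly:

```python
def goedel_numeral(w, alphabet):
    """Gödel numeral of w (rules 32-33): its shortlex rank written in unary,
    as a string of α's; here 'a' stands in for α.  The rank of a word is
    its value read as a bijective base-k numeral, k = len(alphabet)."""
    k = len(alphabet)
    rank = 0
    for c in w:
        rank = rank * k + alphabet.index(c) + 1
    return "a" * rank

assert goedel_numeral("", "ab") == ""          # the empty word codes itself
assert goedel_numeral("aa", "ab") == "aaa"     # enumeration: "", a, b, aa, ...
assert goedel_numeral("ba", "ab") == "aaaaa"   # rank 5
```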
Now assume that the word k ∈ 𝒜cc° represents a calculus (i.e., H₁ ⊢ Kk), and its Gödel numeral, as determined by the rules 26 to 33, is g; that is, the word "kGg" is derivable in our enlarged calculus. Now, it may happen that H₂ ⊢ kDg, which means that in k, its own Gödel numeral is derivable. Such a calculus may be called an autonomous one, and its Gödel numeral an autonomous numeral. Using the subsidiary letter 'A' for 'is an autonomous numeral', we can define this notion by the rule:

34. xDy → xGy → Ay
Let H₃ be the hypercalculus consisting of the rules 1 to 34. Then, the class of autonomous numerals, Aut, can be defined as follows:

Aut =df {x : x ∈ 𝒜₀° & H₃ ⊢ Ax}.

Or, if we add the releasing rule 34* to H₃, to get the calculus H₃′,

34*. Ax → x

then we have:

Aut =df {x : x ∈ 𝒜₀° & H₃′ ⊢ x}

which shows explicitly that Aut is an inductive subclass of 𝒜₀°.
4.4.4. THEOREM. The class of non-autonomous numerals, 𝒜₀°−Aut, is not an inductive subclass of 𝒜₀°.
Proof. We have to show that no calculus defines the class of non-autonomous numerals. It is sufficient to deal with calculi over 𝒜₁ = {α, ι} (remembering that every alphabet can be replaced by a two-member one). Assume k ∈ CCal, and let its Gödel numeral be g (i.e., H₁ ⊢ Kk, and H₃ ⊢ kGg). Then, there are two possibilities:
(a) g is derivable in k (i.e., H₂ ⊢ kDg). Then g ∈ Aut; hence, it is not true that only non-autonomous numerals are derivable in k.
(b) g is not derivable in k, i.e., g is a non-autonomous numeral. Hence, it is not true that all non-autonomous numerals are derivable in k.
By (a) and (b), no calculus derives 𝒜₀°−Aut, which was to be proven.
Corollary 1. For every alphabet 𝒜, there exist inductive subclasses F and G of 𝒜° such that F−G is not an inductive subclass of 𝒜°.
Proof. We can assume, without violating generality, that 𝒜₀ ⊆ 𝒜 (for a certain letter of 𝒜 may be chosen for the role of α). It is obvious that 𝒜₀° is an inductive subclass of 𝒜°, and so is Aut, for 𝒜 can be enlarged with subsidiary letters in order to reconstruct H₃′. By our theorem, 𝒜₀°−Aut is not definable by means of any calculus. Thus, taking 𝒜₀° in the role of F, and Aut in the role of G, our statement holds true.
Corollary 2. For every alphabet 𝒜, there exists an inductive subclass F of 𝒜° such that 𝒜°−F is not inductive.
Proof. Take into consideration that 𝒜₀° is inductive, and

𝒜₀°−Aut = 𝒜₀° ∩ (𝒜°−Aut).

Knowing that 𝒜₀°−Aut is not inductive, we have that 𝒜°−Aut cannot be inductive (cf. Th. 6 in Sect. 4.2). Thus, Aut fulfils the role of F.
The results of these corollaries were foreshadowed in a Remark at the end of Sect. 4.2. The importance of these results will be cleared up later on, partly in the next section.
4.5 Enumerability and Decidability
Inductive definitions, i.e., definitions by means of some canonical calculi, are accepted tools for introducing categories of a language. If categories F and G are well-defined, then F ∪ G, F ∩ G, and F−G seem to be well-defined as well. However, as we have seen in the preceding section, it may happen that although F and G are inductive subclasses of a certain 𝒜°, F−G is not (though F ∪ G and F ∩ G always are, according to Th. 6 of Sect. 4.2). What is the peculiarity that inductive classes possess and non-inductive classes do not?
This peculiarity is enumerability, in an extended sense of the word. In general, a class F is said to be enumerable iff there exists some procedure (or algorithm) by which it is possible to list the members of F one after another, i.e., the procedure gives the initial member of the list, and whenever given a member in the list, the next member will be determined uniquely, unless the given member is the last one (in which case F is finite). More exactly, for all members of F, except at most a single one, the procedure uniquely determines a successor in such a way that
(i) the initial member is not a successor of any member,
(ii) no member of F is a successor of itself,
(iii) the successor of each member belongs to F,
(iv) each member of F except the initial member is a successor of a unique member of F (thus, successors of different members are different), and
(v) if there is a single member in F without a successor then it is called the final member of the enumeration; in this case, F is finite.
In case F is infinite, the enumerating procedure, of course, never ends and never stops; it runs ad infinitum.
It is obvious that any finite class of words is enumerable. Also, the empty class will be qualified as enumerable (saying that the procedure "do nothing!" enumerates all members of ∅).
If 𝒜 is an alphabet, the class 𝒜° is enumerable, e.g., by means of the lexicographic ordering of its words. (The method expressed in the preceding section by the rules 26 to 31 can be easily adapted to any alphabet.) In Sect. 5.2, an algorithm will be presented which "calculates" the successor of any word f ∈ 𝒜°.
Another algorithm (ibid.) will be able to "calculate", for any word f ∈ 𝒜° where f involves just k copies of a fixed letter γ ∈ 𝒜, the word f′ involving again k copies of γ and standing "nearest" after f in the lexicographic ordering of 𝒜-words. From this it then follows:
4.5.1. LEMMA. The class of 𝒜-words involving exactly k copies of a fixed letter of 𝒜 is enumerable; here k is any positive integer.
Now we can outline, sketchily, a procedure for enumerating all members of an inductive class. Let 𝒜 be an alphabet and F an inductive subclass of 𝒜° defined by a calculus K. Disregarding trivial cases, let us assume that K contains some input-free rules (thus, F is not empty), and some K-variables occur in the rules of K (thus, F need not be finite). From the rules of K we can get further derived words either by substitution or by detachment. Our strategy consists in applying alternately these two permitted acts of derivation.
In order to apply all possible substitutions of variables, we need, in advance, an enumeration of all substitutions. Assume that

x₁, x₂, ..., xₖ

is an ordering of the variables of K. A total substitution of these variables is determined by a k-tuple of 𝒜-words a₁, ..., aₖ, meaning that x₁ is substituted by a₁ (in all rules of K), x₂ by a₂, ..., and xₖ by aₖ. Let '|' be a new character (not occurring in 𝒜); then this substitution can be expressed by the 𝒜 ∪ {|}-word

a₁|a₂| ... |aₖ

involving just k−1 copies of the letter '|'. By our Lemma above, the class of such words is enumerable. Hence, the class of all substitutions of our variables is enumerable. Note that the first word in this enumeration is

∅|∅| ... |∅,

or, omitting ∅, this word consists of k−1 copies of '|'. That is, the first substitution consists in putting the empty word for all occurrences of all variables in the rules of K.
Now, our enumerating procedure runs as follows.
Step 1. Apply the first substitution of variables. Arrange the words resulting from the rules of K by this substitution in two sequences S and P: put the words resulting from the input-free rules (words without arrow) into S and the other ones into P. (The order of the words in S and P is indifferent; however, if you have an ordering of the rules of K then the ordering of S and P may follow this ordering.)
Step 2. For each member of form "f → g" in P, where f involves no arrow, examine whether f occurs in S. If so, and if g involves no arrow, then add g to S (except when it occurs in S already), and omit "f → g" from P. If f occurs in S but g is not arrow-free, then replace "f → g" in P by g. Finally, if f does not occur in S, go further.
Step 3, and, in general, Step 2n+1 (n ≥ 1). Apply the second (in general, the (n+1)-th) substitution of variables. Extend S by adding the arrow-free words resulting from the rules of K by this substitution, and P by the other words.
Step 4, and, in general, Step 2n. The same as Step 2.
The sequence marked by 'S' will be extended in each step of uneven number and, sometimes, in steps of even number. The other sequence, marked by 'P', contains 𝒜 ∪ {→}-words; it increases in steps of uneven number and may change or even decrease in steps of even number. Clearly, the ever-growing sequence S gives the enumeration of the 𝒜-words derivable in K.
In case K uses subsidiary letters, we need a further procedure to omit from S the words involving subsidiary letters. But this can be done by simple inspection of the members of S.
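The procedure just outlined can be put into code. The sketch below simplifies under my own conventions: instead of the strict alternation of odd and even steps, each round admits one more substitution value and then closes under detachment; the class of words enumerated is the same.

```python
from itertools import product

def shortlex(alphabet):
    """Yield all words over `alphabet` in shortlex order."""
    layer = [""]
    while True:
        yield from layer
        layer = [w + a for w in layer for a in alphabet]

def derivable(alphabet, variables, rules, rounds=4):
    """Enumerate words derivable in a canonical calculus (a sketch).

    A rule is a list [f1, ..., fm] standing for f1 -> ... -> fm;
    a one-element rule is input-free.  Returns the words derived
    within the given number of rounds.
    """
    S, P = [], []                  # derived words / rules still carrying arrows
    pool, gen = [], shortlex(alphabet)
    for _ in range(rounds):
        pool.append(next(gen))     # admit one more substitution value
        for values in product(pool, repeat=len(variables)):
            sub = dict(zip(variables, values))
            for rule in rules:
                inst = ["".join(sub.get(c, c) for c in f) for f in rule]
                if len(inst) == 1:
                    if inst[0] not in S:
                        S.append(inst[0])
                else:
                    P.append(inst)
        changed = True             # detachment closure (the "even" steps)
        while changed:
            changed, rest = False, []
            for r in P:
                while len(r) > 1 and r[0] in S:
                    r = r[1:]      # detach a head already derived
                if len(r) == 1:
                    changed = True
                    if r[0] not in S:
                        S.append(r[0])
                else:
                    rest.append(r)
            P = rest
    return S

# toy calculus over {a}: the rules "∅" and "X -> Xaa" derive the
# words with an even number of a's
S = derivable("a", "X", [[""], ["X", "Xaa"]], rounds=5)
assert sorted(S, key=len) == ["", "aa", "aaaa", "aaaaaa"]
```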
It seems to be obvious that a necessary condition of accepting a syntactic category of a language as a well-defined one is that the category be an inductive class defined by some calculus. If this condition is fulfilled then, according to the results of the present section, the words belonging to the category can be enumerated systematically. However, if the category contains infinitely many words then the enumeration, although it can be continued without any limit, never ends. Thus, the enumeration itself does not give a method to decide of every word whether it belongs to the category in question.
4.5.2. DEFINITION. A category of a language is said to be decidable iff there exists a universal procedure by which, for any word of the language, it is decidable in a finite number of steps whether it belongs to the category in question.
Although the notion of procedure is unclear, we can give examples in which its use seems to be quite correct. This is the case in the next theorem.
4.5.3. THEOREM. If 𝒜 is an alphabet, and both F and 𝒜°−F are inductive subclasses of 𝒜°, then F is decidable.
Proof. We give a decision procedure for F. By our assumptions, both F and its complementary class 𝒜°−F are enumerable. Hence, any word a ∈ 𝒜° must occur in one (and only one) of the enumerations of these classes. Apply the enumeration procedure outlined above for both classes, by forming alternately the strictly increasing sequences

S₁, S₁′, S₂, S₂′, ...

of words of F and 𝒜°−F, respectively (where Sᵢ is the result of applying 2i steps in enumerating F, and Sᵢ′ analogously for 𝒜°−F). Since the word a must occur in one of these enumerations, there must be a number n such that either a is in Sₙ or a is in Sₙ′. Thus, after a finite number of steps in our procedure, the question "a ∈ F ?" is answered.
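The proof's alternating procedure is the classic "dovetailing" trick. A minimal sketch, assuming the two enumerations are given as restartable generator functions (the names and the toy class are my own, not the book's):

```python
def decide(a, enum_F, enum_co_F):
    """Decide "a in F?" as in the proof of Theorem 4.5.3: alternately take
    one more word from an enumeration of F and one from an enumeration of
    its complement; the word a must eventually show up in one of them."""
    F, co = enum_F(), enum_co_F()        # two fresh (possibly infinite) generators
    while True:
        if next(F) == a:
            return True
        if next(co) == a:
            return False

def evens():                             # F: words over {a} of even length
    w = ""
    while True:
        yield w
        w += "aa"

def odds():                              # the complement: odd length
    w = "a"
    while True:
        yield w
        w += "aa"

assert decide("aaaa", evens, odds) is True
assert decide("aaa", evens, odds) is False
```

The loop terminates on every input precisely because the two enumerations jointly exhaust 𝒜°; dropping either one would leave the procedure running forever on words of the missing class.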
The question "What is a (deciding) procedure?" seems to be unimportant in case we can present a decision procedure for a class F. However, it will be an important one if we are unable to present such a procedure and, moreover, we suspect that our class F is undecidable. To prove such a guess, an exact notion of procedure is indispensable. Furthermore, we can pose the questions: Is every inductive class decidable? Is every non-inductive class undecidable? (Or, in contraposition: Is every decidable class (of words) an inductive one?) Our next chapter is devoted to the investigation of these and similar questions.
5.1 What is an Algorithm?
Instead of the term 'procedure' - used in the preceding section - the expression
'algorithm' is widely used in mathematics. In the most general sense, an algorithm is a
regulated method (a procedure) of solving all (mathematical) problems belonging to a
certain class. All of us have learned in the primary school a method of multiplying a
pair of numbers written in decimal notation. This method is a simple example of an al-
gorithm that is applicable for all pairs of numbers written in decimal notation. In fact, it
deals not with numbers but with numerals, i.e., with words of a certain alphabet.
This last remark leads us to delimit the intuitive notion of an algorithm. We stipulate that each algorithm must be connected with an alphabet 𝒜, and its task is always to transform 𝒜-words into 𝒜-words. (Note that pairs, triples, etc. of 𝒜-words can be considered as words of a larger alphabet, e.g., of 𝒜 ∪ {|}; thus, our delimitation does not exclude the above example of the multiplying algorithm. We can apply subsidiary letters in an algorithm, too.) If an algorithm deals with 𝒜-words, we shall call it an algorithm over the alphabet 𝒜.
Now, an algorithm over 𝒜 must prescribe what we have to do with an input word f ∈ 𝒜°. In most cases, this depends on the question which letters occur in f. Thus, the working of an algorithm may begin with a question concerning f. In most cases, the form of such a question is: "does the word a occur in f ?" The further step depends on the two possible answers. The work of the algorithm may be continued either by putting a new question or else by a command telling what we have to do with the given word. A command, in general, prescribes changing a subword of the given word to another one. (This includes prefixing or suffixing the given word by some letters, as well as omitting some of its letters. Think of the application of the empty word!) After performing the act prescribed by the command, we have a new word, and the algorithm may pose questions concerning it or may give a new command. Thus, questions and commands may occur alternately in an algorithm. Finally, the algorithm may give a command of stopping the work and producing the output.
Thus, an algorithm may involve questions and commands, and a steering appa-
ratus that tells us how to begin, how to continue, and how to finish our work.
In the general setting, we cannot exclude the case that an algorithm over an alphabet 𝒜 is not applicable to some 𝒜-words (including the case that it works "forever", getting tangled up in a circle of steps). Of course, a "good" algorithm must be applicable at least to the members of a certain (nonempty) subclass F of 𝒜°.
Since both canonical calculi and algorithms are based on alphabets, it may be useful to stress an intuitive but essential difference between them. The rules of a calculus tell us what we may do, what we are permitted to do. On the other hand, the commands of an algorithm tell us what we must do, what we are prescribed to do.
All that has been said up to this point is not an exact definition of algorithms; rather, it is merely an illustration of an inexact, intuitive notion used for centuries in mathematics. Only in the twentieth century was the claim raised to give an exact formal definition of this notion. However, we never can tell that a formal definition of a notion reflects exactly the content of a non-formal, intuitive notion. The most we can do is to introduce the notion of a class of algorithms, hoping that any effectively working algorithm can be substituted by a member of this class. To be more exact, let us take into consideration the following definition.
5.1.1. DEFINITION. We say that two algorithms are equivalent with respect to an alphabet 𝒜 iff for all 𝒜-words they apply with the same result, i.e., if f ∈ 𝒜° then either (a) none of them is applicable to f, or else (b) both are applicable to f, and both transform f into the same word f′.
Now, the statement that every algorithm can be substituted by an algorithm belonging to a certain type (class) T of algorithms is to be understood as follows: Given an alphabet 𝒜 and an algorithm G, there exists an algorithm G′ belonging to T such that G and G′ are equivalent with respect to 𝒜.
According to the preceding considerations, such a statement can never be proved rigorously, although it can be somewhat confirmed by empirical facts, as long as no counterexamples exist. Or it may serve as a definition of algorithms, in which case an alleged counterexample will be refused by telling that it is not an algorithm at all. (However, this would be a very curious situation. If it is proved that a procedure works effectively and successfully, then it is unreasonable to say that it is not an algorithm.)
Empirical investigations of existing algorithms show that their questions and commands can be "dissolved", in most cases, into simpler steps. Then the challenge arises: Try to find the simplest type of questions and commands as well as the simplest forms of steering! This will lead to the most general type of algorithms.
There are several proposed solutions of this problem, which, in the course of time, were proved to be equivalent. In the field of research in the foundations of mathematics (i.e., in metamathematics), the problem of algorithms was re-formulated as the problem of effective computability ("reckonability") of (number-theoretic) functions. In this field, the most popular solutions are elaborated in the theory of recursive functions and in the theory of Turing machines (the latter are idealized computers). (See, e.g., KLEENE 1952.)
Research directed immediately toward algorithms was carried out, too. We shall treat in detail the theory of Markov algorithms, also called normal algorithms, for this is best suited to our aims concerning the foundations of metalogic. (MARKOV 1954; a good report is in MENDELSON 1964, Ch. 5.)
We give here an intuitive picture of Markov (or normal) algorithms. (The formal definitions follow in the next section.) Let us agree in using the term normal algorithm in referring to this class of algorithms.
A normal algorithm over an alphabet 𝒜 is, in essence, an ordered class of commands of form

(1) a → b   and   (2) a → . b

where a and b are 𝒜-words, called the input and the output of the command, respectively, and the characters '→' and '.' (the arrow and the dot) do not belong to 𝒜. A command of form (1) is said to be a non-stop command, and a command of form (2) is said to be a stop command.
A command C is said to be applicable to a word f ∈ 𝒜° iff its input a occurs as a part in f. If C is applicable to f then by applying it to f let us mean changing the first occurrence of its input a to its output b.
Given a normal algorithm N over an alphabet 𝒜 and a word f ∈ 𝒜°, the application of N to f runs as follows.
(1) Find the first command of N applicable to f. If there is no such command then N does not apply to f. If there is such a command C then apply it to f, to get another word g. If C is a stop command, the work of N is finished: N transformed f into g.
(2) If C is a non-stop command, then apply the procedure described in (1) to g, that is, find again the first command of N applicable to g, and so on.
Now, in applying N to a word f, there are three possible cases:
(a) The algorithm N blocks the word f in the sense that either N does not apply to f, or after some steps we get a word g such that N does not apply to g.
(b) The algorithm N is successful with respect to f in the sense that after a finite number of steps we get a word g as the result of applying a stop command. Then we say that N transformed f into g.
(c) The algorithm N is everlasting with respect to f: it runs forever without blocking and without stopping by a stop command.
Of course, only case (b) is a useful one.
The empty word may occur in commands. A command of form

∅ → b

means prefixing a word with b (for the first occurrence of ∅ in a nonempty word is just before its first letter), and

a → ∅

means erasing the first occurrence of a. A command

∅ → ∅

means doing nothing (thus, it seems to be superfluous); it is applicable to all words. The stop command

∅ → . ∅

may indicate finishing the work of the algorithm. An algorithm containing this command never blocks, for it is applicable to any word.
Now we see that in normal algorithms the questions are included in the commands. Every question is of the form "does the word a occur in the studied word?", and every command prescribes replacing the first occurrence of a subword of the studied word by another word. Probably, these are the simplest forms of questions and commands. Further, the steering in normal algorithms is uniformly regulated by the prescriptions (1) and (2) above; beyond these, only the order of the commands takes part in the steering. However, by using subsidiary letters, we can modify the steering, as we shall see later in the examples.
5.2 Definition of Normal Algorithms
5.2.1. DEFINITION. By a normal algorithm over the alphabet 𝒜 let us mean an ordered (nonempty) finite class of words called commands, of form

(i) a → b   and   (ii) a → . b

where a, b ∈ 𝒜° (any of them may be empty), and the characters '→', '.' do not belong to 𝒜. Here a and b are called the input and the output of the command, respectively. A command is said to be a non-stop one if it is of form (i), and a stop one if it is of form (ii).
5.2.2. DEFINITION. A command C with input a is said to be applicable to the word f ∈ 𝒜° iff f is of form "xay". If b is the output of C, and a does not occur in x, then the word "xby" will be said to be the result of applying C to f = "xay", and it will be denoted by C(f).
5.2.3. DEFINITION. Let N be a normal algorithm over the alphabet 𝒜, and f ∈ 𝒜°. We define the relations "N blocks f", "N leads f to g", and "N transforms f into g" by simultaneous induction, according to the stipulations (a) to (e).
(a) If no command of N is applicable to f, then N blocks f; in symbols: "N(f) = #".
(b) Assume that C is the first command applicable to f, and C(f) = g. Then:
if C is a non-stop command, N leads f to g; in symbols: "N(f/g)"; and
if C is a stop command, N transforms f into g; in symbols: "N(f) = g".
(c) If N(f/g) and N(g/h) then N(f/h).
(d) If N(f/g) and N(g) = # then N(f) = #.
(e) If N(f/g) and N(g) = h then N(f) = h.
5.2.4. DEFINITION. We say that the normal algorithm N is applicable to a word f iff there is a word g such that N(f) = g.
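Definitions 5.2.1 to 5.2.4 can be turned into a small interpreter. The sketch below is my own encoding, not the book's notation: a command is a triple (input, output, stop-flag), '#' plays the role of blocking, and a step limit guards against the everlasting case.

```python
def run(commands, word, limit=10_000):
    """Interpreter for normal algorithms, following Defs. 5.2.1-5.2.4.

    `commands` is an ordered list of triples (a, b, stop) encoding
    "a -> b" (stop=False) and "a -> . b" (stop=True).
    Returns the transformed word, or '#' when the algorithm blocks.
    """
    for _ in range(limit):
        for a, b, stop in commands:        # the FIRST applicable command acts
            i = word.find(a)               # '' occurs at the front of any word
            if i != -1:
                word = word[:i] + b + word[i + len(a):]   # first occurrence
                if stop:
                    return word            # stop command: N transforms f into g
                break                      # restart the scan on the new word
        else:
            return "#"                     # no command applicable: N blocks f
    raise RuntimeError("step limit reached; N may be everlasting")

# Illustration 1 below, for A = {a, b}: erase every letter, then stop.
N0 = [("a", "", False), ("b", "", False), ("", "", True)]
assert run(N0, "abba") == ""
```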
Illustration 1. Let 𝒜 = {a₁, ..., aₙ} be any alphabet. The normal algorithm N₀ below transforms each 𝒜-word into the empty word:

N₀:   1. a₁ → ∅
      ...
      n. aₙ → ∅
    n+1. ∅ → . ∅

We have that

f ∈ 𝒜° ⟺ N₀(f) = ∅.

It is clear that the ordering of the first n commands is indifferent here, but the (n+1)-th command must be the last one. As a short presentation of N₀, we can apply the following concise notation:

N₀:   1. a → ∅   [a ∈ 𝒜]
      2. ∅ → . ∅

Of course, line 1 comprises n different commands. In what follows, similar abbreviations will be used systematically.
Comments. 1. According to our definitions, N(f) = g iff there exists a finite sequence f₁, f₂, ..., fₙ with f₁ = f such that N leads fᵢ to fᵢ₊₁ (for 1 ≤ i < n), and N transforms fₙ into g. Furthermore, N(f) = # iff there exists a finite sequence f₁, f₂, ..., fₙ with f₁ = f such that N leads fᵢ to fᵢ₊₁ (1 ≤ i < n) and N blocks fₙ.
2. We do not introduce a term and a notation for the case in which the algorithm neither blocks a word f nor is applicable to f, i.e., when we try to apply the algorithm to f, it runs forever. We shall not be interested in this case.
3. If the commands of a normal algorithm are, in their ordering, C₁, ..., Cₙ, then we can represent the algorithm by the word obtained by writing C₁, ..., Cₙ one after the other. Thus, any normal algorithm over an alphabet 𝒜 can be represented by a single word of the alphabet 𝒜 ∪ {→, .}.
Illustration 2. Let ;t be an alphabet containinga pair of parentheses, i.e., .
;t =;t' u {(, )} where ;t' = {alt ... , an}, n ~ 1.
Let us give a normal algorithmN
that checks whether the parenthesesin ;t-words are
well-paired. For this aim we need three subsidiaryletters L, R, and t ; thus, N
will be
an algorithmover ;t u {L, R, t}. Of course, the subsidiary letters must be alien to ;t,
and we want N
to be applicable to all ;t-words. Given a word f e ;to, our algorithm
will preftxit with 'L ' , and then L will "jump" over the letters off, one after the other.
However, wheneverLjumps over a left parenthesis 'C, it will be preftxed by an 't' as
an indexcountingthe parentheses. Theset-S will followthe letterL. WheneverL jumps
over a right parenthesis ' )' , an t will be erased, but if there is not an t to erase then L
will change to 'R' , and R will go to the end off. Thus, finally.j' will be either with L
without any t , or by one or more copies of t and L, or by R. In the first case, the paren-
theses inf are well-paired, in the other cases they are not so. Now, our algorithmis as
: 1. L a ~ a L
2. L( ~ (t L
3. ta ~ a t
4. t L) ~ L
5. L ~ )R
6. R a ~ a R
7. t L ~ R
8. t R ~ R
9. L ~ L
10. R ~ R
11. ~ L
[a e;t' ]
Commands 1 to 10 involve subsidiary letters in their inputs. Hence, if f e ;to, only
command 11 is applicable tof (and it will be never applied in the following steps).
Note that command5 is to be appliedonlyif none of the precedingones are applicable,
i.e., when L meets a right parenthesis, and no t precedes L, which means that our
right parenthesis has no left mate. H some left parentheses have no right mates then
commands 7 and 8 change L to R and erase the remainingt-s. The stop commands are
9 and 10. Thus, we have: If f e;t then
Np(f) = "fL" if the parentheses inf are well-paired, and
Np(f) = ''fR'' otherwise.
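Under the reconstruction above, Np can be checked mechanically. The snippet below repeats the small interpreter so as to be self-contained, writes 'i' for ι, and takes 𝒜′ = {a, b}; this whole encoding is my own assumption, not the book's notation.

```python
def run(commands, word):
    """First-applicable-command interpreter for normal algorithms (Sect. 5.2)."""
    while True:
        for inp, out, stop in commands:
            i = word.find(inp)
            if i != -1:
                word = word[:i] + out + word[i + len(inp):]
                if stop:
                    return word
                break
        else:
            return None  # blocked

LETTERS = "ab"                # the role of A' = {a1, ..., an}
ALPH = LETTERS + "()"         # the role of A
N_p = (
    [("L" + a, a + "L", False) for a in LETTERS]   # 1. La -> aL   [a in A']
    + [("L(", "(iL", False)]                       # 2. L( -> (iL
    + [("i" + a, a + "i", False) for a in ALPH]    # 3. ia -> ai   [a in A]
    + [("iL)", ")L", False),                       # 4. iL) -> )L
       ("L)", ")R", False)]                        # 5. L) -> )R
    + [("R" + a, a + "R", False) for a in ALPH]    # 6. Ra -> aR   [a in A]
    + [("iL", "R", False),                         # 7. iL -> R
       ("iR", "R", False),                         # 8. iR -> R
       ("L", "L", True),                           # 9. L -> . L
       ("R", "R", True),                           # 10. R -> . R
       ("", "L", False)]                           # 11. -> L
)

assert run(N_p, "(a(b))") == "(a(b))L"   # well-paired: f followed by L
assert run(N_p, "(()") == "(()R"         # an unmatched '(' : f followed by R
assert run(N_p, ")(") == ")(R"           # an unmatched ')' : f followed by R
```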
Illustration 3. Assume that 𝒜 = {a₁, ..., aₙ}, n ≥ 2, and let us assume the lexicographic ordering of 𝒜-words based on the ordering a₁, ..., aₙ of the letters (cf. the hypercalculus H₃ in Sect. 4.4). The following normal algorithm transforms every 𝒜-word into its successor (i.e., into the word following it immediately according to the lexicographic ordering). It uses the subsidiary letters 'I' and 'J'.

      1. Ia → aI      [a ∈ 𝒜]
      2. I → J
      3. aᵢJ → . aᵢ₊₁  [i < n]
      4. aₙJ → Ja₁
      5. J → . a₁
      6. ∅ → I

Command 6 prefixes the word f ∈ 𝒜° with 'I'. By iterated applications of command 1, 'I' goes to the end of f, and, by command 2, it changes to J. If the final letter of f is not aₙ, then command 3 finishes the work. Otherwise we apply command 4, and after this, command 3 or command 4 will be applicable. In case we must apply again and again command 4 (which means that each letter of f is aₙ), J goes back to the leftmost position, and command 5 closes the work.
Using the existence of this algorithm, and referring to the notion of enumerability (cf. Sect. 4.5), we have:
5.2.5. THEOREM. For all alphabets 𝒜, the class 𝒜° is enumerable.
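The successor algorithm can likewise be run mechanically; iterating it from the empty word reproduces the lexicographic enumeration claimed by Theorem 5.2.5. The encoding below (𝒜 = {a, b}, commands as triples, a repeated mini-interpreter) is my own sketch.

```python
def run(commands, word):
    """First-applicable-command interpreter for normal algorithms (Sect. 5.2)."""
    while True:
        for inp, out, stop in commands:
            i = word.find(inp)
            if i != -1:
                word = word[:i] + out + word[i + len(inp):]
                if stop:
                    return word
                break
        else:
            return None

A = ["a", "b"]                              # a1, a2 with n = 2
succ = (
    [("I" + a, a + "I", False) for a in A]  # 1. Ia -> aI   [a in A]
    + [("I", "J", False),                   # 2. I -> J  (I reached the end)
       ("aJ", "b", True),                   # 3. a_i J -> . a_{i+1}  [i < n]
       ("bJ", "Ja", False),                 # 4. a_n J -> J a_1  (carry)
       ("J", "a", True),                    # 5. J -> . a_1  (every letter carried)
       ("", "I", False)]                    # 6. -> I  (prefix the input word)
)

w, seen = "", [""]
for _ in range(6):
    w = run(succ, w)
    seen.append(w)
assert seen == ["", "a", "b", "aa", "ab", "ba", "bb"]
```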
Illustration 4. Let 𝒜 be as in Illustration 3. We produce a normal algorithm N[k] that transforms every word f ∈ 𝒜° into the word g containing exactly k copies of the letter aₙ and standing nearest after f (in the lexicographic ordering). Here k is a fixed positive integer. This algorithm uses the subsidiary letters I, J, K, K₀, K₁, ..., Kₖ.

N[k]:  1. Ia → aI       [a ∈ 𝒜]
       2. I → J
       3. aᵢJ → Kaᵢ₊₁   [i < n]
       4. aₙJ → Ja₁
       5. J → Ka₁
       6. aK → Ka       [a ∈ 𝒜]
       7. K → K₀
       8. Kᵢaⱼ → aⱼKᵢ   [j < n, i ≤ k]
       9. Kᵢaₙ → aₙKᵢ₊₁ [i < k]
      10. Kₖaₙ → aₙ
      11. Kₖ → . ∅
      12. Kᵢ → ∅        [i < k]
      13. ∅ → I

Commands 1 to 5 are almost the same as in Illustration 3, except that instead of the stop dot '.' we find the subsidiary letter 'K'. The work of the algorithm begins with command 13, which prefixes the input word f ∈ 𝒜° with 'I'. Commands 1 to 7 present the successor of f prefixed by 'K₀'. The next commands serve to count the occurrences of aₙ in the considered word. If their number is just k, command 11 closes the work. In the other cases, command 10 or 12 erases Kₖ or Kᵢ where i < k. Then, by command 13, we repeat the procedure, now applied to the successor of f. After a finite number of repetitions we surely find the word involving just k copies of aₙ.
The existence of this algorithm proves the Lemma in Sect. 4.5. This Lemma was used in showing that inductive classes are enumerable. Thus, our results in Sect. 4.5 are reinforced by referring to the algorithms of Illustrations 3 and 4.
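For k = 1 and 𝒜 = {a, b}, N[k] as reconstructed above can be exercised the same way; iterating it lists the words containing exactly one b, which is the content of Lemma 4.5.1. The single-character names '0' and '1' for the counters K₀ and K₁ are my own encoding, and the mini-interpreter is repeated so the snippet stands alone.

```python
def run(commands, word):
    """First-applicable-command interpreter for normal algorithms (Sect. 5.2)."""
    while True:
        for inp, out, stop in commands:
            i = word.find(inp)
            if i != -1:
                word = word[:i] + out + word[i + len(inp):]
                if stop:
                    return word
                break
        else:
            return None

# A = {a, b} (so a_n = 'b'), k = 1; the counters K_0, K_1 are written '0', '1'.
N1 = [
    ("Ia", "aI", False), ("Ib", "bI", False),  # 1. Ia -> aI   [a in A]
    ("I", "J", False),                         # 2. I -> J
    ("aJ", "Kb", False),                       # 3. a_i J -> K a_{i+1}  [i < n]
    ("bJ", "Ja", False),                       # 4. a_n J -> J a_1
    ("J", "Ka", False),                        # 5. J -> K a_1
    ("aK", "Ka", False), ("bK", "Kb", False),  # 6. aK -> Ka   [a in A]
    ("K", "0", False),                         # 7. K -> K_0
    ("0a", "a0", False), ("1a", "a1", False),  # 8. K_i a_j -> a_j K_i
    ("0b", "b1", False),                       # 9. K_i a_n -> a_n K_{i+1} [i < k]
    ("1b", "b", False),                        # 10. K_k a_n -> a_n  (overflow)
    ("1", "", True),                           # 11. K_k -> .  (exactly k copies)
    ("0", "", False),                          # 12. K_i -> (empty)  [i < k]
    ("", "I", False),                          # 13. -> I  (next round)
]

w, seen = "", []
for _ in range(4):
    w = run(N1, w)
    seen.append(w)
assert seen == ["b", "ab", "ba", "aab"]   # the words with exactly one 'b'
```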
5.3 Deciding Algorithms

In Sect. 4.5, we gave a definition of decidability of a class of words. Now, the decidability of a class can be effectively demonstrated by means of an appropriate normal algorithm. Assume that 𝒜 is an alphabet, 𝒜′ is an alphabet with 𝒜 ⊆ 𝒜′, F is a subclass of 𝒜°, w is a fixed 𝒜′-word, N is a normal algorithm over 𝒜′ applicable to all 𝒜-words, and for all f ∈ 𝒜°:

f ∈ F ⟺ N(f) = w.

Obviously, in this case F is a decidable subclass of 𝒜°, and N may be called the deciding algorithm of F.
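As a toy instance of this definition (my own example, not the book's): the class F of {a, b}-words containing an even number of a-s is decided by a normal algorithm with the fixed word w = "E". The subsidiary letters P and Q record the parity seen so far while the input is consumed from the left.

```python
def run(commands, word):
    """First-applicable-command interpreter for normal algorithms (Sect. 5.2)."""
    while True:
        for inp, out, stop in commands:
            i = word.find(inp)
            if i != -1:
                word = word[:i] + out + word[i + len(inp):]
                if stop:
                    return word
                break
        else:
            return None

# F = words over {a, b} with an even number of a's; fixed word w = "E".
N = [
    ("Pa", "Q", False), ("Qa", "P", False),   # an 'a' flips the parity
    ("Pb", "P", False), ("Qb", "Q", False),   # a 'b' leaves it unchanged
    ("P", "E", True), ("Q", "O", True),       # input consumed: announce parity
    ("", "P", False),                          # start in state P ("even so far")
]

assert run(N, "abab") == "E"   # two a's: f is in F, N(f) = w
assert run(N, "a") == "O"      # one a: N(f) differs from w
assert run(N, "") == "E"
```

N is applicable to every {a, b}-word, as the definition requires; the deciding question "f ∈ F ?" reduces to comparing the output with the fixed word "E".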
We give an example of a deciding algorithm. In Sect. 4.3, we met the alphabet of the maximal first-order language

𝒜MF = {(, ), ι, o, x, φ, π, =, ¬, ⊃, ∀}

and a canonical calculus defining the class of formulas, Form, of this language. We construct a normal algorithm N_MF which for all f ∈ (𝒜MF)° decides whether f ∈ Form. We need three subsidiary letters 'z', 'c', and 'q'; thus, N_MF will be an algorithm over the alphabet

𝔅 = 𝒜MF ∪ {z, c, q}.
N_MF:  1. ιι → ι
       2. xι → x
       3. ∀x → ∀z
       4. x → c
       5. φι → c
       6. oιφ → oφ
       7. oιc → oc
       8. ooφφ → oφ
       9. oocφ → oφ
      10. ooφc → oc
      11. oocc → oc
      12. ooc → c
      13. φoφ → φ
      14. φ → c
      15. πι → π
      16. πoc → q
      18. (c = c) → q
      19. ¬q → q
      20. (q ⊃ q) → q
      21. ∀zq → q
      22*. ∅ → . ∅
Our algorithm is applicable to all 𝔅-words and, hence, to all 𝒜MF-words. This is assured partly by the last command, which is trivially applicable to all words, and partly by the fact that none of our commands has an output longer than its input; moreover, except for commands 3, 4, 14, and 17, the outputs are shorter than the inputs, and these exceptional commands are applicable only in a finite number of cases. Hence, starting with a word f ∈ 𝔅°, we get sooner or later shorter words.
If the input word f is really a formula, then our algorithm will transform it into the letter 'q'. Let us check this statement. Command 1 reduces the connected occurrences of ι-s to a single ι. Command 2 erases ι after x, and command 5 replaces 'φι' by the subsidiary letter 'c' (thus, indexed variables and names disappear). Occurrences of x after '∀' will be changed to 'z', and its other occurrences to 'c', by commands 3 and 4. The remaining ι-s will be erased by commands 6, 7, and 15. Commands 8 to 13 reduce the arities of name functors and predicates (the final step for predicates is embedded into command 16). Commands 8, 13, and 16 erase the occurrences of 'o'. By commands 8, 10, 12, 13, and 14, occurrences of φ will disappear. If commands 1 to 16 are not applicable, the starting formula has been transformed into a 𝔅-word in which the letters 'o', 'ι', 'φ', and 'x' do not occur. Commands 15 to 18 transform the atomic subformulas into 'q', and 19 to 21 eliminate the characters '¬', '⊃', '∀', 'z', and the parentheses. Thus, the starting formula is transformed into 'q', and command 22* stops the work.
Now we have to prove the converse of the statement just verified: namely, that if N_MF(f) = q then f must be a formula. To this aim, let us consider 'z' as a variable, 'c' as a name functor of arity 0 (i.e., as a name), and 'q' as a predicate of arity 0 (i.e., as an atomic formula). By this enlargement of the mentioned categories, the notions of term and formula are enlarged, too. Let us call quasi-formulas the formulas taken in this extended sense. By checking the commands of N_MF, we see that by applying them in the converse direction, i.e., by interchanging the roles of input and output, we always get quasi-formulas from quasi-formulas. Thus, whenever C is a command of our algorithm and C(g) is a quasi-formula, then so is g. Hence, if N_MF(f) = q then f must be a quasi-formula. However, if f ∈ (𝒜_MF)°, it must be a formula. By these, we have:

for all f ∈ (𝒜_MF)°:  N_MF(f) = q  iff  f ∈ Form.
It is easy to modify our algorithm in such a way that non-formulas are transformed into a fixed word. For this purpose, we need three new subsidiary letters, say, α, β, and γ. Omit command 22* at the end of N_MF and add the following new commands:
22. βb → bβ   [b ∈ 𝔅]
23. αqβ →· q
24. α → γ
25. γb → γ   [b ∈ 𝔅]
26. γβ →· α
27. 0 → αβ
If none of the commands 1 to 21 is applicable to a word f ∈ 𝔅°, then command 27 prefixes it with 'αβ'. Then, by command 22, β goes to the end of the word, and if we get just 'αqβ' then command 23 stops the work. In the contrary case, commands 24, 25, and 26 erase the letters of f, yielding finally α. Thus, if the algorithm consisting of commands 1 to 27 is denoted by N*, we have: if f ∈ (𝒜_MF)° then

N*(f) = q if f ∈ Form, and N*(f) = α otherwise.
To give another example of a deciding algorithm, assume that K is a calculus over an alphabet 𝒜 consisting only of input-free rules of the form

(1)   → c₀x₁c₁x₂ ... x_kc_k   (k ≥ 1)

where x₁, ..., x_k are (different) K-variables and c₀, c₁, ..., c_k are 𝒜-words; c₀ and c_k may be empty, but the other ones (if any) must not be empty. We let

F = {f : f ∈ 𝒜° & K ⊩ f}.

We construct a deciding algorithm for the class F.
In case K consists of a single rule of form (1), the following algorithm with the subsidiary letters α, α₀, α₁, ..., α_k will suffice:

N[K]: 1. α_i c_i → c_i α_{i+1}   [0 ≤ i < k]
2. α_i a → a α_i   [a ∈ 𝒜, 0 < i ≤ k]
3. α_k c_k → c_k α_k
4. c_k α_k →· c_k α
5. α_i →· 0   [0 ≤ i ≤ k]
6. 0 → α₀

It is left to the reader to check that

N[K](f) = "fα" if f ∈ F, and N[K](f) = f otherwise.
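Viewed set-theoretically, the class F defined by one rule of form (1) is just the set of words matching the pattern c₀x₁c₁ ... x_kc_k with nonempty words substituted for the variables. A quick cross-check of this reading (our own sketch, with Python's re module standing in for the word pattern; the sample constants are hypothetical):

```python
import re

def in_F(f, cs):
    """cs = [c0, ..., ck]: does f match c0 x1 c1 ... xk ck,
    with a nonempty word for each variable xi?"""
    pattern = ".+".join(re.escape(c) for c in cs)
    return re.fullmatch(pattern, f) is not None

# Rule -> "a" x1 "bb" x2 "a": c0 = "a", c1 = "bb", c2 = "a"
print(in_F("axbbya", ["a", "bb", "a"]))   # True
print(in_F("abba",   ["a", "bb", "a"]))   # False: variables must be nonempty
```

The empty cases of c₀ and c_k are handled automatically: an empty constant contributes nothing between the `.+` gaps.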
If K contains more than one rule of form (1), we can construct a deciding algorithm for each of these rules. However, we can "unify" these algorithms into a single one. Assume that the number of rules is n > 1. Then their forms are as follows:

(1′)   → c_{j0}x₁c_{j1} ... x_{k(j)}c_{j,k(j)}   (1 ≤ j ≤ n, k(j) ≥ 1).

Now we need the subsidiary letters α, β_j, and α_{ji} for 1 ≤ j ≤ n and 0 ≤ i ≤ k(j). Our deciding algorithm is as follows:
N[K,n]: 1. α_{ji}c_{ji} → c_{ji}α_{j,i+1}   [1 ≤ j ≤ n, 0 ≤ i < k(j)]
2. α_{ji}a → aα_{ji}   [1 ≤ j ≤ n, 0 < i ≤ k(j), a ∈ 𝒜]
3. α_{j,k(j)}c_{j,k(j)} → c_{j,k(j)}α_{j,k(j)}   [1 ≤ j ≤ n]
4. c_{j,k(j)}α_{j,k(j)} →· c_{j,k(j)}α   [1 ≤ j ≤ n]
5. α_{ji} → β_j   [1 ≤ j < n, 0 ≤ i ≤ k(j)]
6. aβ_j → β_ja   [1 ≤ j < n, a ∈ 𝒜]
7. β_j → α_{j+1,0}   [1 ≤ j < n]
8. α_{ni} →· 0   [0 ≤ i ≤ k(n)]
9. 0 → α_{1,0}

It is left to the reader to check that

N[K,n](f) = "fα" if f ∈ F; otherwise N[K,n](f) = f.
5.4 Definite Classes
5.4.1. DEFINITION. Let 𝒜 be an alphabet and F ⊆ 𝒜°. We say that F is a definite subclass of 𝒜° iff there exist 𝔅, w, and N such that
(i) 𝔅 is an alphabet, 𝒜 ⊆ 𝔅, w ∈ 𝔅°,
(ii) N is a normal algorithm over 𝔅 applicable to all 𝒜-words, and
(iii) (f ∈ 𝒜°)(f ∈ F ↔ N(f) = w).
By our definition, definite classes are always decidable ones. What about the converse of this statement? If we have some decision procedure for a class of words, does there exist a normal deciding algorithm for that class? The positive answer is a consequence of the following statement:
5.4.2. MARKOV's Thesis. If 𝒜 is any alphabet and M is any algorithm over 𝒜, then there is a normal algorithm N such that M and N are equivalent with respect to 𝒜 (cf. Def. 5.1.1).

The content of this thesis was foreshadowed already in Sect. 5.1, where it was argued that no rigorous proof is possible for this statement.

Remark. It can be proved that Markov's Thesis is just as strong as Church's Thesis, according to which every effectively calculable function (in number theory) is a general recursive one. (See KLEENE 1952, 60; MENDELSON 1964, Ch. 5.)
Our next theorem prepares the way for proving that all definite classes are inductive ones.
5.4.3. THEOREM. Assume that N is a normal algorithm over an alphabet 𝔅, 𝒜 ⊆ 𝔅, and N is applicable to all 𝒜-words. Then there are 𝒞, K, and μ such that
(i) 𝒞 is an alphabet, 𝔅 ⊆ 𝒞, μ ∈ 𝒞−𝔅,
(ii) K is a canonical calculus over 𝒞, and
(iii) for all x ∈ 𝒜° and y ∈ 𝔅°,

N(x) = y  ⇔  K ⊩ xμy.

Property (iii) can be expressed succinctly as follows: calculus K represents algorithm N. Thus, our theorem can be re-phrased as:

If a normal algorithm is applicable to all words of an alphabet, then it is representable by a canonical calculus.
Proof. Let the commands of N be C₁, ..., C_n (in this order). Firstly, we construct a subcalculus for each command.

In defining the subcalculus K_i connected to command C_i (1 ≤ i ≤ n), we have to distinguish two cases:

(a) C_i is of the form "u_i → v_i" or "u_i →· v_i" where u_i ≠ 0. Assume u_i = "b₁ ... b_k" where b₁, ..., b_k ∈ 𝔅 and k ≥ 1. We need in K_i the subsidiary letters Δ_{i0}, Δ_{i1}, ..., Δ_{ik}, Δ_{i,k+1} and the variables x and y. Then:
K_i: i1. x → Δ_{i1}x
i2. xΔ_{i1}by → xbΔ_{i1}y   [b ∈ 𝔅−{b₁}]
i3. xΔ_{ij}by → xΔ_{i1}by   [b ∈ 𝔅−{b_j}, 1 < j ≤ k]
i4. xΔ_{ij}b_jy → xb_jΔ_{i,j+1}y   [1 ≤ j ≤ k]
i5. xΔ_{ij} → Δ_{i0}x   [1 ≤ j ≤ k]
i6. xu_iΔ_{i,k+1}y → xu_iyΔ_ixv_iy
Comment. We check whether C_i is applicable to the word x ∈ 𝔅°. We start with rule i1 and then apply i2 or i4 (putting 0 for x in the first case) to find the letter b₁. In case b₁ is not followed by b₂ (or, in general, b_j is not followed by b_{j+1}), we are looking for b₁ again, by rule i3. In the negative case, we get "Δ_{i0}x" by rule i5 (which expresses the fact that C_i is not applicable to x). In the positive case, we get by rule i6 "xΔ_iy" where y = C_i(x).
(b) C_i is of the form "0 → v_i" or "0 →· v_i". Then K_i consists of the single rule: x → xΔ_iv_ix.
Our planned calculus will be the union of the subcalculi K₁, ..., K_n and a subcalculus K₀ defined below. The subsidiary letters of K₀ are Δ_{i0}, Δ_i (1 ≤ i ≤ n), M, and μ. A new variable z (beside x and y) will be used, too.
The application of our calculus to a word x ∈ 𝒜° runs, intuitively, as follows. We try to apply K₁. If it is successful, i.e., if we get "xΔ₁y", let us use the following rule:

(i) xΔ₁y → xMy   if C₁ is not a stop command,
(ii) xΔ₁y → xμy   if C₁ is a stop command.

In case (ii), we are ready (we get "xμy" by a detachment). In case (i), we can repeat our procedure with y instead of x.

If the application of K₁ to x is unsuccessful, i.e., if we get "Δ_{10}x", we turn to K₂ with x. This step is regulated by the following rule:

Δ_{10}x, xΔ₂y → xMy

if C₂ is a non-stop command (with μ in place of M if C₂ is a stop command), provided K₂ is successfully applicable to x. In the contrary case, we must go to K₃; if we have good luck, then let us apply the following rule:

Δ_{10}x, Δ_{20}x, xΔ₃y → xZy

where Z is μ or M, according as C₃ is or is not a stop command. If C₃ is not applicable to x, we turn to K₄, and so on.
Now we define the calculus K₀ by the rules 1 to n+2 below. For each i from 1 to n, the letter Z in rule i is to be substituted by μ or M, according as C_i is or is not a stop command.

K₀: 1. xΔ₁y → xZy
2. Δ_{10}x, xΔ₂y → xZy
...
n. Δ_{10}x, ..., Δ_{n−1,0}x, xΔ_ny → xZy
n+1. xMy, yMz → xMz
n+2. xMy, yμz → xμz

Finally, we let K = K₀ ∪ K₁ ∪ ... ∪ K_n.
5.4.4. THEOREM. If 𝒜 is an alphabet and F is a definite subclass of 𝒜°, then F is an inductive subclass of 𝒜°. In short: Every definite class of words is an inductive one.

Proof. Assume F is a definite subclass of 𝒜°. Then, by Def. 5.4.1, there are 𝔅, w, and N such that 𝒜 ⊆ 𝔅, w ∈ 𝔅°, N is a normal algorithm over 𝔅 applicable to all 𝒜-words, and

(1) f ∈ F ⇔ N(f) = w.

By our preceding theorem, there are 𝒞, μ, and K such that 𝔅 ⊆ 𝒞, μ ∈ 𝒞−𝔅, and K is a calculus over 𝒞 representing N, i.e.,

f ∈ 𝒜° ⇒ (N(f) = g ⇔ K ⊩ fμg).

Thus, in case g = w:

(2) f ∈ 𝒜° ⇒ (N(f) = w ⇔ K ⊩ fμw).

Let us add to K the rule

xμw → x.

In this extended calculus K′ we obviously have:

(3) f ∈ 𝒜° ⇒ (K ⊩ fμw ⇔ K′ ⊩ f).

By (1), (2), and (3), we get:

f ∈ F ⇔ K′ ⊩ f,

which means that F is, in fact, an inductive subclass of 𝒜°.
COROLLARY 1. If F is a definite subclass of 𝒜° then so is 𝒜°−F; hence, both F and 𝒜°−F are inductive subclasses of 𝒜°.

Proof. An algorithm deciding F can easily be modified into one deciding 𝒜°−F; see the example in Sect. 5.3 about the modification of N_MF into N*.
COROLLARY 2. For any alphabet 𝒜, F is a definite subclass of 𝒜° iff both F and 𝒜°−F are inductive subclasses of 𝒜°.

Proof. If F is definite then, by Corollary 1, both F and 𝒜°−F are inductive. Now, turning to the converse, assume that both F and 𝒜°−F are inductive. Then, by Th. 4.5.3, F is decidable, i.e., there exists a procedure, an algorithm, for deciding membership in F. According to Markov's Thesis, this procedure can be replaced by a normal algorithm. Hence, F is definite. (Note that this was the first case where we had to exploit Markov's Thesis.)
5.4.5. THEOREM. For every alphabet 𝒜 there exists a class F ⊆ 𝒜° such that F is an inductive but not a definite subclass of 𝒜°. In short: There exist inductive but not definite classes of words.

Proof. By Corollary 2 of the preceding theorem, it is sufficient to show the existence of a class F ⊆ 𝒜° such that F is, but 𝒜°−F is not, inductive. This was shown in Corollary 2 of Th. 4.4.4, where the class Aut (definable over any alphabet) was used in the role of F.
Final comments. For a grammatical category of a formal language, it seems to be an indispensable criterion that it be a definite class. This holds especially for the category of declarative sentences. In formal languages, the expressions representing sentences are called, in most cases, formulas. (Cf. 𝒜_PL and 𝒜_MF in Sect. 4.3.) Thus, the class of formulas of a formal language is to be a definite class.

Now we are in the position to present a full system of logic by purely syntactic means. (Up to this point, we have used no semantic means.) It will have the form of a logical calculus. It will involve an inductive definition of the (syntactic) consequence relation holding between a class of formulas Γ and a formula A (expressible by "A is a consequence of Γ"). This will be the subject matter of the next chapter.
6.1. What is a Logical Calculus?
A logical calculus consists of a description of the common grammar of a family of languages and an inductive definition of the syntactic consequence relation. The adjective 'syntactic' is important here, for, in most cases, there is a possibility of defining a semantic consequence relation (in which case one speaks of a semantic logical system rather than of a logical calculus).

The usual notation for a syntactic consequence relation is of the form "Γ ⊢ A", where Γ is a class of formulas (of a certain language) and A is a formula. (The sign '⊢' is sometimes supplied with a subscript referring to the calculus, e.g., "⊢_QC".) It is usually read as "A is deducible from Γ", where Γ may be called the class of premises. (As we shall see later on, this relation is similar to the derivability relation "K ⊩ f" used in the canonical calculi.)
The base of the inductive definition of the relation "Γ ⊢ A" may include the definition of a class of formulas deducible from the empty class of premises. The formulas of this class may be called basic formulas of the calculus. In most cases, they are exhibited by means of schemata (called basic schemata). Furthermore, it is postulated that "Γ ⊢ A" holds whenever A ∈ Γ or A is a basic formula.

The inductive rules in the definition of "Γ ⊢ A" are the usual ones: they tell us how we can get "Γ ⊢ A" from deducibility relations already obtained. They are called rules of deduction or simply proof rules.

In general, the definition of "Γ ⊢ A" is divided into two parts: (a) the definition of the basic formulas, and (b) the definition of the rules of deduction (including that if A is a basic formula or a member of Γ then "Γ ⊢ A" holds).
There are different possibilities of defining "Γ ⊢ A" with the same outcome. To be more exact, let us assume that for a given family of languages we have two different definitions of the deducibility relation, say '⊢₁' and '⊢₂'. We say that these two relations are equivalent iff for all languages of the family, "Γ ⊢₁ A" holds iff "Γ ⊢₂ A" holds. In such a case we have, practically, different styles of formulation of the same relation, and different styles of construction or presentation of the same logical calculus.

Now, the literature of modern logic presents us with several different styles of formulating the same logical calculus. A possible variation lies in the choice of the basic formulas and the rules of deduction. In general, the fewer basic schemata we choose, the more rules we need. As a limiting case, the class of basic formulas may be empty (this is the case in systems of natural deduction). Another style is dominant in the systems of sequent calculus. (For the origin, see GENTZEN 1934.)
Our formulation of the classical first-order calculus QC follows the style introduced by Gottlob Frege (FREGE 1879). We shall define a class of basic formulas and a single rule of deduction.

Remark. In the literature of mathematical logic, the Fregean style (of formulating a logical system) is usually called Hilbert-style, forgetting the fact that it was Frege who invented the first logical calculus and formulated it just in this style.
6.2 First-Order Languages
We met a language belonging to the family of (classical) first-order languages in Sect. 4.3.2. This language was qualified as the maximal one among first-order languages for the reason that it contains an infinite supply of name functors and predicates for all arities (i.e., for all numbers of argument places). Other first-order languages, which are very useful in formulating exact theories, may contain fewer name functors and predicates, perhaps only a finite number of them. This is the reason for giving a general definition of first-order languages.

Of course, every first-order language is to be based on a certain alphabet 𝒜. We shall not determine in advance the letters of this alphabet, for it depends on the richness of the language. However, we shall give hints in the course of the following definition with respect to the letters of 𝒜.
6.2.1. DEFINITION. By a (classical) first-order language L¹ let us mean a five-component system of the form

L¹ = (Log, Var, Con, Term, Form)

where the components satisfy the following conditions (i) to (vii).

(i) For some alphabet 𝒜: Log, Var, Con, Term, and Form are definite subclasses of 𝒜°.

(ii) Log, Var, and Con are pairwise disjoint classes.

(iii) Log is the class of logical constants of L¹, namely:

Log = {(, ), -, ⊃, =, ∀};

the members of Log are called (in this order) left and right parentheses, the signs of negation, conditional, and identity, and the universal quantifier. We can assume that the members of Log are letters of 𝒜.

(iv) Var is the class of variables of L¹, containing an infinite supply of words. We can assume that 𝒜 involves two letters, x and ι, and that the variables are words of the form "xi" where i is an {ι}-word.
(v) Con is the class of (non-logical) constants of L¹. In general,

Con = N ∪ P,

where N is the class of name functors and P is the class of predicates of L¹; N ∩ P = ∅. We assume that to each member of Con an {o}-word is associated as its arity. If Con is finite, we can assume that each member of Con is a letter of 𝒜; in this case, arities need not belong to 𝒜° (it is sufficient to refer to arities in the metalanguage of L¹). In the contrary case, we can assume, similarly as in Sect. 4.3.2, that our constants are formed from some initial letters followed by arity words and (if necessary) indices ({ι}-words).
(vi) Term is the class of terms of L¹. In its inductive definition, we need the auxiliary categories T(a) for all arities a. Members of T(a) may be called a-tuples of terms. The simultaneous inductive definition of the T(a)-s and Term is as follows:

1. Var ⊆ Term.   2. T(0) = {0}.
3. (s ∈ T(a) & t ∈ Term) ⇒ "s(t)" ∈ T(ao).
4. (φ ∈ N & φ is of arity a & s ∈ T(a)) ⇒ "φs" ∈ Term.

(Note that if φ ∈ N is of arity 0 then φ ∈ Term, for φ0 = φ.)
(vii) Form is the class of formulas of L¹. Its inductive definition is as follows:

1. (π ∈ P & π is of arity a & s ∈ T(a)) ⇒ "πs" ∈ Form.
(In case a = 0, π ∈ Form, a formula representing an unanalyzed sentence.)
2. s, t ∈ Term ⇒ "(s = t)" ∈ Form.
3. A ∈ Form ⇒ "-A" ∈ Form.
4. A, B ∈ Form ⇒ "(A ⊃ B)" ∈ Form.
5. (A ∈ Form & x ∈ Var) ⇒ "∀xA" ∈ Form.

Formulas introduced by rules 1 and 2 are called atomic formulas.
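Clauses 2 to 5 of this definition can be mirrored directly by a recursive membership test. The following sketch (ours, not from the text) uses an ASCII rendering of a toy instance of the grammar, with '>' standing for the conditional and 'A' for the universal quantifier:

```python
# Toy instance of Definition 6.2.1 (vii), clauses 2-5:
# terms are the single letters in TERMS; formulas are built from
# "(s=t)" by "-", "(...>...)", and quantification "Ax...".

TERMS = {"x", "y", "z", "c"}     # x, y, z variables; c a name
VARS = {"x", "y", "z"}

def is_formula(w):
    # clause 2: atomic identity formulas "(s=t)"
    if len(w) == 5 and w[0] == "(" and w[2] == "=" and w[4] == ")":
        return w[1] in TERMS and w[3] in TERMS
    # clause 3: negation "-A"
    if w.startswith("-"):
        return is_formula(w[1:])
    # clause 5: universal quantification "AxB"
    if len(w) > 2 and w[0] == "A" and w[1] in VARS:
        return is_formula(w[2:])
    # clause 4: conditional "(A>B)"; split at the '>' of parenthesis depth 1
    if w.startswith("(") and w.endswith(")"):
        depth = 0
        for i, ch in enumerate(w):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == ">" and depth == 1:
                return is_formula(w[1:i]) and is_formula(w[i + 1:-1])
    return False

print(is_formula("Ax((x=c)>-(x=y))"))   # True
print(is_formula("(x=c)>"))             # False
```

The structure of the test makes visible that the definition is decidable: each clause strictly shortens the word it recurses on.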
6.2.2. Remarks. 1. In textbooks of logic we often find, instead of our prescriptions 2, 3, and 4 in (vi), the definition: If φ is an n-ary name functor and t₁, ..., t_n are terms, then "φ(t₁)...(t_n)" is a term. Such a definition is not objectionable provided the "dotting" in it is eliminable. Our definition of Term in (vi) shows just the eliminability of the dotting.

2. According to item 3 of (vi), the arguments of a functor must be surrounded by parentheses. This is necessary to avoid ambiguities. However, if the grammar of L¹ guarantees that a functor and its arguments are unambiguously recognized by their grammatical form, then these parentheses can be omitted. This was the case in the maximal first-order language (cf. 4.3.2).
3. The classes Log, Var, Con, Term, and Form can vary in the different first-order languages. Hence, in a more exact notation, we should write

(1) Log(L¹), Var(L¹), Con(L¹), Term(L¹), and Form(L¹).

However, the omission of the reference to L¹ can cause confusion only in cases where we are dealing simultaneously with more than one (concrete) language. Thus, in the usual cases, we do not apply the notation in (1). Moreover, we can assume that Log and Var are the same in all first-order languages. The carrier of the variability is the class Con. Note that Con may be empty; in this case Term = Var, and all formulas are built up from atomic ones of the form "(x = y)" where x, y ∈ Var.
4. The intuitive meaning of the logical constants and the categories of L¹ is the same as given in Sect. 4.3.2 for the maximal first-order language. Comparing with Sections 2.1, 2.2, and 2.3, we see that the grammatical and logical means of metalogic are almost totally included in first-order languages. Exceptions are the sentence functors conjunction, alternation, and biconditional, and the existential quantifier. However, these missing operations can be introduced in first-order languages via contextual definitions as follows:

(A & B) =df -(A ⊃ -B)   [Conjunction.]
(A ∨ B) =df (-A ⊃ B)   [Alternation.]
(A ≡ B) =df ((A ⊃ B) & (B ⊃ A))   [Biconditional.]
∃xA =df -∀x-A   [Existential quantification.]
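That the first three contextual definitions capture the intended truth-functional meanings can be verified mechanically. A small check (our own illustration; `impl` renders the conditional):

```python
# Verify the contextual definitions of &, v, == in terms of - and the
# conditional, over all truth-value assignments.
from itertools import product

def impl(a, b):       # the conditional A > B: false only when a and not b
    return (not a) or b

for a, b in product([True, False], repeat=2):
    conj = not impl(a, not b)          # (A & B) =df -(A > -B)
    disj = impl(not a, b)              # (A v B) =df (-A > B)
    # (A == B) =df ((A > B) & (B > A)), unfolding & by its definition:
    bicond = not impl(impl(a, b), not impl(b, a))
    assert conj == (a and b)
    assert disj == (a or b)
    assert bicond == (a == b)

print("contextual definitions agree with the intended truth tables")
```

The check for ∃xA =df -∀x-A is not truth-functional and so falls outside this table-based sketch.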
6.2.3. DEFINITION. Let L¹ = (Log, Var, Con, Term, Form) be as in the preceding definition. We introduce some grammatical relations in first-order languages.

(a) We say that B is a subformula of A iff A, B ∈ Form, and A is of the form "uBv" where u, v ∈ 𝒜° (either of them may be 0).

Remark. An inductive definition of this relation may run as follows: (i) If A, B ∈ Form and x ∈ Var, then A is a subformula of A, "-A", "(A ⊃ B)", "(B ⊃ A)", and "∀xA". (ii) If A, B, C ∈ Form, A is a subformula of B, and B is a subformula of C, then A is a subformula of C.
(b) If x ∈ Var and A ∈ Form, an occurrence of x in A is said to be a bound occurrence of x in A iff it lies in a subformula of A having the form "∀xB". Occurrences of x in A which are not bound ones in A are called free occurrences of x in A.

Remark. An inductive definition of these relations may begin by stating that every occurrence of x is a free one in an atomic formula, and is a bound one in "∀xA". The continuation is left to the reader.
(c) A term is said to be open iff it involves a variable, and it is said to be closed in the contrary case.

(d) A formula is said to be open iff it involves some (at least one) free occurrence of a variable, and it is said to be closed iff it is not open. By the free variables of an open formula let us mean the variables having some free occurrences in it.

(e) We say that a formula A is free from the variable x iff A involves no free occurrences of x. A class of formulas Γ is said to be free from the variable x iff for all A ∈ Γ, A is free from x.

(f) Where A ∈ Form and x, y ∈ Var, we say that y is substitutable for x in A iff whenever "∀yB" is a subformula of A, then B is free from x. A term t is said to be substitutable for x in A iff every variable occurring in t (if any) is substitutable for x in A.

(Some special cases: (i) x is always substitutable for x in A. (ii) If A is free from x, then any term t is substitutable for x in A. (iii) If t is a closed term, then t is always substitutable for x in A.)
(g) Assume that the term t is substitutable for the variable x in the formula A. Then the metalanguage expression

(2) [A]^{t/x}

denotes the formula obtained from A via replacing all free occurrences of x in A (if any) by t. The square brackets can be omitted in this notation if A is represented by a single variable. Note that the use of the notation in (2) always presupposes that t is substitutable for x in A.

(If A is free from x, then A^{t/x} = A. If t is closed, then A^{t/x} is always well-defined (i.e., it "exists").)
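Clause (g), together with the substitutability condition of clause (f), can be sketched on a small abstract syntax; the representation below is our own illustrative choice, not the book's:

```python
# Formulas as tuples: ("=", s, t), ("-", A), (">", A, B), ("all", x, A);
# terms are variable names or the constant "c".

def free_in(x, A):
    """Does x have a free occurrence in A?"""
    tag = A[0]
    if tag == "=":
        return x in A[1:]
    if tag == "-":
        return free_in(x, A[1])
    if tag == ">":
        return free_in(x, A[1]) or free_in(x, A[2])
    if tag == "all":
        return A[1] != x and free_in(x, A[2])

def subst(A, t, x):
    """A^{t/x}: replace the free occurrences of x in A by the term t."""
    tag = A[0]
    if tag == "=":
        return ("=", *(t if s == x else s for s in A[1:]))
    if tag == "-":
        return ("-", subst(A[1], t, x))
    if tag == ">":
        return (">", subst(A[1], t, x), subst(A[2], t, x))
    if tag == "all":
        if A[1] == x:                      # x is bound here: nothing to do
            return A
        if A[1] == t and free_in(x, A[2]):
            raise ValueError("t is not substitutable for x")  # capture!
        return ("all", A[1], subst(A[2], t, x))

# A = ((x = y) > Ay(x = y)); substituting the closed term c for x is safe,
# while substituting y for x would be refused (y gets captured by Ay).
A = (">", ("=", "x", "y"), ("all", "y", ("=", "x", "y")))
print(subst(A, "c", "x"))
# ('>', ('=', 'c', 'y'), ('all', 'y', ('=', 'c', 'y')))
```

The refusal branch corresponds exactly to the proviso in (f): the quantified variable of a subformula "∀yB" must not occur in t when x is free in B.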
6.2.4. Notation conventions. In connection with first-order languages, we shall use the metavariables A, B, C referring to formulas, x, y, z to variables, and s, t to terms. Outermost parentheses surrounding formulas will sometimes be omitted. We write "(A ⊃ B ⊃ C)" instead of "(A ⊃ (B ⊃ C))".
6.3 The Calculus QC
We shall denote by QC (Quantification Calculus) the version of the classical first-order calculus explained in the following two definitions.
6.3.1. DEFINITION: Basic formulas. Given a first-order language L¹, the class of its basic formulas BF is determined by the following two stipulations:

(i) If a formula has the form of one of the basic schemata (B1) to (B8) below, then it is a basic formula.

(B1) (A ⊃ (B ⊃ A))
(B2) ((A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C)))
(B3) ((-B ⊃ -A) ⊃ (A ⊃ B))
(B4) (∀xA ⊃ A^{t/x})
(B5) (∀x(A ⊃ B) ⊃ (∀xA ⊃ ∀xB))
(B6) (A ⊃ ∀xA)   provided A is free from x
(B7) (x = x)
(B8) ((x = y) ⊃ (A^{x/z} ⊃ A^{y/z}))

To get basic formulas from these schemata, A, B, C are to be substituted by formulas, x, y, z by variables, and t by terms of L¹.

(ii) If A ∈ BF and x ∈ Var, then "∀xA" ∈ BF.

Remark. It can be proved that BF is always a definite subclass of Form (and, even, of 𝒜°).
6.3.2. DEFINITION: Deducibility. Given L¹, Γ ⊆ Form, and A ∈ Form, we define by induction the relation "A is deducible from Γ", in symbols "Γ ⊢ A", as follows:

(i) If A ∈ Γ ∪ BF then Γ ⊢ A.
(ii) If Γ ⊢ (A ⊃ B) and Γ ⊢ A then Γ ⊢ B.

In case ∅ ⊢ A we say that formula A is provable (in QC), and we write briefly "⊢ A". Rule (ii) is called modus ponens (MP) or sometimes the rule of detachment.
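The two clauses of this definition describe an inductive closure, and for a finite stock of formulas that closure can be computed by forward chaining on modus ponens. A sketch (ours, not from the text; formulas are ASCII strings with '>' for the conditional):

```python
# Close a finite set of premises and basic-formula instances under MP.

def split_conditional(w):
    """Return (A, B) if w is "(A>B)" with '>' at depth 1, else None."""
    if not (w.startswith("(") and w.endswith(")")):
        return None
    depth = 0
    for i, ch in enumerate(w):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == ">" and depth == 1:
            return w[1:i], w[i + 1:-1]
    return None

def deducible(premises, basic):
    """Clause (i): start from premises and basic formulas;
    clause (ii): add B whenever (A>B) and A are already derived."""
    derived = set(premises) | set(basic)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            ab = split_conditional(f)
            if ab and ab[0] in derived and ab[1] not in derived:
                derived.add(ab[1])
                changed = True
    return derived

thms = deducible({"p", "(p>q)"}, set())
print("q" in thms)    # True, by modus ponens from p and (p>q)
```

Of course BF itself is infinite, so this is only a sketch of the induction, not a decision procedure for "Γ ⊢ A".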
Remark. In most textbooks of logic, our basic schemata are called axiom schemata, and our basic formulas axioms (of QC). This seems to be a wrong usage of the term axiom. For, in the generally accepted sense of the word, axioms are basic postulates of a scientific theory from which all theorems of the theory follow by means of logic. Are, then, the basic sentences axioms from which all theorems of QC (or what else) follow by means of logic? (Which logic?) Even the question is a confused one. The most we can say is that all provable formulas of QC follow from the basic sentences via applications of modus ponens. Do we identify the class of provable formulas with the theorems of QC? The latter notion is undefined; but the central notion in QC is the deducibility relation rather than provability. It is hard to find an acceptable reasoning in defence of the mentioned use of 'axiom'.
6.3.3. An intuitive justification of QC. We gave an intuitive interpretation of the sentence functors negation and conditional (denoted in first-order languages by '-' and '⊃', respectively) in Sect. 2.3, by referring to truth conditions. In Sect. 2.2, the meaning of the universal quantification was clarified as well. Now let us imagine a nonempty domain D of individual objects and assume that the members of Var (and Term) refer to members of D, so that "∀xA" says: "for all members x of D, A holds". Then one can check easily that, according to this intuitive interpretation, any formula of the forms (B1) to (B8) is always true (is a logical truth), even independently of the choice of the domain D. (However, if we are dealing with more than one formula, we must assume the same D with respect to all formulas being used.) Of course, in the cases of (B7) and (B8), we must exploit the meaning of identity as well. In addition, if A is always true then so is "∀xA". Hence, the members of BF are logical truths.

Furthermore, we see that the rule modus ponens leads from true formulas to a true one. Hence, if "Γ ⊢ A" holds in QC, and the members of Γ represent true sentences (with respect to a fixed domain D), then the formula A represents a true sentence (with respect to D). These considerations show that QC is really a logical calculus, a syntactic formulation of the consequence relation. We can use it with confidence in our reasonings.
6.3.4. THE CLASSICAL PROPOSITIONAL CALCULUS (PC). By a zero-order (or pure propositional) language L⁰ let us mean a three-component system (based on a certain alphabet) (Log₀, At₀, Form₀) where Log₀ = {(, ), -, ⊃}, At₀ is a nonempty class of words called atomic formulas, and Form₀ is defined by the two stipulations: (i) At₀ ⊆ Form₀, and (ii) (A, B ∈ Form₀) ⇒ ("-A", "(A ⊃ B)" ∈ Form₀). We met such a language in 4.3.1.

A logical calculus for zero-order languages is the (classical) propositional calculus, PC. It can be presented as a fragment of QC, based on the schemata (B1), (B2), (B3) and the rule of modus ponens. (Of course, the prescription (A ∈ BF ⇒ "∀xA" ∈ BF) is to be omitted here.) In 4.3.1, the canonical calculus K_PL defines just the logical truths of PC.
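Since the logical truths of PC are exactly the tautologies in '-' and '⊃', the validity of the schemata (B1) to (B3) can be confirmed by brute-force truth tables; a minimal check (our own):

```python
# Check that every instance of (B1)-(B3) is truth-table valid.
from itertools import product

impl = lambda a, b: (not a) or b       # the conditional

for A, B, C in product([True, False], repeat=3):
    assert impl(A, impl(B, A))                                  # (B1)
    assert impl(impl(A, impl(B, C)),
                impl(impl(A, B), impl(A, C)))                   # (B2)
    assert impl(impl(not B, not A), impl(A, B))                 # (B3)

print("(B1)-(B3) are truth-table valid")
```

Together with the truth-preservation of modus ponens, this gives the soundness half of the intuitive justification in 6.3.3, restricted to PC.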
PC is, in itself, a very weak system of logic. However, it is interesting as a fragment of QC. For, any first-order language has a zero-order fragment if we define At₀ as containing all the first-order atomic formulas and all formulas of the form "∀xA". Then all laws of PC are laws of QC as well. In proving PC-laws, we use only the basic schemata (B1), (B2), (B3), and the rule MP. This strategy will be applied in the next section.
6.4 Metatheorems on QC
It was noted already in Sect. 1.1 that the author assumes that the reader has some knowledge of classical first-order logic. Up to this point, this assumption was not formally exploited. In what follows, we shall give a list of metatheorems on QC without proofs, assuming that the reader is able to check (at least) the correctness of these statements. The particular style of presentation of QC given in this chapter, and the metatheorems listed below, will be essentially exploited in the remaining chapters of this essay. Hence, the reader's assumed familiarity with first-order logic does not make superfluous the explanations of the present chapter.
6.4.1. Metatheorems on PC. The first group of our theorems is based on the basic schemata (B1), (B2), and (B3) as well as the rule MP. Then, referring to 6.3.4, these laws may be called PC-laws. Thus, they will be numbered as PC.1, PC.2, .... Some of these will get a particular code-word, too. In the notation, Γ and Γ′ refer to classes of formulas, and A, B, C to formulas.

PC.1. (Γ ⊢ A, Γ ⊆ Γ′) ⇒ Γ′ ⊢ A.
PC.2. ({A, (A ⊃ B)} ⊆ Γ) ⇒ Γ ⊢ B.
PC.3. (Γ ⊢ (A ⊃ C)) ⇒ (Γ ∪ {A} ⊢ C).
PC.4. ⊢ (A ⊃ A).
PC.5. (DT) (Γ ∪ {A} ⊢ C) ⇒ (Γ ⊢ (A ⊃ C)). (Deduction Theorem; the converse of PC.3.)
PC.6. (Cut.) (Γ ⊢ A, Γ′ ∪ {A} ⊢ B) ⇒ (Γ ∪ Γ′ ⊢ B).
PC.7. (Γ ∪ {-A} ⊢ -B) ⇒ (Γ ∪ {B} ⊢ A).
PC.8. {--A} ⊢ A, and {A} ⊢ --A.
PC.9. (Cp.) (Γ ∪ {B} ⊢ A) ⇒ (Γ ∪ {-A} ⊢ -B). (The law of contraposition.)
PC.10. {A, -A} ⊢ B.
PC.11. {A, -B} ⊢ -(A ⊃ B).
PC.12. {(-A ⊃ A)} ⊢ A, and {A} ⊢ (-A ⊃ A).
PC.13. {-A} ⊢ (A ⊃ B), and {B} ⊢ (A ⊃ B).
PC.14. (Γ ∪ {A} ⊢ B, Γ ∪ {-A} ⊢ B) ⇒ Γ ⊢ B.
6.4.2. Laws of quantification. For the proofs of the following laws, one needs the basic schemata (B1) to (B6). In the notation, x and y refer to variables.

QC.1. (UG) If Γ ⊢ A, and Γ is free from the variable x, then Γ ⊢ ∀xA. Especially: {A} ⊢ ∀xA. (Universal generalization.)
QC.2. If y is substitutable for x in A, and A is free from y, then {∀xA} ⊢ ∀yA^{y/x} and {∀yA^{y/x}} ⊢ ∀xA. (Re-naming of bound variables.)
QC.3. If the name t (i.e., a name functor of arity 0) occurs neither in A nor in the members of Γ, and Γ ⊢ A^{t/x}, then Γ ⊢ ∀xA.
QC.4. {∀x∀yA} ⊢ ∀y∀xA.
QC.5. If Q is a string of quantifiers "∀x₁∀x₂ ... ∀x_n" (n ≥ 1), then {Q(A ⊃ B), QA} ⊢ QB. (A generalization of (B5).)
6.4.3. Laws of identity. Now we shall use the full list of our basic schemata (B1) to (B8). In the notation, s, s′, and t refer to terms.

QC.6. ⊢ (t = t).
QC.7. {(s = t), A^{s/x}} ⊢ A^{t/x}.
QC.8. {(s = t)} ⊢ (t = s).
QC.9. {(s = s′), (s′ = t)} ⊢ (s = t).
6.4.4. DEFINITION. Let A be an open formula, and let x₁, ..., x_n be an enumeration of all variables having free occurrences in A (say, in order of their first occurrences in A). Then, by the universal closure of A let us mean the formula "∀x₁ ... ∀x_nA". According to QC.4, the order of the quantifiers is unessential here.
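On a small abstract syntax (our own illustrative representation, not the book's), the universal closure can be computed by collecting the free variables in order of first free occurrence and prefixing the quantifiers:

```python
# Formulas as tuples: ("=", s, t), ("-", A), (">", A, B), ("all", x, A);
# terms are variable names or the constant "c".

def free_vars(A, bound=()):
    """Free variables of A, in order of first free occurrence."""
    tag, out = A[0], []
    if tag == "=":
        out = [s for s in A[1:] if s != "c" and s not in bound]
    elif tag == "-":
        out = free_vars(A[1], bound)
    elif tag == ">":
        out = free_vars(A[1], bound) + free_vars(A[2], bound)
    elif tag == "all":
        out = free_vars(A[2], bound + (A[1],))
    seen, ordered = set(), []
    for v in out:                      # deduplicate, keeping first occurrence
        if v not in seen:
            seen.add(v)
            ordered.append(v)
    return ordered

def universal_closure(A):
    C = A
    for x in reversed(free_vars(A)):   # innermost quantifier binds the last variable
        C = ("all", x, C)
    return C

A = (">", ("=", "x", "y"), ("all", "y", ("=", "y", "z")))
print(free_vars(A))   # ['x', 'y', 'z']  (y is free only in the antecedent)
```

By QC.4, any other ordering of the prefixed quantifiers would yield an interdeducible formula.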
6.5 Consistency. First-Order Theories
6.5.1. DEFINITION. Given a logical calculus Σ and a class of formulas Γ, we shall denote by "Cns_Σ(Γ)" the class of formulas deducible from Γ, i.e.,

Cns_Σ(Γ) = {A : Γ ⊢_Σ A}.

We shall be interested in the cases where Σ is PC or QC. Obviously:

Cns_PC(Γ) ⊆ Cns_QC(Γ).

We say that Γ is Σ-inconsistent iff Cns_Σ(Γ) = Form, i.e., iff every formula is deducible from Γ. Finally, Γ is said to be Σ-consistent iff it is not Σ-inconsistent.

Clearly, if Γ is PC-inconsistent then it is QC-inconsistent, too. Or, by contraposition, if Γ is QC-consistent then it is PC-consistent as well.

We know from PC.10 that a class of the form {A, -A} is PC-inconsistent. We could prove now that the empty class (or, what is the same, the class BF) is QC-consistent, but we shall get this result as a corollary later on.
6.5.2. THEOREM. Γ ∪ {A} is PC-inconsistent iff Γ ⊢ -A. The proof is left to the reader: use PC.12, PC.10, DT, and Cut.
6.5.3. THEOREM. If "(A ⊃ B)" ∈ Γ, and Γ is QC-consistent, then at least one of the classes Γ ∪ {-A}, Γ ∪ {B} is QC-consistent.

Proof (sketchily). Assume, indirectly, that both of the mentioned classes are inconsistent. Then, by the preceding theorem and PC.11, we have that

(i) Γ ⊢ A,  (ii) Γ ⊢ -B,  (iii) {A, -B} ⊢ -(A ⊃ B).

From (i) and (iii) we get Γ ∪ {-B} ⊢ -(A ⊃ B), by Cut. This and (ii) give, again by Cut, Γ ⊢ -(A ⊃ B), contradicting the assumption of the theorem.
6.5.4. THEOREM. If "-∀xA" ∈ Γ, Γ is QC-consistent, and the name t occurs neither in A nor in the members of Γ, then Γ ∪ {-A^{t/x}} is QC-consistent.

Proof (indirectly). If Γ ∪ {-A^{t/x}} is inconsistent, then Γ ⊢ A^{t/x} (by Th. 6.5.2) and Γ ⊢ ∀xA (by QC.3), contradicting the assumption of the theorem.
6.5.5. DEFINITION. The pair T = (L¹, Γ) is said to be a first-order theory iff L¹ is a first-order language and Γ is a class of closed formulas of L¹. The members of Γ are said to be the postulates (or axioms) of the theory T, and the members of Cns_QC(Γ) are called the theorems of T. The theory T is said to be inconsistent iff Γ is QC-inconsistent, and it is said to be consistent in the contrary case.

In the limiting case Γ = ∅, the theorems of the theory are the logical truths expressible in the language L¹. According to our intuitive interpretation given in 6.3.3, we believe that such a theory is a consistent one. (For a more convincing proof, we need some patience in waiting.)
In the next chapter, we shall introduce a first-order theory that will lead us to a
very important metatheorem on QC. In addition, we shall have an opportunity to show
the application of a canonical calculus in defining a first-order theory.
Chapter 7  The formal theory of canonical calculi (CC*)
7.1 Approaching Intuitively
Our aim in this chapter is to reconstruct the content of the hypercalculus H₃ (see 4.4.3) in the frame of a first-order theory. We shall call this theory CC*. (Here the star '*' refers to the fact that this is an enlarged theory of canonical calculi. The restricted theory of canonical calculi would be based on H₂ instead of H₃. We shall meet this in Ch. 8.)
The kernel of this reconstruction consists in transforming the rules of H₃ into (closed) first-order formulas which will serve as postulates of CC*.
The transformation procedure will be regulated by the following stipulations (i) to (viii).
(i) The subsidiary letters of H₃ are to be considered as predicates of the first-order language L₁* to be defined.
(ii) The variables of H₃ are to be replaced by first-order variables.
(iii) The letters of the alphabet
𝒜cc = {α, β, γ, ≺, ⋆}
are to be considered as names (i.e., name functors of arity 0) of L₁*.
(iv) 𝒜cc-words are to be considered as closed terms of L₁*. Hence, we would need a dyadic name functor in L₁* to express concatenation. However, we shall follow the practice used in metalogic instead, expressing concatenation by simple juxtaposition. To do so, we formulate an unusual rule for terms as follows: if s and t are terms, "st" is a term.
(v) As the subsidiary letters are considered as predicates, their arguments are to be arranged according to the grammatical rules of first-order languages: the arguments are to be surrounded by parentheses, and they have to follow the predicates.
(vi) In some rules of H₃, the invisible empty word occurs as an argument of some subsidiary letter. In the formulas of first-order languages, a predicate symbol must not stand "alone", without any arguments. Hence, we need a name representing the empty word; let it be 'ϑ'.
(vii) The arrows → in the rules are to be replaced by the sign of the conditional '⊃'. According to our convention,
"(A ⊃ B ⊃ C)" stands for "(A ⊃ (B ⊃ C))",
thus, we need no inner parentheses within the translation of a rule.
(viii) Finally, after applying (i) to (vii) to a rule, let us include the result between parentheses if it involves some '⊃', and prefix it by universal quantifiers binding all the free variables (if any) occurring in it. (In case of more quantifiers, their order is unessential, by QC.4.)
For example, the translations of rules 1, 13, and 16 are:
(1') I(ϑ),
(13') ∀x∀n(K(x) ⊃ R(n) ⊃ K(xn)),
(16') ∀x∀n∀n₁(V(x) ⊃ I(n₁) ⊃ …).
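The transformation (i) to (viii) is entirely mechanical, and its core can be sketched as a small string-building routine. In the Python illustration below, a rule is given by its premise formulas, its conclusion, and its free variables; the symbols 'A' (for the universal quantifier) and '>' (for the conditional) are ad hoc stand-ins for the book's notation, not part of it:

```python
# Sketch of stipulations (vii)-(viii): chain the premises and the
# conclusion with right-associated conditionals, parenthesize iff a
# conditional occurs, then prefix universal quantifiers.
def translate_rule(premises, conclusion, variables):
    body = conclusion
    for p in reversed(premises):       # right-associate: p1 > (p2 > ... c)
        body = p + " > " + body
    if premises:                       # parentheses only if '>' occurs
        body = "(" + body + ")"
    for v in reversed(variables):      # prefix the quantifiers
        body = "A" + v + body
    return body

# Rule 13 in this toy notation: from K(x) and R(n), infer K(xn).
print(translate_rule(["K(x)", "R(n)"], "K(xn)", ["x", "n"]))
# AxAn(K(x) > R(n) > K(xn))
```

An input-free rule (no premises, no variables) is returned unchanged, matching the treatment of rule 1.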
The hypercalculus H₃ consists of 34 rules. (The releasing rule 34* will be omitted here.) Thus, we get 34 postulates by transforming these rules into formulas. However, we need some other postulates in order to assure that the system of 𝒜cc-words should behave as a language radix (see Sect. 3.1). This means, in essence, that the postulates (R1) to (R6) are to be included into our planned theory.
After these preliminary explanations we can begin the systematic formulation of the theory CC*.
7.1.1. DEFINITION. The first-order theory of canonical calculi CC* is defined by
CC* = (L₁*, Γ*)
where L₁* is a first-order language based on the alphabet
𝒜₁* = {(, ), ϑ, ι, x, ~, ⊃, =, ∀, α, β, γ, ≺, ⋆, I, L, V, W, T, R, K, A, D, F, G, S},
L₁* = (Log, Var, Con*, Term*, Form*),
Con* = N* ∪ P*,
N* = {α, β, γ, ≺, ⋆},
P* = {I, L, V, W, T, R, K, A, D, F, G, S}
(here the members of N* are of arity 0, the predicates D, F, G in P* are dyadic, S is tetradic, and the other members of P* are monadic). The definition of Term*, Form*, and Γ* will be given later on.
The total definition of CC* will be given by an enormous canonical calculus Σ* (described in the next section). More exactly: Σ* will define the class of theorems of CC*. The basic alphabet of Σ* will be just 𝒜₁*, but we shall need several subsidiary letters printed in boldface in order to be distinguishable from the members of P* which were, originally, the subsidiary letters of H₃. As variables in Σ*, we shall use the letters x, y, t, u, v, w, and z. The full class of subsidiary letters of Σ* is:
{I, V, N, T, P, F, FR, S, BF}.
7.2. The Canonical Calculus Σ*
The first group of rules of Σ* (from 1 to 29 below) defines the grammar of L₁*. Its subsidiary letters are: I (index), V (variable), N (name), T (term), P (monadic predicate), and F (formula).
1. I
2. Ix → Ixι
3. Iy → Vxy
4. Nϑ
5. Nα
6. Nβ
7. Nγ
8. N≺
9. N⋆
10. Nx → Tx
11. Vx → Tx
12. Tx → Ty → Txy
13. PI
14. PL
15. PV
16. PW
17. PT
18. PR
19. PK
20. PA
21. Pu → Tx → Fu(x)
22. Tx → Ty → FD(x)(y)
23. Tx → Ty → FF(x)(y)
24. Tx → Ty → FG(x)(y)
25. Tx → Ty → F(x = y)
26. Tx → Ty → Tu → Tv → FS(x)(y)(u)(v)
27. Fx → F~x
28. Fx → Fy → F(x ⊃ y)
29. Vx → Fu → F∀xu
Our canonical calculus Σ* must include the full proof machinery of QC. In formulating the necessary rules, a crucial notion is the substitutability of a variable in a formula by a term. As a preparation of this notion, we define the relation "y is free from the variable x" by the rules 30 to 45 below, where the subsidiary letter-pair 'FR' represents this relation (in form of "yFRx").
30. Iy → xFRxιy
31. Vx → Iy → xιyFRx
32. Ny → yFRx
33. Py → yFRx
34. DFRx
35. FFRx
36. GFRx
37. SFRx
38. )FRx
39. (FRx
40. ~FRx
41. ⊃FRx
42. =FRx
43. Fu → uFRx → Vy → ∀yuFRx
44. Vx → Fu → ∀xuFRx
45. yFRx → zFRx → yzFRx
These rules tell us that any variable is free from any other variable (30, 31); names, predicates, and logical symbols, except '∀', are free from any x (32 to 42); if a formula u is free from x then so is "∀yu" (43); "∀xu" is always free from x (44); and if two words are free from x then so is their concatenation (45).
The following four rules regulate substitutions. The new subsidiary letter 'S' occurs here; the meaning of "vSuStSx" is: "we get v from u by substituting t for x".
46. Vx → Tt → tSxStSx
47. yFRx → ySyStSx
48. zSuStSx → wSvStSx → zwSuvStSx
49. Fu → Vy → vSuStSx → tFRy → xFRy → ∀yvS∀yuStSx
The crucial case is contained in rule 49: if we get v from u by substituting t for x, then we get "∀yv" from "∀yu" by the same substitution, provided t is free from y and x, y are different variables. (In case x = y, we get by 47 and 44 that "∀yu" remains intact at this substitution.) If these provisos are not fulfilled, the substitution is prohibited. This is essential in rules 53 and 57 below.
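The effect of rules 46 to 49 can be mimicked by a small recursive routine on strings. In the Python sketch below (an illustration under simplifying assumptions: 'A' plays the role of '∀', variables are single characters, and the formula syntax is ad hoc), substitution returns None exactly when the provisos of rule 49 would prohibit it:

```python
# Toy substitution with a capture check in the spirit of rule 49:
# substituting t for x under a quantifier "Ay..." is prohibited when
# t contains the bound variable y (t is not free from y).
def substitute(u, x, t, bound=()):
    """Substitute t for x in the word u; None signals a prohibited case."""
    if any(y in t for y in bound):          # t not free from a binder
        return None
    out = []
    i = 0
    while i < len(u):
        if u.startswith("A", i) and i + 1 < len(u):   # quantifier "Ay..."
            y = u[i + 1]
            rest = substitute(u[i + 2:], x, t, bound + (y,)) if y != x else u[i + 2:]
            if rest is None:
                return None
            return "".join(out) + "A" + y + rest
        out.append(t if u[i] == x else u[i])
        i += 1
    return "".join(out)

print(substitute("P(x)", "x", "ab"))     # P(ab)
print(substitute("Ay P(x)", "x", "y"))   # None  (y would be captured)
print(substitute("Ax P(x)", "x", "t"))   # Ax P(x)  (compare the case x = y)
```

The third call illustrates the remark above: when the substituted variable is the one bound by the quantifier, the formula remains intact.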
Now the proof machinery of QC is included in the rules 50 to 60 below. The new subsidiary letter-pair 'BF' stands for 'basic formula'.
50. Fu → Fv → BF(u ⊃ (v ⊃ u))
51. Fu → Fv → Fw → BF((u ⊃ (v ⊃ w)) ⊃ ((u ⊃ v) ⊃ (u ⊃ w)))
52. Fu → Fv → BF((~u ⊃ ~v) ⊃ (v ⊃ u))
53. Vx → Tt → Fu → vSuStSx → BF(∀xu ⊃ v)
54. Vx → Fu → Fv → BF(∀x(u ⊃ v) ⊃ (∀xu ⊃ ∀xv))
55. Vx → Fu → uFRx → BF(u ⊃ ∀xu)
56. Vx → BF(x = x)
57. Vx → Vy → Vz → Fu → vSuSxSz → wSuSySz → BF((x = y) ⊃ (v ⊃ w))
58. Vx → BFu → BF∀xu
59. BFu → u
60. u → (u ⊃ v) → v
We continue by enumerating, as input-free rules of Σ*, the special postulates of CC*, i.e., the formulas of Γ*. Here we shall apply some notation conventions in order to make easier the grasping of the content of the postulates. Namely:
(i) First of all, we shall apply the conventions of omitting parentheses (see 6.2.2).
(ii) Instead of the variables xι, xιι, xιιι, ... we shall write x₁, x₂, x₃, ....
(iii) "~(s = t)" will be abbreviated to "(s ≠ t)".
(iv) The symbols '&', '∨', '∃' will be used sometimes in the sense of the definitions given in 6.2.2, Remarks 4.
The first group of our postulates (from 61 to 81 below) will correspond to the language radix postulates (R1) to (R6), given in Sect. 3.1. Postulates (R1), (R2), and (R3) are already included into the notion of Term* (cf. the rules 11 and 12). Now we have to postulate that the empty word is different from all letters of 𝒜cc:
61. (α ≠ ϑ). 62. (β ≠ ϑ). 63. (γ ≠ ϑ). 64. (≺ ≠ ϑ). 65. (⋆ ≠ ϑ).
Further postulates concerning the empty word:
66. ∀x(xϑ = x)  67. ∀x(ϑx = x)
68. ∀x∀x₁((xx₁ = ϑ) ⊃ ((x = ϑ) & (x₁ = ϑ)))
Postulates 61 to 68 assure, among others, that the empty word has no "final letter". This is half part of (R4). Its other half is expressed in 69:
According to (R5), words terminating in different letters must not be identical. Concerning our five-letter alphabet 𝒜cc, this gives ten postulates (70 to 79). We write the first and the last of these:
70. ∀x∀x₁(xα ≠ x₁β)  79. ∀x∀x₁(x≺ ≠ x₁⋆)
From these ten postulates we get (by applying (B4) and substituting ϑ for x and x₁) that the letters of 𝒜cc are pairwise different. The remaining content of (R5) is included into 80 below:
Half part of postulate (R6) is included into 66 and 67. The other half is in 81 below:
The second group of our postulates contains the translations of the rules of the hypercalculus H₃:
82. I(ϑ)
83. ∀x(I(x) ⊃ …)
84. ∀x(I(x) ⊃ L(αx))
85. ∀x(I(x) ⊃ …)
86. …
87. ∀x∀x₁(W(x) ⊃ L(x₁) ⊃ W(xx₁))
88. T(ϑ)
89. ∀x∀x₁(T(x) ⊃ L(x₁) ⊃ T(xx₁))
90. ∀x∀x₁(T(x) ⊃ V(x₁) ⊃ T(xx₁))
91. ∀x(T(x) ⊃ R(x…))
92. ∀x∀x₁(T(x) ⊃ R(x₁) ⊃ …)
93. ∀x(R(x) ⊃ K(x))
94. ∀x∀x₁(K(x) ⊃ R(x₁) ⊃ K(x⋆x₁))
95. ∀x∀x₁∀x₂(L(x₂) ⊃ S(x₂)(x₂)(x₁)(x))
96. ∀x∀x₁ S(≺)(≺)(x₁)(x)
97. ∀x∀x₁(V(x) ⊃ I(x₁) ⊃ …)
98. ∀x∀x₁(V(x) ⊃ I(x₁) ⊃ …)
99. ∀x∀x₁(V(x) ⊃ W(x₁) ⊃ S(x₁)(x)(x₁)(x))
100. … S(x₃)(x₄)(x₁)(x) ⊃ …
101. ∀x(R(x) ⊃ …)
102. ∀x∀x₁(R(x) ⊃ K(x₁) ⊃ D(x₁⋆x)(x…))
103. ∀x∀x₁(R(x) ⊃ K(x₁) ⊃ D(x⋆x₁)(x…))
104. ∀x∀x₁∀x₂(R(x) ⊃ K(x₁) ⊃ … ⊃ D(x₁⋆x⋆x₂)(x…))
105. …
106. ∀x∀x₁∀x₂(D(x)(x₁) ⊃ D(x)(x₁≺x₂…) ⊃ T(x₂…) ⊃ D(x)(x₂))
107. … 108. ∀x… 109. ∀x… 110. ∀x…
111. ∀x F(x≺)(x⋆)
112. ∀x∀x₁(F(x)(x₁) ⊃ F(x⋆…)(x₁…))
113. … 114. ∀x∀x₁∀x₂(F(x)(x₁) ⊃ G(x₁)(x₂…))
115. ∀x∀x₁(D(x)(x₁) ⊃ G(x)(x₁) ⊃ A(x₁))
Here the list of rules of Σ* is finished. Now we can continue Def. 7.1.1 as follows:
Term* = {x : Σ* ⊩ Tx},
Form* = {x : Σ* ⊩ Fx},
Γ* = {61 ... 115}.
The last (irregular) identity is to be understood as saying that the members of Γ* are the formulas given by the input-free rules of Σ* from 61 to 115. We then have:
A is a theorem of CC* iff (Σ* ⊩ FA and Σ* ⊩ A) iff Γ* ⊢ A.
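The way Σ* generates theorems is ordinary forward chaining on words. The toy Python engine below illustrates just this mechanism; it works on ground rules only (the pattern variables of a real canonical calculus are deliberately omitted), so it is a sketch of the idea, not of Σ* itself:

```python
# Forward chaining for a toy canonical calculus: rules are pairs
# (premises, conclusion) over fixed words; derivation saturates the
# set of derived words until nothing new is produced.
def derivable(rules, goal, max_rounds=100):
    derived = set()
    for _ in range(max_rounds):
        added = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                added = True
        if not added:
            break
    return goal in derived

rules = [([], "Ia"), (["Ia"], "Iaa"), (["Iaa"], "Iaaa")]
print(derivable(rules, "Iaaa"))  # True
print(derivable(rules, "Ib"))    # False
```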
7.3 Truth Assignment
We shall introduce a truth assignment of the formulas of the theory CC*, i.e., we define a dichotomy of true and false formulas. In this definition, we shall refer sometimes to the transformation procedure described in the beginning of Sect. 7.1 under (i) to (viii), according to which the rules of H₃ are translated into formulas. It is obvious that this procedure can be applied to all words derivable in H₃. Let us denote by "Tr(f)" the translation of the word f where H₃ ⊩ f.
7.3.1. DEFINITION. We define inductively the truth (and the falsity) of formulas, members of Form*, as follows.
(a) Closed formulas.
(1) A closed formula of form "(s = t)" is true if after deleting the occurrences of 'ϑ' (if any) both in s and t the resulting words are literally the same; otherwise "(s = t)" is false.
(2) A closed atomic formula A which is not an identity is true if for some word f, H₃ ⊩ f and Tr(f) = A; otherwise A is false.
(3) "~A" is true if A is false, and it is false if A is true.
(4) "A ⊃ B" is false if A is true and B is false; in all other cases, "A ⊃ B" is true.
(5) "∀xA" is false if for some 𝒜cc-word t, A^{t/x} is false; otherwise it is true.
(b) Open formulas. An open formula is true if its universal closure (cf. Def. 6.4.4) is true; otherwise it is false.
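Clause (1) is a purely mechanical test and can be sketched directly. In the Python fragment below the character '0' is an ad hoc stand-in for the empty-word name:

```python
# Clause (1) of the truth assignment: a closed identity "(s = t)" is
# true iff s and t coincide after deleting every occurrence of the
# name of the empty word (written "0" here as a stand-in).
def identity_true(s, t, empty="0"):
    return s.replace(empty, "") == t.replace(empty, "")

print(identity_true("a0b", "ab"))   # True
print(identity_true("a0b", "ba"))   # False
```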
On the basis of this definition, the following statements CC.1 to CC.4 are almost trivially true (the detailed checking is left to the reader).
CC.1. All basic sentences of L₁* are true. (See the rules 50 to 58 of Σ* in the preceding section.)
CC.2. The postulates under 61 to 81 that refer to identity are all true.
CC.3. All postulates from 82 to 115 are true. (These are the translations of the rules of H₃.)
If both "A ⊃ B" and A are true, then B is true (according to (4) of the definition above). Then, using that MP is the single proof rule in QC, we have:
CC.4. Every theorem of CC* is true.
It is an open question whether all (closed) true formulas are theorems of CC*; we shall return to this question only at the end of the next chapter. However, we can show that the true closed atomic formulas are theorems of CC*. This will be detailed in three steps.
CC.5. If H₃ ⊩ f then Tr(f) is a theorem of CC*.
Proof: by induction with respect to derivations in H₃. Base: f is a rule of H₃. Then Tr(f) is one of the formulas in the list from 82 to 115 (of the rules of Σ*), i.e., it is a member of Γ*. Induction step (a): Assume that H₃ ⊩ g, Tr(g) = A, A is a theorem of CC*, and we get f from g by substituting certain 𝒜cc-words t₁, ..., t_k for certain H₃-variables z₁, ..., z_k in g. In A, these H₃-variables are replaced by some first-order variables x₁, ..., x_k. Then, we can assume (using QC.4 if necessary) that A is of form "∀x₁ ... ∀x_k B". Then, Tr(f) is the formula
B^{t₁/x₁, ..., t_k/x_k},
which is deducible from A by k applications of (B4), and, hence, is a theorem of CC*. Induction step (b): Assume that H₃ ⊩ g → f, H₃ ⊩ g where g involves no arrows, Tr(g → f) = "Q₁(B ⊃ A)", Tr(g) = "Q₂B" where Q₁ and Q₂ are (possibly empty) strings of quantifiers of form "∀x". Assume that these formulas are theorems of CC*. It is clear that by appropriate choosing of the variables, we can assume that Q₂ is a part of Q₁ (use, if necessary, QC.2), and, by using (B6), we can replace Q₂ by Q₁. Thus, we can assume that our formulas are of form
"Q₁(B ⊃ A)" and "Q₁B".
From these we get "Q₁A" by QC.5. To get Tr(f), which is of form "Q₃A", we apply (B4) and QC.2 (if necessary) to omit the superfluous quantifiers and re-name the bound variables. Thus, Tr(f) is a theorem of CC*.
CC.6. If A is a true closed atomic formula but not of form "(s = t)" then A is a theorem of CC*.
Proof. According to item (2) of our truth assignment, there is a word f such that H₃ ⊩ f and Tr(f) = A. Then, by CC.5, A is a theorem of CC*.
CC.7. If "(s = t)" is closed and true, it is a theorem of CC*.
Proof. According to item (1) of our truth assignment, we get from s and t the same term c by deleting the occurrences of 'ϑ' (if any). Now, "(c = c)" is obviously a theorem of CC* (see QC.6). The omitted occurrences of 'ϑ' can be placed back by using the postulates 66, 67, and the basic schema (B8). Hence, "(s = t)" is a theorem of CC*.
CC.8. If A is a true closed atomic formula then A is a theorem of CC*. This is the summary of the previous two statements.
The formula "(α = ϑ)" is obviously false, hence, by CC.4, it is not a theorem of CC*. Thus, not all formulas are theorems of CC*. In other words:
CC.9. Theory CC* is consistent. (Cf. Def.s 6.5.1 and 6.5.5.)
COROLLARY. The empty class of formulas, or, what is the same, the class BF of basic formulas, is QC-consistent.
7.4 Undecidability: Church's Theorem
Let us pose the question: Is it possible to find a procedure, an algorithm, by which we would be able to decide for every formula of L₁* whether it is a theorem of CC*?
If we had such a procedure then we would be able to decide, among others, for all formulas of form "A(t)", where t is a numeral, whether it is a theorem of CC*. Now, Γ* ⊢ A(t) iff H₃ ⊩ At (by CC.4 and CC.8). However, H₃ ⊩ At iff t ∈ Aut (cf. 4.4.3). Hence, in the presence of a decision procedure, we would be able to decide for every numeral (i.e., {α}-word) whether it is an autonomous one. By Th. 4.4.4, the class of non-autonomous numerals is not an inductive class. Then, by Th. 5.4.4 and its corollaries, Aut is not a definite class, and, according to Markov's Thesis (see 5.4.2), it is not a decidable one. Hence, the answer to our question turns out to be a negative one: no normal algorithm can decide theoremhood in CC*, and, if we accept Markov's Thesis, no decision procedure exists for the class of theorems of CC*. Summing up:
7.4.1. THEOREM. Theory CC* is undecidable in the sense that the class of its theorems is not a definite subclass of its formulas.
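The argument just given is a reduction, and its shape is worth isolating: any decision procedure for theoremhood would immediately yield one for autonomy. In the Python sketch below the oracle is a placeholder parameter (the theorem says precisely that no total such oracle can exist); only the mechanical character of the reduction is the point:

```python
# The reduction behind Th. 7.4.1: deciding theoremhood of "A(t)"
# would decide whether the numeral t is autonomous.
def is_autonomous(t, decides_theoremhood):
    return decides_theoremhood("A(" + t + ")")

# Whatever stub is plugged in for the oracle, the reduction itself
# is a trivial, effective transformation of the input:
print(is_autonomous("aaa", lambda formula: formula.startswith("A(")))  # True
```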
Since "A is a theorem of CC*" means the same as "Γ* ⊢ A", we get from the result above that in QC, no general procedure (or, at least, no normal algorithm) exists to decide the relation "Γ ⊢ A". Of course, we can imagine a general decision procedure as a schematic one that can be adjusted in some way or other to all particular first-order languages. To be more unambiguous, we can state that no decision procedure exists for the maximal first-order language (see 4.3.2 and the first paragraph of Sect. 6.2). For, this maximal language includes L₁*; hence, if we had a decision procedure for the former then it would be applicable for the latter as well.
Let us realize, furthermore, that "Γ* ⊢ A" tells the same as "⊢ P61 ⊃ ... ⊃ P115 ⊃ A", where P61, ..., P115 are the members of Γ* (the formulas in Σ* enumerated from 61 to 115), according to the Deduction Theorem (see PC.5). Thus, it follows from the undecidability of "Γ* ⊢ A" that the class of provable formulas of L₁* is undecidable. The same holds, a fortiori, for the class of provable formulas in the maximal first-order language. Summing up:
7.4.2. THEOREM. (Church's Theorem.) In QC, there exists neither a universal procedure (representable by a normal algorithm) for deciding the deducibility relation ("Γ ⊢ A") nor for recognising the provable formulas ("⊢ A").
This theorem was first proved by Alonzo Church (CHURCH 1936) in another way than
the one applied here.
Obviously, this undecidability theorem holds for all larger logical calculi including QC.
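The Deduction Theorem move used above, turning a deducibility claim with finitely many postulates into the provability of a single formula, is again a mechanical string operation. Sketched in Python (with '>' as an ad hoc sign for the conditional):

```python
# Folding a finite postulate class {P1,...,Pn} into one formula:
# "Gamma |- A" becomes the provability of P1 > (P2 > ... (Pn > A)...).
def fold_postulates(postulates, a):
    formula = a
    for p in reversed(postulates):
        formula = "(" + p + " > " + formula + ")"
    return formula

print(fold_postulates(["P61", "P62"], "A"))  # (P61 > (P62 > A))
```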
In some first-order languages there exist decision procedures for the deducibility relation; obviously, this does not contradict our theorem. For example, if a first-order language involves only names and monadic predicates as (nonlogical) constants then it is decidable. The same holds for zero-order (i.e., propositional) languages.
* * *
The investigations of the present chapter showed us an interesting example for the definition of a first-order theory by means of a canonical calculus and, in addition, presented a very important metatheorem on the first-order calculus QC.
8.1 The Formal Theory CC
In Sect. 7.3, we saw that every theorem of CC* is true (see CC.4). If the converse holds too (up to the present point, this question was not yet answered) then the identity
(1) {A : A is a true formula of CC*} = {A : A is a theorem of CC*}
holds true. Using that for any formula A, exactly one of the pair A, "~A" is true, it follows from (1) that for all formulas A, either A or "~A" is a theorem of CC*, or, as we shall express this property, the theory CC* is complete with respect to negation.
8.1.1. DEFINITION. A first-order theory T is said to be complete with respect to negation, briefly: neg-complete, iff for all formulas A of T, one of A, "~A" is a theorem of T.
An inconsistent theory is, trivially, neg-complete. Thus, the problem of neg-completeness is an interesting one only for consistent theories, such as CC*. Intuitively, we can say that a consistent and neg-complete theory grasps its subject matter exhaustively.
Since any consistent class of formulas can serve as a basis (postulate class) of a consistent theory, it is not surprising that many consistent first-order theories are not neg-complete. As we shall see later on, this is the case even with CC*, which means that identity (1) does not hold. Moreover, there are surprising cases of neg-incomplete theories: theories which are irremediably neg-incomplete in the sense that any consistent enlargement (with new postulates) of the theory remains neg-incomplete. Such a theory is especially interesting if we can give a truth assignment of its formulas according to which all its theorems are true, and, in addition, our intuition suggests that the postulates of the theory characterize exhaustively the notions represented by the constants of the language of the theory.
We shall give an example of such a surprising first-order theory. It will be a fragment of CC*; let us call it CC.
The intuitive background of CC* is the hypercalculus H₃. In CC, we shall rely on H₂ instead. Let us remember that H₂ defines the notion of a canonical calculus and the derivability in a canonical calculus. The additional notions defined in H₃ are the lexicographic ordering, the Gödel numbering, and the autonomous numerals. These considerations show that the theory CC that will be based on the hypercalculus H₂ is the smallest first-order theory of canonical calculi, whereas CC* is one of the possible enlargements of it. (However, CC* was useful in demonstrating the undecidability of QC.) According to our intuitions, H₂ regulates exhaustively the notions involved in it. This gives the (illusory) hope that the theory CC based on H₂ will be neg-complete.
The formulation of CC is simple enough: we get it by certain deletions from CC* (cf. Def. 7.1.1).
8.1.2. DEFINITION. The first-order theory CC is defined by
CC = (L₀, Γ₀)
where L₀ is a first-order language based on the 23-letter alphabet
𝒜₀ = {(, ), ϑ, ι, x, ~, ⊃, =, ∀, α, β, γ, ≺, ⋆, I, L, V, W, T, R, K, D, S},
L₀ = (Log, Var, Con₀, Term₀, Form₀),
Con₀ = N₀ ∪ P₀,
and N₀ = N* (cf. Def. 7.1.1), and
P₀ = {I, L, V, W, T, R, K, D, S}.
The definition of Term₀, Form₀, and Γ₀ will be given below by means of a canonical calculus Σ.
Remark. Up to this point, CC differs from CC* merely by the omission of the predicates A, F, and G. We shall see that Term₀ = Term*, and Form₀ ⊆ Form*.
Again, the full description of CC will be given by a canonical calculus Σ.
8.1.3. DEFINITION. The canonical calculus Σ is that fragment of Σ* (cf. Sect. 7.2) which we get from Σ* by omitting the rules 20, 23, 24, 35, 36, and 107 to 115.
In referring to the rules of Σ, we retain their original numbering given in Σ*. The omitted rules are just those involving the omitted predicates A, F, and G. The subsidiary letters and variables in Σ are exactly the same as in Σ* (see the end of Sect. 7.1). Continuing the definition:
Term₀ = {x : Σ ⊩ Tx} = Term*.
Form₀ = {x : Σ ⊩ Fx} ⊆ Form*.
Γ₀ = {61 ... 106} ∪ {SUD},
i.e., the members of Γ₀ are the formulas enumerated from 61 to 106 as input-free rules in Σ*, and a further postulate denoted by 'SUD' which will be given in the next chapter (in Section 9.2). Furthermore:
A ∈ Form₀ ⟹ (Γ₀ ⊢ A iff Σ ⊩ A).
The truth assignment defined in 7.3.1 can be applied to Form₀ (since Form₀ ⊆ Form*), of course, by referring to H₂ instead of H₃ in item (2). Thus, we can speak on true and false formulas of CC. Furthermore, the statements CC.1 to CC.9 (in Sect. 7.3) hold, mutatis mutandis, for CC as well. The truth of the additional postulate SUD will be shown in Section 9.2. Hence:
8.1.4. THEOREM. Theory CC is consistent.
8.2 Diagonalization
We know that any alphabet can be replaced by the two-letter alphabet {α, β} (see Th. 3.2.2). Let us consider the 32-letter alphabet consisting of 𝒜₀ together with the subsidiary letters of Σ, which is the full alphabet of the canonical calculus Σ. Given a translation of this alphabet into {α, β}, we can extend this to get a translation of Σ as a single word of the five-letter alphabet 𝒜cc = {α, β, γ, ≺, ⋆}, replacing the Σ-variables by γ, γγ, γγγ, ..., and the arrow → by '≺', and using '⋆' between the translations of the rules of Σ. Given this extended translation, let us denote the translation of a word f by [f]^, an 𝒜cc-word, where the square brackets will be omitted if f consists of a single letter or a metavariable.
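The two-letter replacement invoked from Th. 3.2.2 can be made concrete. In the Python illustration below, the i-th letter is coded by a marker letter followed by i + 1 strokes; the letters 'o' and '|' and the particular code are ad hoc choices, not the book's:

```python
# Coding an arbitrary alphabet into a two-letter one {"o", "|"}:
# the i-th letter becomes "o" followed by (i + 1) strokes, so "o"
# marks every letter boundary and the translation is injective.
def two_letter_code(alphabet):
    return {ch: "o" + "|" * (i + 1) for i, ch in enumerate(alphabet)}

def translate(word, code):
    return "".join(code[ch] for ch in word)

code = two_letter_code("abc")
print(translate("cab", code))  # o|||o|o||
```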
Now, Σ is translated into a single word of 𝒜cc, say
Σ^ = σ.
Then, in the hypercalculus H₂, the word "Kσ" is derivable. Moreover, if Σ ⊩ f, then f^ is derivable in σ, which means that
(1) H₂ ⊩ σDf^.
Now let us remember the translating function Tr introduced at the beginning of Sect. 7.3 which translates words derivable in H₃ into formulas of CC*. Let us restrict Tr to H₂, by which its applications result in CC-formulas. We then get:
Tr(σDf^) = D(σ)(f^) ∈ Form₀
where f^ is a closed term (i.e., an 𝒜cc-word). It follows from (1), by our truth assignment, that "D(σ)(f^)" is a true atomic formula, and, according to CC.8, it is a theorem of CC. Summing up:
8.2.1. LEMMA. If Σ ⊩ f, then Γ₀ ⊢ D(σ)(f^), where σ = Σ^.
Now assume that A ∈ Form₀, A^ = a, B = A^{a/x}, x ∈ Var, B^ = b (both a and b being 𝒜cc-words), B is a closed formula, and Γ₀ ⊢ B. (The stipulation that B is closed is, in fact, unessential. For, Γ₀ ⊢ B implies that B is true (cf. CC.4 in Sect. 7.3); and, if B is not closed, this means that its universal closure is true.) Then, the following words are derivable in Σ:
(2) FA, BSASaSx, B.
Hence, by our previous Lemma, the following atomic formulas are theorems of CC:
(3) D(σ)(F^a), D(σ)(bS^a[SaSx]^), D(σ)(b).
Then, obviously, their conjunction is a theorem of CC as well. Let us abbreviate this conjunction by "Diag_σ(a/x, b)":
(4) Diag_σ(a/x, b) =df (D(σ)(F^a) & D(σ)(bS^a[SaSx]^) & D(σ)(b)).
Here the term 'Diag' reminds us that the formula B is, in a sense, a diagonalization of A: we get B from A via substituting its own translation A^ for the variable x. The subscript 'σ' reminds us that our procedure depends essentially on the canonical calculus Σ (whose translation is σ). However, in what follows, we omit this subscript whenever no misunderstanding can arise by its omission.
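The diagonalization step itself is easy to mechanize. The Python sketch below uses a toy encoding (three-digit character codes) in place of the translation into the five-letter alphabet; only the shape of the construction, substituting a formula's own code for its free variable, is the point:

```python
# Toy diagonalization: B is obtained from A by substituting the code
# of A itself for the variable x. The encoding is an ad hoc stand-in
# for the translation A -> A^ used in the text.
def encode(formula):
    return "".join("%03d" % ord(c) for c in formula)

def diagonalize(a_formula, var="x"):
    return a_formula.replace(var, encode(a_formula))

print(diagonalize("D(x)"))  # D(068040120041)
```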
8.2.2. THEOREM. Whenever a and b are 𝒜cc-words, Diag_σ(a/x, b) is a theorem of CC iff for some A ∈ Form₀, A^ = a, [A^{a/x}]^ = b, and the (closed) formula A^{a/x} is a theorem of CC.
Proof. Half part of this statement is proved above. Concerning its other half, assume that Γ₀ ⊢ Diag_σ(a/x, b). Then, by CC.4 (cf. Sect. 7.3), "Diag_σ(a/x, b)" is true. With respect to its definition under (4), we have that all the atomic formulas in (3) are true, and, by CC.8, they are theorems of CC. According to our truth assignment, this means that the following words are derivable in H₂:
σDF^a, σDbS^a[SaSx]^, σDb.
Since σ is just the translation of Σ, a word is derivable in σ iff it is a translation of a word derivable in Σ. Thus, there must be words A and B such that the words in (2) are derivable in Σ. Then, really, B = A^{a/x}, and Γ₀ ⊢ B.
Now let us consider the open formula
A₀ = "∀x₁∀x₂ ~Diag(x/x₁, x₂)",
and assume that
A₀^ = a₀, E₀ = A₀^{a₀/x}, [E₀]^ = b₀.
By our preceding theorem:
(5) (Γ₀ ⊢ ∀x₁∀x₂ ~Diag(a₀/x₁, x₂)) ⟹ (Γ₀ ⊢ Diag(a₀/x, b₀)).
With respect to the basic schema (B4), we have:
(6) (Γ₀ ⊢ ∀x₁∀x₂ ~Diag(a₀/x₁, x₂)) ⟹ (Γ₀ ⊢ ~Diag(a₀/x, b₀)).
From (5) and (6) we get immediately:
(7) (Γ₀ ⊢ ∀x₁∀x₂ ~Diag(a₀/x₁, x₂)) ⟹ ((Γ₀ ⊢ Diag(a₀/x, b₀)) and (Γ₀ ⊢ ~Diag(a₀/x, b₀))).
Since CC is a consistent theory, we have that
(8) 'Diag(a₀/x, b₀)' is not a theorem of CC.
Then, by (5):
(9) '∀x₁∀x₂ ~Diag(a₀/x₁, x₂)' is not a theorem of CC.
However, we can show that this formula is true. For, let us consider the conjunction that is abbreviated by 'Diag(a₀/x₁, x₂)' (cf. (4)):
(10) (D(σ)(F^a₀) & D(σ)(x₂S^a₀[Sa₀Sx₁]^) & D(σ)(x₂)).
By (9), Σ ⊩ E₀ does not hold, thus (according to QC) Σ ⊩ A₀ cannot hold either. Hence, "D(σ)(b₀)" and "D(σ)(a₀)" are false atomic formulas.
According to our definitions of A₀ and E₀, the following words are derivable in Σ:
(Concerning the second word, take into consideration that x₁ is not a free variable of A₀.) Then, the following atomic sentences are true:
By the rules 30 to 49 (of Σ), the substitution of variables by terms, represented by the four-place subsidiary letter S, is uniquely determined in the sense that if Σ ⊩ vSuStSx then the word v is uniquely determined by the words x, t, and u. This means that the second conjunct of (10) is true iff
x₂ is replaced by b₀ and x₁ is replaced by x, or
x₂ is replaced by a₀ and x₁ is replaced by a variable other than x.
However, in both cases, the third conjunct of (10) is false (as we have seen above). Henceforth, (10), that is, 'Diag(a₀/x₁, x₂)', is false by any substitution of the variables x₁ and x₂; hence, its negation is "always" true; consequently, its universal closure
'∀x₁∀x₂ ~Diag(a₀/x₁, x₂)'
is true. Then, its negation is false. By CC.4, no false formula is a theorem of CC. Hence:
(11) '~∀x₁∀x₂ ~Diag(a₀/x₁, x₂)' is not a theorem of CC.
With respect to (9) and (11) (and Def. 8.1.1) it is proved that:
8.2.3. THEOREM. Theory CC is not complete with respect to negation. In other words: the class of theorems of CC is a proper subclass of the true formulas of CC.
8.3 Extensions and Discussions
To investigate the possible generalizations of Th. 8.2.3, let us look over carefully its proof. Very important here is the diagonalization procedure, i.e., the introduction of the schema "Diag_σ(a/x, b)". Since σ = Σ^, it is exploited here that CC is defined by a canonical calculus Σ. Reference to the hypercalculus H₂ is unavoidable. Furthermore, the truth assignment to Form₀ was exploited in the proof of Lemma 8.2.1, of Th. 8.2.2, and of the statements in (8) and (11) of the preceding section.
Now assume that T is a first-order theory including CC (in the sense that all theorems of CC are theorems of T). Assume, further, that T is defined by a canonical calculus K. Then, we can define the translation of K into the alphabet 𝒜cc; say, K^ = k. Retaining our truth assignment with respect to the formulas of CC, we can prove the analogue of Lemma 8.2.1: if K ⊩ f then "D(k)(f^)" is a theorem of T. Furthermore, we can introduce the diagonal schema "Diag_k(a/x, b)", using k instead of σ everywhere. Now, the proof of the analogue of Th. 8.2.2 is unproblematic, if we assume, in addition, that no false formulas of CC are theorems of T (from this, the consistency of T follows obviously). For, take into consideration that "Diag_k(a/x, b)" ∈ Form₀ (i.e., that it is a formula of CC), and, hence, if it is a theorem of T, it cannot be false. Also, the explanations from (5) to (11) in the preceding section, mutatis mutandis, remain correct. As a final consequence, we get that theory T is incomplete with respect to negation. Let us summarise these observations:
8.3.1. THEOREM. Let T be a first-order theory satisfying the conditions (i) to (iii) below:
(i) CC is a subtheory of T (every theorem of CC is a theorem of T).
(ii) The class of theorems of T is definable by means of a canonical calculus.
(iii) No false formula of CC (in the sense of the truth assignment defined in Sect. 7.3) is a theorem of T (hence, T is consistent).
Then: T is incomplete with respect to negation.
COROLLARY: Theory CC* (Ch. 7) is incomplete with respect to negation.
Remark. Condition (i) can be fulfilled by means of a translation procedure from the language of CC to the language of T satisfying some obvious provisos.
What may be the reason of the neg-incompleteness of CC? Our postulates translating the rules of H₂ seem to give an exhaustive report on canonical calculi and on derivations in them. However, we can be suspicious with respect to our postulates corresponding to the language radix postulates (R1) to (R6). We noted in Sect. 3.1 that the supply of these postulates is incomplete: they do not determine uniquely the class of 𝒜-words (where 𝒜 is an alphabet). We should have the further postulate (R7), but we abandoned it because it involves a quantification over classes. Thus, we can suspect that the neg-incompleteness of CC is caused by the incompleteness of our formulation of the notion of language radices.
However, we can try to formulate the content of (R7). Let us consider the following rule that can be added to Σ:
(1) Fx → tSxSϑSx → ySxSxαSx → zSxSxβSx → uSxSxγSx → vSxSx≺Sx → wSxSx⋆Sx → (t & ∀x(x ⊃ (y & z & u & v & w)) ⊃ ∀xx).
(Here x, y, t, z, u, v, w are Σ-variables.) To grasp more easily the content of this rule, let us assume that x is a monadic open formula having some free occurrences of the variable x; let us write "φ(x)" for x, and "φ(s)" for x^{s/x} where s is any term. Then we have that t, y, z, u, v, w are just the formulas
φ(ϑ), φ(xα), φ(xβ), φ(xγ), φ(x≺), φ(x⋆),
respectively. Then, the final output of the rule under (1) can be written as follows:
(2) φ(ϑ) & ∀x(φ(x) ⊃ (φ(xα) & φ(xβ) & φ(xγ) & φ(x≺) & φ(x⋆))) ⊃ ∀x φ(x).
Its content is: If φ is a monadic predicate which
(a) holds for the empty word, and
(b) whenever it holds for a word a, then it holds for all words we get from a by suffixing to it a letter of 𝒜cc,
then φ holds for all 𝒜cc-words.
This seems to be the content of (R7) applied to 𝒜cc. Is this correctly included in the rule (1)? The assumption that x is a monadic open formula involving x is missing from (1).
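A minimal way to see what schema (2) asserts is to run it. The Python sketch below (a two-letter stand-in alphabet; everything ad hoc) checks the base case (a) and the suffixing step (b) for a given predicate on all words up to a bound, which is exactly the ground the schema's conclusion covers:

```python
# Word induction, schema (2), made executable: check that phi holds
# of the empty word and survives suffixing any letter, on all words
# up to a given length bound.
from itertools import product

def premises_hold(phi, alphabet, max_len=4):
    if not phi(""):                              # (a) the empty word
        return False
    for n in range(max_len):
        for w in ("".join(p) for p in product(alphabet, repeat=n)):
            if phi(w) and not all(phi(w + ch) for ch in alphabet):
                return False                     # (b) fails at w
    return True

print(premises_hold(lambda w: w.count("a") >= 0, "ab"))  # True
print(premises_hold(lambda w: len(w) < 2, "ab"))         # False
```

The second call shows a predicate that holds of the empty word but is not preserved by suffixing, so the schema's antecedent fails for it.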
However, if x is free from x, the output of (1) is as follows:
x & ∀x(x ⊃ (x & x & x & x & x)) ⊃ ∀xx,
and this is a harmless logical truth. Hence, it is sufficient to assume that x is free from all variables other than x. This means that t, that is, φ(ϑ), is a closed formula.
Now, we can enlarge Σ by new rules to define closedness. Let C be a new subsidiary letter expressing the predicate 'is closed'. We need a list of input-free rules telling that the letters of 𝒜₀ (see 8.1.2), except x and '∀', are closed, e.g.,
C~, Cϑ, Cα, CL, ...;
this means 21 rules. Then, we finish by the following two rules:
Cx → Cy → Cxy,
Fu → Vx → vSuSϑSx → Cv → C∀xu.
Finally, we can include into (1) the input "Ct".
Despite all our efforts, enlarging Σ and CC in the way outlined above gives a theory to which Th. 8.3.1 is applicable, that is, the enlarged theory remains incomplete with respect to negation. Hence, we can suspect that the force of the schema (2) is, nevertheless, less than that of (R7). To understand this situation, take into consideration that a monadic predicate defines a subclass of 𝒜_CC (the class of words of which the predicate holds true). We can define as many subclasses of 𝒜_CC as there are monadic open formulas in the language L₀. Schema (2) deals just with such formulas. Now, it may happen that 𝒜_CC has more subclasses than there are monadic predicates expressible in L₀. However, the exact meaning of this conjecture can be explained only in set theory (see Ch. 10) where, in addition, its truth is provable as well.
It was Kurt Gödel who gave the first (non-trivial) example of a formal theory that is incomplete with respect to negation (GÖDEL 1931). He showed that if a theory includes the arithmetic of natural numbers, and no false formula of arithmetic is among its theorems, then the theory cannot be complete with respect to negation. This result is cited in the literature of metamathematics as Gödel's First Incompleteness Theorem. In Gödel's proof, a fragment of the theory of recursive functions played the role analogous to the role of Σ in our approach.
Theorem 8.3.1 is, obviously, an analogue of Gödel's First Incompleteness Theorem. Its peculiarity - which deserves some attention - is that it refers to no mathematical theory. Whereas Gödel's proof is arithmetically based, our approach is purely grammatically based. It conforms to the motto: Keep metalogic aloof from arithmetic (in general: from mathematics) as long as possible. - But this is not possible to the very last, as we shall see in Ch. 10.
In this chapter we shall show that although the consistency of the theory CC is expressible in its language by means of a formula, this formula is not a theorem of CC.

9.1 Preparatory Work
The consistency of CC can be expressed by the formula

(1) Cons₀ =df ∃x(D(σ)(F^x) & ¬D(σ)(x))

meaning that for some u, Σ ⊦ Fu but not Σ ⊦ u.

If both FA and A are derivable in Σ then (and only then) A is a theorem of CC. This leads us to define the schema of theoremhood

(2) Th₀(a) =df (D(σ)(F^a) & D(σ)(a)).

In what follows, the subscript '0' will be omitted both in (1) and (2).
9.1.1. LEMMA. If Σ ⊦ f → g → h, and the words f and g involve no arrows, then

(3) Γ₀ ⊦ D(σ)(f^) ⊃ D(σ)(g^) ⊃ D(σ)(h^).

Proof. By Lemma 8.2.1, it follows from our assumptions that

(4) Γ₀ ⊦ D(σ)(f^≺ g^≺ h^).

Rule 106 of Σ* is a postulate of CC (a member of Γ₀) from which we get by applications of the basic schema (B4) (of QC) that

(5) Γ₀ ⊦ D(σ)(f^) ⊃ D(σ)(f^≺ g^≺ h^) ⊃ T(f^) ⊃ D(σ)(g^≺ h^).

Furthermore, "T(f^)" is obviously true, and, hence, by CC.8 (see in Sect. 7.3),

(6) Γ₀ ⊦ T(f^).

We get from (4), (5), and (6) - by PC - that

(7) Γ₀ ⊦ D(σ)(f^) ⊃ D(σ)(g^≺ h^).

Again, we get from the postulate under 106 that

(8) Γ₀ ⊦ D(σ)(g^) ⊃ D(σ)(g^≺ h^) ⊃ T(g^) ⊃ D(σ)(h^),
(9) Γ₀ ⊦ T(g^).

From (7), (8), and (9) we get (3) by PC, which was to be proven.
We see that if h involves an arrow - i.e., if h is of the form "h₁ → h₂" - we can continue our proof to get "D(σ)(h₁^) ⊃ D(σ)(h₂^)" instead of "D(σ)(h^)", provided h₁ involves no arrow. Thus, we can extend our Lemma to rules of Σ containing more than two arrow-free inputs.
Furthermore, our result is independent of whether the rule in question is an original one - i.e., listed in the presentation of Σ - or is a derived rule of Σ. To mention just one derived rule of Σ which will be important in the following discussions, let us consider the following:

(10) Σ: Fu → vSuStSx → Fv.

Obviously, we get a formula from a formula via substitution. Thus, if the two inputs in (10) are derivable in Σ then the output "Fv" must be derivable in Σ as well.
9.2 The Proof of the Unprovability of Cons
Let us consider the following abbreviations (introduced partly in Sect. 8.2):

B₀ =df ∀x₁∀x₂¬…;  a₀ =df A₀^;
b₀ =df B₀^;
c₀ =df Diag(a₀/x, b₀);  C₀ =df c₀^.

Let us recall the main results of Sect. 8.2:

(1) (Σ ⊦ C₀) ⇔ (Σ ⊦ B₀),
(2) (Σ ⊦ C₀) ⇒ (Σ ⊦ ¬C₀),
(3) None of B₀, ¬B₀ is a theorem of CC.

Our proof will be detailed in several steps.
Step 1. Since Σ ⊦ …, we have by Lemma 8.2.1 that … We know that here the word b₀ is uniquely determined by the words a₀, a₀^, and x. Hence, the following conditional is true:

(4) …

To accept this formula as a theorem of CC, the following auxiliary postulate is sufficient:

(SUD) …

Since Γ₀ ⊦ K(σ), we have that (4) follows from SUD (Substitution Uniquely Determined) by QC; thus, (4) is a theorem of CC.

Remark. The introduction of the auxiliary postulate SUD was mentioned in Def. 8.1.3 already. We could formulate a more general version of SUD; however, the present version suffices for our aims. - If someone objects to SUD, he/she can omit it from Σ and Γ₀; the results of Chapter 8 remain correct without SUD as well. However, SUD is indispensable in the present chapter (except if you find a proof of (4) without using SUD; this possibility is not ab ovo excluded).
A particular case of (10) of the preceding section is:

Σ: FA₀ → vSA₀Sa₀Sx → Fv

(where v is a Σ-variable). Then, by Lemma 9.1.1:

(5) …

From (4) and (5) we get by PC:

(6) …

Here the antecedent is …, and the consequent yields "D(σ)(F^b₀) & D(σ)(b₀)", by the basic schema (B8) of QC. The latter formula is - by (2) of the preceding section - just "Th(b₀)". Hence:

Γ₀ ⊦ … ⊃ Th(b₀).

Then, by QC:

…

(taking into consideration that "Th(b₀)" is free from x, x₁, and x₂). Here the antecedent is just the negation of B₀. Thus, our final result is that:

(7) Γ₀ ⊦ ¬B₀ ⊃ Th(b₀).
Step 2. By (1), if one of C₀, B₀ is deducible in Σ then so is the other. Thus, we have the following derived rules:

…

Then, by Lemma 9.1.1 and PC, we get easily:

Γ₀ ⊦ D(σ)(b₀) ≡ D(σ)(c₀).

With respect to the definition of 'Th' (see (2) in the preceding section) we then have:

Γ₀ ⊦ Th(b₀) ≡ Th(c₀).

From this and (7) we get:

(8) Γ₀ ⊦ ¬B₀ ⊃ Th(c₀).
Step 3. By (2), if C₀ is derivable in Σ then so is its negation. This yields the derived rule:

…

Then, similarly as in the previous step, we have:

(9) Γ₀ ⊦ Th(c₀) ⊃ Th(¬^c₀).
Step 4. By PC, from a pair (of formulas) A, "¬A", any formula is deducible. Hence, we have the derived rule:

Σ: Fu → u → F¬u → ¬u → Fv → v.

Then, using the generalization of Lemma 9.1.1 and applying the definition of 'Th', we get:

…

From this it then follows by QC:

Γ₀ ⊦ (Th(c₀) & Th(¬^c₀)) ⊃ …

Here the consequent is exactly the negation of 'Cons' (see (1) in Sect. 9.1). Hence:

(10) Γ₀ ⊦ (Th(c₀) & Th(¬^c₀)) ⊃ ¬Cons.
Step 5. We get from (8), (9), and (10), by PC, that

Γ₀ ⊦ ¬B₀ ⊃ ¬Cons.

Or, by contraposition:

Γ₀ ⊦ Cons ⊃ B₀.

Hence, if 'Cons' were a theorem of CC then so would be B₀. By (3), B₀ is not a theorem of CC. Consequently, 'Γ₀ ⊦ Cons' does not hold. Our aim was just to prove this statement.
Our result can be extended to certain enlargements of CC. The conditions are the same as in Th. 8.3.1.
The metatheorem just proved is an analogue of Gödel's Second Incompleteness Theorem, which states that the consistency of Number Theory is unprovable - although expressible - within Number Theory.
Concluding remarks. We have finished our work on the pure syntactic means of metalogic. However, every system of logic is defective without a semantical foundation - at least according to the views of a number of logicians (including the author). Thus, if the question is posed, 'How to go further in studying metalogic?', the natural answer seems to be, 'Turn to the semantics!'. Now, the logical semantics which is best connected to our intuitions concerning the task and applicability of logic can be explained within the frames of set theory.
Set theory is a very important and deep discipline of mathematics. We need only a solid fragment of this theory in logical semantics (at least if we do not go far away from our intuitions concerning logic). Fortunately, set theory can be explained as a first-order theory. After studying its most important notions and devices, we can incorporate a part of this theory into our metalogical knowledge, and we can utilize it in developing logical semantics.

Our next (and last) chapter will give a very short outline of set theory as well as some insights on its use in logical semantics. We assume here (similarly as in Ch. 6) that the reader has taken (or will take) a more detailed course in this discipline - our explanations are devoted merely to giving a feeling of the continuity in the transition from syntax to semantics.
Let us mention that another field of logical semantics is algebraic semantics. This is foreign to the subject matter of the present essay, for, in the view of the author, it does not help us to understand the true nature and essence of logic. However, it is a very important and nowadays very fashionable field of mathematical logic, presenting interesting mathematical theorems about systems of logic.
Chapter 10 SET THEORY
10.1 Sets and Classes
10.1.1. Informal introduction.
The father of set theory was Georg Cantor (1845-1918). It became a formal theory (based on postulates) in the 20th century, due to the pioneering works of Ernst Zermelo and Abraham Fraenkel (quoted as 'Z-F Set Theory'). Further developments are due to Th. Skolem, J. v. Neumann, P. Bernays, K. Gödel and many other mathematicians. (On the works of Cantor, see CANTOR 1932.)
The intuitive idea of set theory is that some collections - or classes - of individual objects are to be considered as individual objects - called sets - which can be collected, again, into classes which might be, again, individuals, i.e., sets, and so on. Briefly: the operation of forming classes can be iterated; and classes which can be members of other classes are called sets. Thus, according to this intuition, sets are individualized classes. Then, an important task of set theory is to determine which classes can be individualized (i.e., considered as sets).
Now, formal set theory gives no answer to such questions as 'what are classes?' or 'what are sets?'. Its universe of discourse is the totality of sets, and most of its postulates deal with operations forming sets from given sets. There exist different formulations of (the same) set theory.
In most formulations, set theory is presented as a first-order theory whose single nonlogical constant is the dyadic predicate '∈' ('is a member of'), and the possible values of free variables are assumed (tacitly) to be sets. Thus, in the formula "x ∈ y", both x and y are sets; members of sets (if any) must be, again, sets. Moreover, identity of sets is introduced via definition:

(1) (a = b) ⇔df ∀x((x ∈ a) ≡ (x ∈ b)).

From this, "a = a" - and "∀x(x = x)" - is deducible; hence, the basic formula (B7) of QC is omitted. The same holds for (B8); instead of the latter, a postulate called the axiom of extensionality is accepted:

(a = b) ⊃ ∀x((a ∈ x) ⊃ (b ∈ x)).

In QC, "∃x(x = x)" follows from "∀x(x = x)". This means that the domain of individuals is not empty. Hence, we need no postulate stating that there are sets (for, in this approach, everything is a set).
In logical semantics, it is advantageous to assume that there are domains - i.e., sets - whose members are not sets but other types of individual objects (e.g., physical or grammatical objects). By this, we shall depart a little from the usual formulation of set theory sketched above. The main peculiarity of our approach is to permit individuals other than sets; these will be called primary objects, briefly: primobs. Of course, they will have no members. To differentiate between sets and primobs, we need a monadic predicate 's', where "s(x)" represents the open sentence 'x is a set', and "¬s(x)" tells that x is a primob. We cannot omit the identity sign '=' from the supply of our logical constants, for, if we try to apply the definition (1) to primobs, we get that all primobs are identical with each other. Thus, we shall use the full machinery of QC, retaining (B7) and (B8) as well. - Note that we shall not prescribe the existence of primobs, we want only to permit them.
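To make the contrast concrete, here is a small illustrative model of ours (not part of the theory): sets are represented by Python frozensets, primobs by plain atoms, and the names `is_set` and `members` are ad hoc. It shows why definition (1) would collapse all primobs into one object.

```python
# Illustrative model (not the book's formalism): sets as frozensets,
# primobs as plain atoms; only sets have members.
def is_set(x):
    return isinstance(x, frozenset)

def members(x):
    return x if is_set(x) else frozenset()   # primobs have no members

p, q = 'p', 'q'                              # two distinct primobs
a, b = frozenset({p}), frozenset({q})
assert members(a) != members(b)              # (1) distinguishes these sets
assert members(p) == members(q) and p != q   # but would merge all primobs
```

Since both primobs have the empty extension, an extensional identity criterion cannot tell them apart; this is why '=' is kept primitive.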
After these preliminary discussions let us return to the systematic explanation of
our set theory.
10.1.2. The language of set theory. To avoid superfluous repetitions, it is sufficient to fix that in the language of set theory, the class of nonlogical constants is

Con = {s, ∈}

where s is a monadic and ∈ is a dyadic predicate. No name functors are used - although several can be introduced via definitions. Thus, Term = Var.
Notation conventions in our metalanguage. We shall use lower-case Latin letters (a, b, c, x, y, z) as metavariables referring to the object language variables (x, x₁, x₂, ...). The logical symbols &, ∨, ≡, ∃ introduced via definitions (see 6.2.2, Remarks 4) will sometimes be used. The convention for omitting parentheses (see 6.2.4) will be applied, too. We write "(x ∈ y)" instead of "∈(x)(y)", and "(x ∉ y)" and "(x ≠ y)" instead of "¬(x ∈ y)" and "¬(x = y)", respectively. The expressions "φ(x)" and "ψ(x,y)" refer to arbitrary monadic and dyadic open formulas, respectively.
We do not want to list all postulates of set theory in advance. Instead, we shall present postulates, definitions and theorems alternately, giving a successive construction of the theory. Now, if Γₛ denotes the class of postulates of our set theory, we shall write "⊦ₛ A" instead of "Γₛ ⊦ A" in this chapter. Most theorems - including postulates - will be presented by open formulas; these are to be understood as standing for their universal closures.
10.1.3. First postulates:

(P0) ∃x(x ∈ a) ⊃ s(a).
(P1) s(a) ⊃ (s(b) ⊃ (∀x((x ∈ a) ≡ (x ∈ b)) ⊃ (a = b))).

(According to our conventions, (P0) stands for "∀a(∃x(x ∈ a) ⊃ s(a))", and (P1) is to be prefixed with '∀a∀b'.)
(P0) says that if something has a member, it is a set. (But it does not state that every set has a member.) By contraposition:

¬s(a) ⊃ ¬∃x(x ∈ a),

i.e., primobs have no members. - (P1) tells us that if two sets coincide in extenso (containing the same members) then they are identical. Here the conditions s(a) and s(b) are essential; without them all primobs would be identical.
Before going further, we shall extend our metalanguage.
10.1.4. Class abstracts and class variables. As in Sect. 2.5, we introduce class abstracts and class variables, with the stipulation that in the class abstract

{x: φ(x)},

φ(x) must be a monadic open formula of the language of set theory. Then, class abstracts and class variables (A, B, C) will be permitted in place of the variables in atomic formulas (that is, everywhere in a formula except in quantifiers), but these occurrences of class symbols will be eliminable by means of the definitions (D1.1) to (D1.6) below. Thus, the introduction of these new symbols does not cause an extension of our object language; it gives only a convenient notation in the metalanguage.

(Note that the class of monadic open formulas is a definite one; hence, the same holds for the class of class abstracts, which is the domain of the permitted values of our class variables.)
The six definitions below show how a class symbol is eliminable in atomic formulas:

(D1.1) a ∈ {x: φ(x)} ⇔df φ(a).
(D1.2) (A = B) ⇔df ∀x((x ∈ A) ≡ (x ∈ B)).
(D1.3) (a = A) ⇔df (A = a) ⇔df s(a) & ∀x((x ∈ a) ≡ (x ∈ A)).
(D1.4) (A ∈ b) ⇔df ∃a((a = A) & (a ∈ b)).
(D1.5) (A ∈ B) ⇔df ∃a((a = A) & (a ∈ B)).
(D1.6) s(A) ⇔df ∃a(a = A).

We get from (D1.3) that

⊦ₛ ¬s(a) ⊃ (a ≠ A)

(primobs are not classes).

By (D1.6), a class is a set iff it is coextensive with a set. Hence, "¬s(A)" means that the extension of A coincides with no set. In this case, A is said to be a proper class.
10.1.5. Proper classes. Set theory would be very easy if we could assume that every class is a set (as Cantor thought before the 1890's). As we know today, we cannot assume this without risking the consistency of our theory. Here follow the definitions of some interesting classes:

Ind =df {x: (x = x)}.
0 =df {x: (x ≠ x)}.
Set =df {x: s(x)}.
Ru =df {x: s(x) & (x ∉ x)}.

Ind and Set are the classes of individuals and of sets, respectively. 0 is the empty class. Ru is the so-called Russell class: the class of "normal" sets (which are not members of themselves). Except 0, all these are proper classes. It is easy to show this about Ru. - Assume, indirectly, that s(Ru). Then there is a set, say r, such that

∀x((x ∈ r) ≡ (s(x) & (x ∉ x))).

Then, by (B4) of QC, we get

(r ∈ r) ≡ (s(r) & (r ∉ r)),

which implies "¬s(r)", contradicting our indirect assumption. Hence:

(Th.1.1) ⊦ₛ ¬s(Ru),

i.e., Ru is a proper class. Since Ru ⊆ Set ⊆ Ind, we suspect that Set and Ind are proper classes, too. (Proof will follow later on.)
Thus, there "exist" proper classes. This (seemingly ontological) statement means merely: we cannot assume, without the risk of a logical contradiction, that for all monadic predicates "φ(x)" of the language of set theory there is a set a such that "∀x((x ∈ a) ≡ φ(x))" holds.

Remark. The Russell class Ru was invented by Bertrand Russell in 1901. (See e.g. RUSSELL 1959, Ch. 7.) The existence of proper classes was recognised (but not published) by Cantor some years earlier (see CANTOR 1932, pp. 443-450). These recognitions led to investigations of finding new foundations for set theory.
10.1.6. Further definitions and postulates. - From now on, our treatment will be very sketchy.

Let us introduce an abbreviation for the simplest class abstractions:

(D1.11) a° =df {x: (x ∈ a)}.

By (D1.3) and (D1.1) we have:

(a = a°) ≡ (s(a) & ∀x((x ∈ a) ≡ (x ∈ a))).

From this we get by QC:

(Th.1.2) (a = a°) ≡ s(a).

That is, any set a coincides with the class a°. Thus, all sets are classes (but not conversely). By this theorem, all definitions and theorems on classes hold for sets as well.

On the other hand, if a is a primob, a° has no members, and, by (D1.2), it coincides with the empty class 0:

(Th.1.3) ¬s(a) ⊃ (a° = 0).
In what follows, we shall use all notions and notations introduced for classes in Sect. 2.5 - see especially (4), (5), (7), (9), (10) and (11) in 2.5. In the case of sets, we speak of sub- and supersets instead of sub- and superclasses, respectively.

We define the union class of a class A - in symbols: "u(A)" - as follows:

(D1.12) u(A) =df {x: ∃y((x ∈ y) & (y ∈ A))}.

Note that by (P0), "(x ∈ y)" implies "s(y)". Thus, if no set is a member of A then u(A) = 0. Particularly:

¬s(a) ⊃ (u(a°) = 0), and u(0) = 0.

Now we can formulate two further postulates:

(P2) s({a, b}). [Axiom of pairs.]
(P3) s(a) ⊃ s(u(a°)). [Axiom of union.]

We omit the proof of the following consequences of these new postulates:

(Th.1.5) (s(a) & s(b)) ⊃ s(a° ∪ b°).

If a, b are sets, we can write "u(a)" and "a ∪ b" instead of "u(a°)" and "a° ∪ b°", respectively.
Let us introduce provisionally Zermelo's postulate:

(Z) s(a° ∩ A),

which will be a consequence of the postulate (P6) introduced in the next section. Its important consequences (without proof):

(Th.1.6) (s(c) & (B ⊆ c°)) ⊃ s(B).

It follows from (Th.1.6) that Set and Ind - as superclasses of Ru - are proper classes.
The set corresponding to 0 is uniquely determined and is called the empty set. In set theory, it represents the natural number 0; hence, we shall use '0' as its proper name. However, the use of '0' in formulas is eliminable by the following contextual definition:

(D1.13) φ(0) ⇔df ∃a(s(a) & φ(a) & ∀x(x ∉ a)).

We define the power class of a class A - in symbols: "po(A)" - by

(D1.14) po(A) =df {x: s(x) & (x° ⊆ A)}.

(The denomination is connected with the fact that if A has n members then po(A) has 2ⁿ members.) - Our next postulate:

(P4) s(a) ⊃ s(po(a°)). [Axiom of power set.]

This states that the power class of a set a is, again, a set, called the power set of a.
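For finite material, u(A) and po(A) can be computed directly. The following sketch (our own, with the ad hoc names `union_class` and `power_class`) checks the 2ⁿ counting remark on the set 2 = {0, 1}:

```python
from itertools import combinations

def union_class(A):
    """u(A): the members of the set-members of A (primobs contribute nothing)."""
    return frozenset(x for y in A if isinstance(y, frozenset) for x in y)

def power_class(A):
    """po(A): the class of all subsets of A."""
    xs = list(A)
    return frozenset(frozenset(c) for r in range(len(xs) + 1)
                     for c in combinations(xs, r))

zero = frozenset()                             # 0, the empty set
two = frozenset({zero, frozenset({zero})})     # 2 = {0, 1}
assert union_class(two) == frozenset({zero})   # u(2) = {0} = 1
assert len(power_class(two)) == 4              # a 2-membered set has 2^2 subsets
```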
Our next postulate - among others - excludes that a set could be a member of itself:

(P5) (a° ≠ 0) ⊃ ∃x((x ∈ a) & (x° ∩ a° = 0)). [Axiom of regularity.]

Its important consequences:

(Th.1.9) ⊦ₛ (a ∈ b) ⊃ (b ∉ a),
⊦ₛ (a ∉ a).

These mean that the relation '∈' is asymmetrical and irreflexive (cf. Def. 10.2.4).
10.2 Relations and Functions
10.2.1. Ordered pairs. An ordered pair (or couple) is an (abstract) object to which there is associated an object a as its distinguished (or first) member, and an object b as its contingent (or second) member. Such a pair (couple) is denoted by "(a,b)"; the case b = a is not excluded. This seems to be an irreducible primitive notion that can be regulated only by means of the postulate:

(1) ((a,b) = (c,d)) ⊃ ((a = c) & (b = d)).

However, in set theory, there is a possibility of representing (or modelling) ordered pairs. Within set theory, this representation has the form of a definition:

(D2.1) (a,b) =df {{a}, {a,b}}.

This definition satisfies the postulate under (1). Note that (a,a) is reducible to {{a}}. Furthermore, if a, b ∈ Ind then s((a,b)). - In what follows, we shall deal with ordered pairs only within set theory.
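The adequacy of (D2.1) - that it satisfies postulate (1) - can be machine-checked on a small universe; pairs are modelled as frozensets in this sketch of ours:

```python
from itertools import product

def pair(a, b):
    """Kuratowski pair (a,b) = {{a},{a,b}} of (D2.1)."""
    return frozenset({frozenset({a}), frozenset({a, b})})

# Postulate (1): (a,b) = (c,d) implies a = c and b = d.
for a, b, c, d in product(range(3), repeat=4):
    if pair(a, b) == pair(c, d):
        assert a == c and b == d

assert pair(0, 0) == frozenset({frozenset({0})})   # (a,a) reduces to {{a}}
```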
In defining classes of ordered pairs, we have to use class abstracts of the form

(2) {z: ∃x∃y((z = (x,y)) & ψ(x,y))}.

Let us agree to abbreviate (2) by

{(x,y): ψ(x,y)}.

The Cartesian product of the classes A and B - in symbols: "A × B" - is defined by

(D2.2) A × B =df {(x,y): (x ∈ A) & (y ∈ B)}.
(D2.3) A⁽²⁾ =df A × A.

It is easy to show that a° × b° ⊆ po(po(a° ∪ b°)). Then, by (Th.1.5), (P4), and (Th.1.6), we have that:

(s(a) & s(b)) ⊃ s(a° × b°).

The class of all ordered pairs, Orp, is

(D2.4) Orp =df Ind × Ind.
10.2.2. Relations. A class of ordered pairs is a possible (potential) extension of a dyadic predicate. Such a predicate expresses a relation (cf. Sect. 2.1). This is the reason that in set theory, subclasses of Orp are called relations (although, in fact, they are only potential extensions of relations). We introduce the metalogical predicate 'Rel' by

(D2.5) Rel(A) ⇔df A ⊆ Orp.

In the following group of definitions, let us assume that R is a relation.

(D2.6) xRy ⇔df (x,y) ∈ R;
Dom(R) =df {x: ∃y(xRy)},
Im(R) =df {y: ∃x(xRy)},
Ar(R) =df Dom(R) ∪ Im(R).

Here Dom(R), Im(R), and Ar(R) are said to be the first domain, the second or image domain, and the area or field, respectively, of the relation R. A relation R may be considered as a projection from Dom(R) to Im(R).

The restriction of a relation R to a class A - denoted by "R↓A" - is defined by

(D2.7) R↓A =df {(x,y): (x ∈ A) & xRy} = R ∩ (A × Im(R)).

If aRb holds, we can say that b is an R-image of a. The class of all R-images of a will be denoted by "R``{a}". We extend this notation to an arbitrary class A in the place of {a}:

(D2.8) R``A =df {y: ∃x((x ∈ A) & xRy)}.

If every member of Dom(R) has a single R-image then R is said to be a function. The metalogical predicate 'Fnc' is defined by

(D2.9) Fnc(R) ⇔df Rel(R) & ∀x∀y∀z((xRy & xRz) ⊃ (y = z)).
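The definitions (D2.6)-(D2.9) translate almost verbatim into executable form when a relation is taken as a finite set of pairs (the function names below are our own):

```python
def dom(R):
    return {x for (x, y) in R}              # Dom(R)

def im(R):
    return {y for (x, y) in R}              # Im(R)

def image(R, A):
    return {y for (x, y) in R if x in A}    # R``A

def is_function(R):
    # Fnc(R): every member of Dom(R) has a single R-image
    return all(y == z for (x, y) in R for (u, z) in R if x == u)

R = {(1, 'a'), (2, 'b'), (3, 'a')}
assert dom(R) == {1, 2, 3}
assert im(R) == {'a', 'b'}
assert image(R, {1, 3}) == {'a'}
assert is_function(R) and not is_function(R | {(1, 'b')})
```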
Now we can formulate Fraenkel's postulate of set theory:

(P6) Fnc(R) ⊃ s(R``a°)

telling that the R-image of a set is a set, provided R is a function.

Now, let "Id↓A" be the class {(x,x): x ∈ A}, the identity relation restricted to the class A. Obviously, Id↓A is a function. Then, by (P6), we have:

⊦ₛ s((Id↓A)``a°).

However, (Id↓A)``a° = a° ∩ A, hence:

⊦ₛ s(a° ∩ A),

which is exactly Zermelo's postulate (Z) in 10.1.6. - If R is a function, we can write "R(a) = b" instead of "aRb".
10.2.3. Further notions concerning relations. The following definitions are useful in logical semantics.

Changing the two domains of a relation R, we get its converse, denoted by "R˘":

(D2.10) R˘ =df {(x,y): yRx}.

The relative product of the relations R and S - denoted by "R|S" - is defined by

(D2.11) R|S =df {(x,y): ∃z(xRz & zSy)}.

A function is said to be invertible iff its converse is, again, a function.

The class of all functions from B into A - denoted by "ᴮA" - is defined by

(D2.13) ᴮA =df {f: s(f) & Fnc(f°) & (f° ⊆ B × A) & (Dom(f°) = B)}.

⊦ₛ (s(A) & s(B)) ⊃ s(ᴮA).
(Th.2.4) ⊦ₛ (⁰A = {0}).
⊦ₛ (A ≠ 0) ⊃ (ᴬ0 = 0).

1 =df {0}.

This is the set-theoretical representation of the natural number 1. Using it, (Th.2.4) can be written as

⊦ₛ (⁰A = 1).
10.2.4. DEFINITION. The relation R is said to be

reflexive iff ∀x((x ∈ Ar(R)) ⊃ xRx),
irreflexive iff ∀x(¬xRx),
symmetrical iff ∀x∀y(xRy ⊃ yRx),
antisymmetrical iff ∀x∀y((xRy & yRx) ⊃ (x = y)),
asymmetrical iff ∀x∀y(xRy ⊃ ¬yRx),
transitive iff ∀x∀y∀z((xRy & yRz) ⊃ xRz),
connected iff ∀x∀y(((x ∈ Ar(R)) & (y ∈ Ar(R)) & (x ≠ y)) ⊃ (xRy ∨ yRx)),
an equivalence iff it is both symmetrical and transitive,
a partial ordering iff it is reflexive, antisymmetrical, and transitive,
a linear ordering iff it is irreflexive, transitive, and connected.

Note that

(R is symmetrical and transitive) ⇒ R is reflexive,
(R is asymmetrical) ⇒ R is irreflexive,
(R is irreflexive and transitive) ⇒ R is asymmetrical.
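The first implication of the Note - a symmetrical and transitive relation is reflexive on its area - can be spot-checked with the same finite modelling (a sketch of ours, with ad hoc names):

```python
def area(R):
    return {x for p in R for x in p}      # Ar(R)

def reflexive(R):
    return all((x, x) in R for x in area(R))

def symmetrical(R):
    return all((y, x) in R for (x, y) in R)

def transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

R = {(1, 2), (2, 1), (1, 1), (2, 2)}
assert symmetrical(R) and transitive(R)
assert reflexive(R)    # forced: xRy and yRx give xRx by transitivity
```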
10.3 Ordinal, Natural, and Cardinal Numbers

The successor of an individual a - denoted by "a⁺" - is defined as follows:

(D3.1) a⁺ =df a° ∪ {a}.

Natural numbers are representable in set theory by the following definitions:

(D3.2) 0 =df 0, 1 =df 0⁺ = {0}, 2 =df 1⁺ = {0,1},
3 =df 2⁺ = {0,1,2}, 4 =df 3⁺ = {0,1,2,3},

and so on. Intuitively: any natural number n is the set of natural numbers less than n. Or: if a natural number n is defined already then the next natural number is defined as n⁺ = n ∪ {n}.

How can we define the class of all natural numbers? This work needs a series of further definitions.
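The von Neumann construction of (D3.2) can be sketched directly; `succ` below is our ad hoc name for the successor operation on frozensets:

```python
def succ(n):
    """n+ = n ∪ {n}."""
    return n | frozenset({n})

zero = frozenset()
nats = [zero]
for _ in range(4):
    nats.append(succ(nats[-1]))       # builds 1, 2, 3, 4

# Each natural number is the set of the smaller ones: 3 = {0, 1, 2}.
assert nats[3] == frozenset({nats[0], nats[1], nats[2]})
assert all(len(nats[n]) == n for n in range(5))
```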
10.3.1. Ordinal numbers.

We say that the relation R well-orders the class A - in symbols: "R.Wo.A" - iff R↓A is connected, its area is A, and every nonempty subset a of A has a singular member not belonging to Im(R↓a°). In detail:

(D3.3) R.Wo.A ⇔df ∀x∀y(((x ∈ A) & (y ∈ A) & (x ≠ y)) ⊃ (xRy ∨ yRx)) &
∀a(s(a) ⊃ ((a° ⊆ A) ⊃ ((a° ≠ 0) ⊃ ∃x((x ∈ a) & ¬∃y((y ∈ a) & yRx))))).

One can prove that if R.Wo.A then R↓A is a linear ordering.

(D3.4) A class A is said to be transitive iff

∀x((x ∈ A) ⊃ (s(x) & (x° ⊆ A))).

Let us denote by 'Eps' the relation 'is a member of', i.e.,

(D3.5) Eps =df {(x,y): (x ∈ y)}.

Now, all the sets 0, 1, 2, 3, 4 in (D3.2) are well-ordered by Eps and are transitive.

(D3.6) A class A is said to be an ordinal iff it is transitive and Eps.Wo.A.

Ordinals which are sets will be called ordinal numbers. Their class, On, is defined by

(D3.7) On =df {x: s(x) & (x° is an ordinal)}.
10.3.2. The following statements can be proved:

(1) Every nonempty class of ordinal numbers has a singular member with respect to the relation Eps.
(2) Every member of an ordinal is an ordinal number.
(3) On is an ordinal.
(4) Every ordinal other than On is a member of On.
(5) On is a proper class, i.e., ¬s(On).
(6) The successor of an ordinal number is an ordinal number (i.e., (α ∈ On) ⊃ (α⁺ ∈ On)).

We shall use lower-case Greek letters - α, β, γ - referring to ordinal numbers.

An ordinal number other than 0 may or may not be a successor of another ordinal number; if not, it is called a limit ordinal number. Now, the class of non-limit ordinal numbers, Onₛ, is defined by

(D3.8) Onₛ =df {α: (α = 0) ∨ ∃β((β ∈ On) & (β⁺ = α))};

On − Onₛ is the class of limit ordinal numbers.
10.3.3. Natural numbers. - In set theory, natural numbers are represented by those members of Onₛ which, starting from 0, are attainable by means of the successor operation. Thus, the definition of the class of natural numbers, ω, is as follows:

(D3.9) …

Now, ω is proved to be an ordinal. Hence, either ω = On, or else ω ∈ On. Mathematics cannot be devoid of the following postulate:

(P7) s(ω).

From this it then follows that ω is a limit ordinal number.

(D3.10) (α < β) ⇔df (α ∈ β); (α ≦ β) ⇔df ((α < β) ∨ (α = β)).

We omit the details of how the full arithmetic of natural numbers - including arithmetical operations - can be developed in the frames of set theory. The essence is that accepting set theory in metalogic, we can use the notions of arithmetic as well.
By induction on ω, we can define the notion of ordered (n+1)-tuple (n ≧ 2) as an ordered pair whose first member is an ordered n-tuple. We agree in writing

(a₁, ..., aₙ, aₙ₊₁) =df ((a₁, ..., aₙ), aₙ₊₁).

Similarly, we define for n > 0:

A⁽¹⁾ =df A,  A⁽ⁿ⁺¹⁾ =df A⁽ⁿ⁾ × A.
10.3.4. Sequences. Where α is an ordinal number, by an α-sequence let us mean any function defined on α, i.e., a member of ᵅInd. If s is an α-sequence, and β < α, then the s-image of β is called the β-th member of s. The usual notation:

⟨s_β⟩_β<α.

Sequences defined on a natural number (a member of ω) are called finite sequences. The single 0-sequence is 0. If ω ≦ α (α ∈ On), the α-sequences are called transfinite sequences, whereas ω-sequences are said to be (ordinary) infinite sequences.

If S = ⟨s_i⟩_i<n and T = ⟨t_i⟩_i<k are finite sequences (n, k ∈ ω) then their concatenation can be defined by

S ⌢ T = S ∪ {(n+i, x): (i,x) ∈ T}

(here '+' denotes arithmetical addition, which is definable in set theory).
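The concatenation formula above can be tried out with sequences modelled as sets of (index, value) pairs (an illustration of ours):

```python
def concat(S, T):
    """S ⌢ T = S ∪ {(n+i, x) : (i, x) ∈ T}, with sequences as sets of
    (index, value) pairs and n the length of S."""
    n = len(S)
    return S | {(n + i, x) for (i, x) in T}

S = {(0, 'a'), (1, 'b')}              # the 2-sequence  a, b
T = {(0, 'c')}                        # the 1-sequence  c
assert concat(S, T) == {(0, 'a'), (1, 'b'), (2, 'c')}
assert concat(set(), T) == T          # 0, the empty sequence, is a left identity
```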
10.3.5. Finite and infinite sets.

(i) Sets a, b are said to be equivalent or of equal cardinality - in symbols: "a ≈ b" - iff there exists an invertible function f such that Dom(f) = a, and Im(f) = b (or conversely). This relation is an equivalence on the class Set.

(ii) A set a is said to be

finite iff for some n ∈ ω, a ≈ n,
denumerably infinite iff a ≈ ω,
denumerable iff for some α ≦ ω, a ≈ α.

(iii) We say that set a is of smaller cardinality than set b iff for some b′ ⊆ b, a ≈ b′, but a ≈ b does not hold. As Cantor showed, every set a is of smaller cardinality than po(a°). Applying this theorem to ω we get that there are non-denumerable infinite sets (e.g., po(ω)).
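Cantor's theorem cited in (iii) rests on the diagonal argument; for a two-element set it can even be verified exhaustively (our own sketch, not the book's proof):

```python
from itertools import combinations, product

def power(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

a = [0, 1]
subsets = power(a)
# For every f : a -> po(a), the diagonal set d = {x : x ∉ f(x)} is missed
# by f, so no f is onto po(a): a is of smaller cardinality than po(a).
for images in product(subsets, repeat=len(a)):
    f = dict(zip(a, images))
    d = frozenset(x for x in a if x not in f[x])
    assert d not in f.values()
```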
10.3.6. Cardinal numbers. If α ∈ On, the class

A = {β: (β ∈ On) & (β ≈ α)}

has a minimal member α₀ that will be called the cardinal number of each member of A (hence, of α, too).

In general, an ordinal number α is said to be a cardinal number iff

∀β(((β ∈ On) & (β ≈ α)) ⊃ (α ≦ β)),

and if α is a cardinal number then it is said to be the cardinal number of any set a of which a ≈ α holds. (Thus, if a and b are of equal cardinality - in the sense of 10.3.5 (iii) - and they have a cardinal number then they have the same cardinal number.)

Does any set have a cardinal number? The investigations of Zermelo led to the result: if there exists an invertible function which well-orders a set a then there is an ordinal number α such that a ≈ α, and, hence, the cardinal number of a is the cardinal number of α. To prove that each set can be well-ordered, Zermelo needed the axiom of choice (AC), the final postulate of set theory:

(P8) (s(a) & ∀x((x ∈ a) ⊃ ∃y(y ∈ x))) ⊃ ∃f(Fnc(f) & (a° ⊆ Dom(f)) & ∀x((x ∈ a) ⊃ (f(x) ∈ x))).

This postulate is rarely used in logical semantics.

The arithmetic of ordinal and cardinal numbers coincides in the finite (natural numbers are cardinal numbers, too) but bifurcates in the infinite (the simplest example: ω⁺ ≈ ω; hence, ω⁺ is not a cardinal number).
Remark. We posed the question at the end of Sect. 8.3: Why is it impossible to enlarge the theory CC into a neg-complete first-order theory? Now, set theory gives an answer. The class 𝒜_C0 is denumerably infinite (if we assume that it is a set). Then, its subclass, the class of all monadic predicates definable in L₀, is a denumerable one. However, the truth domain of such a predicate is a subclass of 𝒜_C0. Then, the class of all possible truth domains is po(𝒜_C0), which is not a denumerable class (taking into consideration that 𝒜_C0 ≈ ω). That is, there are more possible extensions of monadic predicates than there are such predicates expressible in a first-order language.
If the postulates from (P0) to (P7) form a consistent system (which we hope but do not know) then it remains consistent by adding either (P8) or the negation of (P8). In this sense, (P8) is independent of the other postulates of set theory. The same holds for the so-called generalized continuum hypothesis (GCH), which is proved to be independent even from (P8); GCH tells that if α is an infinite cardinal number (ω ≦ α) then no cardinal number exists between α and the cardinal number of po(α). The case α = ω is the original hypothesis; its truth was believed by Cantor.
Remark. Our (sketchy) treatment of set theory follows the style of TAKEUTI & ZARING 1971 - except for the permission of primobs (which is missing in that work).
10.4 Applications
10.4.1. Inductive definltlens ,
We say that a class A is closed with respect to the relation R iff for some n >0 I A(n) c
Dom(R), and A.
Let a be a set, b ⊆ a, and let a be closed with respect to the relations R₁, ..., R_k
(k ≥ 1). A class C is said to be a (b, R₁, ..., R_k)-inductive class iff b ⊆ C, and C
is closed with respect to R₁, ..., R_k. By our assumption, a is a (b, R₁, ..., R_k)-
inductive set.
Now, the intersection of all (b, R₁, ..., R_k)-inductive subsets of a, i.e.,
C₀ =df {x: ∀c((c ⊆ a & c is (b, R₁, ..., R_k)-inductive) ⊃ x ∈ c)},
is obviously the smallest (b, R₁, ..., R_k)-inductive subclass of a. Since C₀ ⊆ a, we have
that C₀ is a set. We can say that C₀ is inductively defined by the conditions (b, R₁, ..., R_k),
where b is the base of the induction, and R₁, ..., R_k are the inductive rules (cf. Sect.
1.4). - Now we see how inductive definitions can be transformed into explicit defini-
tions in set theory - of course, if some conditions are fulfilled.
10.4.2. Reconstruction of syntax in set theory.
Let 𝒜 be a (finite) alphabet. We can assume that the members of 𝒜 are primobs in our
set theory. Since the members of 𝒜 can be enumerated, we get by (P2) and (P3) that 𝒜
is a set.
𝒜-words consisting of n letters can be represented as n-sequences of letters
(members of ⁿ𝒜). Concatenation of words can be expressed as concatenation of se-
quences (cf. 10.3.4). Then, the empty word is represented by the empty sequence,
i.e., by 0. Finally, the class of 𝒜-words can be defined by
𝒜* =df ∪{x: ∃n(n ∈ ω & x ∈ ⁿ𝒜)}.
By referring to the postulates (P3), (P6), and (P7), we get that 𝒜* is a set.
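As an aside, this representation of words can be tried out directly; tuples play the role of n-sequences here (an illustration of the idea, not the set-theoretic construction itself):

```python
# The empty word is the empty sequence, i.e. the empty tuple;
# concatenation of words is concatenation of sequences.
empty = ()

def concat(u, w):
    """Concatenation of two words represented as tuples of letters."""
    return u + w

word = concat(('a', 'b'), ('c',))   # the word 'abc'
```

The empty word behaves as the neutral element of concatenation, as required.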
Also, canonical calculi can be represented in set theory. We omit here the de-
tails, but we show an example of reconstructing a rule of a canonical calculus. Assume
that the rule in question is of the form
f₁, ..., f_k → g,
and that the variables occurring in it are replaced by the first-order variables x₁, ..., x_n.
Consider the function G defined on (𝒜*)(k) with Im(G) ⊆ 𝒜*:
G =df {(u₁, ..., u_k, v): ∃x₁ ... ∃x_n((x₁, ..., x_n ∈ 𝒜*) &
(u₁ = f₁′) & ... & (u_k = f_k′) & (v = g′))}
where f₁′, ..., f_k′, and g′ represent the words f₁, ..., f_k and g, respectively, in set the-
ory. - Now, 𝒜* is closed with respect to the function G. It then follows that if K is a
canonical calculus over 𝒜, the class of 𝒜-words derivable in K can be defined by an
inductive definition of set theory (in the sense of 10.4.1).
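For illustration, here is the inductive definition of the derivable words of a toy canonical calculus, with words as tuples of letters; the length bound (my addition) keeps the computation finite:

```python
def derivable(axioms, rules, max_len):
    """Words derivable in a toy canonical calculus; derivations are
    bounded by word length so that the loop terminates."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for w in list(derived):
                out = rule(w)
                if out is not None and len(out) <= max_len and out not in derived:
                    derived.add(out)
                    changed = True
    return derived

# Axiom: the empty word; rule: w -> 'a' w 'b'.  Derives the words a^n b^n.
K = derivable({()}, [lambda w: ('a',) + w + ('b',)], 6)
```

The derivable class is again a smallest inductive class: the axioms form the base, and each rule acts as an inductive rule.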
Not surprisingly, theory CC (see Ch. 8) can be embedded - by some transla-
tion - into set theory. It then follows that if set theory is consistent, it cannot be com-
plete with respect to negation.
10.4.3. First-order semantics (in a nutshell).
Let L = (Log, Var, Con, Term, Form) be a first-order language (cf. Sect. 6.2). By an
interpretation of L let us mean a couple Ip = (U, ρ) where U is a nonempty set and
ρ is a function with Dom(ρ) = Con satisfying the following conditions:
(i) If φ is a name (i.e., a name functor with arity 0) then ρ(φ) ∈ U.
(ii) If φ is a name functor of arity n > 0 then ρ(φ) is a function from U(n) to U.
(iii) If π is a predicate of arity 0 then ρ(π) ∈ 2 = {0, 1}.
(iv) If π is a predicate of arity n > 0 then ρ(π) is a function from U(n) to 2.
By a valuation of the variables - associated to Ip - let us mean a function v defined on
Var with values in U; the class of such valuations is denoted by V(U). If v ∈ V(U),
x ∈ Var, and u ∈ U, we denote by "v[x:u]" the valuation which
differs from v (at most) in v[x:u](x) = u. That is:
v[x:u](y) = v(y) if y is other than x, and v[x:u](x) = u.
We define the semantic value of any term and formula of L as determined by Ip and
v. The semantic value of an expression A, according to Ip and v, will be denoted by
"|A|_v^{Ip}". However, the superscript Ip will be omitted, for it is constant throughout in
the following inductive definition.
(1) x ∈ Var ⇒ |x|_v = v(x).
(2) If φ is a name functor of arity 0 then |φ|_v = ρ(φ).
(3) If s = (s₁, ..., s_n) is an n-tuple of terms (n ≥ 1) then |s|_v = (|s₁|_v, ..., |s_n|_v).
(4) If φ is a name functor of arity n > 0, and s is an n-tuple of terms, then
|φs|_v = ρ(φ)(|s|_v).
(5) If π is a predicate of arity 0 then |π|_v = ρ(π).
(6) If π is a predicate of arity n > 0, and s is an n-tuple of terms, then |πs|_v = ρ(π)(|s|_v).
(7) s, t ∈ Term ⇒ |(s = t)|_v = 1 iff |s|_v = |t|_v (and 0 otherwise).
(8) A ∈ Form ⇒ |¬A|_v = 1 − |A|_v.
(9) A, B ∈ Form ⇒ |(A ⊃ B)|_v = 0 iff |A|_v = 1 and |B|_v = 0 (and 1 otherwise).
(10) (x ∈ Var & A ∈ Form) ⇒ |∀x A|_v = 0 iff for some u ∈ U, |A|_{v[x:u]} = 0
(and 1 otherwise).
Let Γ be a class of formulas, Ip an interpretation, and v a valuation associated to Ip.
We say that the pair (Ip, v) satisfies Γ iff
∧A(A ∈ Γ ⇒ |A|_v^{Ip} = 1),
and we say that Γ is satisfiable iff there is a pair (Ip, v) that satisfies Γ. Furthermore,
we say that sentence A is a semantic consequence of Γ - in symbols: "Γ ⊨ A" - iff Γ
∪ {¬A} is not satisfiable. (In case Γ = ∅, we write "⊨ A", and say that A is a logical
truth (or a valid formula) of first-order logic.)
Metatheorems. (a) QC is sound with respect to first-order semantics in the
sense that Γ ⊢ A ⇒ Γ ⊨ A. - (b) QC is complete with respect to first-order se-
mantics in the sense that Γ ⊨ A ⇒ Γ ⊢ A.
Proof of (a) is simple enough, whereas the proof of (b) needs some work.
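The clauses (1)-(10) are directly executable over a finite universe U. A sketch for a small fragment (atomic formulas with variable arguments, negation, the conditional, and the universal quantifier); the tuple encoding and all names are illustrative, not the book's:

```python
def value(A, U, p, v):
    """Semantic value |A|_v in the interpretation (U, p), following the
    clauses for atomic formulas, negation, the conditional and the
    universal quantifier (a fragment of the language only)."""
    op = A[0]
    if op == 'atom':                      # ('atom', predicate, variable)
        return p[A[1]](v[A[2]])
    if op == 'not':
        return 1 - value(A[1], U, p, v)
    if op == 'imp':                       # 0 iff antecedent 1, consequent 0
        return 0 if (value(A[1], U, p, v), value(A[2], U, p, v)) == (1, 0) else 1
    if op == 'all':                       # 0 iff some u in U falsifies the body
        x, body = A[1], A[2]
        return 0 if any(value(body, U, p, dict(v, **{x: u})) == 0 for u in U) else 1
    raise ValueError(op)

# U = {0, 1, 2}, P = "is even": the formula Ax(P(x) > P(x)) is true here,
# while Ax P(x) is not.
U, p = {0, 1, 2}, {'P': lambda u: 1 if u % 2 == 0 else 0}
```

The `dict(v, **{x: u})` copy models the modified valuation v[x:u] of the text.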
* * *
Closure. From now on, the definition of a logical system, after describing its language
(or a family of languages), can be continued by formulating its semantics, using set-
theoretic notions and operations. (This means that a part of set theory is included in
our metalogic.) Later on, we can investigate whether there exists a logical calculus
which is sound - and perhaps complete - with respect to our semantical system. This is
the semantics-motivated way of logical investigations.
Georg Cantor, Gesammelte Abhandlungen (hrsg. E. Zermelo). Berlin.
Rudolf Carnap, Introduction to Semantics and Formalization of Logic. Harvard Univ.
Press, Cambridge, Mass.
Alonzo Church, "A note on the Entscheidungsproblem." The Journal of Symbolic
Logic 1, 40-41,101-102.
S. Feferman, "Inductively presented systems and the formalization of metamathemat-
ics." In: D. van Dalen et al. (eds.), Logic Colloquium '80. North Holland, Amsterdam.
FREGE 1879
Gottlob Frege, Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache
des reinen Denkens. Halle.
Gerhard Gentzen, "Untersuchungen über das logische Schliessen." Mathematische
Zeitschrift 39, 176-210, 405-431.
GÖDEL 1931
Kurt Gödel, "Über formal unentscheidbare Sätze der Principia Mathematica und ver-
wandter Systeme." Monatshefte für Mathematik und Physik 38, 173-198.
David Hilbert, "Die Grundlagen der Mathematik." Abhandlungen aus dem Mathema-
tischen Seminar der Hamburgischen Universität, Bd. VI (1928), 65-85.
S. C. Kleene, Introduction to Metamathematics. Van Nostrand, Princeton/Amsterdam.
A. A. Markov, T'eori'a algorifmov. (Russian.) Moscow.
MARTIN-LÖF 1966
Per Martin-Löf, Algorithmen und zufällige Folgen. Erlangen. (Unpublished manuscript.)
E. Mendelson, Introduction to Mathematical Logic. D. Van Nostrand Co., Inc., Princeton etc.
Ch. Morris, Foundations of the Theory of Signs. Chicago.
POST 1943
E. L. Post, "Formal reductions of the general combinatorial decision problem." Ameri-
can Journal of Mathematics 65, 197-215.
POST 1944
E. L. Post, "Recursively enumerable sets of positive integers and their decision prob-
lems." Bulletin of the American Mathematical Society 50, 284-316.
Bertrand Russell, My Philosophical Development. George Allen & Unwin Ltd., London.
RUZSA 1988
I. Ruzsa, Logikai szintaxis és szemantika, I. ('Logical syntax and semantics', in Hun-
garian.) Akadémiai Kiadó, Budapest.
Raymond M. Smullyan, Theory of Formal Systems. Princeton Univ. Press, Princeton.
G. Takeuti and W. M. Zaring, Introduction to Axiomatic Set Theory. Springer, New York.
𝒜 (alphabets), 25
𝒜₀, 𝒜₁, 29
𝒜_{C0}, 86
, 29,33
, 39
, 89
A(2) , A(n), 103, 108
algebraic semantics, 97
algorithm, 47, 51-54
applicability of , 51, 53, 54
commands of, 51.54
deciding, 58
normal, 52-55
questions of, 51
steering of, 51
alphabet , 2, 25
alternation, 15, 69
antecedent, 16
antisymmetrical relation, 106
AT, 104
area (field) of a relation, 104
argument places (of functors), 7
arity, 41,68
associativity, 17, 24, 26
asymmetrical relation, 103, 106
atomic formulas
in first-order languages, 42,68
in propositional languages, 39, 72
a-tuples of terms 68
Aut, 46
autonomous numerals, 46
autonym sense, 28
𝒜-words, 25
axiom, 71, 75
of choice, 109
of extensionality, 98
of Fraenkel, 106
of pairs, 102
of power set, 103
of regularity, 103
of union, 102
of Zermelo, 102
of 0), 108
(Bl) .. . (B8), 71
, 89
base of induction, 31
basic formulas/schemata, 66, 70
Bernays, Paul, 98
BF, 70
BF, 78,79
biconditional, 15,69
blocking (of an algorithm), 53,54
bound occurrences of variables,
Co, 94
CC.l ... CC.9, 82-83
CC ,85-86
CC*, 76-77
logical, 1, 64-65
Cantor, Georg, 98, 100, 101, 109, 111
cardinality, equal, 109
cardinal numbers, 109
Carnap, Rudolf, 5
Cartesian product, 104
categories of a language, 30
CCal, 44
characters, 2
Church, Alonzo, 61, 83, 84
Church's Theorem, 84
Church's Thesis, 61
class abstracts, 21, 100
notation, 20
variables, 21, 100
classical first-order logic, 1,40,
see also first-order calculus
closed formula/term , 69-70
sentence, 10, 12
closure (of inductive definition) 31
Cns , 74
coincidence of class extensions, 22-23
commands (of an algorithm), 51, 53,54
commutativity, 17, 24
completeness
with respect to negation, 85
with respect to semantics, 1, 112
computability, 52
Con, 68
Cono, 86
Con*, 77
concatenation, 8,25, 108
conditional, 15-16,69
conjunction, 15, 69
connected relation, 106
connectives, 17
Cons., 93
consequence relation, 66, 112
consequent, 16
consistency, 74,93
constants (in first-order languages), 68
continuumhypothesis, 110
contraposition, 73
converseof a relation, 105
co.po., 73
Cut, 73
decidability
of a class of words, 47, 50, 58
of QC, 84
deciding algorithms, 58
deductibility, 66, 71
Deduction Theorem, 73
definiendum, 19
definiens, 19
definite classes, 61-64
definition, 19-20
contextual/explicit, 20
inductive, 31
denumerable sets, 109
denumerably infinite sets, 109
derivability (in canonical calculi), 36
derived rules (of canonical calculi), 94
detachment, 18, 36, 71
Diagda/x.b), 88
diagonalization, 87-90
differenceof classes, 24
disjoint classes, 24
disjunction, 17
distributivity, 24
Dom, 104
domains of a relation, 104
dyadic functors, 7
numerals, 29, 33
empty class, 23
set, 103
word, 3, 25
enumerability, 47
Eps, 107
equality(versusidentity), 9
equivalence, 17
as relation, 106
of algorithms, 52
of logical calculi, 66
of sets, 109
existential quantifier, 11, 69
sentence, 13
expressions (of a language), 3
extension(of a predicate), 21
F, 78
Feferman, S., 42
fieldof a relation, 104
finite means, 1
sets, 109
first-order calculus, 2, 66-70,
see also QC
language, 66-69
maximal, 41, 67
semantics, 111
theory, 74
Fnc, 105
Form. 68
Form₀, 86
Form*, 77, 81
FormpL, 39
formal language, 2
formula, 38-39, 42, 68
atomic, 39, 68
foundational problem of logic, 1-2
Fraenkel, Abraham, 98, 105
Fraenkel's postulate, 105
free from a variable, 70
free groupoid, 28
free occurrences of variables, 12, 69
Frege, Gottlob, 67
function, 105
invertible, 105
functor, 7, 8
as automaton, 7
GCH, 110
generalized continuum hypothesis, 110
Gentzen,Gerhard, 67
Gödel, Kurt, 46, 85, 92, 97, 98
Gödel's First Incompleteness Theorem,
Second Incompleteness Theorem, 97
Gödel numeral, 46
, 43
, 45
Hilbert, David, 1, 67
homogeneous functor, 7
hypercalculi, 41-46
I, 78
Id, 105
ideal objects, 3
idempotence, 24
identity, 9,23,67, 73
in set theory, 98
'iff', 16
1m, 104
implication, 17
inconsistency, see consistency
Ind, 101
indices, 39,41,68
indirectproof, 18
inductive classes, 36-37
definition, 31
in set theory, 110
rules, 31
inferences, 18
infinite sets, 109
of a functor, 7
of a rule, 33
of a command, 53
instantiation (of a universal
or existential sentence), 13
interpretation (of ,1), 111
intersection of classes, 24
invertible function, 105
Ip, 111
irreflexive relation, 103, 106
justification of QC, 71
juxtaposition, 28
, 40
, 42
, 39
Kleene, S. C., 52, 61
L*, 77
L₀, 86
(Ll), 2
(L2), (L3), 3
, 40
lambdaoperator, 20
first-order, 40, 67
maximal, 40-42
formal, 2,4
meta-, 4
natural, 2-3
object, 4
propositional, 38-40, 72
radix, 25-28
used, 4
laws of logic, 1
letters, 2, 25
lexicographic ordering, 45
limit ordinal numbers, 107
Log, 67
logic, the possibility of,
logicalcalculus, 66
functors, 8, 15-16
systems, 1
truth, 9,40, 71, 75, 112
logics, 1
Markov, A. A., 53, 84
Markov's Thesis, 61
Martin-Lof, Per, 42
mathematical logic, 1
maximal first-order language, 40, 67
meaningful expressions, 3
membership relation, 21
Mendelson, E. M., 53,61
metalanguage, 4
metalogic, subjectmatterof,
instruments of, 7
metamathematics, 1
on PC, 73
on QC, 72-73
modusponens, 1, 18, 71
modustollens, 18
monadic functors, 7
Morris, Ch., 5
N(f/g), 55
, 57
N[if], N

] , 60, 61
, 58
name, individual, 7, 41
closed/open, 12
functors, 7, 41
open expressions, 14
naming words, 4
natural deduction, 67
natural language, 1, 2
numbers, 3, 106, 108
negation, 15,67
neg-complete (theory), 85
Neumann, J. von, 98
non-limit ordinal numbers, 107
non-logical functors, 8
non-stop commands, 53, 54
normal algorithm, 52, 54
notation conventions
in first-order languages, 70, 79
in inductive definitions, 33
in presenting a language, 28
in set theory, 99
Number Theory, 1, 97
numerals, 33, 46
autonomous, 46
object language, 4
occurrences of a variable, 12
in a formula, bound/free, 69
On, OnI, OnII, 107
open formula, 70
sentence, 10, 12
term, 69
operations, 8
ordered pairs, 103
n-tuples, 108
ordering, linear/partial, 106
well-, 107
ordinal, 107
numbers, 107
Orp, 103
output, of a functor, 7
of a rule, 33
Po, 86
(PO), (PI), 99
(P2), (P3), 102
(P4), (P5), 103
(P6), 105
(P7), 108
(P8), 109
parentheses, 11, 17, 67
PC.1 ... PC.14, 73
phonemes, 3
po, 103
polyadic functors 7
Post, Emil L., 42
postulates on languages, 2, 3
of a theory, 75
of CC*, 79-81
of CC, 86
power class/set, 103
pragmatics, 5
predicates, 8,41,68
premises, 66
primary objects, 99
primobs, 99
procedure, 47,49
pronouns, 10
proof rules, 66
proper class, 100
properties (of individual objects) 8
propositional calculus, 72
logic, 38-40
quantification
effective/ineffective, 12
existential/universal, 11
vacuous, 12
quantifiers, 11, 67, 69
quasi-quotation, 14
QC, 66, 70-71
QC.l ... QC.9, 73
questions of an algorithm, 51
quotation sign, double, 14
simple, 4
(Rl) ... (R6), 26
(R7), 27
recursive functions, 52
reflexive relation, 17, 106
Rei, 104
relations, 103-106
relative product, 105
re-naming of bound variables, 13, 73
representability (of an algorithm
by a canonical calculus), 62
restriction of a relation, 104
R-image, 105
rules of a canonical calculus, 36
releasing, 39
. of deduction, 66
Russell, Bertrand, 101
Russell class, 101
, 43
c I
satisfiability, 112
scope of quantification, 11
semantical foundation (of a logical
system), 97, 111
semantic consequence, 66, 112
value, 112
semantics, 5, 97, 111
semiotics, 5
sentence, declarative, 7
closed/open, 10
functors, 8, 15-16
sequences (finite, infinite), 108
sequent calculus, 67
Set, 101
set theory, 1, 2, 98-100
Skolem, Thoralf, 98
Smullyan, Raymond, 42
soundness, 1, 112
steering (of an algorithm), 51
stop command (of an algorithm), 53
subclass, 22
proper, 23
subformula, 69
subsidiary letters, 35,37,54,56
substitution of free variables
in formulas, 70
in sentences, 12
successor, 47-48, 57
in set theory, 106
SUD, 86-87, 95
superclass, 22
symmetricalrelation, 17, 106
syntax, 5
reconstruction of, in set theory, 111
tacit quantification, 14
Takeuti, G., 110
Term, 67-68
Term*, 77,81
Term₀, 86
terms (in first-order languages), 42, 68
a-tuples of, 68
ThJa), 93
theory, first-order, 75
of canonical calculi, 76-77
Tr, 81
transfinite (mathematics), 2
transitive classes, 107
relation, 17, 106
triadic functors, 7
truth and falsity, 16
truth assignment (in CC*), 81-82
conditions, 16
domain (of a predicate), 21
Turing machines, 52
undecidability (of QC), 84
union class, 102
union of classes, 24
universal closure (of a formula), 74
generalization, 73
universal quantifier, 11, 41, 67
sentence, 13
used language, 4
vacuous quantification, 12
valid formula, 112
valuation of variables, 112
Var, 67-68
variable, 10, 68
bound and free occurrences of, 12
in canonical calculi, 36
binding operators, 12
V(U), 112
well-formed expressions, 3, 30
well-ordering, 107
words (of a language), 3, 26
Zaring, W. M., 110
Zermelo, Ernst, 98, 102, 105, 109
Zermelo's postulate, 102
zero-order language, 72
Z-F Set Theory, 98
(Symbols composed from Latin letters are to be found in the alphabetic INDEX.)
8,25,108 53
= 9 # 55
9, 79 r 65
Ax 11
Vx 11
16,39,67 3 69
& 16,69
v 16,69
16 1:* 77

19 r
19 1: 86
{x: 21 a 87
E 21,99 fA
22,99 i 98,99
{ah .. . , an} 22
22 a
c 23 u(A) 102-
0 23, 101 o(empty set) 103
u 24 (a,b) 103
24 AxB 104
A-B 24 RJ,A 104
25 IfIA .
25 R
→ 33, 36, 53
RIS 105
A 105
36 1 = {OJ 106
⊃ 39, 67
t 39 (J)
1t 39 108
x 41
0 41 (V,p) 111
'if 41,67

l; 43
L _
1.1.1. The extensional type theory
1.1.2. The grammar of the EL languages
1.1.3. Semantics for EL languages
1.1.4. Some semantical metatheorems
1.1.5. Logical symbols introduced via definitions
1.1.6. The generalized semantics
1.2.1. Definition of EC
1.2.2. Some proofs in EC
1.2.3. EC-consistent and EC-complete sets
1.2.4. The completeness of EC
2.1.1. Montague's type theory
2.1.2. The grammar of IL and IL+
2.1.3. The semantics of IL and IL+
2.1.4. The generalized semantics of IL
2.2.1. Definition of IC
2.2.2. The modal laws of IC
2.2.3. IC-consistent and IC-complete sets
2.2.4. Modal alternatives
2.2.5. The completeness of IC
2.3.1. A fragment of English: L
2.3.2. Translation rules from L
2.3.3. Reduction of intensionality: meaning postulates
2.3.4. Some critical remarks
These Notes contain only the most important technical parts of the lectures held
by the author.
In these Notes, the canonical symbols of set theory will be used where it is necessary.
The empty set is denoted by '∅', and the set of natural numbers by 'ω'. An expression of
form "{x: φ(x)}" refers to the set of objects x such that φ(x), where φ stands for some predicate.
The set of functions from a set B into a set A will be denoted by "ᴮA".
In speaking about a formal language and its expressions, a metalanguage will
be used which is common English augmented by some terms and symbols of set theory,
by other symbols introduced via definitions, and by isolated letters (sometimes by
groups of letters) used as metavariables. (The detailed use of metavariables will be
explained in due course, preceding their actual applications in the text.) In speaking
about a particular expression, say, about a symbol, we shall include it in between
(simple) inverted commas.
However, symbols will be mentioned sometimes autonymously (omitting the inverted
commas) if this does not lead to a confusion. We often have to speak about compound
expressions of an object language; in such a case, we shall use schemata composed of
metavariables and some symbols of the object language. These schemata will be in-
cluded in between double inverted commas (serving as quasi-quotational marks), e.g.
"((A & B) ⊃ C)".
(Here A, B and C are metavariables referring to certain expressions of an object lan-
guage.) Double inverted commas will be omitted if the schema is bordered by some
symbol introduced in the metalanguage.
Definitions will be, in most cases, inductive ones. A class of expressions will
be defined usually as the smallest set satisfying certain conditions. Among these condi-
tions, there is one - or there are some - serving as the basis of the inductive definition,
prescribing that some set(s) defined earlier must be included in the definiendum. The
other conditions prescribe that the definiendum must be closed with respect to certain
operations (in most cases, syntactic operations) applied to its members. - Identity by
definition will be expressed by '=df'.
Proofs will be, in most cases, of the same inductive nature. To show that all
members of a set defined inductively have a certain property, it is sufficient to show that
(i) the members of the basis set (of the inductive definition) have that property, and (ii)
the property is hereditary via the operations mentioned in the inductive definition. This
proof method will be called proof by structural induction if it is about a set of gram-
matical entities.
The symbol '⇒' stands for 'if ... then', and '⇔' or 'iff' for 'if and only if'. 'Def.' and
'Th.' are abbreviations for 'Definition' and 'Theorem', respectively.
Budapest, April 1992.
I. Ruzsa
We shall introduce a semantical system of the full type-theoretical extensional logic.
We shall use 'o' (omicron) and 'ι' (iota) for the types of (declarative) sentences and
(individual) names, respectively. A type of an extensional functor will be of form
"α(β)" where β refers to the type of the input (i.e., the argument) and α refers to the
type of the output (i.e., the expression obtained by combining the functor and its argu-
ment). The full inductive definition of the set of extensional types, EXTY, is as fol-
lows:
(i) o, ι ∈ EXTY;
(ii) α, β ∈ EXTY ⇒ "α(β)" ∈ EXTY.
If β consists of a single character (o or ι), the parentheses surrounding it will be
omitted. We shall use 'α', 'β', and 'γ' as variables referring to the members of EXTY.
Instead of "α(β)" we write sometimes "αβ".
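The inductive definition of EXTY can be mirrored by a recursive membership test. A sketch in which 'o' and 'i' encode the base types and a pair (α, β) encodes α(β) (the encoding is mine, not the Notes'):

```python
def is_exty(t):
    """Membership test for EXTY: 'o' and 'i' are the base types, and a
    pair (alpha, beta) encodes the functor type alpha(beta)."""
    if t in ('o', 'i'):
        return True
    return (isinstance(t, tuple) and len(t) == 2
            and is_exty(t[0]) and is_exty(t[1]))

# ('o', 'o') encodes oo (the type of negation); ('o', ('o', 'o')) encodes o(oo).
neg_type = ('o', 'o')
```

Because EXTY is the smallest set closed under rule (ii), the recursion terminates on every finite encoding.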
System EL deals with the grammar and the semantics of a family of type-theoretical
extensional languages.
DEFINITION. By an EL language let us mean a quadruple
L = (Log, Var, Con, Cat)
where
Log = { (, ), λ, = }
is the set of logical symbols of the language (containing left and right parentheses, the
lambda operator λ, and the symbol of identity);
Var = ∪_{α ∈ EXTY} Var(α)
is the set of (bindable) variables of the language, where each Var(α) is a denumerably
infinite set of symbols called variables of type α;
Con = ∪_{α ∈ EXTY} Con(α)
is the set of (nonlogical) constants of the language, where each Con(α) is a denumer-
able (perhaps empty) set of symbols called constants of type α;
all the sets mentioned up to this point are pairwise disjoint;
Cat = ∪_{α ∈ EXTY} Cat(α)
is the set of the well-formed expressions - briefly: terms - of the language, where the
sets Cat(α) are determined by the grammatical rules (G0) to (G3) below. For α ∈
EXTY, Cat(α) may be called the α-category of L.
In formulating these rules - and even in the later developments - we shall use
as metavariables upper-case Latin letters (A, B, C, etc.) referring to arbitrary terms and
lower-case Latin letters (x, y, z, etc.) referring to variables. In a rule (or in a definition, in
a theorem) the first occurrence of any metavariable will be supplied with a type sub-
script indicating the type of terms the metavariable refers to. (Type variables occurring
in a type subscript refer to arbitrary types.)
Grammatical rules:
(G0) Var(α) ∪ Con(α) ⊆ Cat(α).
(G1) A_{αβ}(B_β) ∈ Cat(α). [Read: If A ∈ Cat(αβ) and B ∈ Cat(β) then "A(B)" ∈
Cat(α).]
(G2) "(λx_β A_α)" ∈ Cat(αβ).
(G3) "(A_α = B_α)" ∈ Cat(o).
We write sometimes "(λx.A)" instead of "(λxA)" for the sake of easier reading.
DEFINITION. (i) An occurrence of a variable x in a term A is said to be a bound
occurrence of x in A iff it lies in a part of form "(λx.B)" of A. An occurrence of x in A
is said to be a free occurrence of x in A iff it is not a bound occurrence of x in A.
(ii) A term A is said to be a closed one iff no variable has a free occurrence in A.
A term A is said to be an open one iff it is not a closed one.
(iii) The term A is said to be free from the variable x iff x has no free occur-
rences in A.
(iv) A variable x_α is said to be substitutable by the term B_α in the term A iff
whenever "(λy.C)" is a part of A involving some occurrence of x which counts as a free
occurrence of x in A, then B is free from y.
(v) By the result of substituting B_α for x_α in A let us mean the term A′ obtained
from A via replacing all free occurrences of x by B, provided x is substitutable by B in
A. We shall use a square-bracket notation for A′. (The square brackets will be omitted in case A consists of a single character.) In
using this notation, we assume always the fulfilment of the proviso which assures that
the free occurrences of variables in B remain free ones in A′ as well.
(vi) We shall denote by "C[B/A]" the term obtained from C via replacing a
(single) occurrence of A - not preceded immediately by 'λ' - by B, provided A and B
belong to the same category. This syntactic operation will be called replacement.
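Clauses (i)-(vi) lend themselves to a computational reading. A sketch with terms encoded as nested tuples (the labels 'var', 'con', 'app', 'lam', 'eq' are mine); `subst` returns None exactly when x is not substitutable by B in A, mirroring the proviso of clause (iv):

```python
def FV(A):
    """Free variables of a term, per clauses (i)-(iii)."""
    tag = A[0]
    if tag == 'var':
        return {A[1]}
    if tag == 'con':
        return set()
    if tag == 'lam':
        return FV(A[2]) - {A[1]}
    return FV(A[1]) | FV(A[2])            # 'app' and 'eq'

def subst(A, x, B):
    """Result of substituting B for x in A, or None if x is not
    substitutable by B in A (a free variable of B would be captured)."""
    tag = A[0]
    if tag == 'var':
        return B if A[1] == x else A
    if tag == 'con':
        return A
    if tag == 'lam':
        y, body = A[1], A[2]
        if y == x or x not in FV(body):
            return A                      # no free occurrence of x inside
        if y in FV(B):
            return None                   # capture: not substitutable
        inner = subst(body, x, B)
        return None if inner is None else ('lam', y, inner)
    l, r = subst(A[1], x, B), subst(A[2], x, B)
    return None if l is None or r is None else (tag, l, r)
```

For example, substituting the variable y for x inside "(λy. x(y))" is refused, while substituting a constant for x goes through.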
Throughout this section, let L be any EL language.
DEFINITION. By an interpretation of L let us mean a couple
Ip = (U, ρ)
where U is a nonempty set and ρ is a function defined on Con such that
A ∈ Con(α) ⇒ ρ(A) ∈ D(α),
where D is a function defined on EXTY such that
D(o) = {0, 1}, D(ι) = U, and D(αβ) = ^{D(β)}D(α).
(Here '0' and '1' stand for the truth values False and True, respectively.) Given U, the
function D is uniquely determined by these prescriptions. D(α) is said to be the domain
of factual values of type α.
A function v defined on Var is said to be a valuation (of variables) joining to Ip iff
x ∈ Var(α) ⇒ v(x) ∈ D(α).
If x ∈ Var(α) and a ∈ D(α), we denote by "v[x:a]" the valuation which differs from v
(at most) in v[x:a](x) = a. That is: if y is other than x, then v[x:a](y) = v(y).
DEFINITION. Given an interpretation Ip of L, we shall define, for all terms A
∈ Cat and for all valuations v joining to Ip, the factual value of A according to Ip and v
- denoted by "|A|_v^{Ip}" - by the semantic rules (S0) to (S3) below. (In the notation, the
superscript 'Ip' will be usually omitted whenever Ip is assumed to be fixed.)
Semantic rules:
(S0) If x ∈ Var, |x|_v = v(x). If C ∈ Con, |C|_v = ρ(C).
(S1) |A_{αβ}(B_β)|_v = |A|_v(|B|_v).
(S2) |(λx_β A_α)|_v is the function φ ∈ D(αβ) such that
b ∈ D(β) ⇒ φ(b) = |A|_{v[x:b]}.
In other words, for all b ∈ D(β),
"|(λx.A)|_v(b) = |A|_{v[x:b]}".
(S3) |(A_α = B_α)|_v = 1 if |A|_v = |B|_v, and 0 otherwise.
LEMMA. The factual values |A|_v^{Ip} are uniquely determined by the rules (S0) to
(S3), and, if A ∈ Cat(α), then |A|_v^{Ip} ∈ D(α).
Proof: by structural induction, using the semantic rules. (The details are left to
the reader.)
DEFINITION. Let Γ be a set of sentences (Γ ⊆ Cat(o)), Ip an interpretation, and
v a valuation joining to Ip.
(i) We say that the couple (Ip, v) is a model of Γ iff for all A ∈ Γ, |A|_v^{Ip} = 1.
(ii) Γ is said to be satisfiable iff Γ has a model, and Γ is said to be unsatisfiable
iff Γ has no model.
(iii) The sentence A is said to be a semantic consequence of Γ - in symbols: "Γ
⊨ A" - iff every model of Γ is a model of {A}.
(iv) Sentence A is said to be valid (or a logical truth of EL) - in symbols:
"⊨ A" - iff A is a semantic consequence of the empty set of sentences.
(v) Terms A and B are said to be logically synonymous iff ⊨ (A = B).
Note that if Γ is unsatisfiable, then for all sentences A, Γ ⊨ A; and if ⊨ A, then
for all Γ, Γ ⊨ A.
Throughout this section, a language L and an arbitrary interpretation Ip for L are
fixed.
Let us denote by "FV(A)" the set of variables having some free occurrences in
the term A. Then:
LEMMA. If the valuations v and v′ coincide on FV(A), then |A|_v^{Ip} = |A|_{v′}^{Ip}.
Proof. Our statement is obviously true if A is a variable or a constant. If A is of
form "B(C)" or "(B = C)", then use the induction assumption that the lemma holds true
for B and C, and take into consideration that in these cases FV(A) = FV(B) ∪ FV(C)
(and use the rules (S1) and (S3)). Finally, if A is of form "(λx_β B_α)", then
FV(A) = FV(B) − {x}.
If v and v′ coincide on FV(A), then for all b ∈ D(β), v[x:b] and v′[x:b] coincide on
FV(B); thus, by induction assumption,
|B|_{v[x:b]} = |B|_{v′[x:b]}.
Then (using the rule (S2)), for all b ∈ D(β):
|(λx.B)|_v(b) = |B|_{v[x:b]} = |B|_{v′[x:b]} = |(λx.B)|_{v′}(b),
which means that |(λx.B)|_v = |(λx.B)|_{v′}.
COROLLARY. If A is a closed term, then for all valuations v and v′, |A|_v = |A|_{v′}.
LEMMA. If for all valuations v, |A_α|_v = |B_α|_v, then for all valuations v,
|C|_v = |C[B/A]|_v. (Cf. (vi) of the Definition in 1.1.2.)
Proof. For the sake of brevity, we write C′ instead of "C[B/A]". Our statement
holds trivially if A is not a part of C, or C is A. If C is of form "F(E)" or "(F = E)", then
use the induction assumption that |F|_v = |F′|_v and |E|_v = |E′|_v (for all v). If C is of
form "(λx_β E)", then C′ must be "(λx.E′)". Using that for all v, |E|_v = |E′|_v, we have
that for all v and for all b ∈ D(β),
|E|_{v[x:b]} = |E′|_{v[x:b]},
which means that for all valuations v, |(λx.E)|_v = |(λx.E′)|_v.
COROLLARY. If ⊨ (A_α = B_α), then ⊨ (C = C[B/A]).
Let us emphasize a further corollary:
THEOREM. The law of replacement. If ⊨ (A_α = B_α) and ⊨ C_o, then
⊨ C[B/A].
LEMMA. If x_β is substitutable by B_β in the term A, then for all valuations v:
if |B|_v = b ∈ D(β), then |A′|_v = |A|_{v[x:b]},
where A′ is the result of substituting B for x in A.
Proof. If A is free from x, then A′ is the same as A, and (by Lemma 1.1.4.1)
|A′|_v = |A|_v = |A|_{v[x:b]}.
Now assume that A is not free from x. Then we use, again, structural induction
on A. If A is x, then A′ is B, and, trivially,
|B|_v = b = |x|_{v[x:b]}.
The cases when A is of form "F(C)" or "(C = E)" are left to the reader. Now let us
consider the case when A is of form "(λy.C)". Then y ≠ x (for "(λx.C)" is free from x), and B
must be free from y (for B is substitutable for x in "(λy.C)"). Hence, if y ∈ Var(γ), then
for all c ∈ D(γ),
|B|_{v[y:c]} = |B|_v = b
(by Lemma 1.1.4.1). Then, for all c ∈ D(γ),
|(λy.C′)|_v(c) = |C′|_{v[y:c]} = |C|_{v[y:c][x:b]} = |(λy.C)|_{v[x:b]}(c)
(the middle equation by induction assumption),
which yields that |(λy.C′)|_v = |(λy.C)|_{v[x:b]}.
THEOREM. The law of lambda-conversion. If x is substitutable by B in A, then
⊨ ((λx_β A_α)(B_β) = A′).
Proof. We have by (S1) and (S2) that if |B|_v = b, then
|(λx.A)(B)|_v = |(λx.A)|_v(b) = |A|_{v[x:b]}.
According to the preceding lemma (using the assumption on the substitutability):
|A|_{v[x:b]} = |A′|_v
for all interpretations and valuations. Our statement follows trivially from this fact.
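The rules (S0)-(S2) can be animated over concrete domains, and the law of lambda-conversion checked on instances. A sketch (λ-abstracts are evaluated to Python callables; rule (S3) for identity is omitted; the tuple encoding is mine):

```python
def val(A, rho, v):
    """Factual value per (S0)-(S2): variables, constants, application
    and lambda-abstraction."""
    tag = A[0]
    if tag == 'var':
        return v[A[1]]
    if tag == 'con':
        return rho[A[1]]
    if tag == 'app':                      # (S1): |A(B)|_v = |A|_v(|B|_v)
        return val(A[1], rho, v)(val(A[2], rho, v))
    if tag == 'lam':                      # (S2): the map b |-> |body| at v[x:b]
        x, body = A[1], A[2]
        return lambda b: val(body, rho, dict(v, **{x: b}))
    raise ValueError(tag)

# Both sides of the conversion instance ((lam x. x)(c)) = c get the value of c.
rho = {'c': 7}
lhs = val(('app', ('lam', 'x', ('var', 'x')), ('con', 'c')), rho, {})
rhs = val(('con', 'c'), rho, {})
```

Here `dict(v, **{x: b})` plays the role of the modified valuation v[x:b].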
We define first the sentences ⊤ and ⊥, called Verum and Falsum, respectively:
⊤ =df ((λp_o p) = (λp_o p)); ⊥ =df ((λp_o p) = (λp_o ⊤)).
[Show that |⊤|_v^{Ip} = 1 and |⊥|_v^{Ip} = 0, for all Ip and v.]
We continue by introducing negation, '¬':
(Df.¬) ¬ =df (λp_o(⊥ = p)).
Then ¬A =df (λp(⊥ = p))(A). By the law of λ-conversion, the right side is logically syn-
onymous to "(⊥ = A)". Hence, the contextual definition of '¬' is as follows:
¬A =df (⊥ = A).
The explicit definition of the universal quantifier ∀_α (of type o(oα)) is:
(Df.∀) ∀_α =df (λf_{oα}(f = (λx_α ⊤))).
Its contextual definition is:
∀(A_{oα}) =df (A = (λx_α ⊤)).
(Here the type subscript α of ∀ can be omitted.) We can introduce the usual notation by
∀x_α A_o =df ∀(λx.A) [= ((λx.A) = (λx.⊤))].
The definition of the conjunction '&':
(Df.&) & =df (λp_o(λq_o ∀f_{oo}[p = (f(p) = f(q))])).
[For the sake of easier reading, we applied here a pair of square brackets instead of the
"regular" parentheses. This device will be applied sometimes later on.] We shall write
the usual "(A & B)" instead of "&(A)(B)". Thus, the contextual definition of '&' is as
follows:
(A & B) =df ∀f_{oo}(A = (f(A) = f(B)))
where A and B must be free from the variable f.
[Show that our definition of '&' satisfies the canonical truth condition of conjunction.]
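The bracketed exercise can be settled by brute force: over D(o) = {0, 1} the domain D(oo) has exactly four members, so the quantifier in Df.& reduces to a four-fold conjunction (a verification sketch, not part of the calculus):

```python
# All four members of D(oo), i.e. the functions from {0, 1} to {0, 1}:
F_oo = [lambda p: 0, lambda p: 1, lambda p: p, lambda p: 1 - p]

def conj(a, b):
    """Value of (A & B) under Df.&, i.e. of  "for all f in D(oo):
    A = (f(A) = f(B))",  given the values a, b of A and B."""
    return 1 if all(a == int(f(a) == f(b)) for f in F_oo) else 0
```

Running through all four value pairs shows that (A & B) receives 1 exactly when both A and B do.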
The further logical symbols will be introduced via contextual definitions only:
(Df.⊃) (A_o ⊃ B_o) =df ¬(A & ¬B),
(Df.∨) (A_o ∨ B_o) =df ¬(¬A & ¬B).
(We do not need a new symbol for the biconditional since "(A = B)" is appropriate to ex-
press it.)
(Df.∃) ∃x_α A_o =df ¬∀x¬A.
(Df.≠) (A ≠ B) =df ¬(A = B).
It follows from a result of KURT GbDEL (1931) that there exists no logical calculus
which is both sound and complete to our semantical system EL (i.e., a calculus in
whichasentence A is deducible froma set of sentences riff r FA holds in EL). How-
ever, via following the method of LEONARD HENKIN 1950, it is possible to formulate a
generalized semantics - briefly: a G-semantics - in sucha waythat the calculus EC -
to be introduced in 1.2 - proves to be sound and complete with respect to this G-
The present section is devoted to formulate the G-semantics of the EL lan-
guages. The semantics introduced in Ll.Lmay be distinguished by callingit the stan-
dard semantics. DEFINITION. By a generalizedinterpretation - briefly: a G-interpretation - of
a languageLo:! we meana tripleIp = ( V, D, p) satisfying the following con"ditions:
(i) Uis a nonempty set.,
(ii) D is a function defined onEXTY suchthat
D( 0) ={0,1}, Dtt) = U, and D( ap) b D(fJ) D( a).
(iii) p is a function defined on Consuchthat
CeCon(a) ~ ~ C e D a .
(iv) Whenever v is a valuation joining to Ip (satisfying the condition v(x
D( a) ), the semantic rules (SO) to (S3) in Def. applicable in determining the
factualvalues(according to Ip and v) of theterms of Lo:!.
Comparing G-interpretations and standard interpretations, one sees the main difference in permitting '⊆' instead of '=' in the definition of D(αβ). However, the domains D(αβ) must not be quite arbitrary and 'too small': the restriction is contained in item (iv). For example, to assure the factual value of the term "(λx_α(x = x))", the domain D(oα) must contain a function Φ such that for all a ∈ D(α), Φ(a) = 1 holds.

DEFINITION. Consider the corresponding definition for the standard semantics. Replace the term 'interpretation' by 'G-interpretation', and prefix 'G-' before the defined terms. Then one gets the definition of the following notions:
a G-model of a set of sentences Γ,
G-satisfiability (and G-unsatisfiability),
G-consequence - denoted by Γ ⊨_G A - and G-validity (⊨_G A).
Since every standard interpretation is a G-interpretation, we have the following implications:
Γ is satisfiable ⇒ Γ is G-satisfiable,
Γ is G-unsatisfiable ⇒ Γ is unsatisfiable,
Γ ⊨_G A ⇒ Γ ⊨ A,
⊨_G A ⇒ ⊨ A.
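The relationship between the two kinds of interpretation can be made concrete: in a standard interpretation D(αβ) is the full function space, while a G-interpretation may keep only a subset of it (subject to the closure condition in item (iv)). A minimal Python sketch, with illustrative names and a two-element truth-value domain:

```python
from itertools import product

def full_function_space(dom_beta, dom_alpha):
    """All functions from dom_beta into dom_alpha, coded as dicts."""
    args = sorted(dom_beta)
    return [dict(zip(args, vals))
            for vals in product(sorted(dom_alpha), repeat=len(args))]

D_o = {0, 1}                                  # D(o) = {0, 1}
full_oo = full_function_space(D_o, D_o)       # standard: D(oo) is the FULL space
assert len(full_oo) == 4                      # 4 unary truth functions

# A G-interpretation may shrink D(oo), e.g. to just identity and negation
# (a real G-domain must additionally satisfy the closure condition (iv)):
g_oo = [f for f in full_oo if f not in ({0: 0, 1: 0}, {0: 1, 1: 1})]
assert len(g_oo) == 2
assert all(f in full_oo for f in g_oo)        # D_G(oo) ⊆ D(oo)
```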
We have proved some important semantic laws in Section 1.1.4. Fortunately, their proofs were based in each case on the semantic rules (S0) to (S3), which remain intact in the G-semantics, too. Hence:

THEOREM. All logical laws proved in the standard semantics - in Section 1.1.4 - are logical laws of the G-semantics as well.

The most important laws - which will be used in 1.2 - are the law of replacement and the law of lambda-conversion.

The calculus EC introduced below will be a purely syntactical system joining to the semantical system EL. Our presuppositions here are: the extensional type theory, the grammar of the L⟨E⟩ languages (including the notational conventions), and the definitions of the (nonprimitive) logical symbols (such as ⊤, ⊥, ¬, ∀, &, etc.). (See 1.1.1, 1.1.2, 1.1.5.) EC will be based on five basic schemata (E1)-(E5), and a single proof rule called replacement.
Basic schemata:
(E1) (A_α = A_α)
(E2) ((f_oo(⊤) & f(⊥)) = ∀p_o.f(p))
(E3) ((x_α = y_α) ⊃ (f_oα(x) = f(y)))
(E4) ((f_αβ = g_αβ) = ∀x_β[f(x) = g(x)])
(E5) ((λx_β A_α)(B_β) = [A]x_B)
Here the metavariables f, g, x, y refer to variables and A, B to arbitrary terms of a formal language L⟨E⟩; "[A]x_B" denotes the result of substituting B for x in A. (Of course, in (E5), it is assumed that the term B is substitutable for x in A.)
By a basic sentence (of L⟨E⟩) we mean a sentence resulting by a correct substitution of terms of L⟨E⟩ into one of the basic schemata. (A substitution is said to be a correct one if the lower-case letters are substituted by variables of the indicated types and the upper-case letters are substituted by terms of the indicated types.)
Proof rule. Rule of replacement - RR. From "(A_α = B_α)" and C_o to infer C[B/A].
Proofs in EC. By a proof we shall mean a nonempty finite sequence of sentences such that each member of the sequence is either a basic sentence, or else it follows from two preceding members via RR.
A sentence A (of L⟨E⟩) is said to be provable in EC - in symbols: "⊢_EC A" - iff there exists a proof in EC terminating in A.
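The definition of a proof can itself be treated algorithmically: a sequence is a proof iff every line is a basic sentence or follows from two earlier lines by RR. A toy Python checker along these lines (the naive string handling is an illustrative stand-in for real term manipulation, not the book's machinery):

```python
def check_proof(lines, is_basic):
    """lines: list of (sentence, justification). A justification is either
    'basic' or a pair (k, m): line k is an identity '(A = B)' and the sentence
    arises from line m by replacing one occurrence of A with B (rule RR)."""
    for n, (s, just) in enumerate(lines):
        if just == 'basic':
            assert is_basic(s), f"line {n}: not a basic sentence"
        else:
            k, m = just
            assert k < n and m < n, f"line {n}: forward reference"
            lhs, rhs = lines[k][0][1:-1].split(' = ')   # '(A = B)' -> A, B
            src = lines[m][0]
            assert any(src[:i] + rhs + src[i + len(lhs):] == s
                       for i in range(len(src)) if src.startswith(lhs, i)), \
                f"line {n}: not a replacement instance"
    return True

# toy example: treat '(a = b)' and 'P(a)' as basic, and derive 'P(b)' by RR
proof = [('(a = b)', 'basic'), ('P(a)', 'basic'), ('P(b)', (0, 1))]
assert check_proof(proof, lambda s: s in {'(a = b)', 'P(a)'})
```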
(As one sees, our definitions are language-dependent. In fact, we shall be interested, in most cases, in the proofs of sentence schemata rather than of singular sentences (of a particular language).)
In what follows, we shall omit the subscript 'EC' in the notation '⊢_EC', writing simply '⊢' instead. (The distinction is important only if we are speaking of different calculi.)
The notion "A is a syntactic consequence of the set Γ of sentences" (or "A is deducible from Γ") will be introduced in Section 1.2.3.
It is easy to see that all basic sentences are valid in the semantical system EL. Furthermore, by Th., the rule RR yields a valid sentence from valid ones. Hence:
If ⊢ A then ⊨ A.
Let us realize that the above statement holds not only for our standard semantics of EL but even for the generalized semantics explained in 1.1.6, in consequence of Th. Consequently:

THEOREM. (The soundness of EC with respect to the generalized semantics of EL.) If ⊢_EC A then ⊨_G A.

To prove the converse of this theorem, we need first to prove some theorems about provability in EC.
1.2.2. Some proofs in EC
In this section, we shall prove some metatheorems about provability in EC. Some of these theorems state that a certain sentence or sentence schema is provable in EC, and some others introduce derived proof rules.
At the beginning, the proofs will be fully detailed. A detailed proof will be displayed in numbered lines. At the end of each line, there stands a reference between square brackets indicating the provability of the sentence/schema occurring in that line. Our references will have the following forms:
'ass.' stands for an assumption occurring in the formulation of the theorem.
Reference to a basic schema or to a schema proved earlier will be indicated by the code of the schema (e.g., '(E2)', 'E3.2', etc.). A reference of form "Df. X" (e.g., 'Df. ∀', 'Df. ⊃') refers to the definition of the logical symbol standing in the place of 'X'.
A reference of form "k/m" in the line numbered by n states that line n follows by a replacement (RR) according to the identity standing in line k into the schema in line m. Instead of k or m, we sometimes use codes of schemata proved earlier. We shall often refer to the basic schema (E5) - the identity expressing λ-conversion - ; in this case we write simply 'λ' in the place of k.
References to derived proof rules will be of form "RX: k" or "RX: k,m" where "RX" is the code of the rule and k, m are the line numbers (or codes) of the schema(ta) to which the rule is to be applied.
Later on, the proofs will be condensed, leaving some details to the reader.
Outermost parentheses will sometimes be omitted. Note that instead of "(A_o ⊃ (B_o ⊃ C_o))" we write "A ⊃ B ⊃ C".
Proofs from (E1)

E1.1. ⊢(A = B) ⇒ ⊢(B = A). - Proof:
1. ⊢(A = A) [(E1)]
2. ⊢(A = B) [ass.]
3. ⊢(B = A) [2/1]

COROLLARY. If ⊢(A = B) and ⊢C_o then ⊢C[B/A]. In what follows, we shall refer to this rule as our basic rule RR.

E1.2. ⊢(A = B) and ⊢(B = C) ⇒ ⊢(A = C). - Proof:
1. ⊢(A = B) [ass.]
2. ⊢(B = C) [ass.]
3. ⊢(A = C) [1/2]

E1.3. ⊢⊤ [by Df. ⊤ and (E1)].

E1.4. ⊢(A_o = ⊤) ⇒ ⊢A. - Proof:
1. ⊢(A = ⊤) [ass.]
2. ⊢⊤ [E1.3]
3. ⊢A [1/2]

E1.5. ⊢(A_o = B_o) and ⊢A ⇒ ⊢B. - Proof:
1. ⊢(A = B) [ass.]
2. ⊢A [ass.]
3. ⊢B [1/2]

These results will be used, in most cases, without a particular reference.
Proofs from (E5)

E5.1. ⊢(A = B) ⇒ ⊢([A]x_C = [B]x_C) (provided, of course, that C and x belong to the same category, and C is substitutable for x both in A and B). - Proof:
1. ⊢(A = B) [ass.]
2. ⊢((λx.A)(C) = (λx.A)(C)) [(E1)]
3. ⊢((λx.A)(C) = (λx.B)(C)) [1/2]
4. ⊢((λx.A)(C) = [A]x_C) [(E5)]
5. ⊢((λx.B)(C) = [B]x_C) [(E5)]
6. ⊢([A]x_C = [B]x_C) [4,5/3]

Note that line 6 comprises two applications of RR. In what follows, steps such as 2 and 3 will be contracted into a single step with the reference [1/(E1)].

COROLLARIES. We get from (E2) and (E4) by E5.1 that:
(E2*) ((F_oo(⊤) & F(⊥)) = ∀p_o.F(p)) [where F is free from p].
(E4*) ((F_αβ = G_αβ) = ∀x_β[F(x) = G(x)]) [two applications; F and G must be free from x].
To apply this device to (E3), remember that "(A ⊃ B)" is an abbreviation for "(⊥ = (A & ¬B))". Hence, E5.1 is applicable to (E3) as well. By three applications we get:
(E3*) ((A_α = B_α) ⊃ (F_oα(A) = F(B))).

E5.2. ⊢∀x.A ⇒ (⊢([A]x_B = ⊤) and ⊢[A]x_B). - Proof:
1. ⊢((λx.A) = (λx.⊤)) [ass. and Df. ∀]
2. ⊢((λx.A)(B) = (λx.⊤)(B)) [1/(E1)]
3. ⊢([A]x_B = ⊤) [λ/2, twice]
4. ⊢[A]x_B [E1.4: 3]
[The provisos are analogous to those of E5.1.]
Proofs from (E4)

E4.1. ⊢((A = A) = ⊤). - Proof:
1. ⊢(((λx.A) = (λx.A)) = ∀x[(λx.A)(x) = (λx.A)(x)]) [(E4*)]
2. ⊢∀x[(λx.A)(x) = (λx.A)(x)] [E1.5: 1, (E1)]
3. ⊢∀x(A = A) [λ/2, twice]
4. ⊢((A = A) = ⊤) [E5.2: 3]

E4.2. ⊢(∀x.⊤ = ⊤). - Proof: "∀x.⊤" is "((λx.⊤) = (λx.⊤))". Now apply E4.1.
E4.3. ⊢(¬⊥ = ⊤). - Proof: '¬⊥' is '(⊥ = ⊥)'. Apply E4.1.
E4.4. ⊢(∀p_o.p = ⊥). - Use that "∀p.p" is "((λp.p) = (λp.⊤))", and the latter is ⊥ by its definition.
E4.5. ⊢(∀p_o(p = ⊤) = ⊥). - Proof:
1. ⊢(((λp.p) = (λp.⊤)) = ∀p(p = ⊤)) [(E4*) and λ]
2. ⊢(⊥ = ∀p(p = ⊤)) [Df. ⊥ / 1]
Complete by using E1.1.
Proofs from (E2)

E2.1. ⊢((⊤ & ⊤) = ⊤). - Proof: In (E2*), let F be "(λp_o.⊤)", and apply λ-conversions. At the right side, use E4.2.
E2.2. ⊢((⊤ & ⊥) = ⊥). - Proof: In (E2*), let F be "(λp_o.p)". Apply λ-conversions. At the right side, use E4.4.
E2.3. ⊢∀p_o((⊤ & p) = p). - Proof: Let F be "(λp_o((⊤ & p) = p))" in (E2*). After λ-conversions, we have:
⊢((((⊤ & ⊤) = ⊤) & ((⊤ & ⊥) = ⊥)) = ∀p((⊤ & p) = p)).
Here the left-hand side reduces successively to:
((⊤ = ⊤) & (⊥ = ⊥)) [by E2.1 and E2.2],
(⊤ & ⊤) [by E4.1, twice],
⊤ [by E2.1].
Complete by using E1.1 and E1.4.

E2.4. ⊢((⊤ & A_o) = A). [From E2.3, by E5.2.]
E2.5. ⊢((⊥ = ⊤) = ⊥). - Proof: Let F be "(λp(p = ⊤))" in (E2*). Use λ-conversions, E4.1, E2.4, and (at the right side) E4.5.
E2.6. ⊢(¬⊤ = ⊥). [Df. ¬ / E2.5.]
E2.7. ⊢∀p_o((p = ⊤) = p). - Proof: In (E2*), let F be "(λp((p = ⊤) = p))". After λ-conversions, use E4.1, E2.5, and E2.1. Complete as in the proof of E2.3.
E2.8. ⊢((A_o = ⊤) = A). [From E2.7, by E5.2.]
E2.9. ⊢∀p_o(¬¬p = p). - Proof: In (E2*), let F be "(λp(¬¬p = p))". Use E4.3 and E2.6.
E2.10. ⊢(¬¬A_o = A). [From E2.9, by E5.2.]

R∀. ⊢A_o ⇒ ⊢∀x_β.A. - Proof:
1. ⊢(A = ⊤) [from the ass. and E2.8]
2. ⊢((λx.A) = (λx.⊤)) [1/(E1)]
3. ⊢∀x.A [Df. ∀: 2]

R&. (⊢A and ⊢B) ⇒ ⊢(A & B). - Proof:
1. ⊢(A = ⊤) [ass. and E2.8]
2. ⊢((⊤ & B) = B) [E2.4]
3. ⊢((A & B) = B) [1/2]
4. ⊢(B = ⊤) [ass. and E2.8]
5. ⊢((A & B) = ⊤) [4/3]
6. ⊢(A & B) [E2.8/5]

R⊤⊥. If A, B ∈ Cat(o), p ∈ Var(o), ⊢[A]p_⊤, and ⊢[A]p_⊥, then ⊢[A]p_B.
Proof: In (E2*), let F be "(λp.A)". Use the assumptions, R&, and E5.2.
R⊃. (⊢(A_o ⊃ B_o) and ⊢A) ⇒ ⊢B. - Proof:
1. ⊢(A = ⊤) [ass. and E2.8]
2. ⊢(⊥ = (A & ¬B)) [ass. and Df. ⊃]
3. ⊢(⊥ = (⊤ & ¬B)) [1/2]
4. ⊢(⊥ = ¬B) [E2.4/3]
5. ⊢(¬⊥ = ¬¬B) [4/(E1)]
6. ⊢(⊤ = B) [E2.10, E4.3/5]
7. ⊢B [6]
Proofs from (E3)

E3.1. ⊢((A_α = B_α) ⊃ (B = A)). - Proof: In (E3*), let F be "(λx_α(B = x))" where A and B are free from x. After λ-conversions we have:
⊢((A = B) ⊃ ((B = A) = (B = B))).
Complete by using E4.1 and E2.8.

E3.2. ⊢(⊤ ⊃ ⊤). - Proof: In (E3*), let A and B be ⊤, and let F be "(λp_o.⊤)". We have (by λ-conversions) that:
1. ⊢((⊤ = ⊤) ⊃ (⊤ = ⊤))
2. ⊢(⊤ ⊃ ⊤) [E4.1/1]
Putting ⊥ for A, we get analogously:
E3.3. ⊢(⊥ ⊃ ⊤). [Cf. E2.5.]
E3.4. ⊢(A_o ⊃ ⊤). (Verum ex quodlibet.) - From E3.2 and E3.3, by R⊤⊥.
E3.5. ⊢(⊥ ⊃ ⊥). - Proof: In (E3*), let F be "(λp_o.¬p)", and let A and B be ⊥ and ⊤, respectively. Use E2.5.
E3.6. ⊢(⊥ ⊃ A_o). (Ex falso quodlibet.) - From E3.3 and E3.5, by R⊤⊥.
E3.7. ⊢(A_o ⊃ A). - From E3.2 and E3.5, by R⊤⊥.
Note that by Df. ⊃,
(⊤ ⊃ ⊥) = ¬(⊤ & ¬⊥).
By E4.3, E2.1, and E2.5, this reduces to:
(a) ((⊤ ⊃ ⊥) = ⊥).
On the other hand, E3.5 and E2.8 yield:
(b) ((⊥ ⊃ ⊥) = ⊤).
Similarly, E3.2 and E2.8 yield:
(c) ((⊤ ⊃ ⊤) = ⊤).
From (a) and (b) we get by R⊤⊥ that:
E3.8. ⊢((A_o ⊃ ⊥) = ¬A).
We get analogously from (a) and (c) that:
E3.9. ⊢((⊤ ⊃ A_o) = A).
It follows from E3.8 that
⊢(¬(A_o ⊃ ¬⊤) = ¬¬A).
Using that (A ⊃ ¬⊤) =df ¬(A & ⊤), this means that:
E3.10. ⊢((A_o & ⊤) = A).
The Propositional Calculus (PC)

PC1. ⊢(((A_o ⊃ B_o) & (B ⊃ A)) ⊃ (A = B)). - Proof:
1. ⊢(((A ⊃ ⊤) & (⊤ ⊃ A)) ⊃ (A = ⊤));
here (A ⊃ ⊤), (⊤ ⊃ A), and (A = ⊤) reduce to ⊤, A, and A, respectively [E3.4, E3.9, E2.8], so line 1 reduces to ((⊤ & A) ⊃ A) and further to (A ⊃ A) [E2.4, E3.7].
2. ⊢(((A ⊃ ⊥) & (⊥ ⊃ A)) ⊃ (⊥ = A));
here (A ⊃ ⊥), (⊥ ⊃ A), and (⊥ = A) reduce to ¬A, ⊤, and ¬A, respectively, so line 2 reduces to (¬A ⊃ ¬A) [E3.10, (E1)].
3. ⊢((⊥ = A) ⊃ (A = ⊥)) [E3.1]
4. ⊢(((A ⊃ ⊥) & (⊥ ⊃ A)) ⊃ (A = ⊥)) [2/3]
5. ⊢(((A ⊃ B) & (B ⊃ A)) ⊃ (A = B)) [R⊤⊥: 1, 4]

PC2. ⊢((A_o = B_o) = (B = A)). - Proof:
1. ⊢([((A = B) ⊃ (B = A)) & ((B = A) ⊃ (A = B))] ⊃ [(A = B) = (B = A)]) [by PC1]
2. ⊢((A = B) ⊃ (B = A)) [E3.1]
3. ⊢((B = A) ⊃ (A = B)) [E3.1]
Now use R& and R⊃.

Consider the proof of PC1. In line 1, the main '⊃' can be replaced by '='. (Why?) In line 2, '(⊥ = A)' can be replaced by '(A = ⊥)' (according to PC2). From these, one gets by R⊤⊥ that:
PC3. ⊢(((A_o ⊃ B_o) & (B ⊃ A)) = (A = B)).

PC4. ⊢((A = B) ⊃ (A ⊃ B)). - Proof:
1. ⊢((⊤ = B) ⊃ (⊤ ⊃ B)); both members reduce to B [E2.8, E3.9].
2. ⊢((⊥ = B) ⊃ (⊥ ⊃ B)); the members reduce to ¬B and ⊤, respectively [E3.6, E3.4].
Complete by using R⊤⊥.

The following laws can be proved analogously: substitute ⊤ and ⊥, respectively, for A, and use R⊤⊥.
PC5. ⊢(A_o ⊃ (B_o ⊃ A)).
PC6. ⊢((A_o ⊃ (B_o ⊃ C_o)) = ((A ⊃ B) ⊃ (A ⊃ C))).
PC7. ⊢((¬B_o ⊃ ¬A_o) = (A ⊃ B)).
By PC3 (and R⊃), the two latter laws can be weakened as follows:
PC8. ⊢((A_o ⊃ (B_o ⊃ C_o)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C))).
PC9. ⊢((¬B_o ⊃ ¬A_o) ⊃ (A ⊃ B)).
Knowing that PC5, PC8, PC9 and R⊃ are sufficient for the foundation of PC, we have that EC contains PC. Note that PC3 and PC4 assure that the identity symbol '=' acts, between sentences, as the biconditional. In what follows, we shall refer by 'PC' to the laws of the classical propositional calculus.
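Since '=' between sentences acts as the biconditional and ⊃ is defined truth-functionally, the PC laws above can be checked by brute force over the two truth values. A quick Python verification of PC5, PC8, and PC9, computing ⊃ from its definition ¬(A & ¬B):

```python
from itertools import product

def imp(a, b):                 # (A ⊃ B) =df ¬(A & ¬B)
    return int(not (a and not b))

def pc5(a, b):    return imp(a, imp(b, a))
def pc8(a, b, c): return imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))
def pc9(a, b):    return imp(imp(1 - b, 1 - a), imp(a, b))   # ¬X coded as 1 - X

assert all(pc5(a, b) for a, b in product((0, 1), repeat=2))
assert all(pc8(a, b, c) for a, b, c in product((0, 1), repeat=3))
assert all(pc9(a, b) for a, b in product((0, 1), repeat=2))
```

That these three schemata (with R⊃) suffice to axiomatize classical PC is the foundation the text appeals to; the check above only confirms that they are tautologies.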
Laws of Quantification (QC)

QC1. ⊢∀p_o(∀x_α(p ⊃ A_o) = (p ⊃ ∀x.A)). - Proof: Show that
∀x(p ⊃ A) = (p ⊃ ∀x.A)
is provable with ⊤ and ⊥ instead of p. Then use (E2*). - COROLLARY:
QC2. ⊢(∀x_α(C_o ⊃ A_o) ⊃ (C ⊃ ∀x.A)), provided C is free from x.

Using that ⊢(A ⊃ A) [PC] and using R∀, we get from QC2:
QC3. ⊢(A_o ⊃ ∀x_α.A), provided A is free from x.

QC4. ⊢(∀x_α A_o ⊃ [A]x_B). - Proof:
1. ⊢(((λx.A) = (λx.⊤)) ⊃ ((λf_oα.f(B))(λx.A) = (λf.f(B))(λx.⊤))) [by (E3*)]
[Note that "(λf_oα.f(B))" ∈ Cat(o(oα)).]
2. ⊢(((λx.A) = (λx.⊤)) ⊃ ((λx.A)(B) = (λx.⊤)(B))) [λ/1]
3. ⊢(∀x.A ⊃ ([A]x_B = ⊤)) [Df. ∀ and λ/2]
4. ⊢(∀x.A ⊃ [A]x_B) [E2.8/3]

QC5. ⊢(∀x_α(A_o ⊃ B_o) ⊃ (∀x.A ⊃ ∀x.B)). - Proof:
1. ⊢(∀x(A ⊃ B) ⊃ (A ⊃ B)) [QC4]
2. ⊢(∀x.A ⊃ A) [QC4]
3. ⊢((∀x(A ⊃ B) & ∀x.A) ⊃ (A & (A ⊃ B))) [PC: 1,2]
4. ⊢((A & (A ⊃ B)) ⊃ B) [PC]
5. ⊢((∀x(A ⊃ B) & ∀x.A) ⊃ B) [PC: 3,4]
6. ⊢∀x[((∀x(A ⊃ B) & ∀x.A) ⊃ B)] [R∀: 5]
7. ⊢((∀x(A ⊃ B) & ∀x.A) ⊃ ∀x.B) [QC2, 6]
8. ⊢(∀x(A ⊃ B) ⊃ (∀x.A ⊃ ∀x.B)) [PC: 7]

The laws QC4, QC5, QC3, (E1), (E3*), and R∀ - together with the laws of PC - are sufficient for the foundation of the first-order Quantification Calculus QC. Hence, all laws of QC are provable in EC, with quantifiable variables of any type (in contrast to QC where the quantifiable variables are restricted to type ι).

QC6. If the variable y_β does not occur in the term A_α then ⊢((λx_β A) = (λy_β [A]x_y)). - Proof: Let A″ be [A]x_y, and note that - owing to our proviso - [A″]y_z is the same as [A]x_z.
1. ⊢((λy.A″)(z_β) = [A″]y_z) [(E5), with suitable z]
2. ⊢((λy.A″)(z) = [A]x_z) [from 1, for [A″]y_z = [A]x_z]
3. ⊢((λx.A)(z) = [A]x_z) [(E5)]
4. ⊢((λx.A)(z) = (λy.A″)(z)) [2/3]
5. ⊢∀z[(λx.A)(z) = (λy.A″)(z)] [R∀: 4]
6. ⊢((λx.A) = (λy.A″)) [(E4*)/5]
This is the law of re-naming bound variables.

Finally, we prove a generalization of (E3) which will be useful in the next Section:
(E3+) ⊢((A_β = B_β) ⊃ (C_αβ(A) = C(B))). - Proof: In (E3*), let F be "(λx_β(C(x) = C(B)))" (which belongs to Cat(oβ)). After λ-conversions we have:
⊢((A = B) ⊃ ([C(A) = C(B)] = [C(B) = C(B)]))
By E4.1 and E2.8, we get the required result.
1.2.3. EC-consistent and EC-complete sets

DEFINITION. A sentence A is said to be a syntactic consequence of the set of sentences Γ (or deducible from Γ) - in symbols: "Γ ⊢ A" - iff
Γ is empty and ⊢A, or
Γ is nonempty and there exists a conjunction K (perhaps a one-member one) of some members of Γ such that ⊢(K ⊃ A).
[Prove that ⊢A ⇒ Γ ⊢ A, for all Γ. - Prove that Γ ∪ {C_o} ⊢ A iff Γ ⊢ (C ⊃ A) - this is the so-called Deduction Theorem.]

DEFINITION. A set Γ is said to be EC-inconsistent iff Γ ⊢ ⊥; and Γ is said to be EC-consistent iff Γ is not EC-inconsistent.
[Prove that (Γ is EC-inconsistent) ⇒ (for all sentences A, Γ ⊢ A) ⇒ (for some sentence A, Γ ⊢ A and Γ ⊢ ¬A). - Prove that if Γ is EC-consistent and Γ ⊢ A then Γ ∪ {A} is EC-consistent as well.]

DEFINITION. A set Γ is said to be EC-complete iff
(i) Γ is EC-consistent;
(ii) Γ is ∃-complete (existentially complete) in the sense that whenever "∃x_α A" ∈ Γ then for some variable y_α, "[A]x_y" ∈ Γ;
(iii) Γ is maximal in the sense that if A_o ∉ Γ then Γ ∪ {A} is EC-inconsistent.

THEOREM. If the members of Γ are free from the variable y_α, x_α does not occur in the sentence [A]x_y, and Γ ⊢ [A]x_y, then Γ ⊢ ∀x.A.
Proof. By the assumption, ⊢(K ⊃ [A]x_y) where K is a conjunction of some members of Γ, and K is free from y. Then, by R∀, QC2, and R⊃, ⊢(K ⊃ ∀y.[A]x_y). We get by QC6 that ⊢(K ⊃ ∀x.A), that is, Γ ⊢ ∀x.A.

THEOREM. Every EC-consistent set of sentences is embeddable into an EC-complete set. More exactly: if Γ₀ is an EC-consistent set of sentences of a language L₀⟨E⟩ then there exists a language L⟨E⟩ and an EC-complete set Γ of sentences of L⟨E⟩ such that Γ₀ ⊆ Γ.
Proof. For each α ∈ EXTY, let Var′(α) be a sequence of new variables, and let L⟨E⟩ be the enlargement of L₀⟨E⟩ containing these new variables. Let (E_n), n ∈ ω, be an enumeration of all sentences of form "∃x_α A" of the extended language L⟨E⟩. Starting with the given set Γ₀, let us define the sequence of sets (Γ_n), n ∈ ω, by the schema:
Γ_{n+1} = Γ_n if Γ_n ∪ {E_n} is EC-inconsistent; and otherwise
Γ_{n+1} = Γ_n ∪ {E_n, E_n′}
where, if E_n is "∃x_α A", then E_n′ is [A]x_y, y being the first member of Var′(α) occurring neither in the members of Γ_n nor in E_n.

LEMMA. If Γ_n is EC-consistent then so is Γ_{n+1}.
Proof. This is obvious in case Γ_{n+1} = Γ_n. In the other case, Γ_n ∪ {∃x.A} is assumed to be EC-consistent. Now assume, indirectly, that Γ_n ∪ {∃x.A, [A]x_y} is EC-inconsistent, i.e., that
Γ_n ∪ {∃x.A} ⊢ ¬[A]x_y.
Using that Γ_n ∪ {∃x.A} is free from y, we get by the preceding Th. that
Γ_n ∪ {∃x.A} ⊢ ∀x.¬A.
Since "∃x.A" is "¬∀x.¬A", we have that Γ_n ∪ {∃x.A} is EC-inconsistent, which contradicts the assumption.

Continuing the proof of the theorem, let us define
Γ_ω = ∪{Γ_n : n ∈ ω}.
Show that Γ_ω is EC-consistent (using that in the contrary case, for some n, Γ_n would be EC-inconsistent) and ∃-complete.
Now let (C_n), n ∈ ω, be an enumeration of all sentences of L⟨E⟩. Let us define the sequence of sets (Γ′_n), n ∈ ω, by the following recursion: Γ′₀ = Γ_ω;
Γ′_{n+1} = Γ′_n if Γ′_n ∪ {C_n} is EC-inconsistent, and in the contrary case:
Γ′_{n+1} = Γ′_n ∪ {C_n}.
Obviously, for all n, Γ′_n is EC-consistent and ∃-complete. Consequently, the same holds for Γ = ∪{Γ′_n : n ∈ ω}.
Furthermore, Γ₀ ⊆ Γ. Finally, show that Γ is maximal (use that if A ∉ Γ then for some n, A is C_n, and Γ′_n ∪ {C_n} is EC-inconsistent).
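The Lindenbaum-Henkin extension used in this proof has a simple finite analogue: enumerate sentences and add each one that keeps the set consistent; the limit is maximal. The Python sketch below does this for a propositional fragment, testing consistency semantically (the tiny enumeration and all names are illustrative only):

```python
from itertools import product

def models(sentence):              # valuations (p, q) satisfying the sentence
    return {v for v in product((0, 1), repeat=2) if sentence(*v)}

def consistent(sents):             # satisfiable <=> some valuation models all
    common = set(product((0, 1), repeat=2))
    for s in sents:
        common &= models(s)
    return bool(common)

# enumeration C_0, C_1, ... of "all" sentences (a finite fragment here)
enum = [lambda p, q: p, lambda p, q: not p, lambda p, q: q,
        lambda p, q: not q, lambda p, q: p and q, lambda p, q: p or q]

gamma = [enum[0]]                  # Γ'_0, assumed consistent
for c in enum[1:]:
    if consistent(gamma + [c]):    # Γ'_{n+1} = Γ'_n ∪ {C_n} if still consistent
        gamma.append(c)

assert consistent(gamma)
# maximality on the fragment: a sentence is in Γ iff adding it is harmless
assert all(consistent(gamma + [c]) == (c in gamma) for c in enum)
```

In the actual proof, consistency is the syntactic notion Γ ⊬ ⊥ and the enumeration is infinite, but the greedy structure of the construction is the same.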
THEOREM. Assume that Γ is an EC-complete set of sentences. Then:
(i) Γ ⊢ A ⇔ A ∈ Γ.
(ii) {(A = B), C} ⊆ Γ ⇒ "C[B/A]" ∈ Γ.
(iii) If the term A_α occurs in a member of Γ then for some variable x_α, "(A = x_α)" ∈ Γ.
(iv) If "(C_αβ = D_αβ)" ∈ Γ then for all variables x_β, "(C(x) = D(x))" ∈ Γ.
Proof. Ad (i). This follows from the maximality of Γ, using that Γ ∪ {A} is EC-consistent. - Ad (ii). By (E3+), ⊢((A = B) ⊃ (C = C[B/A])). Now use PC and (i).
Ad (iii). By (i), "(A = A)" ∈ Γ. By contraposition of QC4 we have that ⊢((A = A) ⊃ ∃y_α(A = y)), and, hence, "∃y(A = y)" ∈ Γ. By the ∃-completeness of Γ, for some x_α, "(A = x_α)" ∈ Γ.
Ad (iv). By (E4*), ⊢((C = D) = ∀y_β(C(y) = D(y))) (with y such that C and D are free from y), and by QC4, ⊢(∀y(C(y) = D(y)) ⊃ (C(x) = D(x))), with arbitrary x_β. Complete by using (i).
1.2.4. The completeness of EC

THEOREM. If Γ is an EC-complete set of sentences then there is a G-interpretation Ip = ⟨U, D, ρ⟩ and a valuation v such that
(1) for all A ∈ Γ, |A|_v^Ip = 1.
Proof. - Part I: The definition of Ip and v.
We shall define D and v by induction on EXTY.
(a) For p ∈ Var(o), we define v(p) simply by
v(p) = 1 if p ∈ Γ, and v(p) = 0 otherwise.
Knowing that ⊤ and ⊥ are terms of L⟨E⟩, we have by (iii) of Th. that for some p_o and q_o,
"(⊤ = p)" ∈ Γ and "(⊥ = q)" ∈ Γ.
Since Γ ⊢ ⊤, we have that p ∈ Γ and, hence, v(p) = 1. If q ∈ Γ then, by (ii) of the Th. just quoted, ⊥ ∈ Γ, which is impossible (by the EC-consistency of Γ); hence, q ∉ Γ, and v(q) = 0. Thus, we can define:
D(o) = {0, 1}.
Assume that "(p_o = q_o)" ∈ Γ and v(p) = 1, that is, p ∈ Γ. Then q ∈ Γ, too, and v(q) = 1. Assuming that v(q) = 1, we get analogously that v(p) = 1. Hence: "(p_o = q_o)" ∈ Γ iff v(p) = v(q).
(b) Let (z_n), n ∈ ω, be an enumeration of Var(ι). Let us define the function φ by
φ(z_n) = k iff for some k ≤ n, "(z_n = z_k)" ∈ Γ, and for all i < k, "(z_n = z_i)" ∉ Γ [i, k, n ∈ ω].
(In other words: let φ(z_n) be the smallest number k such that "(z_n = z_k)" ∈ Γ. Note that "(z_n = z_n)" ∈ Γ. Then φ(z₀) = 0.) Now let U = D(ι) be the counterdomain of φ, i.e., U = D(ι) = {k ∈ ω : for some x ∈ Var(ι), φ(x) = k}.
We then define v for members of Var(ι) by v(x) =df φ(x).
[Using the symmetry and the transitivity of the identity, show that "(x_ι = y_ι)" ∈ Γ iff v(x) = v(y).]
(c) Now assume that D(α), D(β) are defined already, v is defined for the members of Var(α) ∪ Var(β), and the following two conditions hold for γ ∈ {α, β}:
(i) If c ∈ D(γ) then for some x_γ, v(x) = c.
(ii) "(x_γ = y_γ)" ∈ Γ iff v(x) = v(y).
(Note that by (a) and (b), these conditions hold for γ ∈ {o, ι}.) Now we are going to define v for the members of Var(αβ) as well as the domain D(αβ).
For f ∈ Var(αβ), a ∈ D(α), b ∈ D(β), we let:
v(f)(b) = a iff for some x_α, y_β such that v(x) = a, v(y) = b, "(f(y) = x)" ∈ Γ.
Using (i) and (ii), it is easy to prove that v(f) is a (unique) function from D(β) to D(α). - We define:
D(αβ) = {φ ∈ the set of functions from D(β) to D(α) : for some f_αβ, v(f) = φ}.
Now prove that (i) and (ii) hold for γ = αβ. (For (ii), use (iv) of Th.)
By (a), (b), and (c), the definition of D and v is completed.
(d) If C ∈ Con(α) then for some x_α, "(C = x)" ∈ Γ. We then define ρ(C) = v(x). Show that this definition is unambiguous.
By these, the definition of Ip is completed.
Part II: The proof of (1).
(A) We prove (1) firstly for identities of form "(B_α = y_α)". If B is a variable or a term of form "f(x)" or a constant, then (1) holds according to the definition of Ip and v. In the other cases, B is a compound term of form "F(C)", "(λx.C)", or "(C = D)". We shall investigate these three cases one by one. Meanwhile, we shall use the induction assumption that (1) holds true for sentences which are less compound than the one investigated.
(A1) If "(F_αβ(C_β) = y_α)" ∈ Γ then - by (iii) of Th. - for some variables f_αβ and x_β, "(F = f)" ∈ Γ and "(C = x)" ∈ Γ. Furthermore, by the EC-completeness of Γ, "(f(x) = y)" ∈ Γ. Then, by the definition of Ip and v,
(2) |(f(x) = y)|_v = 1.
We can assume that
(3) |(F = f)|_v = 1 and (4) |(C = x)|_v = 1,
for F and C are less compound terms than "F(C)". From (2), (3), and (4) it then follows that
|(F(C) = y)|_v = 1,
which was to be proved.
(The further cases will be less detailed.)
(A2) If "((λx_β C_α) = y_αβ)" ∈ Γ then - by (iv) of Th. - for all z_β, "((λx.C)(z) = y(z))" ∈ Γ, and, by λ-conversion,
(5) "([C]x_z = y(z))" ∈ Γ.
However, for all z_β there is a u_α ∈ Var(α) such that
(6) "(y(z) = u)" ∈ Γ.
By (5) and (6), we have that for all z ∈ Var(β) there is a u ∈ Var(α) such that "([C]x_z = u)" ∈ Γ. By the induction assumption, |([C]x_z = u)|_v = 1, and, furthermore, |(y(z) = u)|_v = 1. Henceforth: |([C]x_z = y(z))|_v = 1, that is,
(7) |((λx.C)(z) = y(z))|_v = 1
for all z_β. Remembering that for all b ∈ D(β) there is a z_β such that v(z) = b, we get from (7) that for all b ∈ D(β), |(λx.C)|_v(b) = v(y)(b), which means that |(λx.C)|_v = v(y), and, hence: |((λx.C) = y)|_v = 1.
(A3) If "((C_α = D_α) = p_o)" ∈ Γ then for some x_α, y_α: "(C = x)" ∈ Γ, "(D = y)" ∈ Γ, and
(8) "((x = y) = p)" ∈ Γ.
We can assume that
(9) |(C = x)|_v = 1 and (10) |(D = y)|_v = 1.
Now v(p) is 1 or 0. If v(p) = 1 then "(p = ⊤)" ∈ Γ, and, by (8), "(x = y)" ∈ Γ, that is, v(x) = v(y). This and (9) and (10) together imply that
(11) |((C = D) = p)|_v = 1.
On the other hand, if v(p) = 0 then "(p = ⊥)" ∈ Γ, and, by (8) again, "¬(x = y)" ∈ Γ, hence, v(x) ≠ v(y). This and (9) and (10) together imply (11).
(B) Secondly, we prove (1) for identities of form "(B_α = C_α)" where both B and C may be compound terms. If "(B = C)" ∈ Γ then for some x_α and y_α, "(B = x)" ∈ Γ, "(C = y)" ∈ Γ, and "(x = y)" ∈ Γ. By (A), we have that
|(B = x)|_v = 1, |(C = y)|_v = 1, and |(x = y)|_v = 1,
which yield that |(B = C)|_v = 1.
(C) Finally, if the sentence A is not an identity then A ∈ Γ implies that "(A = ⊤)" ∈ Γ. Since |⊤|_v = 1, this case reduces to the preceding one.

THEOREM. If the set Γ is EC-consistent then Γ is G-satisfiable. This follows
from the two preceding theorems.

THEOREM. (The completeness of EC with respect to the G-semantics.) If Γ ⊨_G A then Γ ⊢_EC A.
Proof. Assume that Γ ⊨_G A. Then Γ′ = Γ ∪ {¬A} is G-unsatisfiable, and, by contraposing the preceding theorem, Γ′ is EC-inconsistent, which means that Γ ∪ {¬A} ⊢ ⊥. Then, by the Deduction Theorem, we have that Γ ⊢ (¬A ⊃ ⊥), which reduces to Γ ⊢ A.

THEOREM. (LÖWENHEIM-SKOLEM.) If the set Γ is G-satisfiable then Γ is "denumerably" satisfiable in the sense that Γ has a G-model Ip = ⟨U, D, ρ⟩ with a valuation v such that each D(α) is at most denumerably infinite.
Proof. Note firstly that if Γ is G-satisfiable then Γ is EC-consistent. (For if Γ ⊢ ⊥ then ... continue!) Then, by Th., Γ is embeddable into an EC-complete set Γ′, and, by the proof of Th., Γ′ has a G-model in which the cardinality of each D(α) is not greater than the cardinality of Var(α) (which is, of course, denumerably infinite). That is, Γ′ has a "denumerable" G-model. Since Γ ⊆ Γ′, this is a G-model of Γ as well.
The sources of this chapter are the following works:
R. MONTAGUE, Universal Grammar, 1970.
R. MONTAGUE, The Proper Treatment of Quantification in Ordinary English, 1973 - briefly: PTQ.
D. GALLIN, Intensional and Higher Order Modal Logic, 1975.
We shall present the essence of the most important parts of these writings, of course, without literal repetitions. The results of Part 1 will be utilized extensively.
Montague uses the basic symbols t, e, and s - t for truth value, e for entity, s for sense - in his type theory. The type of a functor with the input type α and the output type β is denoted by "⟨α,β⟩". The full inductive definition of his types is as follows:
t and e are types.
If α, β are types, "⟨α,β⟩" is a type.
If α is a type, "⟨s,α⟩" is a type.
Here t and e correspond to our type symbols o and ι, respectively (cf. 1.1.1), and "⟨α,β⟩" corresponds to our β(α). Finally, "⟨s,α⟩" is the type of expressions naming the sense (or the intension) of an expression of type α. It is presupposed here that there exist terms naming intensions (senses) of terms. (For example, if A is a sentence, the term "that A" is a name of the sense (intension) of A; or if B is an individual name - say, 'the Pope' - then "the concept of B" is a name of the sense (intension) of B.) Note that the isolated 's' is not a type symbol.
However, we shall not use Montague's original notation for types. Instead, we shall follow our notation introduced in Section 1.1.1, of course, with suitable enlargements. Thus, our inductive definition of the set of Montagovian types - denoted by 'TYPM' - runs as follows:
o, ι ∈ TYPM;
α, β ∈ TYPM ⇒ "α(β)" ∈ TYPM;
β ∈ TYPM ⇒ "(β)s" ∈ TYPM.
Again, if β consists of a single character (o or ι), the parentheses surrounding it will be omitted. Furthermore, we usually write "βˢ" instead of "(β)s" [except when it occurs in a subscript], and instead of "(βˢ)ˢ" we write simply "βˢˢ" [e.g., oˢ, ιˢˢ, (oι)ˢˢˢ]. The unrestricted iterations and multiply embedded occurrences of 's' may provoke some philosophical criticism, but let us put aside this problem presently.
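The inductive clauses for TYPM translate directly into a generator of type strings. A small Python sketch (writing 'i' for ι; the string format is an illustrative choice, not the book's official notation):

```python
def types_up_to(depth):
    """Generate Montagovian type strings by the inductive definition of TYPM."""
    typs = {'o', 'i'}                                      # o, ι ∈ TYPM
    for _ in range(depth):
        new = set(typs)
        new |= {f'{a}({b})' for a in typs for b in typs}   # α, β ⇒ α(β)
        new |= {f'({b})s' for b in typs}                   # β ⇒ (β)s
        typs = new
    return typs

t = types_up_to(2)
assert 'o' in t and 'i' in t
assert 'o(i)' in t            # corresponds to Montague's <e,t>
assert '(o)s' in t            # corresponds to <s,t>
assert 'o((i)s)' in t         # corresponds to <<s,e>,t>
```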
The semantical system IL is introduced in Universal Grammar. In PTQ, system IL is extended by the introduction of tense operators; this extended system will be called IL+.

DEFINITION. By an IL language let us mean a quadruple
L⟨ι⟩ = ⟨Log, Var, Con, Cat⟩
where
Log = { (, ), λ, =, ∧, ∨ }
is the set of logical symbols of the language (containing the left and right parentheses, the lambda operator λ, the symbol of identity, the intensor ∧, and the extensor ∨);
Var = ∪{Var(α) : α ∈ TYPM}
is the set of variables of the language where each Var(α) is a denumerably infinite set of symbols called variables of type α;
Con = ∪{Con(α) : α ∈ TYPM}
is the set of (nonlogical) constants of the language where each Con(α) is a denumerable (perhaps empty) set of symbols called constants of type α;
all the sets mentioned up to this point are pairwise disjoint;
Cat = ∪{Cat(α) : α ∈ TYPM}
is the set of the well-formed expressions - briefly: terms - of the language where the sets Cat(α) are inductively defined by the grammatical rules (G0) to (G5) below.
For α ∈ TYPM, Cat(α) may be called the α-category of L⟨ι⟩.
[The notational conventions will be the same as in 1.1.2.]

Grammatical rules:
(G0) Var(α) ∪ Con(α) ⊆ Cat(α).
(G1) "A_αβ(B_β)" ∈ Cat(α).
(G2) "(λx_β A_α)" ∈ Cat(αβ).
(G3) "(A_α = B_α)" ∈ Cat(o).
(G4) "∧A_α" ∈ Cat(αˢ).
(G5) "∨A_αˢ" ∈ Cat(α).

Let us enlarge the set of logical symbols Log by the symbols 'P' and 'F' (called past and future tense operators, respectively) and let us add the rule (G6) to the grammar:
(G6) "PA_o", "FA_o" ∈ Cat(o).
By these enlargements, we get the grammar of the IL+ languages. [In PTQ, some symbols introduced via definitions in Universal Grammar are treated as primitive ones; but we do not follow this policy here.]
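Rules (G0)-(G5) amount to a type discipline on terms, which can be sketched as smart constructors that refuse ill-typed combinations. The encoding below is an illustrative assumption, not the book's: a functor type αβ is a pair (output, input), and a sense type αˢ is (α, 's'):

```python
def Var(name, typ):  return ('var', name, typ)

def App(f, arg):     # (G1): A_{αβ}(B_β) ∈ Cat(α)
    ftyp, atyp = f[-1], arg[-1]
    assert ftyp[1] == atyp, "argument type must match the functor's input type"
    return ('app', f, arg, ftyp[0])

def Lam(x, body):    # (G2): (λx_β A_α) ∈ Cat(αβ)
    return ('lam', x, body, (body[-1], x[-1]))

def Eq(a, b):        # (G3): (A_α = B_α) ∈ Cat(o)
    assert a[-1] == b[-1], "identity requires terms of the same type"
    return ('eq', a, b, 'o')

def Up(a):           # (G4): ∧A_α ∈ Cat(αs)
    return ('up', a, (a[-1], 's'))

def Down(a):         # (G5): ∨A_{αs} ∈ Cat(α)
    assert a[-1][1] == 's', "extensor applies only to sense-typed terms"
    return ('down', a, a[-1][0])

x = Var('x', 'i')
f = Var('f', ('o', 'i'))          # type o(ι): individuals to truth values
assert App(f, x)[-1] == 'o'
assert Lam(x, App(f, x))[-1] == ('o', 'i')
assert Down(Up(x))[-1] == 'i'     # ∨∧x has the type of x again
```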
Remark. Montague speaks of a single language of IL, and, hence, he prescribes that each Con(α) must be (denumerably) infinite. We follow the policy of Part 1 in dealing with a family of languages the members of which may differ from each other in having different sets of nonlogical constants.

DEFINITION. (i) Free and bound occurrences of a variable in a term as well
as closed and open terms are distinguished exactly as in EL (cf. (i), (ii), and (iii) of Def.).
(ii) The set of rigid terms of L⟨ι⟩ - denoted by 'RGD' - is defined by the following induction:
(a) Var ⊆ RGD; "∧A_α" ∈ RGD.
(b) F_αβ, B_β ∈ RGD ⇒ "F(B)" ∈ RGD.
(c) A ∈ RGD ⇒ "(λx_β A)" ∈ RGD.
(d) A, B ∈ RGD ⇒ "(A = B)" ∈ RGD.
(e) In case of IL+: A_o ∈ RGD ⇒ "P(A)", "F(A)" ∈ RGD.
In other words: rigid terms are composed of variables and terms of form "∧A" via applications of the grammatical rules (G1), (G2), and (G3). A motivation of the adjective 'rigid' will be given in the next section.
(iii) A variable x_α is said to be substitutable by the term B_α in the term A iff
(a) whenever "(λy.C)" is a part of A involving some occurrence of x which counts as a free occurrence of x in A, then B is free from y; and
(b) if a free occurrence of x in A lies in a part of form "∧C" (or - in case of IL+ - "P(C)" or "F(C)") of A, then B is a rigid term.
(iv) The result of substituting B_α for x_α in A - in symbols: "[A]x_B" - and the replacement of A by B in a term C - denoted by "C[B/A]" - are defined exactly as in EL (cf. (v) and (vi) of Def.).
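Clauses (a)-(e) of the definition of RGD are a straightforward structural recursion. A Python sketch over an illustrative tagged-tuple encoding of terms (the encoding is assumed, not the book's):

```python
def rigid(term):
    """Membership test for RGD, following clauses (a)-(e)."""
    tag = term[0]
    if tag in ('var', 'up'):      # (a): variables and ∧A are rigid outright
        return True
    if tag == 'app':              # (b): F(B) rigid when F and B are
        return rigid(term[1]) and rigid(term[2])
    if tag == 'lam':              # (c): (λx.A) rigid when A is
        return rigid(term[2])
    if tag == 'eq':               # (d): (A = B) rigid when A and B are
        return rigid(term[1]) and rigid(term[2])
    if tag in ('P', 'F'):         # (e), IL+ only: P(A), F(A) rigid when A is
        return rigid(term[1])
    return False                  # constants and ∨A are not rigid in general

assert rigid(('var', 'x'))
assert rigid(('up', ('con', 'c')))          # ∧c is rigid even though c is not
assert not rigid(('con', 'c'))
assert not rigid(('app', ('var', 'f'), ('con', 'c')))
assert rigid(('lam', ('var', 'x'), ('eq', ('var', 'x'), ('up', ('con', 'c')))))
```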
2.1.3. THE SEMANTICS OF IL AND IL+

DEFINITION. (a) By an interpretation of an IL language L⟨ι⟩ let us mean a quadruple
Ip = ⟨U, W, D, σ⟩
where U and W are nonempty sets; D is a function defined on TYPM such that
(1) D(o) = {0, 1}, D(ι) = U, D(αβ) = the set of functions from D(β) to D(α), and D(αˢ) = the set of functions from W to D(α);
and σ is a function defined on Con such that
(2) C ∈ Con(α) ⇒ σ(C) ∈ Int(α) =df the set of functions from W to D(α).
(b) By an interpretation of an IL+ language we mean a sextuple
Ip = ⟨U, W, T, <, D, σ⟩
where U, W, and T are nonempty sets, < is a linear ordering of T, D is the same as in (1) except that
D(αˢ) = the set of functions from I to D(α), where I = W×T,
and σ is as in (2) except that Int(α) = the set of functions from I to D(α).
(c) A function v defined on Var is said to be a valuation joining to Ip iff
x ∈ Var(α) ⇒ v(x) ∈ D(α).
The notation "v[x:a]" will be used analogously as in EL (cf. Def.).
Comments. W is said to be the set of (labels of) possible worlds, T is the set of possible time moments, and < represents the 'earlier than' relation between time moments. I = W×T is said to be the set of indices. For α ∈ TYPM, D(α) is the set of factual values and Int(α) is the set of intensions, of type α, respectively.

DEFINITION. Given an interpretation Ip of an IL or an IL+ language L⟨ι⟩, we
shall define, for all terms A ∈ Cat and for all valuations v joining to Ip, the intension of A according to Ip and v - denoted by "‖A‖_v^Ip" - by the semantic rules (S0) to (S6) below. According to our definition, if A ∈ Cat(α) then
‖A‖_v^Ip will be a function from I to D(α),
where I = W in the case of IL, and I = W×T in the case of IL+. Hence, ‖A‖_v^Ip is defined iff for all i ∈ I,
|A|_{v,i}^Ip =df ‖A‖_v^Ip(i) ∈ D(α)
is defined. We shall exploit this fact in our definition. The object |A|_{v,i}^Ip may be called the factual value of A, according to Ip and v, at the index i. - In what follows, the superscript 'Ip' will be, in most cases, omitted.

(S0) If x ∈ Var, |x|_{v,i} = v(x). If C ∈ Con, ‖C‖_v = σ(C).
(S1) |F_αβ(B_β)|_{v,i} = |F|_{v,i}(|B|_{v,i}).
(S2) |(λx_β A_α)|_{v,i} is the function φ ∈ D(αβ) such that
b ∈ D(β) ⇒ φ(b) = |A|_{v[x:b],i}.
(S3) |(A_α = B_α)|_{v,i} = 1 if |A|_{v,i} = |B|_{v,i}, and 0 otherwise.
(S4) |∧A|_{v,i} = ‖A‖_v.
(S5) |∨A_αˢ|_{v,i} = |A|_{v,i}(i). [Note that if |A|_{v,i} is a function from I to D(α), then |A|_{v,i}(i) is defined.]
(S6) [Only for IL+.]
|PA|_{v,(w,t)} = 1 if for some t′ < t, |A|_{v,(w,t′)} = 1, and 0 otherwise;
|FA|_{v,(w,t)} = 1 if for some t′ > t, |A|_{v,(w,t′)} = 1, and 0 otherwise.

LEMMA. (A) The intensions ‖A‖_v^Ip and the factual values |A|_{v,i}^Ip are uniquely determined by the rules (S0) to (S6), and, if A ∈ Cat(α) then ‖A‖_v^Ip ∈ Int(α) and (for all i ∈ I) |A|_{v,i}^Ip ∈ D(α). - (B) If A ∈ RGD then ‖A‖_v^Ip is a constant function on I, that is, for all i, j ∈ I, |A|_{v,i} = |A|_{v,j}. (This fact motivates the name rigid terms.)
Proof: by structural induction, using the semantic rules (S0) to (S6) in case (A), and using the conditions (a) to (e) of RGD (see (ii) of Def.) in case (B).

DEFINITION. Let Γ be a set of sentences (Γ ⊆ Cat(o)), Ip an interpretation, v a
valuationjoining to Ip, and i E I an index.
(i) We say that the triple (Ip, v, i) is a model of r iff for all A E I' , IAIv/P = 1.
(ii) r is said to be satisfiable [unsatisfiable] iff rhas a model [has no model].
(iii) The notions semantic consequence, validity, and logical synonymity are
definedliterallyas in EL (cf. (iii), (iv), and(v) ofDef. LEMMA. If FV(A) is the set of variables having some free occurrences in A,
and the valuationsv and v / coincideon FV(A) then II AII v =II AII v' .
Proof. Similarly as in Lemma (put IAl
instead of IAl
taking into consideration that FV("A), FV(vA), FV(P(A and FV(F(A are the same as
FV(A). LEMMA. Givenany interpretation, if for all valuations v, IIAall
for all valuations v, II CII v = II C[BIAJII v
Proof' similarlyas in Lemma1.1.4.2. - Acorollary: THEOREM. The law of replacement.
If (A
= and Co then C[BIA. THEOREM. The law ofdeleting VI\. (VI\A = A ).
Proof. By (54),
II\Alvi =IIA II v '
II\Alv;(i) = IIAII v(i) = IAl

= II\Al
(i) = IAl
'\ '\
by (S5) by (3)
Our theoremfollowsfromthis fact.
Note that "(I\VA
=A)"is not valid. For "l\vAas" is a rigid term, and, hence,
II "vAII \I E Int(as) is a constant function on I whereas II AII \I E Int(d) might be a non-
constant functionon I.
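The interplay of ^ and ∨ in rules (S4) and (S5), and the asymmetry noted above, can be illustrated by a small computational sketch (a toy model over a two-element index set; the names and encodings are illustrative, not the book's notation):

```python
# Factual values |A|_i are functions of the index i; the intension ||A||
# collects them into one function on I (the basis of rule (S4)).
I = ["i1", "i2"]                        # assumed two-element index set

def intension(fact_val):                # "^A": the function i |-> |A|_i
    return {i: fact_val(i) for i in I}

A = lambda i: 1 if i == "i1" else 0     # a non-rigid sentence: true at i1 only
up_A = intension(A)

def down(iA, i):                        # (S5): |vB|_i = |B|_i(i)
    return iA[i]

# Law of deleting v^ : |v^A|_i = |A|_i at every index.
assert all(down(up_A, i) == A(i) for i in I)

# "^vB = B" fails: ^vB is constant on I even when B is not rigid.
B = {"i1": up_A, "i2": {"i1": 0, "i2": 0}}      # a non-rigid term of type "os"
up_down_B = intension(lambda i: down(B[i], i))  # factual values of vB
const_intension = {i: up_down_B for i in I}     # "^vB": same value everywhere
assert const_intension["i1"] == const_intension["i2"]
assert B["i1"] != B["i2"]                       # but B itself varies with i
```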
LEMMA. If x_β is substitutable by B_β in the term A_α then for all valuations v:
  |B|v,i = b ∈ D(β) ⇒ |A[x:B]|v,i = |A|v[x:b],i.
Proof. The proof method is the same as in EL (see the corresponding Lemma). The new cases not occurring in EL are as follows:
Case (G4): A is of form "^C". Then (assuming that x has some free occurrences in C) B must be a rigid term, that is,
(4) for all j ∈ I, |B|v,j = b.
Our induction assumption (writing C′ for "C[x:B]"):
  for all j ∈ I, |C′|v,j = |C|v[x:b],j,
that is,
  ‖C′‖v = ‖C‖v[x:b],
whence, by (S4), |^C′|v,i = |^C|v[x:b],i.
Case (G5): If A is of form "∨C" then use the induction assumption that
  |C′|v,i = |C|v[x:b],i,
which implies that for all j ∈ I:
  |C′|v,i(j) = |C|v[x:b],i(j).
Hence,
  |∨C′|v,i = |C′|v,i(i) = |C|v[x:b],i(i) = |∨C|v[x:b],i.
Case (G6): A is of form "P(C)" or "F(C)". As in Case (G4), B must be a rigid term, i.e., (4) holds. Thus, we can use the induction assumption:
  for all t ∈ T, |C′|v,(w,t) = |C|v[x:b],(w,t).
Then, we get by (S6) that
  |P(C′)|v,(w,t) = |P(C)|v[x:b],(w,t);
and similarly for F.

THEOREM. The law of lambda conversion. If x is substitutable by B in A then
  ⊨ ((λx_β A_α)(B_β) = A[x:B]).
Proof: analogously as in the corresponding Th. of EL.
Logical symbols introduced via definitions

We adopt from EL the definitions of the symbols
  ⊤, ⊥, ¬, ∀, ∃, &, ∨, ⊃
without any alteration. (Cf. Section 1.1.5.) Note that ⊤ and ⊥ are rigid sentences.
As new logical symbols, we introduce the signs of necessity □ and possibility ◇, by the following contextual definitions:
(Df. □)  □A₀ =df (^A = ^⊤),
(Df. ◇)  ◇A₀ =df ¬□¬A.
One sees immediately that "□A" and "◇A" are rigid sentences.
[Interestingly enough, a correct explicit definition of '□' is impossible.]
The truth conditions of "□A" and "◇A" are obvious:
  |□A|v,i = 1 iff for all j ∈ I, |A|v,j = 1;
  |◇A|v,i = 1 iff for some j ∈ I, |A|v,j = 1.
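These truth conditions, and the way Df. □ reduces necessity to an identity of intensions, can be checked in a finite sketch (illustrative encoding; intensions are dicts over an assumed four-element index set):

```python
# Sketch, not the book's formalism: a sentence is given by its factual
# values over I; [] compares its intension with that of the rigid truth T.
I = range(4)
A = {0: 1, 1: 1, 2: 0, 3: 1}               # |A|_i
TOP = {i: 1 for i in I}                    # the rigid sentence T

def box(val):        # Df. []: []A is true iff ^A = ^T, i.e. A holds at all j
    return {i: int(val == TOP) for i in I}

def diamond(val):    # Df. <>: <>A = ~[]~A, i.e. A holds at some j
    neg = {i: 1 - val[i] for i in I}
    return {i: 1 - box(neg)[i] for i in I}

assert box(A) == {i: 0 for i in I}         # A fails at index 2, so []A is false
assert diamond(A) == {i: 1 for i in I}     # A holds somewhere, so <>A is true
assert box(TOP) == TOP                     # []T = T
```

Note that `box` and `diamond` return index-independent (constant) values, mirroring the observation that "□A" and "◇A" are rigid.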
THEOREM. If A_αs and B_αs are rigid terms then
  ⊨ (□(∨A = ∨B) = (A = B)).
Proof. Our assumptions on rigidity mean that for all i ∈ I, |A|v,i = φ ∈ D(αs), and for all i ∈ I, |B|v,i = ψ ∈ D(αs). Then for all j ∈ I, |∨A|v,j = φ(j), and |∨B|v,j = ψ(j). Hence,
  |□(∨A = ∨B)|v,i = 1 iff for all j ∈ I, φ(j) = ψ(j),
i.e., iff φ = ψ, i.e., iff |(A = B)|v,i = 1.

THEOREM. ⊨ (□(A₀ = B₀) = (^A = ^B)).
Proof. Replace "^A" and "^B" for A and B, respectively, in the preceding Th. (noting that "^A" and "^B" are rigid terms), and delete ∨^ at the left side of the main '=' (by the law of deleting ∨^).
The theorems and the lemmas of this section hold in both systems IL and IL+. In the next section 2.1.4 and in chapter 2.2, we shall deal with IL only.

Having the standard semantics of IL, we are going to introduce the generalized semantics of IL - briefly: the GI-semantics. The calculus IC to be introduced in 2.2 will be proved both sound and complete with respect to this GI-semantics. (The enlarged system IL+ will not be touched upon here.)
Our method will be the same as in 1.1.6 at the definition of the G-semantics of system EL. Hence, we can proceed very concisely.

DEFINITION. By a generalized interpretation - briefly: a GI-interpretation - of a language L(i) of IL we mean a quadruple Ip = (U, W, D, σ) satisfying the following conditions:
(i) U and W are nonempty sets.
(ii) D is a function defined on TYPM such that
  D(o) = {0, 1},
  D(ι) = U,
  D(αβ) ⊆ D(α)^D(β),
  D(αs) ⊆ D(α)^W.
(iii) σ is a function defined on Con such that
  C ∈ Con(α) ⇒ σ(C) ∈ Int(α) =df D(α)^W.
(iv) Whenever v is a valuation joining to Ip (satisfying the condition v(x_α) ∈ D(α)), the semantic rules (S0) to (S5) in Def. 2.1.3.2 are applicable in determining the intensions (according to Ip and v) of the terms of L(i).

DEFINITION. Let Γ be a set of sentences of L(i), Ip a GI-interpretation of L(i), v a valuation joining to Ip, and w ∈ W. The triple (Ip, v, w) is said to be a GI-model of Γ iff for all A ∈ Γ, |A|v,w^Ip = 1. The notions GI-satisfiability (GI-unsatisfiability), GI-consequence (Γ ⊨GI A), GI-validity (⊨GI A), and GI-synonymity are defined in the usual way.
Also, the variant of the corresponding Th. holds:
THEOREM. All logical laws proved in the standard semantics of IL - in the previous section - are logical laws of the GI-semantics as well.
The most important laws - which will be used in 2.2 - are:
  the law of replacement,
  the law of deleting ∨^,
  the law of lambda conversion, and
  the law ⊨ (□(∨A = ∨B) = (A = B)) where A, B are rigid terms.
The calculus IC will be based on six basic schemata (I1) to (I6) and a single proof rule. Most of the basic schemata are known already from the calculus EC (cf. 1.2.1).

Basic schemata
(I1) (∨^A = A)
(I2) = (E2)
(I3) = (E3)
(I4) = (E4)
(I5) = (E5)
(I6) (□(∨f_αs = ∨g_αs) = (f = g))
The notion of a basic sentence (of L(i)) is defined analogously as in EC.
Proof rule: Rule of replacement - RR. From "(A_α = B_α)" and C₀ to infer "C[B/A]" - exactly as in EC.
The notion of a proof in IC and the provability of a sentence A in IC - in symbols: "⊢_IC A" - is defined analogously as the corresponding notions in EC. - In the notation "⊢_IC", the subscript 'IC' will be usually omitted (except in cases when misunderstanding can arise by its omission).
At the end of the preceding section, the quoted Th. verifies that the basic schemata (I1), (I5), and (I6) are GI-valid. (In case (I6), take into consideration that the variables f, g are rigid terms.) By the same Th., the rule RR yields a GI-valid sentence from GI-valid ones. The GI-validity of the schemata (I2), (I3), and (I4) can be verified easily. Hence:
THEOREM. The soundness of IC with respect to the GI-semantics of IL. If ⊢_IC A then ⊨GI A.
THEOREM. IC includes EC: If ⊢_EC A then ⊢_IC A.
Proof. It is sufficient to show that schema (E1) is provable in IC. In fact:
1. ⊢ (∨^A = A)  [by (I1)]
2. ⊢ (∨^A = A)  [same]
3. ⊢ (A = A)  [according to line 1, replace "∨^A" by A in line 2, using RR]
In what follows, we can utilize all laws of EC. The surplus of IC is hidden in the schemata (I1) and (I6). In the next section, we shall prove the modal laws of IC, most of which are based on (I1) and (I6).

We shall prove first a generalization of (I6) similarly as we proved in EC the laws (E2*), (E3*), and (E4*) as generalizations of (E2), (E3), and (E4), respectively. Our proof technique will be the same as in EC, and we shall use the same method of reference (cf. the conventions introduced at the beginning of 1.2.2).
(I6*) If A, B ∈ RGD then ⊢ (□(∨A_αs = ∨B_αs) = (A = B)). - Proof: From (I6) by E5.1, using that A and B - being rigid terms - are substitutable for f and g, respectively, in (I6).

I6.1. ⊢ (□(A₀ = B₀) = (^A = ^B)). - Proof: Use (I6*), exploiting that {^A, ^B} ⊆ RGD, and delete ∨^ by using (I1).

R□ - The rule of modal generalization: ⊢ A ⇒ ⊢ □A. - Proof:
1. ⊢ (A = ⊤)  [from the ass.]
2. ⊢ (^A = ^A)  [(E1)]
3. ⊢ (^A = ^⊤)  [1/2]
4. ⊢ □A  [Df. □: 3]
II.1. ⊢ □A ⊃ A. - Proof:
1. ⊢ (^A = ^⊤) ⊃ ((λp_os.∨p)(^A) = (λp_os.∨p)(^⊤))
2. ⊢ (^A = ^⊤) ⊃ (∨^A = ∨^⊤)
3. ⊢ (^A = ^⊤) ⊃ (A = ⊤)
4. ⊢ □A ⊃ A
II.2. If A₀ ∈ RGD then ⊢ (□(A ⊃ B) = (A ⊃ □B)). - Proof:
1. ⊢ ((⊤ ⊃ B) = B)  [PC]
2. ⊢ (□(⊤ ⊃ B) = □B)  [1/(E1)]
3. ⊢ ((⊤ ⊃ □B) = □B)  [PC]
4. ⊢ (□(⊤ ⊃ B) = (⊤ ⊃ □B))  [3/2]
5. ⊢ (⊥ ⊃ B)  [PC]
6. ⊢ □(⊥ ⊃ B)  [R□: 5]
7. ⊢ (⊥ ⊃ □B)  [PC]
8. ⊢ (□(⊥ ⊃ B) = (⊥ ⊃ □B))  [PC: 6,7]
9. ⊢ (□(A ⊃ B) = (A ⊃ □B))  [R⊤⊥: 4,8]
(In the last step, we use that in "(□(p₀ ⊃ B) = (p ⊃ □B))", A is substitutable for p, since A is rigid.)
II.3. If A ∈ RGD, ⊢ (□A = A). - Proof:
1. ⊢ ((^⊤ = ^⊤) = ⊤)  [(E1) and E2.8]
2. ⊢ (□⊤ = ⊤)  [Df. □: 1]
3. ⊢ (□⊥ ⊃ ⊥)  [II.1]
4. ⊢ (⊥ ⊃ □⊥)  [PC]
5. ⊢ (□⊥ = ⊥)  [PC: 3,4]
6. ⊢ (□A = A)  [R⊤⊥: 2,5]
II.4. ⊢ (□□A = □A), and ⊢ (□◇A = ◇A).
It follows from II.3 that if A is rigid then ⊢ (¬□¬A = ¬¬A) and, hence:
II.5. If A ∈ RGD, ⊢ (◇A = A).
II.6. ⊢ (◇◇A = ◇A), and ⊢ (◇□A = □A).
Furthermore, using that ⊢ □A ⊃ A, and ⊢ A ⊃ ◇A, we have that:
II.7. ⊢ ◇□A ⊃ A, and ⊢ A ⊃ □◇A. (The Brouwer schemata)
II.8. ⊢ □(A ⊃ B) ⊃ (□A ⊃ □B), and ⊢ □(A ⊃ B) ⊃ (◇A ⊃ ◇B). - Proof:
1. ⊢ (□(A ⊃ B) & □A) ⊃ B  [II.1 twice, and PC]
2. ⊢ □[(□(A ⊃ B) & □A) ⊃ B]  [R□: 1]
3. ⊢ (□(A ⊃ B) & □A) ⊃ □B  [II.2: (2 = 3)]
4. ⊢ □(A ⊃ B) ⊃ (□A ⊃ □B)  [PC: 3]
5. ⊢ □(¬B ⊃ ¬A) ⊃ (□¬B ⊃ □¬A)  [by 4]
6. ⊢ (□(¬B ⊃ ¬A) = □(A ⊃ B))  [PC, (E1), RR]
7. ⊢ □(A ⊃ B) ⊃ (◇A ⊃ ◇B)  [PC: 5,6, and Df. ◇]
R□⊃. The Lemmon rules:
  ⊢ A ⊃ B ⇒ (⊢ □A ⊃ □B, and ⊢ ◇A ⊃ ◇B). - Proof:
1. ⊢ A ⊃ B  [ass.]
2. ⊢ ¬B ⊃ ¬A  [PC: 1]
3. ⊢ □(A ⊃ B)  [R□: 1]
4. ⊢ □(¬B ⊃ ¬A)  [R□: 2]
5. ⊢ □A ⊃ □B  [(3 ⊃ 5) = II.8]
6. ⊢ □¬B ⊃ □¬A  [(4 ⊃ 6) = II.8]
7. ⊢ ◇A ⊃ ◇B  [PC: 6, and Df. ◇]
Lines 5 and 7 contain the results.
II.9. ⊢ (□∀x.A = ∀x.□A). (Barcan's schema.) - Proof:
1. ⊢ ∀x.A ⊃ A  [QC4]
2. ⊢ □∀x.A ⊃ □A  [Lemmon: 1]
3. ⊢ □∀x.A ⊃ ∀x.□A  [R∀: 2, and QC2]
4. ⊢ ∀x.□A ⊃ □A  [QC4]
5. ⊢ ◇∀x.□A ⊃ ◇□A  [Lemmon: 4]
6. ⊢ ◇□A ⊃ A  [II.7 (Brouwer)]
7. ⊢ ◇∀x.□A ⊃ A  [PC: 5,6]
8. ⊢ ◇∀x.□A ⊃ ∀x.A  [R∀: 7, and QC2]
9. ⊢ □◇∀x.□A ⊃ □∀x.A  [Lemmon: 8]
10. ⊢ ∀x.□A ⊃ □◇∀x.□A  [II.7 (Brouwer)]
11. ⊢ ∀x.□A ⊃ □∀x.A  [PC: 10,9]
12. ⊢ (□∀x.A = ∀x.□A)  [PC: 3,11]
COROLLARY: ⊢ (◇∃x.A = ∃x.◇A). - (Prove this!)
The next law will be used in Section 2.2.4.
II.10. ⊢ (¬◇(B & A) & ◇(B & C)) ⊃ ◇(C & ¬A). - Proof:
1. ⊢ □((B & C) ⊃ (B & A)) ⊃ (◇(B & C) ⊃ ◇(B & A))  [II.8]
2. ⊢ (¬◇(B & A) & ◇(B & C)) ⊃ ¬□((B & C) ⊃ (B & A))  [PC: 1]
3. ⊢ ¬((B & C) ⊃ (B & A)) ⊃ (C & ¬A)  [PC]
4. ⊢ ¬□((B & C) ⊃ (B & A)) ⊃ ◇(C & ¬A)  [Lemmon: 3]
5. ⊢ (¬◇(B & A) & ◇(B & C)) ⊃ ◇(C & ¬A)  [PC: 2,4]
If the reader is familiar with modal logic, he/she realizes that the modal fragment of IC is an S5-type modal logic. Furthermore, the combination of quantifiers and modal operators yields a Barcan-style system characterized by the schema II.9.
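The S5 character of the laws II.4 and II.7 can be spot-checked in a finite model in which, as in the semantics above, □ and ◇ quantify over the whole index set (an illustrative model checker, not part of the calculus):

```python
# With every index "accessible" from every other, box/diamond range over I.
from itertools import product

I = range(3)

def box(v):     return {i: int(all(v[j] for j in I)) for i in I}
def diamond(v): return {i: int(any(v[j] for j in I)) for i in I}
def implies(u, v): return {i: int(u[i] <= v[i]) for i in I}

for bits in product([0, 1], repeat=len(I)):       # every valuation of A
    A = dict(zip(I, bits))
    assert box(box(A)) == box(A)                  # II.4: [][]A = []A
    assert box(diamond(A)) == diamond(A)          # II.4: []<>A = <>A
    assert diamond(box(A)) == box(A)              # II.6: <>[]A = []A
    ok = implies(diamond(box(A)), A)              # II.7: <>[]A -> A (Brouwer)
    assert all(ok[i] for i in I)
    ok = implies(A, box(diamond(A)))              # II.7: A -> []<>A
    assert all(ok[i] for i in I)
```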
2.2.3. IC-CONSISTENT AND IC-COMPLETE SETS

DEFINITION. (a) A sentence A is said to be an IC-consequence of the set of sentences Γ (or IC-deducible from Γ) - in symbols: "Γ ⊢_IC A" - iff
  Γ is empty and ⊢_IC A, or
  Γ is nonempty and there exists a conjunction K of some members of Γ such that ⊢_IC (K ⊃ A).
(b) Compare (a) with the definition of syntactic consequence in EC! One sees that the difference is merely in the reference to the calculus IC instead of EC. Consider Definitions 1.2.3:2-3. Substitute 'EC' everywhere by 'IC'. Then one gets the definitions of
  IC-inconsistent / IC-consistent sets, and
  IC-complete sets.

THEOREM. If the members of Γ are free from the variable y, the variable x_α does not occur in the sentence A[x:y], and Γ ⊢_IC A[x:y] then Γ ⊢_IC ∀x.A. - For the proof see the corresponding Th. of EC.

THEOREM. Every IC-consistent set is embeddable into an IC-complete set.
Proof. Consider the EC variant of this theorem and its proof. The proof of our present theorem is essentially the same, with obvious modifications. The starting point is that Γ₀ is an IC-consistent set of sentences of a language L₀(i), and we shall define an enlargement L₁(i) of L₀(i) by introducing new variables in all types. Replace in the quoted proof everywhere:
  'L₀(e)' by 'L₀(i)',
  'L₁(e)' by 'L₁(i)',
  'EXTY' by 'TYPM',
  'EC' by 'IC'.

THEOREM. If Γ is an IC-complete set of sentences then:
(i) Γ ⊢ A ⇒ A ∈ Γ;
(ii) {(A_α = B_α), C₀} ⊆ Γ ⇒ "C[B/A]" ∈ Γ;
(iii) if the term A_α occurs in a member of Γ then for some variable x_α, "(A = x)" ∈ Γ;
(iv) if "(C_αβ = D_αβ)" ∈ Γ then for all variables x_β, "(C(x) = D(x))" ∈ Γ.
Proof: See the corresponding Th. of EC.
2.2.4. MODAL ALTERNATIVES

DEFINITION. We say that Φ is a modal alternative to Γ iff
(i) Γ and Φ are IC-complete sets of sentences (of the same language), and
(ii) for all sentences A, if "□A" ∈ Γ then A ∈ Φ.

LEMMA. If Φ is a modal alternative to Γ then for all C₀ ∈ Φ, "◇C" ∈ Γ.
Proof. Since Γ is IC-complete, one of "◇C", "¬◇C" must be in Γ. But "¬◇C" ∈ Γ implies "¬C" ∈ Φ (by (ii) of the preceding Def.) which is impossible if C ∈ Φ (by the IC-consistency of Φ). Hence, "◇C" ∈ Γ.
Note that condition (ii) of the Def. is equivalent to the following condition (ii′): for all A ∈ Φ, "◇A" ∈ Γ. - Our lemma proves half of this equivalence. Prove the other half!

THEOREM. Modal alternativeness (as defined above) is an equivalence relation (between IC-complete sets), i.e., it is reflexive, symmetric, and transitive.
Proof. (a) Reflexivity. If Γ is IC-complete then whenever "□A" ∈ Γ, A ∈ Γ.
(b) Symmetry. Assume that Γ′ is a modal alternative to Γ. If "□A" ∈ Γ′ then "◇□A" ∈ Γ (by the preceding lemma), and, by the Brouwer schema, A ∈ Γ. By this, Γ is a modal alternative to Γ′.
(c) Transitivity. Assume that Γ″ is a modal alternative to Γ′, and Γ′ is a modal alternative to Γ. We have to show that Γ″ is a modal alternative to Γ. If "□A" ∈ Γ then "□□A" ∈ Γ, hence, "□A" ∈ Γ′ and A ∈ Γ″.
Note that our theorem is not a general law of modal logic. It holds only for the S5-type modalities. Our proof exploits essentially the fact that S5 is included in IC.

THEOREM. If Γ is an IC-complete set and "◇B" ∈ Γ then there exists a modal alternative Φ to Γ such that B ∈ Φ.
Proof. (i) Let (C_n)_{n∈ω} be an enumeration of all sentences of form "∃x.A" of the given language L(i). We shall define first a sequence (H_n)_{n∈ω} of finite sets, and we shall denote by K_n the conjunction of all members of H_n.
  H₀ = {B}.
  If "◇(K_n & C_n)" ∉ Γ then H_{n+1} = H_n.
Now assume that "◇(K_n & C_n)" ∈ Γ and C_n is "∃x.A". We can assume here that K_n is free from x (if not, let us "re-name" it), and so:
(1) ⊢ ((K_n & ∃x.A) = ∃x(K_n & A))  [by QC]
(2) ⊢ (◇(K_n & ∃x.A) = ◇∃x(K_n & A))  [Lemmon: (1)]
(3) ⊢ (◇∃x(K_n & A) = ∃x.◇(K_n & A))  [Barcan]
By these, "∃x.◇(K_n & A)" ∈ Γ. Then, by the ∃-completeness of Γ, for some y_α, "◇(K_n & A[x:y])" ∈ Γ. Furthermore:
(4) ⊢ ((K_n & A[x:y]) ⊃ (K_n & A[x:y] & ∃x.A))  [QC]
(5) ⊢ (◇(K_n & A[x:y]) ⊃ ◇(K_n & A[x:y] & ∃x.A))  [Lemmon: (4)]
Hence, for some y_α, "◇(K_n & A[x:y] & ∃x.A)" ∈ Γ. Choose such a y and define:
  H_{n+1} = H_n ∪ {∃x.A, A[x:y]}
(so that K_{n+1} is "(K_n & ∃x.A & A[x:y])"). Note that
(6) for all n ∈ ω, "◇K_n" ∈ Γ.
We now define:
  H_ω = ∪_{n∈ω} H_n.
By this definition, H_ω is ∃-complete, and, in consequence of (6): if K is any conjunction of some members of H_ω then "◇K" ∈ Γ (for K being finite, it must be a subconjunction of some K_n).
(ii) Now let (A_n)_{n∈ω} be an enumeration of all sentences of L(i). We define the sequence (Φ_n)_{n∈ω} of sets by the following induction:
  Φ₀ = H_ω;
  Φ_{n+1} = Φ_n if for some conjunction K of members of Φ_n, "¬◇(K & A_n)" ∈ Γ, and Φ_{n+1} = Φ_n ∪ {A_n} otherwise.
Finally, let Φ = ∪_{n∈ω} Φ_n. By the definition of Φ, whenever K is a conjunction of some members of Φ, "◇K" ∈ Γ. Since
  ⊢ ¬K implies ⊢ □¬K, i.e., "¬◇K" ∈ Γ,
the IC-consistency of Γ implies the IC-consistency of Φ.
(iii) Using that H_ω ⊆ Φ we have that Φ is ∃-complete. To show that Φ is IC-complete we have to show that if C₀ ∉ Φ then Φ ∪ {C} is IC-inconsistent.
Assume that C ∉ Φ. Then C is a member of our enumeration (A_n)_{n∈ω}, say, C is A_m. Then C ∉ Φ_{m+1} which means that for some conjunction K of members of Φ_m, "¬◇(K & A_m)" ∈ Γ. On the other hand, if K′ is an arbitrary conjunction of some members of Φ then "◇(K & K′)" ∈ Γ. However,
  ⊢ ((¬◇(K & A_m) & ◇(K & K′)) ⊃ ◇(K′ & ¬A_m))
(cf. II.10), which means that
(7) for all conjunctions K′ of members of Φ, "◇(K′ & ¬A_m)" ∈ Γ.
Since "¬A_m" occurs in our enumeration too, say, it is A_k, "¬A_m" must be a member of Φ_{k+1} (for "¬◇(K′ & ¬A_m)" ∈ Γ is excluded by (7) and the IC-consistency of Γ). Hence, "¬A_m" ∈ Φ, that is, Φ ∪ {A_m} is IC-inconsistent.
(iv) Finally, we have to show that Φ is a modal alternative to Γ and B ∈ Φ. The latter is obvious, for {B} = H₀ ⊆ H_ω = Φ₀ ⊆ Φ. Now assume that "□A" ∈ Γ. By the IC-completeness of Φ, one of A, "¬A" must be in Φ. If "¬A" ∈ Φ then "◇¬A" ∈ Γ, contradicting the fact that "□A" ∈ Γ (and Γ is IC-consistent). Hence, A ∈ Φ.

DEFINITION. By an IC-complete family let us mean a set W such that
(i) the members of W are IC-complete sets of sentences of a common language L(i),
(ii) the members of W are pairwise modal alternatives of each other (cf. the preceding Th.), and
(iii) whenever "◇B" ∈ w ∈ W, then for some w′ ∈ W, B ∈ w′.

LEMMA. Assume that W is an IC-complete family. Then: If A is a rigid sentence, and for some w ∈ W, A ∈ w, then for all w′ ∈ W, {A, ◇A, □A} ⊆ w′.
Proof. By II.3, A ∈ w implies that "□A" ∈ w. Using that every w′ ∈ W is a modal alternative to w we have that A ∈ w′, for all w′ ∈ W. Then, by the laws II.3 and II.5 we get that for all w′ ∈ W, {A, ◇A, □A} ⊆ w′.
We shall apply this lemma mainly for the cases A is of form p₀, "(x_α = y_α)" (where p, x, y are variables), "□C₀", or "◇C₀".

THEOREM. If Γ is an IC-complete set then it is "embeddable" into an IC-complete family: there exists an IC-complete family W such that Γ ∈ W.
Proof. Let (◇B_n)_{0<n∈ω} be an enumeration of all sentences of form "◇C" of Γ. By the preceding Th., for all n > 0 there exists a W_n such that B_n ∈ W_n and W_n is a modal alternative to Γ. Let W₀ be Γ, and define:
  W = {W_n : n ∈ ω}.
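The completion performed in steps (ii)-(iii) above is the familiar Lindenbaum pattern: walk an enumeration of all sentences and admit each one unless it conflicts with what has been kept so far. A propositional miniature of that pattern (with a crude syntactic stand-in for IC-consistency; all names are illustrative):

```python
# Sentences are strings; "~s" is the negation of s. "Consistent" here just
# means: no sentence together with its negation -- a toy substitute for the
# IC-consistency used in the text.
def neg(s):
    return s[1:] if s.startswith("~") else "~" + s

def consistent(sentences):
    return not any(neg(s) in sentences for s in sentences)

def complete(start, enumeration):
    phi = set(start)
    for a in enumeration:
        if consistent(phi | {a}):
            phi |= {a}
        else:
            phi |= {neg(a)}   # cf. step (iii): if A_m is left out, ~A_m gets in
    return phi

phi = complete({"B"}, ["p", "~B", "q", "~p"])
assert "B" in phi and "~B" not in phi     # the seed survives, as B does above
assert ("p" in phi) and ("~p" not in phi)  # every enumerated sentence decided
```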
To show that W is complete we have to prove that whenever "◇C" ∈ W_n ∈ W, then for some k, C ∈ W_k ∈ W. Now assume that "◇C" ∈ W_n. Then "◇◇C" ∈ W₀ = Γ (cf. Lemma 2.2.4.2), and (by II.6) "◇C" ∈ W₀; hence, for some k > 0, C is B_k, and C ∈ W_k.

2.2.5. THE COMPLETENESS OF IC

THEOREM. If W is an IC-complete family then there exists a GI-interpretation Ip = (U, W, D, σ) and a valuation v such that
(1) for all w ∈ W and for all A ∈ w, |A|v,w^Ip = 1.
Proof. Note first that the set of worlds W in the interpretation Ip is the same as the given IC-complete family W. Furthermore, consider the analogous Th. of EC. We shall adapt some details from the proof of this theorem by the reference motto "as in EC". In case of such an adaptation, an obvious modification consists of introducing a reference to a world w, e.g., an expression of form "|A|v" is to be replaced by "|A|v,w", and so on.

Part I: The definition of Ip and v.
We shall define D and v by induction on TYPM. Let us choose a w₀ from W. Since the variables are rigid terms, in defining v it is sufficient to refer to w₀ only (owing to Lemma 2.2.4.6).
(a) For p ∈ Var(o) we define:
  v(p) = 1 if p ∈ w₀, and 0 otherwise;
and D(o) = {0, 1}. As in EC, we have that there are p₀, q₀ such that v(p) = 1 and v(q) = 0; and v(p) = v(q) iff "(p = q)" ∈ w₀.
(b) We define v for members of Var(ι) and the domain D(ι) = U exactly as in EC (but referring to w₀ instead of Γ).
(c) Assume that D(α) and D(β) are defined, v is defined for Var(α) ∪ Var(β), and for γ ∈ {α, β}, (i) and (ii) below hold:
(i) a ∈ D(γ) ⇒ for some x_γ, v(x) = a,
(ii) "(x_γ = y_γ)" ∈ w₀ ⇔ v(x) = v(y).
We then define v for Var(αβ) and the domain D(αβ) similarly as in EC, putting w₀ instead of Γ. Then (i) and (ii) hold for γ = αβ as well. Furthermore:
(2) "(f_αβ(y_β) = x_α)" ∈ w₀ ⇔ v(f)(v(y)) = v(x).
(d) Turning to the type αs, we define: for all w ∈ W,
  v(x_αs)(w) = a iff for some y_α, v(y) = a and "(∨x = y)" ∈ w.
Here our induction assumptions are that v is defined for Var(α), D(α) is defined, and (i), (ii) above hold for γ = α. Now v(x_αs) ∈ D(α)^W. Then:
  D(αs) =df {φ : for some x_αs, v(x) = φ} ⊆ D(α)^W.
Prove that (i) and (ii) hold for γ = αs, and
(3) "(∨x_αs = y_α)" ∈ w ⇔ v(x)(w) = v(y).
(e) If C ∈ Con(α), then for all w ∈ W, there is an x_α such that "(C = x)" ∈ w (cf. the Th. of 2.2.3, (iii)). We then define:
(4) σ(C)(w) = v(x) iff "(C = x)" ∈ w.
Now our definition of Ip and v is completed.
Part II: The proof of (1).
(A) As in EC, we prove (1) firstly for identities of form "(B = y_α)". If B is a variable or a term of form "f(x)" then - using that these terms are rigid ones - (1) holds according to the definition of Ip and v (see also (2) above which holds not only for w₀ but for all w ∈ W). If B is a constant then (1) holds by (4). In other cases, B is a compound term of form
(A1) "F_αβ(C_β)", or
(A2) "(λx_β C_α)", or
(A3) "(C_α = D_α)", or
(A4) "^C_α", or
(A5) "∨C_αs".
The proof for the cases (A1), (A2), and (A3) runs similarly as in EC (put "w ∈ W" for Γ, and "|X|v,w" for "|X|v" everywhere). Let us turn to the remaining two cases.
(A4) If "(^C = y_αs)" ∈ w₁ ∈ W then, by (I6*), using that "^C" and y are rigid terms, "□(C = ∨y)" ∈ w₁. Then, for all w ∈ W, "(C = ∨y)" ∈ w. Furthermore, for all w ∈ W, there is an x_w ∈ Var(α) such that "(∨y = x_w)" ∈ w. From these it then follows that for all w, "(C = x_w)" ∈ w. By (3),
(5) |(∨y = x_w)|v,w = 1.
We can apply the induction assumptions:
(6) for all w, |(C = x_w)|v,w = 1.
From (5) and (6) it then follows that
  for all w, |(C = ∨y)|v,w = 1,
and this implies that
  (for all w) |□(C = ∨y)|v,w = 1,
or, in another form,
  (for all w ∈ W) |(^C = y)|v,w = 1
(including the case w = w₁).
(A5) If "(∨C_αs = y_α)" ∈ w ∈ W then for some variable x_αs, "(C = x)" ∈ w. Then "(∨C = ∨x)" ∈ w and "(∨x = y)" ∈ w. By (3), we have that
(7) |(∨x = y)|v,w = 1,
and by induction assumption,
  |(C = x)|v,w = 1.
The latter implies that
  |(∨C = ∨x)|v,w = 1.
This and (7) together imply that
  |(∨C = y)|v,w = 1.
(B) Now we can prove (1) for identities of form "(B_α = C_α)" - where both B and C may be compound terms - exactly as in EC [cf. the proof of the corresponding Th., Part II, (B)].
(C) Finally, if the sentence A is not an identity then apply the same device as in EC [see the proof of the corresponding Th., Part II, (C)].

THEOREM. If the set Γ is IC-consistent then Γ is GI-satisfiable.
EC [seethe proofof, Part II, (C)]. THEOREM. If the set r is IC-consistentthen F is GI-satisfiable.
Proof. By Th., T is embeddable into an IC-complete set Wo , and by
Th., Wo is embeddable into an IC-complete family W. By the preceding theo-
rem, thereis a GI-interpretation Ip = (U,W,D, (J ) and a valuation v such that the triple
(Ip, v, Wo ) is a GI-model of wo, and, since T ~ Wo, it is a GI-model of T as well.
COROLLARY. (Lowenheim-Skolem.) If the set T is Gl-satisfiable then T is
"denumerably" satisfiable in the sense that T has a GI-model U,~ (J ),v,w) such
that eachD( a) [a e TYPM] is at most denumerably infinite. - Cf. Th. THEOREM. The completeness of IC with respect to the Gl-semantics of IL.
If F I=GI A then r he A. - For the proof see Th.
In several papers, Montague formulated some fragments of English as a formal(ized) language, giving the lexicon, the syntax, and the semantics of the fragment. In his last two works (Universal Grammar and PTQ), the semantics of the fragment was not formulated directly; instead, he formulated translation rules from the English fragment into the language of IL (or IL+); thus, the semantics of the fragment was indirectly given via the semantics of Montague's intensional logic.
In what follows, we shall present the approach explained in PTQ (with minor changes in the notation).
Let us call the fragment of English treated here LE. (Montague qualifies it as "a certain fragment of a certain dialect of English"; here the reference to "a certain dialect" will mean that LE involves some compound terms unusual in "ordinary" English.)
We shall begin with the definition of the system of categories of LE. (Here the categories correspond to the types of logical languages.) The basic categories are t and e: the category of declarative sentences and of individual names, respectively. The functor categories are of form "α / β" and "α // β" where the difference between the single and the double slash ('/') is of grammatical nature only, not concerning the semantic values. (This will be explained below in the particular cases.) An expression of either category is to be such that when it is combined (in a specified way which is different for the two categories) with an expression of category β, an expression of category α is produced. The functor categories explicitly used in LE are as follows:
IV = t / e, the category of intransitive verb phrases.
CN = t // e, the category of common noun phrases.
[Comment. Intransitive verbs and common nouns are, obviously, different categories of English (this holds for most languages); they may get different suffixes etc. From a logical point of view, both are monadic predicates, which means that their semantic values belong to the same domain. This motivates the notation.]
NOM = t / IV, the category of nominal phrases. - Roughly, nominal phrases are expressions which can occupy the subject (and the direct object) places of verbs.
[Remark. This category was denoted by 'T' and called 'the category of terms' by Montague. We use the word 'term' in the sense of 'well-formed expression' of some language. To avoid confusion, we have to abandon here Montague's original notation.]
TV = IV / NOM, the category of transitive verb phrases.
ADV = IV / IV, the category of IV-modifying adverbs.
VIV = IV // IV, the category of IV-taking verb phrases.
[Remark. An example of a VIV phrase is 'try to', e.g., in 'try to find'. Again, the grammatical rules governing ADV and VIV phrases are different (as we shall see later on), but when applied to an IV phrase both result in a compound IV phrase. This motivates the notation.]
ADS = t / t, the category of sentence-modifying adverbs.
SV = IV / t, the category of sentence-taking verb phrases.
PRE = ADV / NOM, the category of adverb-making prepositions.
The full inductive definition of categories of LE is as follows:
  t and e are categories.
  If α, β are categories then "α / β" and "α // β" are categories.
However, only the categories listed above will play a role in LE.
Concerning the lexicon, the set of basic terms of the category α will be denoted by "B(α)". These sets are given explicitly as follows:
B(IV) = {run, walk, talk, rise, change},
B(CN) = {man, woman, park, fish, pen, unicorn, price, temperature},
B(TV) = {find, lose, eat, love, date, be, seek, conceive},
B(NOM) = {John, Mary, Bill, ninety, he₀, he₁, he₂, ...},
B(ADV) = {rapidly, slowly, voluntarily, allegedly},
B(ADS) = {necessarily},
B(VIV) = {try to, wish to},
B(SV) = {believe that, assert that},
B(PRE) = {in, about},
B(α) = ∅ if α is any category other than those mentioned above.
We used here script letters in printing the basic terms of LE. In what follows, even the compound expressions of LE will be printed similarly, and we shall not use quotation marks surrounding them (except when the quoted text involves metavariables).
One can think that the sets B(α) contain only sample terms and they could be enlarged by further "similar" terms. Probably, this is true. However, one must be cautious in doing so, for it may happen that the application of the further rules to the new terms will lead to undesirable results.
The set B(NOM) contains a potentially infinite sequence of pronouns with numerical subscript (he_n)_{n∈ω}. This will be useful in the construction of some complicated terms.
The basic sets B(t) and B(e) are empty. This is so because (a) in English, there are no one-word sentences, and (b) individual names are ranked into B(NOM) (why? - the answer will be given later on).
Note that, when applying the Montagovian approach to Hungarian, the basic set B(t) might contain such "subject-free" sentences as havazik, villámlik, tavaszodik ('it is snowing', 'it is lightning', 'spring is coming') etc.
The set of all terms of category α will be denoted by "T(α)". The inductive definition of these sets will be given by the syntactic rules (S1) to (S17) below.

Syntactic rules
Basic rules
(S1) B(α) ⊆ T(α) for every category α.
(S2) If A ∈ T(CN) then the terms "every A", "the A", and "a / an A" are in T(NOM). - Here the notation "a / an A" is to be understood as to choose the indefinite article a or an according as the initial letter of A is a consonant or a vowel. Note that the spaces (blanks) in the defined complex terms represent interspaces. - Examples: Since {man, woman, park} ⊆ B(CN) ⊆ T(CN), {every man, a woman, the park} ⊆ T(NOM).
(S3_n) If A ∈ T(CN) and S ∈ T(t) then "A such that S(n)" ∈ T(CN) where S(n) comes from S by replacing each occurrence of he_n or him_n by he/she/it or him/her/it respectively, according as the first common noun in A is of masculine/feminine/neuter gender. [Here it is assumed that the gender of the members of B(CN) is given in advance.] - Example: Assuming that he₁ loves him₀ ∈ T(t) (cf. the example of (S5) below), woman such that she loves him₀ ∈ T(CN), by (S3₁).
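Rule (S2), for instance, is easily mechanized; the sketch below implements exactly the letter-based a/an choice stated in the rule (the function names are ours, not the book's):

```python
# (S2): from a common-noun term build the three NOM terms. The article is
# chosen by the initial letter, as the rule states (real English phonology,
# e.g. "a unicorn", is more subtle than this letter test).
B_CN = ["man", "woman", "park", "fish", "pen", "unicorn"]

def s2(cn):
    article = "an" if cn[0] in "aeiou" else "a"
    return ["every " + cn, "the " + cn, article + " " + cn]

assert s2("woman") == ["every woman", "the woman", "a woman"]
assert s2("man")[0] == "every man"
assert s2("park")[1] == "the park"
```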
Rules of functor application
(S4) If A ∈ T(NOM) and B ∈ T(IV) then "A B′" ∈ T(t) where B′ is the result of replacing the first verb (i.e., member of B(IV) ∪ B(TV) ∪ B(SV) ∪ B(VIV)) in B by its third person singular present. [Here it is assumed that the third person singular present form of each verb occurring in the lexicon is known.] - Example: Since a woman ∈ T(NOM) and talk ∈ T(IV), a woman talks ∈ T(t).
(S5) If B ∈ T(TV) and A ∈ T(NOM) then "B A′" ∈ T(IV) where A′ is him_n if A is of form he_n, and A′ is A in other cases. - Example: Since love ∈ T(TV) and he₀ ∈ T(NOM), love him₀ ∈ T(IV), and, by (S4), he₁ loves him₀ ∈ T(t).
(S6) If B ∈ T(PRE) and A ∈ T(NOM) then "B A′" ∈ T(ADV) where A′ is as in (S5). - Example: Since in ∈ B(PRE) and the park ∈ T(NOM), in the park ∈ T(ADV).
(S7) If B ∈ T(SV) and S ∈ T(t) then "B S" ∈ T(IV). - Example: Since believe that ∈ T(SV) and he₁ loves him₀ ∈ T(t), believe that he₁ loves him₀ ∈ T(IV).
(S8) If B ∈ T(VIV) and C ∈ T(IV) then "B C" ∈ T(IV). - Example: Since try to ∈ B(VIV) and run ∈ B(IV), try to run ∈ T(IV).
(S9) If B ∈ T(ADS) and S ∈ T(t) then "B, S" ∈ T(t). - Example: Since necessarily ∈ B(ADS) and a woman talks ∈ T(t), necessarily, a woman talks ∈ T(t).
(S10) If B ∈ T(ADV) and C ∈ T(IV) then "C B" ∈ T(IV). - Example: Since slowly ∈ B(ADV) and walk ∈ B(IV), walk slowly ∈ T(IV).
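The composition of (S5) and (S4) in the examples above can be sketched as follows (with a small illustrative inflection table; the book simply assumes the third person singular forms as given):

```python
# Assumed inflection table for a few lexicon verbs (illustrative only).
THIRD_SG = {"love": "loves", "seek": "seeks", "talk": "talks", "run": "runs"}

def s5(tv, nom):
    # (S5): TV + NOM -> IV; a pronoun he_n appears as him_n in object position
    # (a crude prefix test standing in for "A is of form he_n").
    obj = "him" + nom[2:] if nom.startswith("he") else nom
    return tv + " " + obj

def s4(nom, iv):
    # (S4): NOM + IV -> t; the first verb of the IV is inflected.
    verb, _, rest = iv.partition(" ")
    verb = THIRD_SG.get(verb, verb)
    return nom + " " + (verb + " " + rest if rest else verb)

assert s5("love", "he0") == "love him0"
assert s4("he0", "love him0") == "he0 loves him0"
assert s4("a woman", "talk") == "a woman talks"
```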
Rules of conjunction and alternation
(S11) If S₁, S₂ ∈ T(t) then "S₁ and S₂" and "S₁ or S₂" are in T(t).
(S12) If A, B ∈ T(IV) then "A and B" and "A or B" are in T(IV).
(S13) If A, B ∈ T(NOM) then "A or B" ∈ T(NOM). [Here the and operation is absent, for "A and B" in subject position requires the plural of the verb, but in this fragment only the third person singular form of verbs is used.]
Rules of quantification
(S14_n) If A ∈ T(NOM) and B ∈ T(t) then "B[A/n]" ∈ T(t) where
(i) if A is of form he_k then B[A/n] comes from B by replacing all occurrences of he_n or him_n by he_k or him_k respectively,
(ii) and, in other cases, B[A/n] comes from B by replacing the first occurrence of he_n or him_n by A and all other occurrences of he_n or him_n by he/she/it or him/her/it respectively, according as the gender of the first common noun or nominal in A is masculine/feminine/neuter.
Example: Since a woman such that she loves him₀ ∈ T(NOM) and love ∈ T(TV), love a woman such that she loves him₀ ∈ T(IV) (by (S5)), and
  he₀ loves a woman such that she loves him₀ ∈ T(t).
Using that every man ∈ T(NOM) we get by (S14₀) that
(1) every man loves a woman such that she loves him ∈ T(t).
(S15_n) If A ∈ T(NOM) and B ∈ T(CN) then "B[A/n]" ∈ T(CN) where B[A/n] is as in (S14_n).
(S16_n) If A ∈ T(NOM) and B ∈ T(IV) then "B[A/n]" ∈ T(IV) where B[A/n] is as in (S14_n).
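Clause (ii) of (S14_n) is a first-occurrence substitution; it can be sketched for the masculine case as follows (the gender lookup on the first noun of A is omitted, and the names are ours):

```python
# (S14,n), clause (ii), masculine case: the first he_n/him_n is replaced by
# the NOM term A; the remaining he_n/him_n become plain "he"/"him".
import re

def s14(nom, sentence, n):
    out = re.sub(rf"\bhe{n}\b", nom, sentence, count=1)   # first occurrence
    out = re.sub(rf"\bhe{n}\b", "he", out)                # remaining pronouns
    return re.sub(rf"\bhim{n}\b", "him", out)

s = "he0 loves a woman such that she loves him0"
assert s14("every man", s, 0) == \
    "every man loves a woman such that she loves him"
```

This reproduces the derivation of sentence (1) above from the pronoun-indexed sentence.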
Rules of tense and negation
(S17) If A ∈ T(NOM) and B ∈ T(IV) then "A B′", "A B_F", "A B′_F", "A B_P", and "A B′_P" are in T(t) where
  B′ is the result of replacing the first verb in B by its negative third person singular present,
  B_F is the result of replacing the first verb in B by its third person singular future,
  B′_F is the result of replacing the first verb in B by its negative third person singular future,
  B_P is the result of replacing the first verb in B by its third person singular present perfect, and
  B′_P is the result of replacing the first verb in B by its negative third person singular present perfect.
As we see, the major operation of forming a sentence consists of a combination of a nominal phrase and an intransitive verb phrase where the latter may be in a tensed and/or negative form. The rules of this operation are (S4) and (S17). The precise characterisation of the notions occurring in (S17) - such as the (negative) third person singular future or present perfect form of a verb - may be given, as Montague says, "in an obvious and traditional way"; but the author gives no details. Unfortunately, no example of a tensed or negated verb occurs in PTQ.
The most important novelty of this syntax is the rule (S14_n) [and its variants (S15_n) and (S16_n)] not occurring in the earlier writings of Montague. Without this rule, the construction of sentence (1) would be impossible. These rules are, in fact, rules of substituting (free) pronouns. The term which is substituted for the pronoun is often a quantifying expression (as in (1): every man); probably, this is the reason that Montague speaks on rules of quantification.
The construction of sentences may be demonstrated by analysis trees (often used by theoretical linguists). For example, the analysis tree of sentence (1) is as follows. [The numbers in square brackets refer to the number of the syntactic rule applied at the indicated step.]
every man loves a woman such that she loves him [14,0]
  every man [2]
    man
  he₀ loves a woman such that she loves him₀ [4]
    he₀
    love a woman such that she loves him₀ [5]
      love
      a woman such that she loves him₀ [2]
        woman such that she loves him₀ [3,1]
          woman
          he₁ loves him₀ [4]
            he₁
            love him₀ [5]
              love
              he₀

The term in each node is either a basic term or else it comes from terms standing in inferior nodes by means of the indicated rule.
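Such a tree is a natural data structure to carry around; one illustrative encoding (not the book's notation) stores at each node the term, the rule number applied, and the daughter nodes:

```python
# A node is (term, rule, children); rule is None at a leaf (basic term).
# Only the top of the tree of sentence (1) is encoded here, for brevity.
tree = ("every man loves a woman such that she loves him", "14,0",
        [("every man", "2", [("man", None, [])]),
         ("he0 loves a woman such that she loves him0", "4",
          [("he0", None, []),
           ("love a woman such that she loves him0", "5", [])])])

def basic_terms(t):
    """Collect the leaves, i.e. the lexical material of the derivation."""
    term, rule, children = t
    if not children:
        return [term]
    return [b for c in children for b in basic_terms(c)]

assert basic_terms(tree) == ["man", "he0",
                             "love a woman such that she loves him0"]
```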
Montague acknowledges that some sentences of LE are ambiguous. Such a
sentence has two(or more)essentially different analysis trees.His example is:
(2) Jolin seeK.! a unicorn.
Herefollow thetwo different analysis treesof (2):
John seeks a unicorn [4]
    John
    seek a unicorn [5]
        seek
        a unicorn [2]
            unicorn

John seeks a unicorn [14_0]
    a unicorn [2]
        unicorn
    John seeks him_0 [4]
        John
        seek him_0 [5]
            seek
            he_0
As Montague says, "the first of these trees corresponds to the de dicto (or nonreferential) reading of the sentence, and the second to the de re (or referential) reading." In other words: the first reading does not presuppose that there are unicorns whereas the second reading may be paraphrased as follows: "There exists a unicorn such that John wants to find it." The translation rules (given in the next section) will verify these readings.
In his Universal Grammar, Montague used four sorts of parentheses in order to get an unambiguous fragment of English. In the present L_E, no parentheses are used. The disambiguation of an ambiguous sentence may be done here by supplying an analysis tree to the sentence. Thus, we can say that a pair ⟨S, T(S)⟩ - where S is a sentence of L_E and T(S) is an analysis tree of S - represents a disambiguated sentence of L_E. (To make these notions more exact needs some work; e.g., it is necessary to define the relation "not essentially different" between analysis trees of the same sentence. For example, applying he_m instead of he_n in the analysis tree of sentence (1), the resulting tree is not essentially different from the one in which he_n is applied. But let us neglect here this problem.)
2.3.2. THE TRANSLATION OF L_E INTO L^(i)
The first step towards the translation is to define a mapping f from the categories of L_E to the types of L^(i). The intention is that if A ∈ T(α) then the translation of A is to be of type f(α). The inductive definition of the function f is as follows:
f(t) = ο,  f(e) = ι,
f(α/β) = f(α‖β) = f(α)(f(β)_s).
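The mapping f can be computed mechanically. In the sketch below the tuple encoding of functor categories and the string rendering of types are our own devices; the category definitions IV = t/e, NOM = t/IV, TV = IV/NOM, CN = t‖e are those of PTQ:

```python
def f(cat):
    """f(t) = o, f(e) = ι, and f(α/β) = f(α‖β) = f(α)(f(β)_s).
    Categories are 't', 'e', or pairs (α, β) standing for α/β (or α‖β)."""
    if cat == 't':
        return 'o'
    if cat == 'e':
        return 'ι'
    a, b = cat
    return f'{f(a)}({f(b)}_s)'

IV  = ('t', 'e')    # IV  = t/e
CN  = ('t', 'e')    # CN  = t‖e  (same type as IV, different category)
NOM = ('t', IV)     # NOM = t/IV
TV  = (IV, NOM)     # TV  = IV/NOM

print(f(IV), f(NOM))
```

Running it yields the type of IV as o(ι_s) (sets of individual concepts) and of NOM as o(o(ι_s)_s), the type written ε below.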
This means that every functor of L_E counts as an intensional one in the sense of operating on the intension of its argument. Experience shows that this is not always the case; e.g., run, man, find are extensional predicates. Montague does not deny this fact; his remedy will be treated in the next section.
A tableau of the mapping f:
We use here the abbreviations:
Concerning the translation of the basic terms, Montague prescribes that if A ∈ B(α) then - with some exceptions - the translation of A is a member of Con(f(α)), of course, with the precondition that the translations of different terms must be different ones. The exceptions are the nominal terms (members of B(NOM)), the transitive verb be, and the sentence modifier necessarily; their translations are defined separately.
Montague assumes a unique language of IL+, and, hence, he was compelled to choose some constants of this language as the translations of the basic terms of L_E. However, we have spoken always about a family of IL+ languages. Hence, we can assume that there is a particular language
L^(i) = ⟨Log, Var, Con_e, Cat_e⟩
such that the (nonlogical) constants of this language are just the basic terms of L_E. We shall proceed this way whereby the translation rules will be somewhat simpler. The translation of a term A will be denoted by "[A]*", but the square brackets will be omitted if A consists of a single word. The translation rules (Tr1) to (Tr17) correspond to the syntactic rules (S1) to (S17) of the preceding section.
Translation rules
Basic rules
(Tr1) (a) If A ∈ {John, Mary, Bill, ninety} then A ∈ Con_e(ι) and
A* = (λf.ˇf(ˆA)) ∈ Cat_e(ε).
(b) he_n* = (λf.ˇf(ˆx_n)) ∈ Cat_e(ε),
where x_n is the 2n-th member of Var(ιs).
(c) be* = (λg(λx.ˇg(ˆ(λy(ˇx = ˇy))))) ∈ Cat_e(f(TV)),
(d) necessarily* = (λp.□ˇp) ∈ Cat_e(ο(ο_s)).
(e) If A is a basic term of category α not occurring in (a), (b), (c), (d) above then A ∈ Con_e(f(α)) and A* = A.
[Here and in what follows, f, g, p are variables of the types (οιs)s, εs, οs, respectively, and x, y, z are variables of type ιs.]
Note that by (a) and (b), proper nouns and pronouns are transferred from type ι into type ε. (Tr2) below shows that the translations of compound nominal terms belong to the same type. This is the reason that Montague ranked the proper nouns and pronouns into the category NOM (instead of category e).
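The effect of this type shift can be imitated at the level of extensions, with properties played by Python predicates and all intensionality suppressed (a deliberate simplification of ours):

```python
def raise_individual(a):
    """The extensional shadow of (Tr1a-b): an entity of type ι becomes
    the set of its properties (type ε), i.e. the quantifier λf.ˇf(ˆa)."""
    return lambda prop: prop(a)

runs = lambda x: x in {"John", "Mary"}   # a toy IV extension (our data)
john_NOM = raise_individual("John")

# functor application as in (Tr4): the NOM term applies to the IV term
print(john_NOM(runs))
```

The raised John accepts exactly the properties that hold of the individual John, which is why proper nouns and quantifier phrases such as every man can share the category NOM.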
(Tr2) If A ∈ T(CN) then
[every A]* = (λf.∀x_ιs[A*(x) ⊃ ˇf(x)]) ∈ Cat_e(ε),
[the A]* = (λf.∃y_ιs[∀x_ιs(A*(x) ≡ (x = y)) & ˇf(y)]) ∈ Cat_e(ε),
[a(n) A]* = (λf.∃x_ιs[A*(x) & ˇf(x)]) ∈ Cat_e(ε).
(Tr3_n) If A ∈ T(CN) and S ∈ T(t) then
[A such that S(n)]* = (λx_n[A*(x_n) & S*]) ∈ Cat_e(οιs).
[For the meaning of S(n) see (S3_n).]
Rules of functor application
(Tr4) If A ∈ T(NOM) and B ∈ T(IV) then [A B′]* = A*(ˆB*). [For the meaning of B′ see (S4).]
(Tr5) If B ∈ T(TV) and A ∈ T(NOM) then [B A′]* = B*(ˆA*).
(Tr6) If B ∈ T(PRE) and A ∈ T(NOM) then [B A′]* = B*(ˆA*).
(Tr7) If B ∈ T(SV) and S ∈ T(t) then [B S]* = B*(ˆS*).
(Tr8) If B ∈ T(VIV) and C ∈ T(IV) then [B C]* = B*(ˆC*).
(Tr9) If B ∈ T(ADS) and S ∈ T(t) then [B S]* = B*(ˆS*).
(Tr10) If B ∈ T(ADV) and C ∈ T(IV) then [C B]* = B*(ˆC*).
Rules of conjunction and alternation
(Tr11) If S1, S2 ∈ T(t) then [S1 and S2]* = (S1* & S2*), and [S1 or S2]* = (S1* ∨ S2*).
(Tr12) If A, B ∈ T(IV) then [A and B]* = (λx_ιs(A*(x) & B*(x))) and [A or B]* = (λx_ιs(A*(x) ∨ B*(x))).
(Tr13) If A, B ∈ T(NOM) then [A or B]* = (λf(A*(f) ∨ B*(f))).
Rules of quantification
(Tr14_n) If A ∈ T(NOM) and B ∈ T(t) then [B[A/n]]* = A*(ˆ(λx_n B*)). [For the meaning of "B[A/n]", see (S14_n).]
(Tr15_n) If A ∈ T(NOM) and B ∈ T(CN) then
[B[A/n]]* = (λy_ιs A*(ˆ(λx_n B*(y)))).
(Tr16_n) If A ∈ T(NOM) and B ∈ T(IV) then
[B[A/n]]* = (λy_ιs A*(ˆ(λx_n B*(y)))).
Rules of tense and negation
(Tr17) If A ∈ T(NOM) and B ∈ T(IV) then
[A B°]* = ¬A*(ˆB*),
[A B^F]* = F A*(ˆB*),
[A B^-F]* = ¬F A*(ˆB*),
[A B^P]* = P A*(ˆB*),
[A B^-P]* = ¬P A*(ˆB*).
[Concerning the superscripts of B (B°, B^F, etc.), see (S17).]
Examples of translation
We shall use the sign '≅' for expressing logical synonymity, i.e., "A ≅ B" abbreviates "⊨ (A ≡ B)", and we shall exploit the transitivity of this relation by writing sometimes "A ≅ B ≅ C ≅ ...". Numbers in square brackets refer to the numbers of the translation rules applied. The frequent occurrence of "(λf.ˇf(ˆA))" will be abbreviated sometimes by "(A)+".
As the first example, let us take the step by step translation of sentence (1) of the preceding section.
(a) [love him_0]* = love(ˆhe_0*) ≅ love(ˆ(λf.ˇf(ˆx_0))) ≅
≅ love(ˆ(x_0)+) [by our abbreviation]
(b) [he_1 loves him_0]* = he_1*(ˆ[love him_0]*) ≅
≅ (λf.ˇf(ˆx_1))(ˆlove(ˆ(x_0)+)) ≅
≅ love(ˆ(x_0)+)(x_1) [by λf and deleting ˇˆ]
(c) [woman such that she loves him_0]* =
= (λx_1[woman(x_1) & [he_1 loves him_0]*]) ≅ [1e, 3_1]
≅ (λx_1[woman(x_1) & love(ˆ(x_0)+)(x_1)]) [by (b)].
Let C abbreviate the term a woman such that she loves him_0.
(d) C* = [a woman such that she loves him_0]* =
= (λf.∃x_ιs([woman such that she loves him_0]*(x) & ˇf(x))) ≅
≅ (λf.∃x[(λx_1[woman(x_1) & love(ˆ(x_0)+)(x_1)])(x) & ˇf(x)]) ≅
≅ (λf.∃x[woman(x) & love(ˆ(x_0)+)(x) & ˇf(x)]).
[We used here (Tr2), (c), and λx_1.]
(e) [love C]* = love(ˆC*) ≅ [5]
≅ love(ˆ(λf.∃x[woman(x) & love(ˆ(x_0)+)(x) & ˇf(x)])) [(d)]
(f) [he_0 loves C]* = (λf.ˇf(ˆx_0))(ˆ[love C]*) ≅
≅ ˇˆ[love C]*(x_0) ≅ [love C]*(x_0) [by λf, deleting ˇˆ]
(g) [every man]* = (λf.∀y_ιs[man(y) ⊃ ˇf(y)]) [2]
(h) [every man loves C]* = [every man]*(ˆ(λx_0 [he_0 loves C]*)) ≅ [14_0]
≅ ∀y_ιs[man(y) ⊃ (λx_0 [he_0 loves C]*)(y)] [(g), (f), λf, del. ˇˆ]
Finally, performing the λx_0-conversion, we have that
(1) [every man loves a woman such that she loves him]* ≅
≅ ∀y_ιs[man(y) ⊃ love(ˆ(λf.∃x_ιs[woman(x) & love(ˆ(y)+)(x) & ˇf(x)]))(y)].
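The content of (1) can be checked extensionally in a toy model, with individual concepts collapsed to plain individuals and the data invented by us. Writing love(y)(x) for "x loves y", as in the formula, sentence (1) demands that every man love some woman who loves him:

```python
man   = {"m1", "m2"}.__contains__
woman = {"w1", "w2"}.__contains__
pairs = {("m1", "w1"), ("w1", "m1"), ("m2", "w2"), ("w2", "m2")}
love  = lambda y: lambda x: (x, y) in pairs   # love(y)(x): "x loves y"

D = {"m1", "m2", "w1", "w2"}                  # the domain (our data)
# ∀y[man(y) ⊃ ∃x[woman(x) & love(x)(y) & love(y)(x)]]
sentence1 = all(not man(y) or
                any(woman(x) and love(x)(y) and love(y)(x) for x in D)
                for y in D)
print(sentence1)
```

In this model each man and his partner love each other, so the sentence comes out true; removing one of the pairs falsifies it.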
The ambiguous sentence (2) of the preceding section has two different translations. Its de dicto meaning is expressed as follows:
(a) [seek a unicorn]* = seek(ˆ[a unicorn]*) ≅
≅ seek(ˆ(λf.∃x_ιs[unicorn(x) & ˇf(x)]))
(2.1) [John seeks a unicorn]* = John*(ˆ[seek a unicorn]*) ≅
≅ (λf.ˇf(ˆJohn))(ˆseek(ˆ(λf.∃x_ιs[unicorn(x) & ˇf(x)]))) ≅
≅ seek(ˆ(λf.∃x_ιs[unicorn(x) & ˇf(x)]))(ˆJohn) [by λf and deleting ˇˆ].
The translation of the de re reading is as follows:
[John seeks him_0]* = seek(ˆ(x_0)+)(ˆJohn)
[a unicorn]* = (λf.∃x_ιs[unicorn(x) & ˇf(x)])
(2.2) [John seeks a unicorn]* = [a unicorn]*(ˆ(λx_0 [John seeks him_0]*)) ≅ [14_0]
≅ (λf.∃x[unicorn(x) & ˇf(x)])(ˆ(λx_0 seek(ˆ(x_0)+)(ˆJohn))) ≅
≅ ∃x_ιs[unicorn(x) & seek(ˆ(x)+)(ˆJohn)] [λf, del. ˇˆ]
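The truth-conditional difference between (2.1) and (2.2) can be dramatized in a toy two-world model (all names and data are ours): John may stand in the seek-relation to the unicorn-property although no unicorn exists in the actual world.

```python
unicorn_at = {"actual": set(), "myth": {"u1"}}   # ˆunicorn: world -> extension
unicorn_concept = lambda w: unicorn_at[w]

# de dicto, as in (2.1): seek relates John to the *property* sought
seeks_property = {("John", unicorn_concept)}
de_dicto = ("John", unicorn_concept) in seeks_property

# de re, as in (2.2): ∃x[unicorn(x) & John seeks x], evaluated at "actual"
seeks_individual = set()                  # John seeks no particular thing
de_re = any(("John", x) in seeks_individual
            for x in unicorn_at["actual"])

print(de_dicto, de_re)
```

Here the de dicto reading is true while the de re reading is false, since the existential quantifier of (2.2) ranges over the (empty) extension of unicorn at the actual world.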
Our next examples will refer to the verb be.
(3) [be Bill]* = be*(ˆ(λf.ˇf(ˆBill))) =
= (λg(λx_ιs.ˇg(ˆ(λy_ιs(ˇx = ˇy)))))(ˆ(λf.ˇf(ˆBill))) ≅
≅ (λx.ˇˆ(λf.ˇf(ˆBill))(ˆ(λy(ˇx = ˇy)))) ≅ [λg]
≅ (λx.ˇˆ(λy(ˇx = ˇy))(ˆBill)) ≅ [λf, del. ˇˆ]
≅ (λx(ˇx = ˇˆBill)) ≅ (λx_ιs(ˇx = Bill)) [λy, del. ˇˆ]
(The references '[λg]' etc. refer to λ-conversions with respect to the variable following λ.)
(4) [he_0 is Bill]* = (λf.ˇf(ˆx_0))(ˆ(λx_ιs(ˇx = Bill))) ≅
≅ ˇˆ(λx(ˇx = Bill))(x_0) ≅ (ˇx_0 = Bill) [λf; λx, del. ˇˆ]
[the temperature]* = (λf.∃y[∀x(temperature(x) ≡ (x = y)) & ˇf(y)]) [2]
[be ninety]* = (λx_ιs(ˇx = ninety)) [cf. (3)]
(5) [the temperature is ninety]* ≅
≅ ∃y_ιs[∀x_ιs(temperature(x) ≡ (x = y)) & (ˇy = ninety)].
(6) [the temperature rises]* = ∃y[∀x(temperature(x) ≡ (x = y)) & rise(y)].
(7) [ninety rises]* = rise(ˆninety).
In referring to the examples (5), (6), and (7), Montague wrote: "From the premises the temperature is ninety and the temperature rises, the conclusion ninety rises would appear to follow by normal principles of logic; yet there are occasions on which both premises are true, but none on which the conclusion is." (This example is due to Barbara Hall Partee.) Now, according to the translations above, the argument in question turns out not to be valid. The reason, according to Montague, is this: "The temperature 'denotes' an individual concept, not an individual; and rises, unlike most verbs, depends for its applicability on the full behaviour of individual concepts, not just on their extensions with respect to the actual world and (what is more relevant here) moment of time. Yet the sentence the temperature is ninety asserts the identity not of two individual concepts but only of their extensions."
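Montague's diagnosis can be replayed with individual concepts modelled as functions from time points to values (the numbers below are our own illustration): rise looks at the whole behaviour of a concept, while is compares extensions at the given moment only, so both premises come out true and the conclusion false.

```python
times = [0, 1, 2]
now = 1
temperature = lambda t: 80 + 10 * t   # an individual concept: 80, 90, 100
ninety      = lambda t: 90            # a rigid (constant) concept

def rises(concept):
    # applicability depends on the full behaviour of the concept
    return all(concept(a) < concept(b) for a, b in zip(times, times[1:]))

def is_at(c1, c2, t):
    # the 'is' of (5): identity of extensions at t, not of concepts
    return c1(t) == c2(t)

premise1 = is_at(temperature, ninety, now)   # the temperature is ninety
premise2 = rises(temperature)                # the temperature rises
conclusion = rises(ninety)                   # ninety rises
print(premise1, premise2, conclusion)
```

The two concepts agree at the moment now but differ elsewhere, which is exactly why the Partee argument fails.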
Montague continues: "We thus see the virtue of having intransitive verbs and common nouns denote sets of individual concepts rather than sets of individuals - a consequence of our general development that might at first appear awkward and unnatural." We can add that the analogous treatment of transitive verbs can be appreciated in the light of the translation in example (2.1) above.
Montague remarks also that his translation rule for be adequately covers both the is of identity and the is of predication. Concerning identity, we have examples (4) and (5) above. Now let us take an example for the predicative/copulative use of is.
[be a man]* = be*(ˆ[a man]*) = [1c, 5]
= (λg(λx_ιs.ˇg(ˆ(λy_ιs(ˇx = ˇy)))))(ˆ(λf.∃z_ιs[man(z) & ˇf(z)])) ≅
≅ (λx.ˇˆ(λf.∃z[man(z) & ˇf(z)])(ˆ(λy(ˇx = ˇy)))) ≅ [λg]
≅ (λx.∃z[man(z) & ˇˆ(λy(ˇx = ˇy))(z)]) ≅ [λf]
≅ (λx_ιs.∃z_ιs[man(z) & (ˇx = ˇz)]) [λy, del. ˇˆ]
(8) [Bill is a man]* = (λf.ˇf(ˆBill))(ˆ(λx.∃z[man(z) & (ˇx = ˇz)])) ≅
≅ ˇˆ(λx.∃z[man(z) & (ˇx = ˇz)])(ˆBill) ≅ [λf]
≅ ∃z[man(z) & (Bill = ˇz)] [λx]
We do not have here '(Bill = z)'; hence, we cannot get the result man(ˆBill) [still less man(Bill)]. This is so because man is in Con_e(οιs), not in Con_e(οι). It seems here (and in the earlier examples, too) that the translation rules are "over-intensionalized". Maybe some functors in our lexicon are really intensional ones, but most of them are extensional. To get rid of the superfluous intensions, Montague introduces some restrictions on the possible interpretations of L^(i).
Montague suggests the following restrictions (M1) to (M7) with respect to the admissible interpretations of L^(i):
(M1) The proper names in B(NOM) must be rigid names. In other words, if A ∈ {John, Mary, Bill, ninety} then the sentence
∃x □(x = A)
must be valid.
(M2) The common nouns in B(CN) - except price and temperature - are to be extensional predicates: if A is one of these terms then the sentence
must be valid. Then, in the role of A, the term
will be suitable. We can replace in the translations A by A• (and its argument B by "ˇB") everywhere. E.g., example (8) in the preceding section reduces to
(8•) [Bill is a man]* ≅ ∃x_ι[man•(x) & (Bill = x)] ≅ man•(Bill).
(M3) The same holds for the verbs in B(IV), except rise and change. An example:
[a man walks]* ≅ ∃x_ι[man•(x) & walk•(x)].
(M4) The transitive verbs in B(TV) - except seek and conceive - are to be extensional ones with respect to both of their arguments: if A is one of these verbs then the sentence
is to be valid. Note that we need not apply this postulate to the verb be, for it holds automatically by the definition of be. Again, in the role of A, the term
is suitable in the translations. [Remember that "(y)+" abbreviates "(λf.ˇf(ˆy))".]
E.g., example (1) of the preceding section reduces to
(1•) [every man loves a woman such that she loves him]* ≅
≅ ∀y_ι[man•(y) ⊃ ∃x_ι[woman•(x) & love•(x)(y) & love•(y)(x)]].
(M5) The transitive verbs seek and conceive as well as the special verbs in B(VIV) and B(SV) are extensional with respect to their subject argument. The postulate for A ∈ {seek, conceive} is as follows:
Replacing ε_s by ο_s or (οιs)s one gets the postulate for B(SV) or B(VIV), respectively. The examples (2.1), (2.2) in the preceding section can be simplified as follows:
(2.1•) seek(ˆ(λf.∃x_ι[unicorn•(x) & ˇf(ˆx)]))(ˆJohn)
(2.2•) ∃x_ι[unicorn•(x) & seek•(x)(John)]
Here "seek•(x)" denotes the extensional reduct of "seek(ˆ(x)+)", in the sense of our meaning postulates.
(M6) The preposition in is extensional (but about is not!), which can be expressed by the sentence:
Thus, if in• ∈ Con_e(ηι) plays the role of ˇg, then:
[Bill walks in the park]* ≅ [the park]*(ˆ(λy.in•(ˇy)(walk•)(Bill))) ≅
≅ (λf.∃z_ι[∀u_ι(park•(u) ≡ (u = z)) & ˇf(ˆz)])(ˆ(λy.in•(ˇy)(walk•)(Bill))) ≅
≅ ∃x_ι[∀y_ι(park•(y) ≡ (y = x)) & in•(x)(walk•)(Bill)].
(M7) The verb seek is to be expressible as try to find, namely:
□[seek(f)(x) ≡ try-to(ˆfind(f))(x)].
These restrictions may be qualified as meaning postulates concerning the extensional functors of L_E.
Example: The de dicto and the de re readings of the sentence John tries to find a unicorn are translated as follows:
try-to(ˆ(λy.∃x_ι[unicorn•(x) & find•(x)(ˇy)]))(ˆJohn);
∃x_ι[unicorn•(x) & try-to(ˆ(λy.find•(x)(ˇy)))(ˆJohn)].
Similarly, John talks about a unicorn has two readings:
about(ˆ(λf.∃x_ιs[unicorn(x) & ˇf(x)]))(ˆtalk)(ˆJohn);
∃x_ι[unicorn•(x) & about(ˆ(x)+)(ˆtalk)(ˆJohn)].
The next example shows that ambiguity can arise even when the sentence consists of purely extensional terms. Let us consider the sentence a woman loves every man. We can apply (S4) for the terms a woman and love every man for getting this sentence. In this case, its translation is:
∃x_ι[woman•(x) & ∀y_ι(man•(y) ⊃ love•(y)(x))]
On the other hand, we can apply (S14_0) for the terms every man and a woman loves him_0. The translation of the result is
∀y_ι[man•(y) ⊃ ∃x_ι(woman•(x) & love•(y)(x))]
[Every man is loved by some woman.]
The sentence
Mary believes that John finds a unicorn and he eats it
has three different readings:
a) ∃x_ι[unicorn•(x) & believe-that(ˆ[find•(x)(John) & eat•(x)(John)])(ˆMary)]
b) ∃x_ι[unicorn•(x) & believe-that(ˆfind•(x)(John))(ˆMary) & eat•(x)(John)]
c) believe-that(ˆ∃x_ι[unicorn•(x) & find•(x)(John) & eat•(x)(John)])(ˆMary)
In the following examples - due to the pronoun it - only the de re reading is possible:
John seeks a unicorn and Mary seeks it,
John tries to find a unicorn and wishes to eat it.
To get a nonreferential reading of these sentences, another grammar and another logic would be necessary.
(A) The first group of our remarks concerns the system IL (and IL+).
(a) The system is "over-intensionalized" by permitting sequences of types such as αs, αss, αsss, ... ad infinitum, and by the unlimited iteration of the intensor (ˆ). No iteration of the type symbol s occurs even in the translations of sentences and terms of L_E - although we have examples of nested occurrences of s. It seems that one has to find another device for distinguishing extensional and intensional functor types.
(b) Definite descriptions are absent from IL. Hence, the translation rule for the definite article the follows the Russellian-Quinean schema [see (Tr2)]. Of course, the introduction of definite descriptions in type ι would imply permitting the possibility of semantic value gaps which is totally alien to the spirit of the semantics of IL.
(c) The domain of individuals - i.e., the domain of quantification of type ι - is the same at all indices (at all worlds and time moments), although our intuition suggests the variability of this domain. A partial remedy is to introduce a monadic predicate E - expressing actual existence - whose truth domain may vary from index to index, and to express quantification on "existing individuals" by
"∀x_ι(E(x) ⊃ F(x))" instead of "∀x_ι F(x)".
Then a sentence of form "∃x_ι F(x) & ¬∃x(E(x) & F(x))" would say: "There exists a nonexistent object of which F holds" - which is (at least) somewhat curious.
(d) System IL is not strong enough to express differences in meaning. If A and B are valid sentences then ⊨ (A ≡ B), i.e., they count as logical synonyms of each other. Thus, for all A and B, "A ⊃ A" and "B ⊃ B" are synonymous, and so are A and "A & (B ∨ ¬B)". Hence, no difference in meaning can be exhibited in IL between the sentences:
Mary thinks that Bill sings.
Mary thinks that Bill sings and John sleeps or does not sleep.
(B) Returning to Montague's grammar of a fragment of English L_E, the following two mistakes may be qualified as simple oversights of the author (easily corrigible):
(a) By (S12), walk and talk ∈ T(IV). Then, by (S4), Mary walks and talk ∈ T(t). For (S4) says that only the first verb is to be replaced by its third person singular present form.
(b) By (S5) and (S4), he_0 loves him_0 ∈ T(t). Then, by (S14_0), Mary loves her ∈ T(t). According to the translation rule (Tr14_0), its translation is love•(Mary)(Mary), which means that Mary loves herself. It is very doubtful that the sentence Mary loves her has such a reading. It seems that (S14_n) needs some restrictive clauses.
A more essential reflection:
In the syntax of L_E, Montague does not distinguish extensional and intensional functors (in the same category). According to the translation rules, all functors are treated as intensional ones. Later on, the introduction of "meaning postulates" will make, nevertheless, a distinction between extensional and intensional functors (in certain categories). All this means that a correct construction of a fragment of English is impossible without a sharp distinction of extensional and intensional terms in some categories. Then it would be a more plain method to make this distinction in the syntax itself.
For example, the category of transitive verbs could be handled by introducing the basic sets B(TVext) and B(TVint), that is, the sets of extensional and intensional transitive verbs, respectively. Then, the translation rules for these verbs would be as follows:
If A ∈ B(TVint) then A ∈ Con_e(f(TV)), and A* = A.
If A ∈ B(TVext) then A ∈ Con_e(οιι), and
A* = (λg(λx_ιs.ˇg(ˆ(λy_ιs.A(ˇx)(ˇy))))) ∈ Cat_e(f(TV)).
[Cf. (M4) of the preceding section.] Thus, the translation of every transitive verb belongs to the same logical type. Let us consider an example of application:
[find a pen]* = find*(ˆ[a pen]*) ≅
≅ (λg(λx_ιs.ˇg(ˆ(λy_ιs.find(ˇx)(ˇy)))))(ˆ(λf.∃z_ι[pen•(z) & ˇf(ˆz)])) ≅
≅ (λx.ˇˆ(λf.∃z_ι[pen•(z) & ˇf(ˆz)])(ˆ(λy.find(ˇx)(ˇy)))) ≅ [λg]
≅ (λx.∃z_ι[pen•(z) & ˇˆ(λy.find(ˇx)(ˇy))(ˆz)]) ≅ [λf]
≅ (λx_ιs.∃z_ι[pen•(z) & find(ˇx)(z)]) ∈ Cat_e(οιs). [λy, del. ˇˆ]
[Bill finds a pen]* = (λf.ˇf(ˆBill))(ˆ[find a pen]*) ≅
≅ (λx.∃z_ι[pen•(z) & find(ˇx)(z)])(ˆBill) ≅ [λf, del. ˇˆ]
≅ ∃z_ι[pen•(z) & find(Bill)(z)].
An analogous method is applicable for the categories CN, IV, and PRE. However, one can raise some doubts about the existence of intensional terms in B(CN) and B(IV).
Montague's example concerning temperature and rise (cf. (5), (6), and (7) in 2.3.2) apparently proves that these predicates are intensional ones. However, the temperature in the sentence
the temperature rises
refers to a function (defined on time) whereas the same term in the sentence
the temperature is ninety
refers to the value (at a given time moment t) of that function. This is a case of the systematic ambiguity of natural language concerning measure functions (as, e.g., 'the velocity of your car', 'the height of the baby', 'the price of the wine', etc.). Montague's solution of this ambiguity seems to be an ad hoc one. Instead, a general analysis of the syntax and the semantics of measure functions would be necessary.
It seems to be somewhat disturbing that the rules governing the verb be permit the construction of the sentence:
the woman is every man.
Its translation is:
∃x_ι[∀y_ι(woman•(y) ≡ (y = x)) & ∀z_ι(man•(z) ⊃ (x = z))].
It is dubious whether an every expression can occur after is in a well-formed English sentence.
Let us note, finally, that T(e) is empty in L_E, for the individual names belong to T(NOM). In fact, the basic categories of L_E are t, IV, and CN; by means of these all other categories are definable. Why did Montague use e at all? The reason is, probably, that the definition of the function f mapping the categories into logical types (see at the beginning of 2.3.2) became very short and elegant. If he had chosen t, IV, and CN as basic categories, the definition of f would grow longer by a single line. The mathematical elegance resulted in a grammatical inelegance: an empty basic category.
BARCAN, R. C. 1946, 'A functional calculus of first order based on strict implication.' The Journal of Symbolic Logic, 11.
GALLIN, D. 1975, Intensional and Higher-Order Modal Logic. North-Holland / American Elsevier, Amsterdam-New York.
HENKIN, L. 1950, 'Completeness in the theory of types.' The Journal of Symbolic Logic, 15.
LEWIS, C. I. AND LANGFORD, C. H. 1959, Symbolic Logic, second ed., Dover, New York.
MONTAGUE, R. 1970, 'Universal Grammar.' In: THOMASON 1974.
MONTAGUE, R. 1973, 'The proper treatment of quantification in ordinary English.' In: THOMASON 1974.
SKOLEM, TH. 1920, Selected Works in Logic, Oslo-Bergen-Tromsø, 1970, pp. 103-
THOMASON, R. H. (ed.) 1974, Formal Philosophy: Selected Papers of Richard Montague. Yale Univ. Press, New Haven-London.
RUZSA, I. 1991, Intensional Logic Revisited, Chapter 1. (Available at the Dept. of Symbolic Logic, E. L. University, Budapest.)