
Modal Logic

Summer, 2011
1 Introduction
These notes attempt to summarize a selection of basic material about modal logic in a very
clear and motivated way.
There are a number of reasons to be interested in modal logic in general. One, the most
obvious reason perhaps, is because the subject may be interesting in its own right. Second,
one could value it for its relationship with other fields such as linguistics, philosophy, artificial
intelligence, and mathematics.
These notes begin from the definition of some common modal languages. In giving a
semantics for these languages, we focus on Kripke semantics. A frame for the basic modal
language is a collection of worlds with an accessibility relation (a directed graph), and a
model on such a frame supplies the additional information of what atomic facts are true
at each world (a valuation). In the model context, modal logic can be understood as a
fragment of first order logic (via the standard translation). This raises the question of which
first order formulas are equivalent to modal formulas, which is answered by van Benthem's
characterization theorem, using the notion of bisimulation.
However, we can also understand modal formulas as asserting something about the un-
derlying frame. For example, ◇◇p → ◇p is valid exactly on the transitive frames. So the class of
transitive frames is modally definable. This leads to the question of which classes of frames
are modally definable, and this is answered by the Goldblatt-Thomason theorem.
Another result we consider is a modal version of Lindström's theorem. The original
Lindström theorem yields first order logic as a maximal logic among certain abstract logics
possessing the compactness and Löwenheim-Skolem properties, while the version considered here yields
modal logic as a maximal logic among certain abstract logics possessing a notion of finite
degree and a property involving bisimulations.
These results are abstract ways of characterizing what modal logic is, and what it is
capable of expressing. One direction of the van Benthem theorem is that modal formu-
las can't tell the difference between bisimilar models, and the other direction says that
any first order formula which also can't make such distinctions is a modal formula. The
Goldblatt-Thomason theorem describes what modal formulas are capable of expressing from
the underlying frame-, rather than model-, perspective. Finally the Lindström-type theorem
suggests that having a notion of finite degree and being invariant under bisimulations in
some sense characterize modal logic.
So far we haven't been considering deductive consequence, and we should remedy that.
As boolean algebras correspond to propositional logic, so do boolean algebras with opera-
tors correspond to modal logics. The Jónsson-Tarski theorem is the generalization of Stone's
representation theorem for boolean algebras to boolean algebras with operators. This theo-
rem is one way to approach completeness theorems for modal logics. But we will also use the
notion of a canonical model. At any rate, we shall be interested in supplying completeness
proofs for a few common modal logics (K, S4, and S5). We shall also consider a couple of
examples of incompleteness (KL and KtThoM). A completeness result is a matching of a
semantic notion of consequence (⊨) with a syntactic notion of consequence (⊢).
From the point of view of elucidating the semantic consequence relation, this matching is
useful because it yields a compactness theorem and demonstrates recursive enumerability.
From the point of view of elucidating the syntactic consequence relation, the matching gives
us a more intuitive understanding of the relation. As for incompleteness results, there are
two types, which I will describe later.
Finally, we will conclude the notes with a glimpse of modal first order logic. We define
the language, give a constant domain semantics, and supply a completeness proof.
The material for these notes comes primarily from Modal Logic by Blackburn, de Rijke, and
Venema, but I also made use of A New Introduction to Modal Logic by Hughes and Cresswell,
and the Handbook of Modal Logic edited by Blackburn, van Benthem, and Wolter.
2 Modal Languages and Kripke Semantics
2.1 Definition of Modal Language

What is a modal language? Let △₁, △₂, . . . be a collection of modal operator symbols with
specified arities (n₁, n₂, . . . ∈ ℕ). Typically, we will be interested in the case where there
is just one modal operator symbol (◇) of arity 1. However, arguments tend to generalize
to the more general case of having more than one operator and operators of arity other
than 1 without too much trouble, and there's good reason to do so as I'll explain with some
examples in a moment. Let τ = {⊥, ¬, ∨, △₁, △₂, . . .}. That is, τ consists of a collection of
modal operator symbols of specified arities along with the usual propositional logic operator
symbols ⊥ (falsehood, arity 0), ¬ (negation, arity 1), and ∨ (disjunction, arity 2). Modal
logic for us will always be an extension of propositional logic.
Let Φ = {p₁, p₂, . . .} be a countable collection of proposition letters (atomic sentences).
Let F be the free τ-algebra with Φ as generators. I.e., F is the smallest thing containing Φ
and ⊥ and closed under the following rules of formation:

1. If φ ∈ F, then so is ¬φ

2. If φ₁, φ₂ ∈ F, then so is φ₁ ∨ φ₂

3. If △ ∈ τ and △ has arity n, and φ₁, . . . , φₙ are in F, then △(φ₁, . . . , φₙ) ∈ F

The elements of F are called terms or formulas or sentences depending on the context, but
I suppose typically we'll call them modal formulas. For example, if △₃ has arity 2 and △₇
has arity 1, then △₃(p₁₄, p₆) ∨ △₇⊥ is a modal formula. The case where τ = {⊥, ¬, ∨, ◇} is
called the basic modal language. Here the modal formulas, the elements of F, are things like
p₁, ¬p₁₂, ⊥, ◇⊥, p₁ ∨ (◇p₁₂), ◇p₁, ◇◇p₁, and so on. We employ the usual abbreviations such as
⊤ for ¬⊥, φ ∧ ψ for ¬(¬φ ∨ ¬ψ), X → Y for ¬(X ∧ ¬Y), etc. Further, to each modal operator symbol △,
regardless of arity, we may define the dual ▽ by: if △ has arity n, then ▽(x₁, . . . , xₙ) is
the same as ¬△(¬x₁, . . . , ¬xₙ).
2.2 Examples of Modal Languages
These modal languages can be put to varying uses, but to get started it's useful to have some
intuitive notion of them. Consider the basic modal language. If φ is some modal formula,
then ◇φ can mean something like "φ is possibly true". Similarly, □φ would then mean "φ
is necessarily true".
What about modal languages other than the basic one? Why don't we just limit
our attention to the basic modal language? Well, for starters there's the basic temporal logic.
Here we have two diamonds, that is, there are two modal operator symbols both of arity
1. We'll write them as P and F, and their duals (boxes) will be written H and G. So, Pφ
means something like "φ was true at some point in the past". Fφ means something like "φ
will be true at some point in the future". Similarly, Hφ means something like "φ has been
true at every point in the past" and Gφ "φ is going to be true at every point in the future".
(The boldfaced letters are suggestions on how to remember the naming conventions.)
What about a modal language with a modal operator symbol of arity more than 1?
What can that mean? Well, suppose we're talking about some directed arrows with a modal
operator △ of arity 2. We could set up our semantics, for example, so that △(φ₁, φ₂) holds
at an arrow w just in case w can be decomposed into two smaller arrows x₁ and x₂ such that
φ₁ holds at x₁ and φ₂ holds at x₂.
2.3 Frames, Models, and Truth
Now that we have some sense of what a modal language is, we turn our attention to one
way of capturing the intended meaning mathematically. First fix some modal signature
τ = {△₁, △₂, . . .} (we've omitted reference to the always present propositional symbols). A
τ-frame is a set W together with an (n + 1)-ary relation R△ on W for each n-ary modal
operator symbol △. You can think of this relation R△ as deciding which nodes are "close"
to other nodes. A valuation V on a frame W is a mapping from Φ to P(W), the power set
of W. One thinks of the valuation as deciding what atomic facts hold at each node. E.g.,
a node x being in V(p) means that p is true at x. A model M = (W, V) is just a frame W
together with a valuation V.
Every valuation V can be extended uniquely to a map Ṽ : F → P(W) such that

1. Ṽ(p) = V(p) for each p ∈ Φ

2. Ṽ(⊥) = ∅

3. Ṽ(¬φ) = W − Ṽ(φ) for each φ ∈ F

4. Ṽ(φ₁ ∨ φ₂) = Ṽ(φ₁) ∪ Ṽ(φ₂) for all φ₁, φ₂ ∈ F

5. Ṽ(△(φ₁, . . . , φₙ)) = {w ∈ W | ∃x₁ ⋯ ∃xₙ such that xᵢ ∈ Ṽ(φᵢ) and R△wx₁⋯xₙ} for
each modal operator symbol △ (of arity n) and all φ₁, . . . , φₙ ∈ F
This is our recursive truth definition. It tells us which modal formulas are true at which
nodes, given a valuation (an assignment of the basic atomic facts). E.g., if Ṽ(p) contains a
node x, then p is considered to be true at x. The boolean connectives aren't that exciting;
of note is how each △ acts as a kind of local (existential) quantification.
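
To make the recursive truth definition concrete, here is a minimal sketch in Python (my own illustration, not from the notes: the tuple encoding of formulas and the three-world example frame are made up) of how one could evaluate basic modal formulas at a node of a finite model.

def holds(w, phi, R, V):
    # Decide whether phi is true at world w in the model given by R and V.
    op = phi[0]
    if op == 'p':                       # proposition letter ('p', name)
        return w in V.get(phi[1], set())
    if op == 'bot':                     # falsehood, true nowhere
        return False
    if op == 'not':
        return not holds(w, phi[1], R, V)
    if op == 'or':
        return holds(w, phi[1], R, V) or holds(w, phi[2], R, V)
    if op == 'dia':                     # <>phi: some R-successor of w satisfies phi
        return any(holds(v, phi[1], R, V) for v in R.get(w, set()))
    if op == 'box':                     # []phi: every R-successor of w satisfies phi
        return all(holds(v, phi[1], R, V) for v in R.get(w, set()))
    raise ValueError(op)

# Example: the chain 0 -> 1 -> 2 with p true only at world 2.
R = {0: {1}, 1: {2}, 2: set()}
V = {'p': {2}}
print(holds(0, ('dia', ('dia', ('p', 'p'))), R, V))   # True:  0 satisfies <><>p
print(holds(0, ('dia', ('p', 'p')), R, V))            # False: 0 does not satisfy <>p

Note how the 'dia' clause is exactly the local existential quantification over successors described above.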
We may also describe our definition of Ṽ in a more algebraic way as follows. It's essentially
the same definition, but couched in different terminology. P(W) may be considered a τ-
algebra in the following way: ⊥ is interpreted as the empty set ∅ ∈ P(W), ¬ : P(W) → P(W)
is complementation, ∨ : P(W)² → P(W) is union, and for each modal operator symbol △
of arity n we have the operation △ : P(W)ⁿ → P(W) defined by:

△(X₁, . . . , Xₙ) := {w ∈ W | ∃x₁ ∈ X₁ ⋯ ∃xₙ ∈ Xₙ such that R△wx₁⋯xₙ}

Then, as F is the free τ-algebra, there exists a unique τ-homomorphism Ṽ : F → P(W)
which extends V : Φ → P(W).
Let M = (W, V) be a τ-model, let w ∈ W be a node of the frame, and let φ ∈ F be
a modal formula. We write M, w ⊨ φ if w ∈ Ṽ(φ), and in this case we say φ is true at
w or φ holds at w. We write M ⊨ φ if M, w ⊨ φ for every w ∈ W. We write W ⊨ φ if
(W, V), w ⊨ φ for every valuation V on W and every w ∈ W. If (W, V), w ⊨ φ for all
choices of W, V, and w, then we say φ is valid. For all of these, we may also replace the
single modal formula φ by a collection Σ of modal formulas in the obvious way. E.g., M ⊨ Σ
means that M ⊨ φ for every φ ∈ Σ.
2.4 Semantic Consequence
There are two notions of semantic consequence that we'll have occasion to discuss. One is
global in that it depends on all the nodes of the frame and will be indicated with a "g",
whereas the other is local in that it considers each node separately and will be indicated
with an "l". Let Σ be a collection of modal formulas, and let φ be a modal formula. We
write Σ ⊨_g φ if: for all models M, if M ⊨ Σ then M ⊨ φ. We write Σ ⊨_l φ if: for all models
M = (W, V) and all nodes w ∈ W, if M, w ⊨ Σ then M, w ⊨ φ. We will be considering the
local version somewhat more often, so the default is for ⊨ to mean ⊨_l.
I would like to give an example showing that global and local consequence are not the
same. Of course, local consequence implies global consequence, but not the other way around.
Consider p ⊨ □p. As a global consequence this is fine: if p is true at every node of a model,
then so is □p. But it fails as a local consequence: take a model with nodes w and v, with Rwv,
where p holds at w but not at v; then w ⊨ p while w ⊭ □p.
2.5 Intuitive Examples
Let's see how these definitions play out in the case of the basic modal language {◇}. A
frame W is simply a directed graph: it is a set, whose elements are called nodes or worlds,
with a binary relation R ⊆ W². If x, y ∈ W and Rxy then we say that x sees y, or that
y is accessible from x. Truth and falsity of the modal formulas are evaluated at each node.
There is a recursive truth definition. A valuation V : Φ → P(W) decides which atomic facts
are true at each node. E.g., if V(p₇) contains x ∈ W, then p₇ is considered true at x. As for
the inductive steps, boolean connectives are handled as usual, internally to each node. E.g.,
¬φ is true at a node x just in case φ is false at x. However, to get the truth of ◇φ, we make
use of the accessibility relation R. We say that ◇φ is true at a node x just in case x can see
some node y such that φ is true at y. I.e., x ⊨ ◇φ just in case there exists y such that Rxy
and y ⊨ φ. Similarly, a node x thinks that □φ is true just in case φ is true at every node
that x sees. I.e. x ⊨ □φ just in case for all y such that Rxy we have y ⊨ φ.
A nice way to think about ◇ and □ is that they correspond to ∃ and ∀, except that they
are local versions. Whereas ∃z φ(z) says that there exists some z somewhere such that φ(z),
◇φ says that there is a nearby node that satisfies φ.
What are some examples of valid formulas? Well, all of the usual propositional validities
are still valid. For an example of a validity involving ◇: as ⊥ is interpreted as falsehood and
is true at no node, we have ⊨ ¬◇⊥. Also, we have ⊨ ◇(φ₁ ∨ φ₂) ↔ (◇φ₁ ∨ ◇φ₂) for
every pair of modal formulas φ₁ and φ₂. If a node x can see some node at which either φ₁ or φ₂ is
true, then x can either see a node at which φ₁ is true or it can see a node at which φ₂ is
true, and vice versa.
Let's now consider the basic temporal language {P, F}. Recall that Pφ is supposed to
mean "φ was true at some point in the past", and Fφ that "φ will be true at some point in
the future". The duals (boxes) are written H and G respectively. Now, in order to get the
intended meaning, it's reasonable to only consider frames in which the binary relations
R_P and R_F are converses of each other, i.e. we should have R_P xy iff R_F yx. It's actually possible
to express this condition modally in the basic temporal language. The class of frames that
validate p → GPp and p → HFp is exactly the class of frames where R_P and R_F are converses of
each other.
To see this, first take a moment to internalize what these formulas are saying: p → GPp
is saying that if p is true at some point in time, then at every future point in time after that,
it will be true that at some point in the past p was true. p → HFp says a similar thing in
reverse. So, suppose these two formulas are valid on some frame W. Let x, y ∈ W be such that
R_F xy. We wish to show that R_P yx. So we are assuming that y is more in the future than
x and we are trying to show x is more in the past than y. Let V be a valuation such that
V(p) = {x}. Then x ⊨ p and so by assumption we get x ⊨ GPp. Since y is more in the
future than x, R_F xy, we get y ⊨ Pp. Since x is the only node which makes p true, we see
that we must have x more in the past than y, R_P yx. A similar argument works to show
that R_P xy implies R_F yx. Now, suppose that R_P and R_F are converses of each other. We
need to see that the two formulas are valid. Consider some node x. Now let's try to show
that x ⊨ p → GPp (the other is similar). So assume x ⊨ p. Now let y be a node such that
R_F xy. We need to show that y ⊨ Pp. Well, as R_F and R_P are converses, we have R_P yx.
Thus, as x ⊨ p, we have y ⊨ Pp.
Now consider a modal language where ∘ is a 2-ary modal operator symbol. Suppose
we're interested in talking about directed arrows. The nodes of our frames will themselves
be directed arrows. Now, R∘ is supposed to be a 3-ary relation on the frame. Our intended
meaning is that R∘xyz iff x is the composition of y and z; i.e., iff y ends where z begins
and x is the same as y followed by z. Then φ₁ ∘ φ₂ (written in the familiar infix notation)
would be true of those arrows x which can be decomposed into two arrows y and z such that
y ⊨ φ₁ and z ⊨ φ₂. One axiom we might want to adopt for such frames is associativity:

[p₁ ∘ (p₂ ∘ p₃)] ↔ [(p₁ ∘ p₂) ∘ p₃]
3 The Van Benthem Characterization Theorem
In this section we show how modal formulas about models (frames with valuations) can be
understood as first order formulas. However, not every first order formula (in the relevant
language) is equivalent to a modal formula. The question is which first order formulas are
equivalent to modal formulas. The answer is: the ones invariant under bisimulation. This
is the van Benthem characterization theorem. We shall define bisimulation and prove the
theorem.
3.1 Standard Translation
Fix some modal signature τ = {△₁, △₂, . . .}. We can convert it into a first order signature
as follows: for each modal operator symbol △ of arity n we introduce an (n+1)-ary relation
symbol R△. Furthermore, for each proposition letter p we introduce a unary relation symbol
P. The result is a first order signature {P₁, P₂, . . . , R₁, R₂, . . .} whose models are exactly the
same things as are models for the modal signature as defined above. In detail, given a modal
model M = (W, V) we form a first order model by defining the Rᵢ as the R△ᵢ given from the
frame W and setting the extension of Pᵢ equal to V(pᵢ). Likewise, given a first order model
we may form a modal model using the same matching.
Now, although these two languages talk about the same things, the first order one is more
expressive than the modal one. Every modal formula has an equivalent first order formula.
This is the standard translation of the modal formula and we define it recursively. Let's
describe the standard translation in the case of the basic modal language for the sake of
clarity. We define ST_x(φ) and ST_y(φ), where x and y are two distinct variables, by induction
on φ.
1. ST_z(⊥) = ⊥ for z = x, y

2. ST_z(p) = P(z) for z = x, y

3. ST_z(¬φ) = ¬ST_z(φ) for z = x, y

4. ST_z(φ₁ ∨ φ₂) = ST_z(φ₁) ∨ ST_z(φ₂) for z = x, y

5. ST_x(◇φ) = ∃y(Rxy ∧ ST_y(φ)), and similarly with x and y reversed
Here's an example of the translation at work: consider φ = ◇(p₁ ∧ ◇p₂). Its standard
translation is: ST_x(φ) = ∃y(Rxy ∧ (P₁(y) ∧ ∃x(Ryx ∧ P₂(x)))). Note that we've reused
the variable x, but that's ok.
To see that we've gotten this translation right, one may prove by induction that the
translations are equivalent to the originals. I.e., if M is a model (both a modal and a first
order model), then for any w ∈ M and any modal formula φ we have

M, w ⊨ φ ⟺ M ⊨ ST_x(φ)[w]
3.2 Bisimulation
So every modal formula is equivalent to a first order formula, but is the reverse true? No. For
example we have a first order formula asserting there's exactly one element in the model.
There is no corresponding modal formula. How can we be sure? Well, let's consider the
concept of bisimulation. We'll limit our discussion to the basic modal language.
Let (M, w) be a model with a node selected, and similarly (M′, w′). Now, a bisimulation
Z between these two pointed models is a relation Z ⊆ M × M′ such that

1. (w, w′) ∈ Z

2. (x, x′) ∈ Z implies that x and x′ agree on atomic facts, i.e. x ∈ V(p) ⟺ x′ ∈ V′(p)
for each proposition letter p

3. If (x, x′) ∈ Z and y is some node such that Rxy, then there is some node y′ such that
R′x′y′ and (y, y′) ∈ Z

4. If (x, x′) ∈ Z and y′ is some node such that R′x′y′, then there is some node y such that
Rxy and (y, y′) ∈ Z

We might say something like w and w′ are modally back-and-forth equivalent, or that they're
bisimilar.
Now, if two nodes are bisimilar, then they have to in fact agree on all modal formulas.
We can prove this by induction. We show that if Z is a bisimulation then for all (x, x′) ∈ Z,
x and x′ agree on every modal formula φ, by induction on φ. The atomic case is obvious, as
are the boolean steps. So suppose x ⊨ ◇φ. We wish to show that x′ ⊨ ◇φ. As x ⊨ ◇φ,
introduce y such that Rxy and y ⊨ φ. Then by clause 3 in the definition of bisimulation we
get a y′ such that R′x′y′ and (y, y′) ∈ Z. By the inductive hypothesis, we get that y′ ⊨ φ
and so x′ ⊨ ◇φ. The other direction is similar.
Getting back to our example of showing that there are first order formulas that aren't
modal, recall we were trying to show that there's no modal formula which is true of a node
just in case it's in a model with exactly one element. We can imagine a model with just
one element not connected to itself and then another model with two elements, also with an
empty accessibility relation. We can pick one of the nodes in the two element model and
see that there's a bisimulation between it and the node in the one element model: locally
there's no way to tell these two nodes apart.
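
For finite models the largest bisimulation can actually be computed: start from all pairs of nodes that agree on atomic facts and repeatedly discard pairs violating the back-and-forth clauses. The following Python sketch (my own; the function name and the toy models are illustrative assumptions, not from the notes) does this; two pointed models (M, w) and (M′, w′) are then bisimilar exactly when (w, w′) survives in the result.

def largest_bisimulation(W1, R1, V1, W2, R2, V2):
    letters = set(V1) | set(V2)
    # start from the pairs that agree on atomic facts
    Z = {(a, b) for a in W1 for b in W2
         if all((a in V1.get(p, set())) == (b in V2.get(p, set())) for p in letters)}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(Z):
            forth = all(any((a2, b2) in Z for b2 in R2.get(b, set()))
                        for a2 in R1.get(a, set()))
            back = all(any((a2, b2) in Z for a2 in R1.get(a, set()))
                       for b2 in R2.get(b, set()))
            if not (forth and back):
                Z.discard((a, b))
                changed = True
    return Z

# The one-element model versus the two-element model from the example above
# (both with empty accessibility relation and no atomic facts):
print(largest_bisimulation({0}, {0: set()}, {}, {0, 1}, {0: set(), 1: set()}, {}))
# {(0, 0), (0, 1)}: the single node is bisimilar to both nodes of the larger model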
Now, it is true that if two nodes w and w′ are bisimilar, then they agree about all modal
formulas, i.e. they are modally equivalent, but the reverse is not in general true: it is not
true that modal equivalence implies bisimilarity. However, under appropriate assumptions
it does. We say that a model is M-saturated if every node w has the following property: if
Σ is some collection of modal formulas which is finitely satisfiable among children of w, then
there is some child of w which satisfies all of Σ. I.e., if for every finite subset Σ₀ of Σ there is
some y such that Rwy and y ⊨ Σ₀, then there is some y such that Rwy and y ⊨ Σ. When
the models under consideration are M-saturated, modal equivalence does imply bisimilarity.
Suppose w ∈ M and w′ ∈ M′ are modally equivalent nodes from two M-saturated models
M and M′. Let

Z := {(x, x′) | x ∈ M, x′ ∈ M′, and x and x′ are modally equivalent}

We claim that Z is a bisimulation of w and w′. The only thing that really needs to be
checked is the back and forth conditions. Suppose (x, x′) ∈ Z and Rxy. We need to find a y′
such that R′x′y′ and y and y′ are modally equivalent. Well, let Σ be the complete modal type
of y, i.e. the collection of all modal formulas that y thinks are true. Σ is finitely satisfiable
among children of x′ since x and x′ are modally equivalent. In detail, if Σ₀ is a finite subset
of Σ, then we note that x ⊨ ◇⋀Σ₀ and so x′ ⊨ ◇⋀Σ₀. Since M′ is M-saturated, we get
our desired child y′ of x′ with y′ ⊨ Σ.
This property of M-saturation is relevant to our theorem below, as it holds in any ω-
saturated model. Recall an ω-saturated model is one in which every consistent type with only
finitely many parameters is realized. One such type is q(x) := {Rwx} ∪ {ST_x(φ) | φ ∈ Σ}.
This type has one parameter (w) and it is consistent if Σ is finitely satisfiable among the
children of w.
3.3 Van Benthem Characterization Theorem
A first order formula φ(x) is said to be invariant under bisimulations if, whenever w and w′ are bisimilar,
φ(w) ⟺ φ(w′). I.e., φ can't tell the difference between two bisimilar nodes.
We just saw that every formula which is equivalent to a modal formula must be invariant
under bisimulations. It turns out that this is the only obstacle to a first order formula being
equivalent to a modal formula.

Theorem 1 (Van Benthem Characterization). Let φ(x) be a first order formula (in a trans-
lated modal signature). Then φ is equivalent to a modal formula iff φ is invariant under
bisimulations.

Proof. One direction of this we already saw. So, let φ(x) be invariant under bisimulations.
We want to find some modal formula that is equivalent to φ. Let MC denote all the modal
consequences of φ. That is, MC := {ST_x(ψ) | ψ is a modal formula and φ(x) ⊨ ST_x(ψ)}. If we can show that
MC ⊨ φ, then we're done. Because, if so, then by the compactness theorem for first order
logic there is some finite subset T of MC such that T ⊨ φ, and so ⋀T is equivalent to φ.
⋀T is in turn of course equivalent to a modal formula.
So we would like to show MC ⊨ φ. So let M be some model and w ∈ M such that
M ⊨ MC[w]. We show M ⊨ φ[w]. Let C(x) := {ST_x(ψ) | M, w ⊨ ψ}. I.e. C is the
complete modal type of w. Note that C(x) ∪ {φ(x)} is consistent: otherwise there would be
some finite subset C₀ of C such that φ ⊨ ¬⋀C₀, by the compactness theorem for first order
logic. Then ¬⋀C₀ would be in MC, but this contradicts the fact that M ⊨ MC[w].
So, as C(x) ∪ {φ(x)} is consistent, we may introduce a model N with a node v ∈ N
such that N ⊨ C[v] and N ⊨ φ[v]. In fact, using methods from model theory (such as an
elementary chain argument, or by taking a suitable ultrapower), we may ensure that such
an N is ω-saturated. Next, we may introduce an ω-saturated elementary extension M′ of
M. Since w in M′ and v in N are modally equivalent, and we're dealing with ω-saturated
models, it follows that w and v must be bisimilar. From the assumption that φ is invariant
under bisimulations, we get that M′ ⊨ φ[w], whence M ⊨ φ[w] as desired.

Once again, the point of this theorem was to determine a theoretical distinction between
the first order formulas which are equivalent to a modal formula, and the first order formulas
which are inherently non-modal.
4 Definability of Classes of Frames

We continue in this section trying to understand the expressivity of modal languages, but
from a slightly different perspective. Here we give ourselves a class of frames and ask whether
it's possible to give a collection of modal formulas which yields exactly this class. But wait!
What does it mean for a modal formula to talk about a frame? Didn't we define things
so that we could evaluate the truth/falsity of a modal formula at a model (a frame with a
valuation) and a specified node? How could a modal formula say something about a frame
with no valuation given? There's actually a standard way to do this, as we defined in a
previous section. A frame W ⊨ φ iff for every valuation V on W and for every w ∈ W,
(W, V), w ⊨ φ. Another way to think about this definition is in terms of second order logic.
We take it that there is a second order universal quantifier (∀p) at the beginning of the
formula for each proposition letter p that occurs in the formula. E.g., ◇◇p → ◇p, when
considered as a modal formula describing a frame W, actually means "for every possible
valuation of p, the model with this valuation on W makes ◇◇p → ◇p true everywhere".
This is second order since V(p) is a subset of W.
4.1 Modal versus Second Order versus First Order
4.1.1 The Basic Layout
Figure 1 shows how modal formulas, first order formulas, and second order formulas are
related in the context of describing frames. Every first order formula is of course also a
second order formula, so the first order definable classes of frames (the elementary classes)
are contained within the second order definable classes. The modal classes are also all second
order classes. This is obvious from how we defined how a modal formula talks about a
frame: it's the usual first order standard translation, except that we add a second order
quantifier at the beginning for each proposition letter p that occurs in the formula.
Figure 1: Classes of Frames
The more interesting part of the picture is the relationship between the elementary classes
and the modal classes. It's not a simple relationship, as they intersect non-trivially. There
is a bunch of theorems that get at understanding this picture better, but we'll be focussing
on just one: the Goldblatt-Thomason theorem.
The Goldblatt-Thomason theorem answers the question of which elementary classes are also
modal classes. It says that if K is an elementary class of frames, then K is modally definable
iff K is closed under taking bounded morphic images, generated subframes, disjoint unions,
and reflects ultrafilter extensions. We'll define all these notions shortly, but we should already
note something about the general character of the theorem. We're taking a linguistic notion
(whether there exist some modal formulas with such and such properties) and converting
it into a structural notion (whether a class of structures is closed under certain structure-
building operations). We can say this helps us understand the expressivity of modal logic.
4.1.2 Some Modal Formulas With First Order Correspondents
Before we get to the Goldblatt-Thomason theorem, it will be in our interest to consider
some examples. Now, the Goldblatt-Thomason theorem concerns itself with finding the
modal out of the first order, but we might ask the reverse as well: take, for example, the
formula p → ◇p. Does this modal formula correspond to a first order condition on frames?
Remember we're not asking the question whether the modal formula corresponds to a first
order condition on models; we already know the answer to this is yes, and we even have
an effective way of going from the modal formula to the first order formula (the standard
translation). The question we're asking here is instead whether there is a corresponding
first order sentence only in the language of frames (i.e. we can use the relation symbols R△
corresponding to each modal operator △, but not any relation symbols P corresponding to
the proposition letters p).
It turns out that p → ◇p does correspond to a first order sentence. Any sentence that
asserts that the frame is reflexive works, e.g. ∀xRxx. Let's check that this does indeed
match. Suppose W is a frame such that W ⊨ p → ◇p. We want to show that W ⊨ ∀xRxx.
Well, let w ∈ W be given. We show Rww. Let V be a valuation on W such that V(p) = {w}.
Then, as (W, V), w ⊨ p, and W ⊨ p → ◇p, we get (W, V), w ⊨ ◇p. Thus there is some w′ ∈ W such
that Rww′ and (W, V), w′ ⊨ p. However, we stipulated that V(p) = {w}, so w′ must equal w.
Thus Rww.
Now we show the other direction. Suppose W ⊨ ∀xRxx. To show W ⊨ p → ◇p,
let w be some node in W and V some valuation such that (W, V), w ⊨ p. We show that
(W, V), w ⊨ ◇p. Well, since Rww, this follows immediately.
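
The correspondence just proved can also be sanity-checked by brute force: on a fixed finite frame, validity of p → ◇p is decided by ranging over all valuations of the single letter p, and the result should agree with reflexivity. The Python check below (my own illustration, not a proof; it only covers frames on two worlds) does exactly that.

from itertools import product, combinations

def validates_T(W, R):
    # W validates p -> <>p iff for every valuation and every world where p holds,
    # some successor also satisfies p
    for bits in product([False, True], repeat=len(W)):
        Vp = {w for w, b in zip(W, bits) if b}
        for w in Vp:
            if not any(v in Vp for v in R.get(w, set())):
                return False
    return True

W = [0, 1]
edges = [(0, 0), (0, 1), (1, 0), (1, 1)]
for k in range(len(edges) + 1):
    for E in combinations(edges, k):
        R = {w: {v for (u, v) in E if u == w} for w in W}
        assert validates_T(W, R) == all((w, w) in E for w in W)
print("p -> <>p matches reflexivity on every frame over two worlds")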
So that's nice; we've found a modal formula (sometimes called T) that corresponds to the
first order condition of reflexivity. What are some other modal formulas that correspond to
a nice first order condition? Well, the same kind of proof as just above can be used to show
that ◇◇p → ◇p (sometimes called 4) corresponds to transitivity, ◇p → □◇p (5) corresponds
to being right-Euclidean (∀x∀y∀z((Rxy ∧ Rxz) → Ryz)), p → □◇p (B) corresponds to symmetry,
□p → ◇p (D) corresponds to right-unboundedness (∀x∃yRxy), and ◇p → □p (CD) cor-
responds to every element having a unique R-successor or none at all. Examples of
modal formulas not in the basic modal language that have first order correspondents include
Sahlqvist-style implications between a polyadic operator △ and its dual ▽, and, in a language
with two diamonds, (◇₁◇₂p ∧ ◇₂◇₁p) → ◇₁(p ∧ ◇₂p).
4.1.3 Some Modal Formulas Without First Order Correspondents
The Gödel-Löb Formula. Given this long list of modal formulas that do have first order
correspondents, one might wonder whether all modal formulas have them. The answer
is no. Let's consider the Gödel-Löb formula ◇p → ◇(p ∧ ¬◇p) (L). (Note this
formula is also commonly written as the equivalent (on frames) □(□p → p) → □p.) I
claim (L) doesn't correspond to a first order condition. In fact, the frames that validate
(L) are exactly the transitive frames which have no infinite ascending chain x₀Rx₁Rx₂R⋯
(note the elements of the sequence don't have to be distinct). Given this, (L) can't cor-
respond to a collection Σ of first order sentences because of compactness: the collection
Σ ∪ {Rc₀c₁ ∧ Rc₁c₂ ∧ ⋯ ∧ Rcₙ₋₁cₙ | n ∈ ℕ}, where the cᵢ are new constant symbols, would be finitely satis-
fiable, as {0, . . . , n} with R interpreted as strict inequality would work for any finite amount.
But then compactness would give a frame satisfying Σ with an infinite ascending chain.
So, let's see that (L) defines the transitive, reverse well-founded (in the sense of having no
infinite ascending chains) frames. As a preliminary general comment, note that the formula
◇p → ◇(p ∧ ¬◇p) could be paraphrased as saying "if p is possible at some point to the
right, then there is a point to the right at which p happens but after which no more p can
be found". Let W be a transitive, reverse well-founded frame. Let w be a node in W and
V a valuation such that (W, V), w ⊨ ◇p. We show that (W, V), w ⊨ ◇(p ∧ ¬◇p). Well,
suppose to get a contradiction that (W, V), w ⊭ ◇(p ∧ ¬◇p). Then, as (W, V), w ⊨ ◇p, we
may introduce an x₀ ∈ W such that Rwx₀ and (W, V), x₀ ⊨ p. Then, using the fact that
w ⊭ ◇(p ∧ ¬◇p), we get x₀ ⊨ ◇p. Thus, we may introduce an x₁ ∈ W such that Rx₀x₁
and x₁ ⊨ p. Since the frame is transitive, we have Rwx₁ and we conclude as before that
x₁ ⊨ ◇p. We continue in this way and obtain an infinite ascending chain x₀Rx₁Rx₂R⋯,
violating reverse well-foundedness.
Now let's see the other direction. Assume that W validates (L). We show that W is
transitive and reverse well-founded. Let Rxy and Ryz. We show Rxz. Let V be a valuation
such that V(p) = {y, z}. Then x ⊨ ◇p, so by (L) we have x ⊨ ◇(p ∧ ¬◇p). Since y ⊨ ◇p
(because Ryz) and z is the only other node in V(p), we must have z ⊨ p ∧ ¬◇p and Rxz. Now we show
that W must be reverse well-founded. Suppose to get a contradiction that x₀Rx₁Rx₂R⋯
is some infinite ascending sequence. Let V be some valuation such that V(p) = {x₀, x₁, . . .}.
Then x₀ ⊨ ◇p, but x₀ ⊭ ◇(p ∧ ¬◇p), contradicting (L).
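
Both directions can again be checked by brute force on small finite frames, where (by the argument just given) validating (L) amounts to being transitive with no cycles. The Python sketch below (my own illustration) confirms that the strict linear order on three points validates ◇p → ◇(p ∧ ¬◇p) while a single reflexive point does not.

from itertools import product

def dia(X, R, W):
    # <>X: the worlds that see something in X
    return {w for w in W if R.get(w, set()) & X}

def validates_L(W, R):
    for bits in product([False, True], repeat=len(W)):
        P = {w for w, b in zip(W, bits) if b}          # the valuation of p
        stop = P - dia(P, R, W)                        # worlds satisfying p & ~<>p
        if not dia(P, R, W) <= dia(stop, R, W):        # need <>p -> <>(p & ~<>p) everywhere
            return False
    return True

print(validates_L([0, 1, 2], {0: {1, 2}, 1: {2}, 2: set()}))   # True
print(validates_L([0], {0: {0}}))                              # False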
The McKinsey Formula. Another interesting example of a modal formula which does not
have a first order correspondent is the McKinsey formula □◇p → ◇□p (M). We show that
if it did, then the Löwenheim-Skolem theorem would be violated. Consider the following
frame:

W := {w} ∪ {vₙ, v_(n,0), v_(n,1) | n ∈ ℕ} ∪ {z_f | f : ℕ → {0, 1}}

The relation R on W is defined by

R := {(w, vₙ), (vₙ, v_(n,0)), (vₙ, v_(n,1)), (v_(n,0), v_(n,0)), (v_(n,1), v_(n,1)) | n ∈ ℕ}
     ∪ {(w, z_f), (z_f, v_(n,f(n))) | n ∈ ℕ, f : ℕ → {0, 1}}

See Figure 2 to make sense of this.

Figure 2: An ad hoc frame for showing (M) is not first order
First we note that W ⊨ □◇p → ◇□p. This just involves checking each type of node: w,
vₙ, v_(n,i), and z_f. We'll just check w here. Suppose we're supplied with some valuation V
such that (W, V), w ⊨ □◇p. Then, as each vₙ is a successor of w, and vₙ only sees v_(n,0) and
v_(n,1), we have that for every n ∈ ℕ either v_(n,0) or v_(n,1) is in V(p). Thus, we may introduce
a function f : ℕ → {0, 1} such that for every n ∈ ℕ we have v_(n,f(n)) ∈ V(p). It follows that
(W, V), z_f ⊨ □p and so w ⊨ ◇□p.
Next, we may apply the downward Löwenheim-Skolem theorem to W to obtain a count-
able elementary submodel W′ ≼ W which contains w, vₙ, and v_(n,i) for each n ∈ ℕ and
i ∈ {0, 1}. I.e., W′ is a submodel containing the points just mentioned such that for
every first order formula φ(x₁, . . . , xₘ) and elements w₁, . . . , wₘ ∈ W′ we have W′ ⊨
φ(w₁, . . . , wₘ) ⟺ W ⊨ φ(w₁, . . . , wₘ). Since W is uncountable, we may introduce a
function f : ℕ → {0, 1} with z_f ∉ W′. Let V be a valuation with V(p) = {v_(n,f(n)) | n ∈ ℕ}.
We claim (W′, V), w ⊨ □◇p but (W′, V), w ⊭ ◇□p. This will imply that □◇p → ◇□p (M)
can't be equivalent to a first order sentence, since W′ doesn't satisfy (M) any more (recall
an elementary submodel must satisfy all the same first order sentences as the original).
Let's first see that w ⊭ ◇□p. Well, the successors of w are the vₙ and the z_f′ that still
remain in W′. If z_f′ ∈ W′ then f′ ≠ f, and so there is an n ∈ ℕ such that f′(n) ≠ f(n),
from which it follows that v_(n,f′(n)) ⊭ p based on our definition of V. Hence, z_f′ ⊭ □p. As
for the vₙ, simply observe that both v_(n,0) and v_(n,1) are successors, but only one can have p
holding. Thus vₙ ⊭ □p.
Now we turn our attention to showing that w ⊨ □◇p. It's clear that each vₙ can see a
node where p holds: again, either p holds at v_(n,0) or at v_(n,1). And as long as f′ agrees with
f at at least one n, we have z_f′ ⊨ ◇p. However, we still need to rule out the possibility that
there is a z_f′ ∈ W′ such that f′ is the exact opposite of f in the sense that f′(n) = 1 − f(n)
for every n. Luckily, we can rule this out using the fact that W′ is an elementary submodel
of W. We're able to express in a first order way that if z_g ∈ W′, and g′ is the opposite of
g, then z_g′ ∈ W′. How is this done? Well, note that we're able to say in a first order way
(with the parameter w) of some element z that Rwz and there are at least three distinct
successors of z. Thus, we're able to say of z that it's one of the z_f, because these are the only
successors of w that have at least three successors. Thus, a first order statement that works
is one that asserts that for all z₁, if z₁ = z_g for some g, then there exists an element z₂ for
which z₂ = z_g′ for some g′, and for all y, Rz₁y implies ¬Rz₂y. The last condition here ensures
that g′ is the opposite of g. Since W models this first order sentence, so too must W′, and
hence if f′ is the opposite of f, then z_f′ ∉ W′, lest z_f ∈ W′. We've completed verifying that
w ⊨ □◇p.
4.1.4 Some Other Theorems
We've seen that there are some modal formulas that have first order frame correspondents (e.g.
p → ◇p), and some modal formulas that don't (e.g. ◇p → ◇(p ∧ ¬◇p)). Is there
some way to tell in general whether a given modal formula has a first order correspondent
or not? Well, effectively, no. Chagrova's theorem states that it is undecidable whether an
arbitrary basic modal formula has a first order correspondent, though we won't discuss this
theorem further.
Although we can't get all the modal formulas with first order correspondents in an effec-
tive way, there is a large fragment that we can get effectively, the Sahlqvist fragment. This
is a decidable collection of modal formulas (defined syntactically) each formula of which
has a first order correspondent which can be effectively computed from the modal formula.
Further, Kracht's theorem lets us go in the reverse direction: we can syntactically define a
decidable collection of first order formulas which are exactly the first order correspondents
of the Sahlqvist formulas, and one can go back and forth effectively. We will not consider
these results further, except to mention that all of the examples of modal formulas with first
order correspondents given so far have been Sahlqvist formulas (though there are formulas
with first order correspondents that are not Sahlqvist, such as the conjunction M ∧ 4).
The point of these remarks is to point out that there is a good deal of known material
around in understanding the picture in Figure 1 better that we won't be talking about here.
4.2 Some Frame-building Operations
In this section we now turn our attention to the frame-building operations that are involved in
the Goldblatt-Thomason theorem. Each operation is also naturally an operation that works
on models as well as frames, as we'll see, by tacking on what's supposed to happen with
the valuations. We will give the definition of each operation and some illustrative examples.
Further, we will supply arguments explaining why closure under them is a necessary condition
for a class of frames to be modally definable (regardless of whether or not this class is
elementary, i.e. first order definable). This is one direction of the Goldblatt-Thomason
theorem. The other direction is a slightly weakened converse that says that if a class of
frames is closed under these operations, and in addition it is an elementary class, then the
class is modally definable. This other direction will be proved in the next section. We will
focus on the basic modal language {◇}, though all of this works in the more general case
with slight modifications.
4.2.1 Generated Subframes
A frame W′ is called a subframe of a frame W, written W′ ⊆ W, if W′ is a subset of
W and for all x, y ∈ W′, R_W′xy ⟺ R_Wxy. W′ is called a generated subframe of W,
written W′ ↣ W, if in addition we have for all x ∈ W′ and y ∈ W, if Rxy then y ∈ W′.
I.e., if x ∈ W′, then W′ also contains all the W-children of x. A model (W′, V′) is called
a submodel of (W, V) if W′ is a subframe of W and for all proposition letters p we have
V′(p) = V(p) ∩ W′. The model (W′, V′) is called a generated submodel if in addition W′ is
a generated subframe of W.
The motivation for considering subframes and submodels should be apparent: we may
wish to consider structures that are smaller pieces of larger structures. But what does the
"generated" part do? This ensures that truth at a node in the submodel matches truth in
the larger model. We want to keep all the children around since our recursive definition of
truth was so sensitive to looking at the children.
We show that if (W′, V′) is a generated submodel of (W, V) then for all modal formulas
φ we have that for every w ∈ W′

(W′, V′), w ⊨ φ ⟺ (W, V), w ⊨ φ

We show this by induction on φ. The case that φ is a proposition letter is built into the
definition of submodel. The boolean cases are easy to deal with. So consider the case ◇φ
where our result is known to hold for φ. Then suppose (W′, V′), w ⊨ ◇φ. Then
there is an x ∈ W′ such that Rwx and (W′, V′), x ⊨ φ. By the inductive hypothesis we get
(W, V), x ⊨ φ and so (W, V), w ⊨ ◇φ. Now suppose (W, V), w ⊨ ◇φ. Let x ∈ W be such that
Rwx and (W, V), x ⊨ φ. Since W′ is a generated subframe of W, we get that x ∈ W′, and
so (W′, V′), w ⊨ ◇φ.
Using this, we get a similar result for frames, but it only goes in one direction. Suppose
W′ ↣ W, i.e. W′ is a generated subframe of W. We claim that for every modal formula
φ, we have W ⊨ φ ⟹ W′ ⊨ φ. Suppose W′ ⊭ φ. Then let V′ be a valuation on W′ and
w ∈ W′ such that (W′, V′), w ⊭ φ. Then define a valuation V on W by V(p) := V′(p).
Then (W′, V′) is a generated submodel of (W, V). Thus, (W, V), w ⊭ φ. So W ⊭ φ.
The reverse direction doesn't work, i.e. just because W′ ↣ W it doesn't necessarily
follow that W′ ⊨ φ ⟹ W ⊨ φ. Here's a counterexample. Suppose W′ = (ℕ, <) and
W = W′ ∪ {x} where x is some new isolated point (no children and no parents). Then ◇⊤
is valid in W′ but not in W.
What do generated subframes have to do with the Goldblatt-Thomason theorem? Recall
that we're interested in classes of frames which are modally definable. That is, we are
interested in classes K of frames such that there is a collection Σ of modal formulas such
that K = {W | W ⊨ Σ}. Since, as we've just observed, going from a frame to a generated
subframe preserves validity, if K is modally definable, then it must be closed under generated
subframes. I.e., if W ∈ K and W′ ↣ W, then W′ ∈ K too. We summarize this by saying
that modally definable frame classes K are closed under generated subframes.
One way to think about this is that if the validity of some first order (or other) sentence is
not closed under taking generated subframes, then it doesn't correspond to a modal formula.
For example, consider the first order sentence ∃xRxx, i.e. it asserts that there is a node which
can see itself. Now, we claim K = {W | W ⊨ ∃xRxx} is not modally definable. Consider,
e.g., a frame W with two isolated nodes x and y, one of which, x, can see itself, while y
cannot. Now, {y} is a generated subframe of W, and W ⊨ ∃xRxx, yet {y} ⊭ ∃xRxx.
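
Computing the subframe generated by a node is just a reachability closure, which makes the counterexample easy to replay mechanically. The Python sketch below (my own; finite frames only, with made-up names) builds the generated subframe and shows that the reflexive point disappears.

def generated_subframe(w, R):
    # collect w together with all of its descendants, then restrict R
    nodes, frontier = {w}, [w]
    while frontier:
        x = frontier.pop()
        for y in R.get(x, set()):
            if y not in nodes:
                nodes.add(y)
                frontier.append(y)
    return nodes, {x: R.get(x, set()) & nodes for x in nodes}

R = {0: {0}, 1: set()}                 # node 0 sees itself, node 1 is isolated
print(generated_subframe(1, R))        # ({1}, {1: set()}): Ex Rxx fails in the subframe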
4.2.2 Disjoint Unions
A disjoint union of frames or models is pretty much self-explanatory: you just take the
union of a bunch of structures but don't mess around with having parent-child relationships
between any of the pieces. Notationally, we write the disjoint union of a bunch of frames Wᵢ
(i ∈ I) as ⨄_{i∈I} Wᵢ.
In terms of the Goldblatt-Thomason theorem, the relevant observation here is that if K
is modally definable and for each i ∈ I, Wᵢ ∈ K, then ⨄_{i∈I} Wᵢ is also in K. This follows
from observing that each Wᵢ is a generated subframe of ⨄_{i∈I} Wᵢ. This seems perhaps at
first to be of no help, since we're trying to go the wrong way, but since ⨄_{i∈I} Wᵢ is covered
by the Wᵢ it works out. In detail, suppose W := ⨄_{i∈I} Wᵢ ⊭ φ for some modal formula φ. Then
introduce a valuation V on W and a node w ∈ W such that (W, V), w ⊭ φ. Then w ∈ Wᵢ
for some i ∈ I, and we may define a valuation Vᵢ on Wᵢ by letting Vᵢ(p) := V(p) ∩ Wᵢ. It
follows that (Wᵢ, Vᵢ) is a generated submodel of (W, V), and hence (Wᵢ, Vᵢ), w ⊭ φ. We've
concluded showing (the contrapositive of the statement) that if Wᵢ ⊨ φ for each i ∈ I, then
W ⊨ φ too, as desired.
We see from this result that the class of finite frames is not modally definable, as it's
not closed under disjoint union: we're allowing the index set I to be any size we want,
including infinite. Note that the class of finite frames is closed under generated subframes,
so it is necessary to consider closure under disjoint unions over and above closure under
generated subframes.
4.2.3 Bounded Morphic Images
We call a function f : W₁ → W₂ between frames a bounded morphism if

1. For all x, x′ ∈ W₁, R₁xx′ ⟹ R₂f(x)f(x′) (f is a homomorphism)

2. For all x ∈ W₁ and all y ∈ W₂, if R₂f(x)y, then there exists an x′ ∈ W₁ such that
R₁xx′ and f(x′) = y (f is "surjective among children")

A function between models (W₁, V₁) and (W₂, V₂) is called a bounded morphism if in addition
it satisfies x ∈ V₁(p) ⟺ f(x) ∈ V₂(p) for all x ∈ W₁ and all proposition letters p.
Now, these conditions might seem slightly unnatural at first, but they are exactly what's
needed to make an inductive proof work showing that for every modal formula φ, for every x ∈ W₁,
(W₁, V₁), x ⊨ φ ⟺ (W₂, V₂), f(x) ⊨ φ. Let's see how this proof goes to see where the
assumptions come in. We do it by induction on φ.
The base case involving the proposition letters is covered exactly by the condition that
x ∈ V₁(p) ⟺ f(x) ∈ V₂(p). The boolean cases are straightforward. Consider ◇φ.
Suppose (W₁, V₁), x ⊨ ◇φ. We may introduce an x′ ∈ W₁ such that R₁xx′ and (W₁, V₁), x′ ⊨
φ. By the inductive hypothesis, (W₂, V₂), f(x′) ⊨ φ, and by the homomorphic condition
on f we have R₂f(x)f(x′). Thus, (W₂, V₂), f(x) ⊨ ◇φ. Now suppose (W₂, V₂), f(x) ⊨ ◇φ.
Introduce a y ∈ W₂ such that R₂f(x)y and (W₂, V₂), y ⊨ φ. By the "surjective among
children" condition, we may introduce an x′ ∈ W₁ such that R₁xx′ and f(x′) = y. Using the
inductive hypothesis we get (W₁, V₁), x′ ⊨ φ and so (W₁, V₁), x ⊨ ◇φ as desired.
I'd like to say a few more words about these conditions defining a bounded morphism.
Perhaps the more typical definition which we might expect would be a strong homomorphism,
which would satisfy x ∈ V₁(p) ⟺ f(x) ∈ V₂(p) and R₁xy ⟺ R₂f(x)f(y). However,
this would not be enough to ensure that the above inductive proof would go through. We
would need a surjective strong homomorphism for it to work. But then again, these con-
ditions would be overkill since, as we saw, we only need "surjective among children" for the
inductive proof to go through.
A frame W₂ is called a bounded morphic image of another frame W₁ if there is some
surjective bounded morphism f : W₁ → W₂. We write W₁ ↠ W₂ in this case. Closure under
bounded morphic images is another condition relevant to the Goldblatt-Thomason theorem.
If φ is some modal formula, and W₁ ⊨ φ, and W₁ ↠ W₂, then it follows that W₂ ⊨ φ. I.e.,
modally definable classes of frames are closed under bounded morphic images. To see this,
suppose W₂ ⊭ φ and so let V₂ be some valuation on W₂ and y ∈ W₂ such that (W₂, V₂), y ⊭
φ. Then define a valuation V₁ on W₁ by letting V₁(p) := {x ∈ W₁ | f(x) ∈ V₂(p)}, where
f : W₁ → W₂ is some surjective bounded morphism. Then f is actually also a bounded
morphism of models. Since f is surjective, we may introduce an x ∈ W₁ such that f(x) = y.
By the inductive lemma above, we have (W₁, V₁), x ⊭ φ. So W₁ ⊭ φ.
Consider the class of strongly asymmetric frames, i.e. those frames which satisfy
∀x∀y(Rxy → ¬Ryx). An example of such a frame is W₁ = (ℕ, S), i.e., the natural numbers
with the usual successor relation as the accessibility relation. An example of a frame which is
not strongly asymmetric is the two element frame W₂ = {e, o} where the accessibility relation
is {(e, o), (o, e)}. However, there is a surjective bounded morphism from W₁ to W₂, which
shows that the condition "strongly asymmetric" is not modally definable. It can be easily
checked that the function defined by f(2n) = e and f(2n + 1) = o works. Note also that
strong asymmetry is closed under generated subframes and disjoint unions, so we do indeed
have a new closure condition. Further, all three are still needed since surjective bounded
morphisms preserve the existence of reflexive elements and finiteness.
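
Both conditions are easy to check mechanically on finite frames. The Python sketch below (my own; the 4-cycle is a made-up finite stand-in for the (ℕ, S) example, chosen because it too is strongly asymmetric) verifies that the parity map onto the two element cycle {e, o} satisfies both bounded-morphism conditions.

def is_bounded_morphism(f, R1, R2):
    # condition 1: f is a homomorphism
    homo = all(f[y] in R2.get(f[x], set()) for x in R1 for y in R1[x])
    # condition 2: f is surjective among children
    back = all(any(x2 in R1.get(x, set()) and f[x2] == y for x2 in f)
               for x in f for y in R2.get(f[x], set()))
    return homo and back

R1 = {0: {1}, 1: {2}, 2: {3}, 3: {0}}      # a 4-cycle: strongly asymmetric
R2 = {'e': {'o'}, 'o': {'e'}}              # the 2-cycle: not asymmetric
f = {0: 'e', 1: 'o', 2: 'e', 3: 'o'}       # the parity map, which is surjective
print(is_bounded_morphism(f, R1, R2))      # True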
4.2.4 Ultrafilter Extensions

Ultrafilter extensions are an instance of a more general construction called ultrafilter frames.
However, as an introduction to the concept, and because our use of ultrafilter frames will
presently be limited to ultrafilter extensions, we will phrase things only for the case of
ultrafilter extensions right now. Later, when we look at canonical frames for normal modal
logics, we will think about the more general notion of ultrafilter frame and see how both
canonical frames and ultrafilter extensions are instances of it.
Ultrafilters. To define the ultrafilter extension, we first need to know what an ultrafilter
is. Let I be any set. As usual, we use the notation P(I) to denote the power set of I. A
(proper) filter U on P(I) is a collection of subsets of I (i.e. U ⊆ P(I)) such that

1. ∅ ∉ U (proper)

2. If X, Y ∈ U then X ∩ Y ∈ U (closed under intersection)

3. If X ∈ U and Y ∈ P(I) and X ⊆ Y, then Y ∈ U (closed under superset)

An ultrafilter is a filter that satisfies the additional condition that for every X ∈ P(I), either
X ∈ U or I − X ∈ U (maximality). A subset C of P(I) that has the property that any finite
collection X₁, . . . , Xₙ ∈ C has non-empty intersection is called consistent. Any consistent
subset C may be extended to a filter by closing off under finite intersections and supersets.
A commonly used theorem in model theory (and elsewhere), called the ultrafilter theorem,
is that every consistent subset of P(I) may be extended to an ultrafilter. It can be readily
proved using the axiom of choice.
As an example of an ultrafilter, let i be any element of I. Let U := {X ⊆ I | i ∈ X}.
This is called the principal ultrafilter generated by i, and we'll denote it πᵢ. The existence of
non-principal ultrafilters follows from the ultrafilter theorem: we may take the collection
of all cofinite subsets of some infinite set I and observe that this collection is consistent.
Any ultrafilter extending it can't be principal as it won't contain any finite sets: if it were
generated by i then it would contain, e.g., {i}.
Propositions. Let W be any frame. (As usual we'll limit ourselves to the basic modal
language to keep things simple.) Note that we may define an operation ◇ : P(W) → P(W)
by setting

◇X := {w ∈ W | there is some x such that Rwx and x ∈ X}

I.e., ◇X consists of those w ∈ W that can see something in X. Similarly, we may define an
operation □ : P(W) → P(W) by setting

□X := {w ∈ W | for all y ∈ W such that Rwy we have y ∈ X}

I.e., □X consists of those w ∈ W that can only see things in X. We may also define an
operation ¬ : P(W) → P(W) by setting ¬X := W − X, the complement. From these
definitions it follows that for all X ∈ P(W) we have ¬◇¬X = □X and ¬□¬X = ◇X.
Similarly, ◇¬X = ¬□X, □(X ∩ Y) = □X ∩ □Y, and ◇(X ∩ Y) ⊆ ◇X ∩ ◇Y.
Looking at things this way motivates the definition of a proposition as a subset of W.
We have a proposition-building operation ◇, which takes a proposition X ⊆ W and outputs
another proposition ◇X ⊆ W. Similarly for ¬, □, etc. The function Ṽ : F → P(W) is just
a way of associating a proposition to every formula in the modal language. An ultrafilter, in
this light, can be thought of as a maximal, consistent "belief state". I.e., the ultrafilter doesn't
believe in falsity (∅ ∉ U), if the ultrafilter believes two propositions then it believes their
conjunction, and it decides for every proposition whether to believe it or its negation.
The ultrafilter extension is a way of taking a model M = (W, V) and forming a new model
ue M = (ue W, ue V), whose elements are the ultrafilters on P(W), such that ue M, U ⊨ φ
iff Ṽ(φ) ∈ U. If we think of an ultrafilter as a belief state, then we can reword this as saying
that φ holds at a belief state just in case the proposition defined by φ in the original model
is one of the propositions believed. Let's go through the details of how this is accomplished.

Definition of the Ultrafilter Extension. Let W be a frame. We will define a new frame
called the ultrafilter extension of W, written ue W. The elements of the frame are the
ultrafilters on P(W). The accessibility relation R_ue is defined as follows:

R_ue U₁U₂ ⟺ ∀X ⊆ W[(□X ∈ U₁) ⟹ (X ∈ U₂)]

I.e., the ultrafilter U₁ can see the ultrafilter U₂ just in case, whenever □X ∈ U₁, X ∈ U₂, for
every proposition X ∈ P(W). I.e., {X | □X ∈ U₁} ⊆ U₂. What does this mean in terms
of our (loose) belief state analogy? Well, if you're at a belief state U₁, you might think it's
reasonable to move to another belief state U₂ so long as all the things you thought were
necessary before (□X ∈ U₁) are still true (X ∈ U₂).
It's possible to rephrase this definition of the accessibility relation in terms of ◇: R_ue U₁U₂
iff X ∈ U₂ implies that ◇X ∈ U₁. In our loose analogy, this is saying that if you're at some
belief state U₁, you can only move to another belief state U₂ so long as you consider everything
in U₂ at least possible. Here's an argument showing that these two definitions are equivalent.
Suppose {X | □X ∈ U₁} ⊆ U₂. We show {◇X | X ∈ U₂} ⊆ U₁. Let X ∈ U₂. Suppose
to get a contradiction that ◇X ∉ U₁. Then ¬◇X ∈ U₁, and, as we noted above, ¬◇X = □¬X.
I.e., □¬X ∈ U₁. By assumption, we get ¬X ∈ U₂, a contradiction. The other direction is
similar.
The ultrafilter extension of a model M = (W, V) is a new model ue M = (ue W, ue V) such
that ue W is the ultrafilter extension of W (as frames) and ue V(p) := {U | V(p) ∈ U}. Under
this definition, we can show that for every modal formula φ, we have for every U ∈ ue W:

ue M, U ⊨ φ ⟺ Ṽ(φ) ∈ U

We show this by induction on φ. The base case of proposition letters is by definition. The
boolean cases follow from the ultrafilter properties of U and the recursive definition of Ṽ.
The ◇-case is also straightforward but slightly more tricky. First suppose U ⊨ ◇φ. We show
Ṽ(◇φ) ∈ U. Well, let U′ be an ultrafilter such that R_ue UU′ and U′ ⊨ φ. By the induction
hypothesis, we have Ṽ(φ) ∈ U′. Since we know {◇X | X ∈ U′} ⊆ U, we get ◇Ṽ(φ) ∈ U.
By the inductive definition of Ṽ, we have ◇Ṽ(φ) = Ṽ(◇φ). So Ṽ(◇φ) ∈ U.
Now for the other direction. Suppose Ṽ(◇φ) ∈ U. We show that U ⊨ ◇φ. We need
to produce an ultrafilter U′ such that R_ue UU′ and U′ ⊨ φ. To produce U′, we will use
the ultrafilter theorem. We will show that the collection {X | □X ∈ U} ∪ {Ṽ(φ)} is
consistent, allowing us to introduce (by the ultrafilter theorem) an ultrafilter U′ extending it.
Since {X | □X ∈ U} ⊆ U′, we then get R_ue UU′, and since Ṽ(φ) ∈ U′, we get, by the
induction hypothesis, U′ ⊨ φ. So it only remains to show this collection is consistent. As
□(X₁ ∩ ⋯ ∩ Xₙ) = □X₁ ∩ □X₂ ∩ ⋯ ∩ □Xₙ, the set {X | □X ∈ U} is closed under intersections,
and so we only need to show that X ∩ Ṽ(φ) ≠ ∅ for any X ∈ P(W) such that □X ∈ U.
Well, suppose that X ∩ Ṽ(φ) = ∅. Then Ṽ(φ) ⊆ ¬X. Thus ◇Ṽ(φ) ⊆ ◇¬X. Since U is
closed under superset, and ◇Ṽ(φ) = Ṽ(◇φ) ∈ U, we have ◇¬X ∈ U. I.e., ¬□X ∈ U, hence
□X ∉ U.
We've finished showing that the ultrafilter extension ue M of M has the property that
for any modal formula φ and for any ultrafilter U ∈ ue M, we have ue M, U ⊨ φ iff the set
of nodes at which φ is true in M is in U. In particular, we get ue M, π_x ⊨ φ iff M, x ⊨ φ for
all x ∈ M, where π_x denotes the principal ultrafilter generated by x. In fact, one can check
that x ↦ π_x is actually an embedding of frames in the sense that R_ue π_{x₁}π_{x₂} ⟺ Rx₁x₂.
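
On a finite frame every ultrafilter is principal, so ue W is (isomorphic to) W itself; this gives a cheap sanity check of the definition. The Python sketch below (my own illustration, with made-up names) enumerates the principal ultrafilters of a small frame and verifies that R_ue, computed literally from the ◇-formulation, agrees with the original R.

from itertools import chain, combinations

W = [0, 1, 2]
R = {0: {1}, 1: {2}, 2: set()}
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(W, k) for k in range(len(W) + 1))]

def dia(X):                        # <>X: worlds that see something in X
    return frozenset(w for w in W if R.get(w, set()) & X)

def principal(x):                  # the principal ultrafilter generated by x
    return {X for X in subsets if x in X}

def R_ue(U1, U2):                  # X in U2 implies <>X in U1
    return all(dia(X) in U1 for X in U2)

for x in W:
    for y in W:
        assert R_ue(principal(x), principal(y)) == (y in R.get(x, set()))
print("on this finite frame, R_ue between principal ultrafilters mirrors R")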
Modally Definable Classes Reflect Ultrafilter Extensions. In terms of the Goldblatt-
Thomason theorem, the relevant fact is that modally definable classes K of frames reflect
ultrafilter extensions. This means that if K is modally definable and ue W ∈ K, then W ∈ K
too. Notice that we use the word "reflect" because the closure property here is in perhaps
the opposite direction than expected. It's not the case that W ∈ K implies that ue W ∈ K,
i.e. closure under taking ultrafilter extensions; instead it's that ue W ∈ K implies W ∈ K.
To see that this is true, let φ be some modal formula such that W ⊭ φ. Then introduce
a valuation V on W and a node w ∈ W such that (W, V), w ⊭ φ. Then it follows that
(ue W, ue V), π_w ⊭ φ, so ue W ⊭ φ. I.e., we've seen that ue W ⊨ φ implies W ⊨ φ.
An Example. Consider the class of frames K = {W | W ⊨ ∀x∃y(Rxy ∧ Ryy)}, i.e. the
class of frames which have the property that every node has a child which can see itself.
Although, as can be checked, this class is closed under generated subframes, disjoint unions,
and bounded morphic images, it does not reflect ultrafilter extensions. Here is an example
showing the failure. Let W = (ℕ, <) be the frame based on the natural numbers with
the usual strict ordering. We claim that for every U ∈ ue W which is non-principal, we have
R_ue U₁U for every U₁ ∈ ue W. To see this, let U₁ ∈ ue W. We show {◇X | X ∈ U} ⊆ U₁.
Well, X ∈ U implies that X is infinite (a non-principal ultrafilter contains no finite sets), which
implies that ◇X = ℕ. Thus, ◇X ∈ U₁. It
follows that ue W ∈ K. However, W ∉ K, since for no n ∈ ℕ do we have n < n.
Closure versus Reflection. It's not true in general that modally definable classes of
frames are closed under taking ultrafilter extensions (i.e. W ∈ K does not necessarily imply
ue W ∈ K). This is a different condition than reflecting ultrafilter extensions (ue W ∈ K
implies W ∈ K). Here is an example demonstrating this. Consider the frame W = (ℤ⁻, <)
consisting of the negative integers strictly ordered in the usual way. This frame validates (L).
However, ue W does not, as one can show that if U is a non-principal ultrafilter, then R_ue UU,
in a manner similar to the above example of (ℕ, <). Then the ultrafilter extension clearly
does not satisfy (L) since we have an infinite ascending chain U R_ue U R_ue U R_ue ⋯.
However, under certain conditions, modally definable classes are closed under taking
ultrafilter extensions. E.g., if K is both modally definable and first order definable, then K
is closed under taking ultrafilter extensions. This can be proven using an argument similar
to the Goldblatt-Thomason theorem below.
Ultrafilter Extensions are M-saturated. A fact that will come up in the proof of the
Goldblatt-Thomason theorem is that ue M is M-saturated for every model M. Recall that
M-saturated means that for every node w ∈ M, and for every collection Σ of modal formulas,
if Σ is finitely satisfiable among children of w, then Σ is satisfiable all at once at some child
of w. To see this is true, let U ∈ ue M be an ultrafilter and let Σ be a collection of modal
formulas which is finitely satisfiable among children of U. We would like to find an ultrafilter
U′ such that R_ue UU′ and U′ ⊨ Σ. To do this, it suffices to show that {X | □X ∈ U} ∪ Σ
is consistent, where we lazily identify each φ ∈ Σ with Ṽ(φ). Let φ₁, . . . , φₙ ∈ Σ, and let
X ∈ P(M) be such that □X ∈ U (recall that {X | □X ∈ U} is closed under intersection). We
show X ∩ φ₁ ∩ ⋯ ∩ φₙ ≠ ∅. By the "finitely satisfiable among children of U" assumption, we
may introduce an ultrafilter U′ such that R_ue UU′ and φ₁ ∩ ⋯ ∩ φₙ ∈ U′; since □X ∈ U, also
X ∈ U′, so X ∩ φ₁ ∩ ⋯ ∩ φₙ ∈ U′. This shows this set is
nonempty, as ultrafilters don't contain the null set.
Concerning the Independence of the Four Closure Conditions. We've already seen
an example showing that reflecting ultrafilter extensions is not implied by the other three
conditions. Further, the example ∀x∀y(Rxy → ¬Ryx) shows that closure under bounded
morphic images is not implied by the other three, as can be checked. Also, the finite-frames example
works similarly for closure under disjoint union. However, the example ∃xRxx doesn't work
for showing that closure under generated subframes is not implied by the other three, since this
condition does not reflect ultrafilter extensions, as is demonstrated by the examples just
above. I imagine there is some example that can be made here, but I'm still not quite sure.
My best thought so far is the condition that for every x there is an infinite descending chain
starting at x: i.e., there are y₀, y₁, y₂, . . . such that ⋯ y₂Ry₁Ry₀Rx. This is not closed under
taking generated subframes, yet is closed under taking disjoint unions and bounded morphic
images. The problem is just that I'm not sure whether it reflects ultrafilter extensions.
Independence is not essential to show for the purposes of the Goldblatt-Thomason theorem,
but it would make things nice and clean.
4.3 Goldblatt-Thomason Theorem
In the last few subsections we've seen four different frame-building operations: generated
subframes, disjoint unions, bounded morphic images, and ultrafilter extensions. We saw
that if K is a modally definable class of frames, i.e. there are modal formulas Γ such
that K = {W | W ⊨ Γ}, then K is closed under taking generated subframes, disjoint
unions, and bounded morphic images, and reflects ultrafilter extensions. This is one half of
the Goldblatt-Thomason theorem; we now turn to proving the other direction.
A frame F is said to be point-generated if there is some w ∈ F such that for every x ∈ F, x
is a descendant of w, i.e. there exists a finite sequence w = w₀, w₁, w₂, …, wₙ = x such that
Rwᵢwᵢ₊₁ for each i. We say that w generates F. Given any frame F and any w ∈ F, we may
always construct the smallest generated subframe F_w of F generated by w, which consists
of w and all its descendants. In fact, F is a bounded morphic image of the disjoint union
of all these generated subframes. I.e. ⊎_{w∈F} F_w ↠ F. The surjective bounded morphism
that works here is the union of the inclusion mappings. The reason for this observation will
become apparent shortly.
Theorem 2. Let K be an elementary class of frames (i.e. a class of frames definable by
first order formulas). Then K is modally definable iff K is closed under taking generated
subframes, disjoint unions, bounded morphic images, and reflects ultrafilter extensions.
Proof. We've already seen one direction of the proof, so we concentrate here on the other
direction. Let K be an elementary class of frames which is closed under taking generated
subframes, disjoint unions, bounded morphic images, and reflects ultrafilter extensions. We
hope to find a collection of modal formulas Γ such that K = {F | F ⊨ Γ}. Well, let's define
Γ := {γ | K ⊨ γ}, i.e. Γ consists of the modal formulas that every frame in K validates. By
definition, we have F ∈ K implies F ⊨ Γ. Thus, we just need to show that F ⊨ Γ implies
that F is in K.
First we note that it suffices to show that "F ⊨ Γ implies F ∈ K" for point-generated F.
To see this, assume we've shown this implication for point-generated frames, and let's try to
show it for frames in general. Let F be some frame, not necessarily point-generated, such
that F ⊨ Γ. Then each generated subframe F_w also has F_w ⊨ Γ. By our assumption, we
get that each F_w ∈ K. Then, as K is closed under disjoint unions and bounded morphic
images, and ⊎_{w∈F} F_w ↠ F, we get F ∈ K too.
So, if we can show for any frame F which is point-generated that F ⊨ Γ implies F ∈ K,
then we'll be done with the proof. So let F be some frame, let w ∈ F be a generator for F,
and assume F ⊨ Γ.
This won't be a short proof, so first let me try to give a brief summary of how things
will go. We are trying to show that F ∈ K. We will do this by showing that ue F ∈ K, and
cite the assumption that K reflects ultrafilter extensions. But how will we show ue F ∈ K?
We will find some frame N ∈ K and some surjective bounded morphism f which oversees
that N ↠ ue F, and cite the assumption that K is closed under bounded morphic images.
But what are N and f going to be? That gets a bit more complicated. We are going to
make the frame F into a model in a language with lots of proposition letters. We'll call this
model M. Of course the generator w is still in M. Then we'll find a model N′ with a point
w′ such that (N′, w′) suitably matches (M, w) except that the underlying frame of N′
is in K. Then we'll find a certain extension of N′ to a larger model N which nevertheless
has very similar properties to N′ and is still in K. Then we'll define, with the help of all
the extra proposition letters, a function f : N → ue M. We'll show that f is a surjective
bounded morphism by carefully checking each condition. Then, simply forgetting about the
extra stuff yields ue F as a bounded morphic image of a frame in K. Figure 3 tries to give
a quick visual summary of our method of proof.
Let Π = {p_X | X ⊆ F}. That is, Π is a collection of proposition letters, one for each
subset of F. We make F into a Π-model in a natural way. We define the valuation V by
setting V(p_X) := X for each X ⊆ F. I.e., a node x ∈ F thinks p_X is true iff x ∈ X. We
have a model M = (F, V) which is point-generated by w ∈ M. Let Δ be the collection of
modal formulas δ such that M, w ⊨ δ. Δ is of course a collection of modal formulas in the
expanded language with all the proposition letters in Π.
We claim that there is a Π-model N′ and a point w′ such that the underlying
frame of N′ is in K, and such that N′, w′ ⊨ Δ. As we're assuming that K is an elementary
class, we may introduce a collection Σ of first order sentences in the frame language (=, R)
such that K = {F | F ⊨ Σ}. Now, the modal formulas in Δ may be considered as first order
formulas with one free variable (say x) via the standard translation. However, these first
order formulas Δ(x) are in the Π-model language (=, R, {P_X | X ⊆ F}). Nonetheless,
as the frame language is a sublanguage of the model language, the first order sentences in Σ
may also be considered to be sentences in this Π-model language.
We will show that Σ ∪ Δ(x) is finitely satisfiable. This implies, by the compactness
theorem for first order logic, that there is some model N′ ⊨ Σ ∪ Δ(x). Letting w′ be the
interpretation of x, we get our desired model N′ and point w′. The underlying frame
of N′ is in K since N′ ⊨ Σ. N′, w′ ⊨ Δ since N′ ⊨ Δ(x)[w′].
Figure 3: A pictorial representation of the proof
So let's see that Σ ∪ Δ(x) is finitely satisfiable. Suppose it weren't, to get a contradiction.
Then Σ ⊨ ¬⋀Δ₀(x) for some finite subset Δ₀(x) of Δ(x). This is equivalent to saying
that, for every frame W ∈ K, we have W ⊨ ¬⋀Δ₀. Since Δ₀ is finite, only finitely
many proposition letters occur in it, and so this ¬⋀Δ₀ is equivalent to a modal formula
in the usual language with countably many proposition letters. Thus, we have ¬⋀Δ₀ ∈ Γ, by
the definition of Γ. However, this contradicts the fact that M, w ⊨ ⋀Δ₀, since M is based
on F and F ⊨ Γ by assumption.
We've concluded showing that we may introduce a model N′ with w′ such that
N′, w′ ⊨ Δ, the Π-modal type of w ∈ M, and such that the underlying frame of N′ is in K.
In fact, we may assume that N′ is generated by w′ because modal truth is preserved under
taking generated submodels (regardless of how many proposition letters there are), and K
is closed under taking generated subframes.
Now, using a fact from model theory, we may introduce an ω-saturated elementary ex-
tension N of N′. Recall that ω-saturated implies M-saturated, which in turn yields that
modal equivalence is a bisimulation. N being an elementary extension of N′ means that
for every first order formula φ(x₁, …, xₙ) and any elements w₁, …, wₙ ∈ N′, we have
N ⊨ φ(w₁, …, wₙ) iff N′ ⊨ φ(w₁, …, wₙ). In particular this holds for sentences
and for formulas of one free variable. Thus, in particular, we have N ⊨ Σ. The underlying
frame of N is still in K, and the modal type of w′ in N agrees with that of w′ in N′. I.e.
N, w′ ⊨ Δ iff N′, w′ ⊨ Δ.
The fact that N′ is generated by w′ (and M by w) allows us to observe the following
lemmas about every Π-modal formula φ (below, □ⁿ abbreviates a block of n boxes and ◇ⁿ a
block of n diamonds):
1. M ⊨ φ iff N ⊨ φ
2. φ is satisfiable in M implies that φ is satisfiable in N
To prove the first of these, we note the following chain of equivalences:
    N ⊨ φ iff N′ ⊨ φ
          iff N′, w′ ⊨ □ⁿφ for all n ∈ N
          iff M, w ⊨ □ⁿφ for all n ∈ N
          iff M ⊨ φ
The second may be proven as follows:
    φ is satisfiable in M iff M, w ⊨ ◇ⁿφ for some n ∈ N
          iff N′, w′ ⊨ ◇ⁿφ for some n ∈ N
          iff N, w′ ⊨ ◇ⁿφ for some n ∈ N
          implies φ is satisfiable in N
Now we're ready to define a mapping f : N → ue M. We set, if s ∈ N,
    f(s) := {X ⊆ M | s ⊨ p_X}
Recall our hope is that f is a surjective bounded morphism. Thus, there are a bunch of
things to check about f. Here is a list of the things we need to do to keep them straight:
(a) Check that f is well-defined in the sense that f(s) is indeed an ultrafilter
(b) Show that f is homomorphic
(c) Show that f is surjective among children
(d) Show that f is surjective
Naturally, we check (a) first. Let's see why ∅ ∉ f(s) first of all. By the definition of f, we
have ∅ ∈ f(s) iff s ⊨ p_∅. But we have M ⊨ ¬p_∅, so by lemma (1) above we have N ⊨ ¬p_∅.
In particular, we have s ⊨ ¬p_∅, so ∅ ∉ f(s).
Next, let's check that f(s) is closed under intersection. Let X and Y be elements of f(s),
so that s ⊨ p_X ∧ p_Y. Since M ⊨ (p_X ∧ p_Y) ↔ p_{X∩Y}, N also satisfies this, and so s ⊨ p_{X∩Y}.
Thus, X ∩ Y ∈ f(s) as desired.
Now we check that f(s) is closed under superset. Let X ∈ f(s) and let Y ⊆ F such that
X ⊆ Y. We show that Y ∈ f(s). We know s ⊨ p_X. Further, we have M ⊨ p_X → p_Y, so
N ⊨ p_X → p_Y and hence s ⊨ p_X → p_Y. Thus s ⊨ p_Y, and Y ∈ f(s).
Finally we check maximality. Let X ⊆ F. We show either X ∈ f(s) or F∖X ∈ f(s). Well,
M ⊨ p_X ∨ p_{F∖X}, so we have N modelling this and hence s ⊨ p_X ∨ p_{F∖X}. Thus, either X ∈ f(s)
or F∖X ∈ f(s).
We've finished showing (a), that f(s) is an ultrafilter for any s ∈ N. Next we show (b),
that f is homomorphic. Let s₁ and s₂ be elements of N such that Rs₁s₂. We want to show
that Rf(s₁)f(s₂). That is, we need to show that {◇X | X ∈ f(s₂)} ⊆ f(s₁), where ◇X
denotes {y ∈ M | ∃z (Ryz and z ∈ X)}. So let s₂ ⊨ p_X.
We show that s₁ ⊨ p_{◇X}. Well, M ⊨ ◇p_X ↔ p_{◇X}, so we just need s₁ ⊨ ◇p_X. But this
follows from Rs₁s₂ and s₂ ⊨ p_X.
Now we consider (c), that f is surjective among children. It suffices to show that
{(s, f(s)) | s ∈ N} is a bisimulation. Since N is ω-saturated it is also M-saturated. Also,
ue M, being an ultrafilter extension, is automatically M-saturated. Thus, it suffices to show
that f(s) = U iff s and U are modally equivalent, since in the context of M-saturated struc-
tures modal equivalence is a bisimulation. Suppose first that f(s) = U. As M ⊨ φ ↔ p_{V_M(φ)},
so too does N ⊨ φ ↔ p_{V_M(φ)} for every Π-modal formula φ. Thus
    s ⊨ φ iff s ⊨ p_{V_M(φ)}
          iff V_M(φ) ∈ f(s)
          iff V_M(φ) ∈ U
          iff U ⊨ φ
Now suppose s and U are modally equivalent. Then for X ⊆ F
    X ∈ f(s) iff s ⊨ p_X iff U ⊨ p_X iff X = V_M(p_X) ∈ U
Thus, {(s, f(s)) | s ∈ N} is a bisimulation and so f is surjective among children. (Note that
this argument also showed that s and f(s) agree on proposition letters.)
Finally we show (d), that f is surjective. Let U ⊆ P(F) be an ultrafilter. We need to
find an s ∈ N such that f(s) = U. In other words, we need to find an s such that s ⊨ p_X
iff X ∈ U. Well, let Λ := {p_A | A ∈ U}. Note, to prevent confusion, that the p_A can
be considered either as modal formulas or as the equivalent first order standard translations
ST_x(p_A).
We claim that Λ is finitely satisfiable in N. Let p_{A₁}, …, p_{Aₙ} be a finite collection of
formulas in Λ. Then, as U is an ultrafilter, we know that A₁ ∩ ⋯ ∩ Aₙ ≠ ∅. Thus, the
formula p_{A₁} ∧ ⋯ ∧ p_{Aₙ} is satisfiable in M. By lemma (2) it follows that p_{A₁} ∧ ⋯ ∧ p_{Aₙ} is
satisfiable in N too.
Since Λ is finitely satisfiable in N, and N is ω-saturated, all of Λ is satisfied by some
element s ∈ N. We claim f(s) = U. Let A ∈ U. Then p_A ∈ Λ, so s ⊨ p_A, so A ∈ f(s). Now
let A ∉ U. Then F∖A ∈ U, so p_{F∖A} ∈ Λ, so s ⊨ p_{F∖A}, so F∖A ∈ f(s), and so finally A ∉ f(s) as
f(s) is an ultrafilter.
We've completed showing that f : N → ue M is a surjective bounded morphism of models.
As the underlying frame of N is in K and K is closed under bounded morphic images, we
have that ue F is in K. Finally, as K reflects ultrafilter extensions, F must be in K too.
So we've completed showing the Goldblatt-Thomason theorem. Once again, this theorem
tells us which first order conditions on frames are also modal conditions on frames. It relates
the expressivity of first order logic and modal logic.
The proof we gave above of the Goldblatt-Thomason theorem was model-theoretic. How-
ever, it's also possible to give an algebraic proof using Birkhoff's theorem. This theorem
states that a class of algebras is equationally definable iff the class is closed under taking
homomorphic images, subalgebras, and products.
5 A Modal Version of Lindström's Theorem
In the last section we focussed on modal formulas talking about frames, but now we will
switch back to considering how modal formulas describe models. Remember that the truth
or falsity of a modal formula is evaluated at a node of a model. Thus, the structures of
interest to us really are pointed models, a pair consisting of a model and a specified node.
Pointed models are the things that make modal formulas true or false.
One way to think about modal formulas is that each one separates out a class of pointed
models. I.e., if φ is a modal formula, then it defines a class {(M, w) | M, w ⊨ φ} of pointed
models. Two modal formulas are considered equivalent if they define the same class of
pointed models. That said, it makes sense to actually identify the modal formula with this
class. I.e. we can think of a modal formula not just as a finite sequence of symbols in a
certain language, but as a class of pointed models.
This kind of thinking actually works for any language and model type. E.g., first order
languages, second order languages, propositional logic, etc. You take some language L and
have in mind associated L-models. Then, each L-sentence defines a class of L-models, and
two such sentences are equivalent if they define the same class. So, again, we can identify
the sentences with classes of models.
This motivates the definition of a sentence as a class of models. This is an abstract
type of sentence that isn't tied down to any sort of syntactic structure; it only depends
on what type of models you're dealing with presently. For example, we've already seen how
a modal formula φ and its first order standard translation ST_x(φ) talk about the same type
of model and are equivalent, even though they are in different languages. This equivalence
lies in the fact that they define the same class of pointed models.
One upshot of looking at things in this light is that it allows us to compare different
logics that could have vastly different syntactic structure. Further, it provides a convenient
system for trying to understand what characterizes modal logic. We've seen that modal logic
is inherently local. We wonder if there's some way to capture this impression by comparison
with other logics in a formal way. The answer is a (tentative) yes, and for it we turn to a
modal version of Lindström's theorem.
5.1 Abstract Logics
We have seen already that there are different modal languages: the basic modal language is
{◇}, but we also noted the basic temporal language {F, P} and an arrow language which
consists of a 2-ary modal operator symbol. However, it's not just the operator symbols that
can vary; the proposition letters can vary too. Typically we've just fixed one countable-
sized set of proposition letters Φ. However, it's nice to be able to vary which proposition
letters we're using. We already saw an example of this in the Goldblatt-Thomason theorem
where we introduced a potentially uncountable-sized set of proposition letters Π. In the
following we'll also have occasion to consider finite-sized sets of proposition letters. These
observations motivate the following definition.
A modal signature is a pair τ = (O, Φ) where O = {△₁, △₂, …} is a collection (finite or
infinite) of modal operators of specified arities (∈ N), and Φ is a collection (finite or infinite)
of proposition letters.
Our notion of frame and model is of course relative to the signature. If O is a collection
of modal operator symbols of specified arities, then an O-frame is a set W together with an
(n + 1)-ary relation R_△ for each △ ∈ O of arity n. E.g., if O = {◇}, then an O-frame is a set
W with a 2-ary relation R on it. If τ = (O, Φ) is a modal signature, then a τ-model is a pair
M = (W, V) where W is an O-frame and V : Φ → P(W) is a mapping from Φ to the power
set of the frame. A τ-pointed-model is a pair (M, m) such that M is a τ-model and m ∈ M
is a node of the underlying frame. E.g., if τ = ({◇}, {p, q}), then a τ-pointed-model is a set
M together with a 2-ary relation R on M, plus two specified subsets V(p) and V(q) of M,
plus a specified point m ∈ M.
If τ = (O, Φ) and τ′ = (O′, Φ′) are two modal signatures, then we say τ′ is a subsignature
of τ, written τ′ ⊆ τ, if O′ ⊆ O and Φ′ ⊆ Φ. If τ′ ⊆ τ, and M is a τ-pointed-model, then there
is a natural way to get a corresponding τ′-pointed-model M↾τ′: we just forget about the
extra structure of M.
Modal formulas are also signature-dependent. Let τ = (O, Φ). The τ-modal formulas are
all the syntactic objects you obtain by starting with the atomic proposition letters, and then
by closing off under the operators in O and the boolean operators, say ⊥ (0-ary), ¬ (1-ary),
and ∧ (2-ary). E.g., if O = {◇} and Φ = {p, q} then examples of modal formulas include: p,
◇q, ⊥, ¬◇p, ◇(q ∧ ¬p), etc.
We can associate to each τ-modal formula φ a class of τ-pointed-models in the usual way.
In fact, in this section, we shall identify this syntactic formula with this class and say that
φ is a class of τ-pointed-models. In this light, a bare-bones view of modal logic would be to
think of it as a function from modal signatures to collections of classes of τ-pointed-models.
I.e., if L_M denotes modal logic, then L_M(τ) consists of all the classes of τ-pointed-models
definable by some τ-modal formula. I.e., L_M(τ) consists of all the sentences that are
expressible by some τ-modal formula. Of course we do lose some information when we look
at modal logic in this light, but still a large amount of information remains. E.g., p ∧ q and
q ∧ p are now considered the same sentence, but we still know they're distinct from p ∨ q,
say.
Let's reiterate our point of view. We consider a logic to be some way of assigning to every
signature τ under consideration a collection of classes of τ-models. Every class of τ-models
is called a sentence. We forget all the information about a logic except how many classes
of models it can discern. Some logics are able to express more sentences than others. For
example, second order logic can express the sentence "finite" (the class of all finite models)
whereas first order logic cannot.
We will be focussing on (abstract) logics that deal with modal signatures, because we're
interested in understanding modal logic better. We will want to compare modal logic (viewed
as an abstract logic) with other logics in an attempt to understand modal logic better. We
are attempting to answer the question how modal logic fits in with other logics one could
define in the vicinity. As such, it will be useful to give a definition of a logic that rules
out some especially quirky ones. The definition of logic given below at least ensures that
sentences have a finite nature in that they only depend on finitely many symbols of the
signature. We'll also require that if a sentence is expressible in some signature then it's still
expressible in any larger signature.
Definition of a Logic If τ is a modal signature, then a class of τ-pointed-models is called
a τ-sentence. A logic L is a function from modal signatures τ to collections of τ-sentences
such that:
1. (finite sentences) For every modal signature τ, if φ ∈ L(τ), then there is a finite
subsignature τ′ of τ and a sentence φ′ ∈ L(τ′) such that for every τ-pointed-model M,
M ∈ φ iff M↾τ′ ∈ φ′.
2. (expansions) For every modal signature τ′ and sentence φ′ ∈ L(τ′), if τ is some modal
signature with τ′ ⊆ τ, then there is some sentence φ ∈ L(τ) such that for every
τ-pointed-model M, M ∈ φ iff M↾τ′ ∈ φ′.
One might additionally require other reasonable principles, such as containing the boolean
connectives, but we will not need to do this for what follows. (The restriction could be
phrased as follows: if φ₁, φ₂ ∈ L(τ) then φ₁ ∩ φ₂ ∈ L(τ), etc.)
The Quintessential Example The quintessential example of the type of logic just defined
is the usual modal logic, which we'll write as L_M. This indeed can be thought of as a function
from modal signatures τ to collections of τ-sentences, as we've already observed. In detail,
L_M(τ) is defined to consist of the classes of τ-pointed-models that are definable by some
(actual, syntactic) τ-modal formula.
Comparing Two Logics Let L₁ and L₂ be two logics. We write L₁ ≤ L₂ if for every
modal signature τ and sentence φ ∈ L₁(τ) we have φ ∈ L₂(τ). I.e., every sentence that's
expressible in the first logic L₁ is expressible in the second logic L₂ as well. This means that
L₂ can express just as much as L₁ and possibly more. It follows from our definition of logic
that if L₁ ≤ L₂ and L₂ ≤ L₁ then L₁ = L₂. That is, two logics are the same iff they can
express the same sentences.
Our modal version of Lindström's theorem below puts two extra conditions on a logic
(having a notion of finite degree and being invariant under bisimulations, both to be defined
shortly) that in a weak sense characterize modal logic. What is this weak sense? Well, we'll
be proving that if L is any logic that satisfies these two extra constraints and is at least as
expressive as the usual modal logic, then it actually is the same as the usual modal logic
after all. In other words, if L is some logic such that L_M ≤ L and L satisfies the two extra
conditions (involving finite degree and bisimulations), then L = L_M. Another way to say
this is that L_M is a maximal logic among logics possessing these two extra properties. This
result, then, suggests (tentatively) that having a notion of finite degree and being invariant
under bisimulations are important properties from the point of view of understanding modal
logic.
One reason to hedge one's bets and say words like "tentatively" here is that the theorem
only yields modal logic as a maximal logic with these two properties. Indeed, I expect
that it's not the case that it's unique in this regard. For example, the original first order
Lindström theorem states that first order logic is a maximal logic (here "logic" is defined a bit
differently of course) among compact, skolem logics. Yet there are known examples of logics
other than first order which possess this same property (of course they're incomparable to
first order logic). So a Lindström theorem doesn't uniquely pinpoint the logic in question
in general, but it does pinpoint it half-way, or from one direction so to speak. (It should be
remarked that I've thought about how to adapt the first order example I know of to the modal
case, but it doesn't seem to adapt well. It would be nice to think up some modal example.)
Before we get to the theorem proper, we have to define these two extra conditions, having
a notion of finite degree and invariance under bisimulations. We'll also prove a few lemmas
that are potentially of some independent interest, though we won't make use of them in
these notes other than in the Lindström theorem.
5.2 Lemmas
To understand what we mean by having a notion of finite degree, let's first look at the case of
the quintessential logic, L_M. We will inductively define a function deg from modal formulas
(in any signature) to N as follows:
1. We set deg(p) = 0 for each proposition letter p. Similarly, we set deg(⊥) = 0.
2. We set deg(¬φ) = deg(φ).
3. We set deg(φ₁ ∧ φ₂) = max(deg(φ₁), deg(φ₂)).
4. Finally, we set deg(△(φ₁, …, φₙ)) = max(deg(φ₁), …, deg(φₙ)) + 1 for each modal oper-
ator symbol △ of arity n.
That is, the degree of atomic formulas is 0, and the degree goes up by one each time you
introduce a new modal operator symbol; otherwise it doesn't go up. In other words, the
degree is the maximum quantifier depth of the formula. E.g., the degree of ¬(p ∧ q) is 0,
while the degree of ◇(¬p) ∨ ◇(p ∧ ◇⊥) is 2.
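To make the bookkeeping concrete, here is a small sketch of deg in Python. The tuple encoding of formulas and all the names used are illustrative assumptions of mine, not anything fixed by these notes.

    # A minimal sketch of deg, assuming formulas are encoded as nested tuples:
    # ("p", name), ("bot",), ("not", f), ("and", f, g), and ("op", name, f1, ..., fn)
    # for an n-ary modal operator.  The encoding is only for illustration.

    def deg(formula):
        kind = formula[0]
        if kind in ("p", "bot"):                      # atomic: degree 0
            return 0
        if kind == "not":                             # negation leaves the degree unchanged
            return deg(formula[1])
        if kind == "and":                             # conjunction: max of the two parts
            return max(deg(formula[1]), deg(formula[2]))
        if kind == "op":                              # modal operator: max of arguments, plus 1
            args = formula[2:]
            return (max(deg(a) for a in args) if args else 0) + 1
        raise ValueError("unknown formula constructor")

    # Example: the formula <>(p & <>bot) has degree 2.
    phi = ("op", "dia", ("and", ("p", "p"), ("op", "dia", ("bot",))))
    assert deg(phi) == 2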
We've said that modal truth evaluation is local, and this degree function just defined
is actually a measure of how local each formula actually is. A formula of degree 0 only
depends on the properties of the current node for its evaluation. A formula of degree 1
involves looking at children of the current node in addition to looking at the current node.
A formula of degree 2 involves looking at children of the children of the current node, and
so on.
Going hand in hand with this definition of degree for the usual modal logic is the definition
of height of a node in a pointed model, and truncating a pointed model at some specific
height. Let (M, w) be a pointed model. Let x and y be nodes of M. Then x is said to be a
child of y if there exist a modal operator symbol △ (of arity n) and some i ∈ {1, 2, …, n}
and some nodes z₁, …, z_{i−1}, z_{i+1}, …, zₙ such that R_△ y z₁ ⋯ z_{i−1} x z_{i+1} ⋯ zₙ. In this case y
is said to be a parent of x. The definitions of descendant and ancestor are as expected. The
height of a node x ∈ M is defined as the smallest number n such that there is a sequence of
nodes w = w₀, w₁, …, w_{n−1}, wₙ = x such that for each i ∈ {0, 1, …, n−1}, w_{i+1} is a child
of wᵢ. If there is no such sequence, then the height is defined to be ∞. For short, we may
write the height of x as h(x).
Given any pointed model (M, w) we can always truncate so that only the nodes of a
certain height or less are left. We write (M, w)|ₙ for the submodel {x ∈ M | h(x) ≤ n} of
M. Note that as h(w) = 0, w will never be lost by truncating in this way. Thus, (M, w)|ₙ
can be made into a pointed model in a natural way too, by using w as the designated node.
However, we will abuse notation, and also use (M, w)|ₙ for what we might otherwise write
as ((M, w)|ₙ, w). Hopefully context will make clear which is meant.
The relationship between truncation at height n and modal formulas of degree at most
n is very tight, as demonstrated by the following lemma.
Lemma 3. Let (M, w) be a pointed model and n ∈ N. Then for all modal formulas φ of
degree no more than n, we have
    (M, w) ⊨ φ iff (M, w)|ₙ ⊨ φ.
Proof. We actually prove by (reverse) induction on k = n, n−1, …, 1, 0 that for all x ∈ M
with h(x) ≤ k, and for all modal formulas φ of degree no more than n − k,
    M, x ⊨ φ iff (M, w)|ₙ, x ⊨ φ
The case where k = 0 and x = w gives us the lemma as stated. We prove it by induction on
k except that we start with k = n and work our way down towards k = 0.
For the base case where k = n, we note that formulas of degree ≤ n − n = 0 cannot have
modal operator symbols in them, and so their truth only depends on the valuation of the
node in question. By the definition of submodel, the valuation at each node remains the
same, so the submodel must agree at every node with the original model on formulas of
degree 0.
Now assume we've shown the equivalence for k ≤ n and we show it for k − 1 ≥ 0. Let
x ∈ M with h(x) ≤ k − 1. Since the formulas of degree no more than n − k + 1 are boolean
combinations of formulas of the form △(χ₁, …, χ_l), where χ₁, …, χ_l are formulas of degree no
more than n − k (and the atomic formulas), we may restrict attention to just these formulas
△(χ₁, …, χ_l). Of course M, x ⊨ △(χ₁, …, χ_l) iff there are y₁, …, y_l ∈ M such that R_△ x y₁ ⋯ y_l
and M, yᵢ ⊨ χᵢ for each i. Now note that if there are such yᵢ, then their height is at most k
because x has height ≤ k − 1, so they are also in (M, w)|ₙ. Further, by inductive hypothesis,
as h(yᵢ) ≤ k and χᵢ has degree no more than n − k, we have (M, w)|ₙ, yᵢ ⊨ χᵢ for each i.
Thus, (M, w)|ₙ, x ⊨ △(χ₁, …, χ_l). The other direction is essentially the same argument except
that we don't have to argue for the yᵢ being in M.
Finite Degree Notion So we've seen that there is a function deg which takes modal
formulas to natural numbers such that for every modal formula φ and every pointed model
(M, w) we have
    (M, w) ⊨ φ iff (M, w)|_{deg(φ)} ⊨ φ
We can also phrase this not in terms of the syntactic modal formulas, but instead in terms
of the abstract sentences, though in the case of modal logic we have to make some choices,
because two syntactic modal formulas of different degrees can still yield the same sentence,
i.e. class of pointed models.
The general definition of a logic L having a finite degree notion is as follows: there exists
some operation deg with codomain N such that for every modal signature τ and for every
sentence φ ∈ L(τ), we have
    (M, w) ∈ φ iff (M, w)|_{deg(φ)} ∈ φ
The usual modal logic L_M is an example of a logic having a finite degree notion. We may
define the degree deg(φ) of a sentence φ to be, say, the smallest degree of a modal formula
defining φ. E.g., the degree of φ = {(M, w) | (M, w) ⊨ □⊤} is 0, since this sentence is also
defined by the zero-degree formula ⊤.
Invariance Under Bisimulations The other notion that will help us partially charac-
terize modal logic is invariance under bisimulations. Since we only defined bisimulations for
the basic modal language before, and to make sure the definition is fresh in our minds, let's
define it in the general case of any modal signature.
Let τ = (O, Φ) be a modal signature. Let (M, w) and (M′, w′) be two τ-pointed-models.
We say (M, w) and (M′, w′) are bisimilar, written (M, w) ≃ (M′, w′), if there is a bisimu-
lation Z ⊆ M × M′ that contains (w, w′). A bisimulation is a relation Z ⊆ M × M′ such
that
1. (atomic facts match) For all (x, x′) ∈ Z and for all p ∈ Φ, we have x ∈ V_M(p) iff
x′ ∈ V_{M′}(p).
2. (forth) For all (x, x′) ∈ Z, if △ ∈ O is of arity n and there are y₁, …, yₙ ∈ M such that
R_△^M x y₁ ⋯ yₙ, then there are y′₁, …, y′ₙ ∈ M′ such that R_△^{M′} x′ y′₁ ⋯ y′ₙ and (yᵢ, y′ᵢ) ∈ Z
for each i ∈ {1, …, n}.
3. (back) This is the same as the forth condition except going the other way. In detail,
for all (x, x′) ∈ Z, if △ ∈ O is of arity n and there are y′₁, …, y′ₙ ∈ M′ such that
R_△^{M′} x′ y′₁ ⋯ y′ₙ, then there are y₁, …, yₙ ∈ M such that R_△^M x y₁ ⋯ yₙ and (yᵢ, y′ᵢ) ∈ Z
for each i ∈ {1, …, n}.
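On finite models the three conditions can simply be checked by brute force. Here is a sketch for the basic modal language (one binary operator); the representation and names are my own illustrative assumptions.

    # Sketch: check that a relation Z is a bisimulation between two finite models of
    # the basic modal language.  A model is (R, V) with R: world -> set of successors
    # and V: letter -> set of worlds.  Illustrative only.

    def is_bisimulation(Z, R1, V1, R2, V2):
        letters = set(V1) | set(V2)
        for (x, x2) in Z:
            # atomic facts match
            if any((x in V1.get(p, set())) != (x2 in V2.get(p, set())) for p in letters):
                return False
            # forth: every successor of x is matched by some successor of x2
            for y in R1.get(x, set()):
                if not any((y, y2) in Z for y2 in R2.get(x2, set())):
                    return False
            # back: every successor of x2 is matched by some successor of x
            for y2 in R2.get(x2, set()):
                if not any((y, y2) in Z for y in R1.get(x, set())):
                    return False
        return True

    # Example: a single reflexive point is bisimilar to a two-element cycle.
    R1, V1 = {"a": {"a"}}, {"p": {"a"}}
    R2, V2 = {"b": {"c"}, "c": {"b"}}, {"p": {"b", "c"}}
    assert is_bisimulation({("a", "b"), ("a", "c")}, R1, V1, R2, V2)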
A logic L is said to be invariant under bisimulations if it satisfies the following property:
For every modal signature τ, and for every sentence φ ∈ L(τ), if (M, w) ∈ φ and (M, w) ≃
(M′, w′), then (M′, w′) ∈ φ too.
We've already seen that L_M, the usual modal logic, is invariant under bisimulations,
because we saw that if two nodes are bisimilar, then they are also modally equivalent, i.e.
satisfy the same modal formulas. (We actually only saw the proof of this for the basic modal
language, but the proof is essentially the same in the more general case.)
n-complete formulas Before we proceed to the modal Lindström theorem, we have a few
more concepts and lemmas to introduce. The first order of business is to see how, in the usual
modal logic, if one is dealing with a finite signature (finitely many modal operator symbols
and finitely many proposition letters), then single formulas become powerful at describing a
pointed model in full detail up to any desired depth. These are called n-complete formulas,
which we'll introduce after the following necessary lemma.
Lemma 4. Let τ = (O, Φ) be a finite modal signature. Let n ∈ N. There are finitely many
formulas of degree ≤ n up to logical equivalence.
Proof. We prove this by induction on n. Consider the case n = 0. Here we just have the
boolean combinations of the proposition letters. If there are k proposition letters, then there
are 2^(2^k) distinct boolean combinations, up to equivalence. (Each atomic fact p ∈ Φ can be
true or false, so a complete state description is a subset of Φ. There are 2^k such complete
state descriptions. However, the boolean combinations express each possible combination
of such complete states. E.g. p ∨ q can be identified with the set of complete states that
contain p or q. Since there are 2^k complete states, there are 2^(2^k) sets of complete states, and
so this is how many boolean combinations there are.) Anyways, 2^(2^k) is finite.
Now suppose there are finitely many formulas of degree no more than n up to equivalence,
and we'll try to show the same is true for n + 1. Now, every formula of degree ≤ n + 1
is a boolean combination of formulas of degree no more than n and formulas of the form
△(χ₁, …, χ_m), where △ has arity m and the χᵢ have degree no more than n. Thus, it
suffices to show that there are only finitely many formulas of the form △(χ₁, …, χ_m) where
the χᵢ have degree no more than n. We only need to show this for a particular △, since O is
finite. We note that, by our inductive definition of truth, if χᵢ is equivalent to χ′ᵢ for each
i, then △(χ₁, …, χ_m) is equivalent to △(χ′₁, …, χ′_m). If k is the number of formulas of degree at
most n up to equivalence, then it follows that there are at most k^m formulas of the form
△(χ₁, …, χ_m) where each χᵢ has degree at most n, up to equivalence.
Lemma 5. Let τ be a finite modal signature. Let n ∈ N. There is a finite collection of
satisfiable modal formulas ρ₁, …, ρ_m, each of degree at most n, that are pairwise contradictory,
collectively exhaustive, and each decides the truth of all formulas of degree n or less. I.e.,
1. (pairwise contradictory) For each i, j ∈ {1, …, m} with i ≠ j, the formula ¬(ρᵢ ∧ ρⱼ)
is valid.
2. (collectively exhaustive) The formula ⋁_{i∈{1,…,m}} ρᵢ is valid.
3. (complete) If φ is a τ-modal formula of degree at most n, then for each i ∈ {1, …, m},
either ρᵢ → φ or ρᵢ → ¬φ is valid.
These formulas are called the n-complete τ-modal formulas, and are unique up to equivalence.
Proof. By the previous lemma, we may list all finitely many formulas of degree at most n as
{φᵢ | i ∈ K} where K is of course some finite set. For each subset J ⊆ K, we may introduce
a formula of degree at most n
    ρ_J := (⋀_{i∈J} φᵢ) ∧ (⋀_{i∈K∖J} ¬φᵢ)
Since there are finitely many subsets of K, there are finitely many such formulas ρ_J.
Now, some of the ρ_J may be unsatisfiable, and some of the ρ_J may be equivalent to
others. No problem: from each satisfiable equivalence class of the ρ_J's (grouped according
to logical equivalence) choose one representative ρ_j. We get the desired n-complete formulas
ρ₁, …, ρ_m.
They're obviously complete by construction: every formula φ of degree at most n is
either a conjunct of ρ_j or its negation is a conjunct. They're collectively exhaustive since
weve chosen a representative element from each satisfiable equivalence class, and they're
satisfiable since we only chose from satisfiable classes. They're pairwise contradictory too: if
ρᵢ and ρⱼ are in different equivalence classes, then we can't have both ρᵢ → ρⱼ and ρⱼ → ρᵢ
valid. Thus, by completeness (each ρᵢ decides ρⱼ, since ρⱼ also has degree at most n), we
must have, say, ρᵢ → ¬ρⱼ valid, i.e. ¬(ρᵢ ∧ ρⱼ) is valid.
I think it's fairly intuitive that the n-complete formulas are unique, but here is an argu-
ment for this. We show that if ρ₁, ρ₂, … and ρ′₁, ρ′₂, … both satisfy the requirements
of the lemma, then there is a bijection between them that sends a formula to a logical equiv-
alent. To see that this is so, we argue as follows. Let ρⱼ be given. Because the ρ′ᵢ are
collectively exhaustive, and ρⱼ is complete and satisfiable, we have ρⱼ → ρ′_k is valid for some
k. In fact, there is a unique such k since the ρ′ᵢ are pairwise contradictory. Because ρ′_k is
complete, and ρⱼ is satisfiable, we have ρ′_k → ρⱼ is valid. Thus, ρⱼ and ρ′_k are actually
equivalent. This gives us a function from the ρᵢ to the ρ′ᵢ that sends formulas to logical
equivalents. This is an injective function since the ρᵢ are pairwise contradictory, and it is
surjective by the same argument given above but done in reverse. I.e., for each formula ρ′ⱼ
there is a logically equivalent ρ_k.
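For n = 0 the construction in the proof boils down to forming the complete state descriptions over the proposition letters, and the three properties can be checked by brute force over all valuations. Here is a small sketch of that special case; the encoding (valuations as frozensets of true letters, a description identified with the valuation it pins down) is my own illustrative choice.

    # Sketch: the 0-complete formulas over letters {p, q} are the complete state
    # descriptions.  A valuation is the frozenset of letters it makes true; the state
    # description for tv is satisfied by v exactly when v == tv.  Illustrative only.
    from itertools import chain, combinations

    letters = ["p", "q"]
    valuations = [frozenset(s) for s in chain.from_iterable(
        combinations(letters, r) for r in range(len(letters) + 1))]

    def sat(tv, v):               # v satisfies the state description determined by tv
        return v == tv

    # pairwise contradictory: no valuation satisfies two distinct descriptions
    assert all(not (sat(t1, v) and sat(t2, v))
               for t1 in valuations for t2 in valuations if t1 != t2
               for v in valuations)

    # collectively exhaustive: every valuation satisfies some description
    assert all(any(sat(tv, v) for tv in valuations) for v in valuations)

    # complete: each description decides every proposition letter
    assert all(
        all(p in v for v in valuations if sat(tv, v)) or
        all(p not in v for v in valuations if sat(tv, v))
        for tv in valuations for p in letters)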
Our final batch of lemmas has to do with a condition we can place on pointed models so
that modal equivalence implies bisimilarity. One such condition we can place on the models
is M-saturation, as we discussed earlier. However, here another condition will be needed.
It turns out modal equivalence up to degree n formulas is enough to imply bisimilarity in
a finite signature as long as you're dealing with "tree-like" models that have a maximum
height of n. Before we get to this result, we introduce the notion of tree-like model with a
lemma that establishes the existence of tree-like models bisimilar to any given model.
Lemma 6. Let (M, w) be any pointed model. Then there exists a pointed model (M′, w′)
such that (M, w) ≃ (M′, w′) and (M′, w′) is tree-like in the following sense:
1. The number of parents of w′ is 0.
2. If x ∈ M′ and x ≠ w′, then the number of parents of x is 1.
Proof. The process we use to construct the model (M′, w′) is called unravelling. The nodes
of M′ are finite sequences of elements of M of the form (w = w₀, w₁, …, wₙ) where each
w_{i+1} is a child of wᵢ. If x is a finite sequence of elements from M and m is an element of M,
we use x + m to denote the obvious concatenation. For notational convenience, let's write
the last element of a sequence x as l(x). We define the accessibility relations as follows:
R_△^{M′} x y₁ ⋯ yₙ iff each yᵢ = x + mᵢ for some mᵢ ∈ M and R_△^M l(x) m₁ ⋯ mₙ. The valuation
of the finite sequence x ∈ M′ is just inherited from M according to the last node of x. I.e.
x ∈ V_{M′}(p) iff l(x) ∈ V_M(p). The designated node w′ of M′ is the finite sequence
consisting just of w.
We've finished defining (M′, w′), so now we show that it's tree-like and is bisimilar to
(M, w). By the definition of the accessibility relations, it's clear that w′ has no parents.
Further, every other node has exactly one parent, since to be a parent of y, you have to be
the same finite sequence as y except with the last element removed. This y must have a
parent since the last element of y is a child of the next-to-last element, by the definition of
what the nodes of M′ are.
Finally, we check that (M, w) ≃ (M′, w′). Define Z ⊆ M × M′ by saying (a, x) ∈ Z iff
the last element of x is a. I.e., Z = {(a, x) ∈ M × M′ | l(x) = a}. By the definition of V_{M′}
it's clear that if (a, x) ∈ Z, then a and x agree about atomic facts. Now consider the forth
case. Let (a, x) ∈ Z and let b₁, …, bₙ ∈ M such that R_△^M a b₁ ⋯ bₙ for some △ of arity
n. We need to find y₁, …, yₙ ∈ M′ such that R_△^{M′} x y₁ ⋯ yₙ and (bᵢ, yᵢ) ∈ Z for each i. Well,
we set yᵢ := x + bᵢ. This works.
Next let's check the back case. Let (a, x) ∈ Z and let y₁, …, yₙ ∈ M′ with R_△^{M′} x y₁ ⋯ yₙ.
We want to find b₁, …, bₙ ∈ M such that R_△^M a b₁ ⋯ bₙ and (bᵢ, yᵢ) ∈ Z for each i. By the
definition of R_△^{M′}, we know that R_△^M l(x) l(y₁) ⋯ l(yₙ). As (a, x) ∈ Z, we know a = l(x), and
(l(yᵢ), yᵢ) ∈ Z for each i, so we see that we may let bᵢ := l(yᵢ).
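Unravelling a finite model with cycles produces an infinite tree, but for what follows only the first n levels ever matter, so a depth-bounded version is enough to experiment with. Here is a sketch for the basic modal language, with my own illustrative representation (paths as tuples).

    # Sketch of unravelling for the basic modal language, cut off at a given depth.
    # Nodes of the unravelling are paths (tuples) from w; the last element of a path
    # determines its valuation.  Names and representation are illustrative only.

    def unravel(R, V, w, depth):
        root = (w,)
        nodes, R_new = {root}, {}
        frontier = [root]
        for _ in range(depth):
            next_frontier = []
            for path in frontier:
                children = {path + (m,) for m in R.get(path[-1], set())}
                R_new[path] = children
                nodes |= children
                next_frontier.extend(children)
            frontier = next_frontier
        for path in frontier:               # leaves at the cut-off get no children
            R_new.setdefault(path, set())
        V_new = {p: {path for path in nodes if path[-1] in V[p]} for p in V}
        return nodes, R_new, V_new, root

    # Example: unravelling a reflexive point to depth 2 gives a 3-node path.
    nodes, R2, V2, root = unravel({"a": {"a"}}, {"p": {"a"}}, "a", 2)
    assert len(nodes) == 3 and all(len(cs) <= 1 for cs in R2.values())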
In the following lemma we will start using the notation (M, x) ≡ₙ (M′, y) to mean that
for all modal formulas φ of degree at most n, (M, x) ⊨ φ iff (M′, y) ⊨ φ. In cases where
the underlying model is clear, we may write x ≡ₙ y.
Lemma 7. Let (M, w) and (M′, w′) be two tree-like models in a finite signature. Assume
further that the two models each have a maximum height of n; i.e., for every x ∈ M, h(x) ≤ n,
and similarly for M′. Under these conditions, if the two models agree on all modal formulas
of degree at most n, then they are bisimilar.
Proof. We define Z ⊆ M × M′ by setting (x, y) ∈ Z iff h(x) = h(y) = k and x ≡_{n−k} y (for
some k). Of course h(w) = h(w′) = 0 and as w ≡ₙ w′ is given, we have (w, w′) ∈ Z. Thus, it
suffices to show that Z is a bisimulation. It follows from our definition of Z trivially that if
(x, y) ∈ Z, then x ≡₀ y, i.e. they agree about atomic facts.
So let's turn to the forth condition (by the way, the back condition is symmetric and won't
be checked separately). Let (x, y) ∈ Z and let x₁, …, x_m ∈ M such that R_△^M x x₁ ⋯ x_m. We need
to find y₁, …, y_m ∈ M′ such that R_△^{M′} y y₁ ⋯ y_m and (xᵢ, yᵢ) ∈ Z for each i. Suppose that
h(x) = h(y) = k (which must be strictly less than n). For each node xᵢ, we may introduce a
corresponding (n−k−1)-complete formula ρᵢ such that xᵢ ⊨ ρᵢ, as described in Lemma 5.
Recall that the role of ρᵢ is to decide all modal formulas of degree up to (n−k−1), and it
is itself of degree no more than (n−k−1). It follows that △(ρ₁, …, ρ_m) is a modal formula
of degree no more than n−k which x thinks is true. By the assumption that (x, y) ∈ Z,
we know that x ≡_{n−k} y. Thus, y ⊨ △(ρ₁, …, ρ_m) too. Thus we may introduce y₁, …, y_m
such that R_△^{M′} y y₁ ⋯ y_m and yᵢ ⊨ ρᵢ. Since ρᵢ decides all formulas of degree no more than
(n−k−1), we see that xᵢ ≡_{n−k−1} yᵢ for each i.
Now, since M is tree-like, each xᵢ only has one parent: x. It follows that h(xᵢ) = k + 1
for each i. The same argument holds for M′, so h(yᵢ) = k + 1 for each i too. Putting this
all together then, we get h(xᵢ) = h(yᵢ) = k + 1 and xᵢ ≡_{n−k−1} yᵢ, i.e. (xᵢ, yᵢ) ∈ Z for each
i.
5.3 The Theorem
We finally turn to stating and proving a modal version of Lindström's theorem.
Theorem 8. Let L be a logic such that L_M ≤ L. If L has a finite degree notion and is
invariant under bisimulations, then L = L_M.
Proof. We show that L ≤ L_M. Let τ be some modal signature and let φ ∈ L(τ). We need to
show that φ ∈ L_M(τ). By the first property defining a logic (applied to L), we may introduce
a finite subsignature τ′ of τ and a sentence φ′ ∈ L(τ′) such that for every τ-pointed-model
(M, w) we have
    (M, w) ∈ φ iff (M, w)↾τ′ ∈ φ′
If we can show that φ′ ∈ L_M(τ′), then it follows by the second property defining a logic
(applied to L_M) that φ ∈ L_M(τ). So we try to show that φ′ ∈ L_M(τ′). Notice we've
successfully limited our attention to a finite modal signature. This will enable us to make
good use of those lemmas in the previous section.
As we're assuming L has a finite degree notion, we may introduce n ∈ N as the degree of
φ′ in L. As in Lemma 5, we may introduce the n-complete τ′-modal formulas ρ₁, …, ρ_m.
Define J ⊆ {1, …, m} by setting
    J := {i | there is a pointed-model (M, w) ∈ φ′ such that (M, w) ∈ ρᵢ}
We claim that φ′ = ⋁_{i∈J} ρᵢ. One direction of this is easy to see. Let (M, w) ∈ φ′. Then
(M, w) ∈ ρⱼ for some j ∈ {1, …, m} as these n-complete formulas are collectively exhaustive.
Further, this puts j ∈ J by the definition of J, so (M, w) ∈ ⋁_{i∈J} ρᵢ. To see the other direction
requires a bit more work.
Suppose (M, w) ∈ ⋁_{i∈J} ρᵢ. Then (M, w) ∈ ρⱼ for some j ∈ J and we may introduce an
(M′, w′) ∈ φ′ such that (M′, w′) ∈ ρⱼ. Since ρⱼ is n-complete, it follows that (M, w) ≡ₙ
(M′, w′). It would be nice if we could now conclude (M, w) ≃ (M′, w′), because then we
could cite the invariance under bisimulations of L and obtain (M, w) ∈ φ′, completing the
proof. As it is, however, this doesn't necessarily follow. This is basically our method of
proof, but we have to go to smaller tree-like models to make it work.
By Lemma 6 we may introduce a tree-like model (M₁, w₁) such that (M₁, w₁) ≃ (M, w).
Similarly we may introduce a tree-like model (M₂, w₂) such that (M₂, w₂) ≃ (M′, w′).
Next, by Lemma 3, we observe that (M₁, w₁)|ₙ ≡ₙ (M₁, w₁) and similarly (M₂, w₂)|ₙ ≡ₙ
(M₂, w₂). By the transitivity of ≡ₙ, and remembering that (M, w) ≡ₙ (M′, w′) and that
bisimilar models are modally equivalent, we get
(M₁, w₁)|ₙ ≡ₙ (M₂, w₂)|ₙ. Further, the (Mᵢ, wᵢ)|ₙ are still tree-like.
So, since these (M₁, w₁)|ₙ and (M₂, w₂)|ₙ are both tree-like, in a finite signature, the two
models agree on modal formulas up to degree n, and their nodes have a maximum height of
n, we may apply Lemma 7 to obtain that (M₁, w₁)|ₙ ≃ (M₂, w₂)|ₙ.
Now, we're given that (M′, w′) ∈ φ′. Since (M₂, w₂) is bisimilar to (M′, w′) and L is
invariant under bisimulations, we get (M₂, w₂) ∈ φ′. Since L has a finite degree notion
and the degree of φ′ is n, we know that (M₂, w₂)|ₙ ∈ φ′. As this model is bisimilar to
(M₁, w₁)|ₙ, we get (M₁, w₁)|ₙ ∈ φ′ as well. Next we apply the degree assumption again to
get (M₁, w₁) ∈ φ′. Finally, by invariance under bisimulations again, we get (M, w) ∈ φ′.
Figure 4 ("Getting to the Other Side") shows this chain in pictorial form.
6 Completeness and Incompleteness for Propositional
Modal Logic
Suppose Γ is a collection of formulas (in some logic) and φ is a single formula. A consequence
relation is a relation consisting of pairs of the form (Γ, φ). Of course one typically may assume
a bit more about the relation, such as that if (Γ, φ) is in the relation and Γ ⊆ Γ′, then (Γ′, φ) is too. However,
we won't be concerned here with the general theory of consequence relations, as we'll only
be considering specific ones. But it will be nice to have a sense of what these things are,
especially a sense of what a completeness or incompleteness result is, in general terms.
A completeness result is a matching between some (intuitive) semantic consequence re-
lation Γ ⊨ φ and some syntactic consequence relation Γ ⊢ φ. It's syntactic in the sense
that Γ ⊢ φ must mean something like "there exists a finite proof of φ from Γ".
We can view this explication in two directions. On the one hand, we might start with
some intuitive notion of semantic consequence Γ ⊨ φ and then look for a corresponding
proof notion. A completeness result can be successful in this way because it typically yields
the following (desirable) consequences for ⊨:
1. Γ ⊨ φ implies there is a finite subset Γ₀ ⊆ Γ such that Γ₀ ⊨ φ.
2. (compactness) If Γ is finitely satisfiable, then Γ is satisfiable.
3. (effective) If Γ is recursively enumerable, then {φ | Γ ⊨ φ} is recursively enumerable.
On the other hand, we might start instead with a syntactic proof notion Γ ⊢ φ. If the
matching semantic notion Γ ⊨ φ is really intuitive, then we've succeeded in understanding
the given ⊢ better. Also, it might become clear how ⊢ could be applicable to more
situations of interest. For example, in modal logic, if you start with some syntactically
defined rules like ⊢ □(φ₁ → φ₂) → (□φ₁ → □φ₂), then it may be unclear what meaning this may
have or where it's applicable until you are able to give some kind of semantics for it, such
as frame semantics.
Though there are these two ways of viewing a completeness result, still it is only one kind
of thing really: a matching between a semantic notion Γ ⊨ φ and a syntactic notion Γ ⊢ φ.
(I should remark that in a strict sense "completeness" typically means just Γ ⊨ φ implies
Γ ⊢ φ, but this is usually proved in the context where the other direction, soundness, has
been or is about to be established. To label both directions together, it's easier to say
"completeness" than "completeness-soundness", and Γ ⊨ φ implies Γ ⊢ φ is usually the harder
direction anyways.)
However, there really are two different types of incompleteness results. First, we may
start with a semantic notion ⊨ and show that there can be no matching syntactic notion
⊢. An example of this kind is given by second order logic. Here we have a semantic
notion defined as usual. We know that there can be no corresponding syntactic notion, since
second order logic doesn't satisfy compactness. E.g., we can write sentences in second order
logic asserting that there are finitely many elements, and that there are at least n elements for
every n ∈ N. These sentences together are finitely satisfiable but not satisfiable.
Second, we may start with a syntactic notion ⊢ and show that there can be no
matching semantic notion ⊨. This direction, however, depends much more heavily on
what we mean by "semantic notion" after all. The criteria for a corresponding semantic
consequence relation must be carefully delineated before an incompleteness theorem can be
proven in this sense. For example, we may work in a modal language and give ourselves a
syntactic notion Γ ⊢ φ and then show there is no class of frames K such that Γ ⊢ φ iff for
every model M based on a frame in K, we have M ⊨ Γ implies M ⊨ φ (where ⊨ here is
the notion we've already defined in previous sections).
We will also have the occasion to distinguish between weak completeness and strong
completeness. Strong is as described above, while weak means we only have the match
between ⊨ φ and ⊢ φ (or for finite Γ).
In this section we want to discuss a few completeness and incompleteness results for
propositional modal logic. To get the ball rolling, we will first consider the case of (non-
modal) propositional logic, because the method of proof is so similar and generalizes in a
nice way. The first modal completeness result we will consider is one for the relation ⊨_g,
the global consequence relation. We defined this in an earlier section as: Γ ⊨_g φ iff for all models
M, M ⊨ Γ implies M ⊨ φ. We'll prove this via the Jónsson-Tarski theorem concerning
boolean algebras with operators. The second modal completeness result will be for ⊨_l,
the local consequence relation. This was defined earlier as: Γ ⊨_l φ iff for all pointed models (M, w),
(M, w) ⊨ Γ implies (M, w) ⊨ φ. This will be proven using the notion of canonical frame.
Then we will see how our methods generalize to other situations. We'll see that the
logic of reflexive, transitive frames matches S4, and the logic of equivalence relation frames
matches S5.
Next we'll turn to some incompleteness results. We'll see that KL is not strongly complete
with respect to any class of frames (though it is weakly complete with respect to finite strict
partial orders). Finally, we'll consider a certain temporal modal logic K_t ThoM and show
that it's not even weakly complete with respect to any class of frames.
6.1 Propositional Logic
Our focus of course is not propositional logic in these notes, but, since the completeness
proofs we will give for modal logic are easily seen to be generalizations of the propositional
case, it will be useful to consider this case in detail first. It helps us fix notation and ideas
for how the modal version will go.
Let L = {⊥, ¬, ∧} consist of a nullary ⊥, a unary ¬, and a binary ∧, all function symbols.
An L-algebra is a set B together with operations on B that correspond to the three function
symbols. Recall a nullary function symbol is a constant, so ⊥ is just an element of B, while
¬ : B → B is a unary function, and ∧ : B² → B is a binary function. There are two important
examples of L-algebras for us. First, there is a two-element algebra 2 = {0, 1} where the
operations are defined as follows: ⊥ = 0, ¬0 = 1, ¬1 = 0, x ∧ y = 0 unless both x and y are
1. Second, there is the free L-algebra on countably many generators Φ = {p₁, p₂, …}, which
we'll call F. The elements of F may be called propositional formulas, and are things like
(p₁ ∧ p₂) ∧ ¬p₃, or ¬(⊥ ∧ ¬⊥) ∧ p₁, etc. That is, F consists of the basic proposition letters
p₁, p₂, … and ⊥, and F is closed under the following formula building operations
1. If φ ∈ F, then ¬φ ∈ F
2. If φ₁, φ₂ ∈ F, then φ₁ ∧ φ₂ ∈ F
and nothing so generated is identified. We think of F as being composed of finite syntactic
objects like p₁, ¬p₂, and p₁ ∧ p₂. Note however the slightly ambiguous role that ∧ plays here.
It is on the one hand a binary operation on F that, for example, has ∧(p₁, p₂) = p₁ ∧ p₂.
Yet we also use ∧ as a symbol to help us name the element p₁ ∧ p₂ ∈ F. Similarly, ¬ plays
two roles here.
An important property of F is its freeness: for every L-algebra B and for every func-
tion f : Φ → B, there is a unique extension f̂ : F → B of f to F such that f̂ is an L-
homomorphism. This means that f̂ preserves ⊥, ¬, and ∧ in the sense that f̂(⊥_F) = ⊥_B,
for all x ∈ F, f̂(¬x) = ¬f̂(x), and for all x, y ∈ F, f̂(x ∧ y) = f̂(x) ∧ f̂(y).
Also to be noted is that, for any L-algebra, not just F, we will use the usual abbreviations
0 := ⊥, 1 := ¬⊥, p ∨ q := ¬(¬p ∧ ¬q), p → q := ¬(p ∧ ¬q), etc.
Intuitive Semantic Consequence for Propositional Logic Let 2 be the two-element
L-algebra as defined above. Let Γ ⊆ F and φ ∈ F. We say that φ is a semantic consequence
of Γ and write Γ ⊨ φ if for every L-homomorphism h : F → 2 such that for all γ ∈ Γ we
have h(γ) = 1, we also have h(φ) = 1. This is the same condition as the "truth table"
notion of consequence. In view of the freeness of F, L-homomorphisms from F to 2 may be
identified with functions from Φ to 2. Such a mapping can be thought of as a row in a
truth table. The columns are the γ ∈ Γ and φ, and Γ ⊨ φ iff in every row in which a 1
occurs in every Γ-column, 1 also occurs in the φ-column.
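For a finite Γ this "truth table" notion can be checked directly by iterating over all rows, i.e. over all assignments to the finitely many letters occurring in Γ ∪ {φ}. Here is a small sketch; the tuple encoding of formulas is my own illustrative assumption.

    # Brute-force truth-table check of semantic consequence for propositional logic.
    # Formulas are nested tuples: ("p", name), ("bot",), ("not", f), ("and", f, g);
    # an L-homomorphism F -> 2 is determined by a row, i.e. the set of true letters.
    from itertools import chain, combinations

    def letters(f):
        if f[0] == "p":
            return {f[1]}
        return set().union(*(letters(g) for g in f[1:])) if len(f) > 1 else set()

    def value(f, row):                      # evaluate in the two-element algebra 2
        if f[0] == "bot":
            return 0
        if f[0] == "p":
            return 1 if f[1] in row else 0
        if f[0] == "not":
            return 1 - value(f[1], row)
        if f[0] == "and":
            return value(f[1], row) & value(f[2], row)

    def entails(gamma, phi):                # Gamma |= phi, for finite Gamma
        ps = sorted(set().union(letters(phi), *(letters(g) for g in gamma)))
        rows = chain.from_iterable(combinations(ps, r) for r in range(len(ps) + 1))
        return all(value(phi, set(row)) == 1
                   for row in rows
                   if all(value(g, set(row)) == 1 for g in gamma))

    # Example: {p, ~(p & ~q)} |= q, but {~(p & ~q)} does not entail q.
    p, q = ("p", "p"), ("p", "q")
    imp = ("not", ("and", p, ("not", q)))
    assert entails([p, imp], q) and not entails([imp], q)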
Now that we have defined the semantic notion, our goal is to provide a matching syntactic
notion. We will progress there step by step through a series of lemmas and definitions.
Definition 9 (Equations). A (formal) equation for us is simply an element of F × F. Let E be
a collection of equations. Let B be an L-algebra, and let h : F → B be an L-homomorphism.
The homomorphism h is said to satisfy the equations E if for every (t₁ = t₂) ∈ E, h(t₁) =
h(t₂), while h is said to satisfy Γ ⊆ F if h(Γ) = {1}. The algebra B is said to validate E
if for every L-homomorphism f : F → B, f satisfies E. If Γ ⊆ F, then it can naturally
be identified with the collection of equations {γ = 1 | γ ∈ Γ}. In this case, we may also
sometimes write h(Γ) = 1 to mean that for every γ ∈ Γ, h(γ) = 1.
Definition 10 (Boolean Algebra). A boolean algebra is an L-algebra that validates the
boolean algebra axioms BA. This collection of axioms BA ⊆ F × F consists of the fol-
lowing (formal) equations, for each t₁, t₂, t₃ ∈ F:
1. Associativity. (t₁ ∧ t₂) ∧ t₃ = t₁ ∧ (t₂ ∧ t₃), and similarly for ∨.
2. Commutativity. For both ∧ and ∨.
3. Distributivity of ∧ over ∨ and vice versa.
4. Absorption. t₁ ∧ (t₁ ∨ t₂) = t₁ and t₁ ∨ (t₁ ∧ t₂) = t₁
5. Complementation. t₁ ∧ (¬t₁) = 0 and t₁ ∨ (¬t₁) = 1
6. Identity. t₁ ∧ 1 = t₁ and t₁ ∨ 1 = 1 and t₁ ∨ 0 = t₁ and t₁ ∧ 0 = 0
7. Double-negation. ¬(¬t₁) = t₁
The list of axioms is overkill, but I don't want to muck around with (in)dependence
proofs here. Huntington was able to axiomatize boolean algebras with a very small
number of axioms, but I'm not sure how illuminating this would be for us here.
Note that B is a boolean algebra iff for all x, y, z ∈ B, x ∧ (y ∧ z) = (x ∧ y) ∧ z, etc.
Definition 11 (Proofs). Let Γ ⊆ F. A Γ-proof is a finite sequence of equations (elements
of F × F) such that each equation in the sequence is an element of BA, or of the form γ = 1
for some γ ∈ Γ, or obtained from previous equations in the list by one of the following rules:
1. (reflexivity) t = t may be written down at any point.
2. (symmetry) From t₁ = t₂, infer t₂ = t₁.
3. (transitivity) From t₁ = t₂ and t₂ = t₃, infer t₁ = t₃.
4. (congruence for ¬) From t = t′, infer ¬t = ¬t′.
5. (congruence for ∧) From t₁ = t′₁ and t₂ = t′₂, infer t₁ ∧ t₂ = t′₁ ∧ t′₂.
Γ ⊢ t₁ = t₂ means that there is some Γ-proof of t₁ = t₂, and Γ ⊢ φ is shorthand for
Γ ⊢ φ = 1, where φ ∈ F.
This notion of proof is indeed a finite syntactic notion, as the following lemma makes
clear.
Lemma 12. Let Γ ⊆ F and let φ ∈ F. First, if Γ ⊢ φ, then there is a finite subset Γ₀ ⊆ Γ
such that Γ₀ ⊢ φ. Second, if Γ is finitely consistent, then it is consistent. Third, if Γ is
recursively enumerable, then {φ | Γ ⊢ φ} is recursively enumerable.
Proof. To see the first thing, note that a proof is finite, and so if we restrict Γ to only
the axioms Γ₀ actually used in the proof, we get our result.
"Γ is inconsistent" means that Γ ⊢ ⊥, and "consistent" means not inconsistent. "Finitely
consistent" means that every finite subset is consistent. It's clear that the first thing implies
the second thing because Γ ⊢ ⊥ implies Γ₀ ⊢ ⊥ for some finite subset Γ₀ ⊆ Γ.
As for the third thing, we will have to satisfy ourselves with an intuitive argument. So
suppose that Γ is listable, and we'll describe an effective procedure for listing {φ | Γ ⊢ φ}.
In parallel, we may start listing Γ and BA and all the proofs from the axioms we've listed
so far. As we get new conclusions, we list them.
So, if we can show that Γ ⊨ φ iff Γ ⊢ φ, then we will have succeeded in giving a
reasonable completeness proof.
Note that given any set I, the power set of I, written either as 2^I or P(I), can naturally
be made into an L-algebra using ∅ for ⊥, complement for ¬, and intersection for ∧.
Lemma 13. Γ ⊨ φ iff for all sets I and for all L-homomorphisms h : F → 2^I, h satisfies
Γ implies h satisfies φ.
Proof. The right to left direction is easy, because we may note that I may have size one. As
2^1 is isomorphic to 2, we recover our definition of Γ ⊨ φ.
Now we consider the left to right direction. Let h : F → 2^I with h(Γ) = 1. Note that for
each i ∈ I we have a projection πᵢ : 2^I → 2 which is an L-homomorphism. As such, we have
πᵢ(1) = 1. Thus, for each i ∈ I, πᵢ ∘ h : F → 2 is an L-homomorphism that sends Γ to 1. By
our assumption that Γ ⊨ φ, we get πᵢ(h(φ)) = 1. Thus, each component of h(φ) is 1, and
so it follows that h(φ) = 1 = (1, 1, 1, …).
Lemma 14. Γ ⊨ φ iff for all sets I and for all L-subalgebras B of 2^I and for all L-
homomorphisms h : F → B such that h(Γ) = 1 we have h(φ) = 1.
Proof. The right to left direction is clear. We may let I = {1} and B = 2^I ≅ 2.
As for the left to right direction, let f : B → 2^I be an embedding (injective L-homomorphism),
and let h : F → B such that h(Γ) = 1. Now, f ∘ h : F → 2^I satisfies Γ, and so by Lemma 13
we get f(h(φ)) = 1. As f is injective and f(1) = 1, we get h(φ) = 1.
Now we want to start relating this condition to boolean algebras. To do so we need to
recall the notion of ultrafilters. In an earlier section we defined ultrafilters just in terms of
power sets, but the same definition works for any boolean algebra. In detail:
Definition 15 (Filters). If x, y are elements of a boolean algebra, we write x ≤ y to mean
x ∧ y = x (or equivalently x → y = 1). A subset U of a boolean algebra B is called a
(proper) filter if
1. 0 ∉ U
2. x, y ∈ U implies x ∧ y ∈ U
3. x ∈ U and y ∈ B with x ≤ y imply y ∈ U
If U additionally satisfies that for every x ∈ B, either x ∈ U or ¬x ∈ U, then U is called an
ultrafilter. We let Uf(B) denote the set of all ultrafilters on B.
One can check that x ≤ y so defined does indeed produce a partial order on any boolean
algebra. Recall also the Ultrafilter Theorem, which states that any consistent subset of B
(one all of whose finite meets are nonzero) may be extended to an ultrafilter.
Lemma 16 (Stone Representation Theorem). B is a boolean algebra iff there is a set I such
that B embeds into 2^I.
Proof. The right to left direction just involves checking that power set algebras and their
subalgebras are boolean algebras. To do this, we would just go through each boolean algebra
axiom, e.g. x y = y x, and check that it holds. We also observe that the axioms are all
universal, so subalgebras of boolean algebras are again boolean.
Now consider the left to right direction. We let I = Uf(B). We need to dene an injective
L-homomorphism f : B 2
Uf(B)
. We set f(b) := U [ b U.
Is f an L-homomorphism? Well, f(0) = because no ultralter contains 0 by denition.
Now let b B and we show f(b) = f(b).
U f(b) b U
b , U
U f(b)
If x, y B the fact that f(x y) = f(x) f(y) follows similarly from the denition of
ultralter.
The only non-trivial part of this is that f is injective, and this requires the Ultralter
Theorem. Let x, y B with x ,= y. Without loss of generality we may assume that x , y.
I.e., x y ,= 0. Thus, x y is a consistent subset of B so there is an ultralter U
extending it. We have U f(x) but U , f(y), so f(x) ,= f(y).
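For a finite boolean algebra the set Uf(B) and the Stone map can be computed by brute force. The following Python sketch (illustrative only; the choice of B = P({0,1,2}) and all names are mine) enumerates the ultrafilters directly from Definition 15 and checks that f is injective and turns ¬ into complement and ∧ into intersection.

```python
from itertools import chain, combinations

atoms = frozenset({0, 1, 2})
# The finite boolean algebra B = P({0,1,2}); elements are frozensets.
B = [frozenset(s) for s in chain.from_iterable(combinations(atoms, k) for k in range(4))]

def is_ultrafilter(U):
    U = set(U)
    if frozenset() in U:                                    # 1. does not contain 0
        return False
    if any(x & y not in U for x in U for y in U):           # 2. closed under meets
        return False
    if any(x <= y and y not in U for x in U for y in B):    # 3. upward closed
        return False
    return all((x in U) != ((atoms - x) in U) for x in B)   # ultra: exactly one of x, -x

ultrafilters = [frozenset(U)
                for U in chain.from_iterable(combinations(B, k) for k in range(len(B) + 1))
                if is_ultrafilter(U)]
print(len(ultrafilters))                                    # 3, one per atom of B

def f(b):
    """The Stone map f(b) = {U in Uf(B) | b in U}."""
    return frozenset(U for U in ultrafilters if b in U)

assert len({f(b) for b in B}) == len(B)                                   # f is injective
assert all(f(atoms - b) == frozenset(ultrafilters) - f(b) for b in B)     # f(not b) = complement of f(b)
assert all(f(x & y) == f(x) & f(y) for x in B for y in B)                 # f(x and y) = f(x) meet f(y)
print("Stone map checks pass")
```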
Lemma 14 and Lemma 16 together obviously yield:
Lemma 17. Γ ⊨ φ iff for all boolean algebras B and for all L-homomorphisms h : F → B such that h(Γ) = 1, we have h(φ) = 1.
Now we will link things up to our notion of proof, but to do this we will need the notion of modding out by a congruence relation, which I will take a few moments to review here.
A congruence relation on an L-algebra is an equivalence relation that respects the operations of L. I.e., ∼ ⊆ B × B is a congruence relation on B if it is reflexive, symmetric, and transitive, and additionally, for all x, x′, y, y′ ∈ B we have x ∼ x′ implies ¬x ∼ ¬x′, and x ∼ x′ and y ∼ y′ imply x ∧ y ∼ x′ ∧ y′.
The nice thing about a congruence relation is that we may mod out by it and obtain another L-algebra. In detail, let B be an L-algebra and let ∼ be a congruence relation on it. We define a new L-algebra B/∼ as follows. The elements of B/∼ are simply the equivalence classes of ∼. If x ∈ B, let's use [x] to denote the equivalence class of x. Now, we define the zero element to be [0]. Next, ¬[x] := [¬x] and similarly [x] ∧ [y] := [x ∧ y]. However, to check that this makes sense as a definition, we have to be sure that if we use different representative elements for the equivalence class then the definitions of ¬ and ∧ won't be affected. I.e., we need that if [x] = [x′] (x ∼ x′), then [¬x] = [¬x′] (¬x ∼ ¬x′), and similarly for ∧. But this is exactly what the additional congruence conditions over and above the equivalence relation conditions ensure.
Note that [·] : B → B/∼ is always an L-homomorphism. This follows directly from the definition of the operators on B/∼.
Now, you may have noticed a similarity between the proof inference rules and the conditions defining a congruence relation. This is no accident. The next definition shows how we can identify formulas based on a provable equivalence relation according to a collection of assumptions Γ.
Definition 18. Let Γ ⊆ F. By F/Γ we mean the L-algebra F/∼, i.e. F modded out by the congruence relation ∼, where t₁ ∼ t₂ iff Γ ⊢ t₁ = t₂.
Lemma 19. For any collection Γ ⊆ F, F/Γ is a boolean algebra and the L-homomorphism [·] : F → F/Γ satisfies Γ.
Proof. Well, it's clear that [·] satisfies BA and Γ, since for each equation t₁ = t₂ which is in BA or of the form γ = 1 for γ ∈ Γ, we have Γ ⊢ t₁ = t₂ and so [t₁] = [t₂].
To show that F/Γ is a boolean algebra, we need to show that for any L-homomorphism h : F → F/Γ we have h(t₁) = h(t₂) for each t₁ = t₂ ∈ BA. To do this we define by induction a map (·)* : F → F as follows:
1. For each p ∈ Φ we choose some representative element p* of h(p)
2. ⊥* := ⊥
3. (¬t)* := ¬t*
4. (t₁ ∧ t₂)* := t₁* ∧ t₂*
It follows that [t*] = h(t) for all t ∈ F. This is easily proved by induction as h is an L-homomorphism. The idea of (·)* is that it replaces each proposition letter p by a term t = p* which is in h(p), while leaving everything else unchanged. I.e., to go from t to t* we do a uniform substitution of each p in t by some element of h(p). In particular, we have that if t₁ = t₂ ∈ BA, then t₁* = t₂* ∈ BA. To see this, we note that the boolean algebra axioms are closed under uniform substitution. Saying this another way, we may consider each boolean axiom separately, e.g. t₁ ∧ t₂ = t₂ ∧ t₁, and then observe that (t₁ ∧ t₂)* = (t₂ ∧ t₁)* is also of this form, since (t₁ ∧ t₂)* = t₁* ∧ t₂* and similarly (t₂ ∧ t₁)* = t₂* ∧ t₁*.
So, if t₁ = t₂ ∈ BA, then h(t₁) = [t₁*] = [t₂*] = h(t₂), and so F/Γ is a boolean algebra.
Lemma 20. Let B be a boolean algebra and let h : F → B be an L-homomorphism that satisfies Γ ⊆ F. Then there is a unique L-homomorphism g : F/Γ → B such that g([t]) = h(t) for every t ∈ F.
Proof. Such a g is clearly unique. To show that there is such a g, we just need to show that if [t₁] = [t₂] then h(t₁) = h(t₂). Well, we can show this by induction on proofs yielding Γ ⊢ t₁ = t₂. If t₁ = t₂ is in BA, or of the form γ = 1 for γ ∈ Γ, then it follows from the fact that B is a boolean algebra and h satisfies Γ. If t₁ = t₂ is obtained by one of the three equivalence relation inferences, then by properties of equality it's clear that h(t₁) = h(t₂). If it's obtained from one of the congruence inferences, then it follows from the fact that h is an L-homomorphism. In any case, if [t₁] = [t₂], i.e. Γ ⊢ t₁ = t₂, then h(t₁) = h(t₂).
Now we've done all the work we need to get the completeness result.
Theorem 21. Γ ⊨ φ iff Γ ⊢ φ.
Proof. In view of Lemma 17, we may show that Γ ⊢ φ iff for every boolean algebra B and for every L-homomorphism h : F → B that satisfies Γ we have h(φ) = 1.
Suppose first that Γ ⊢ φ. Let B be a boolean algebra with h : F → B satisfying Γ. Then, by Lemma 20, introduce g : F/Γ → B such that g([t]) = h(t) for all t. In particular, g([φ]) = h(φ). Since Γ ⊢ φ, we have [φ] = [1], and since g([1]) = 1, we see that h(φ) = 1.
Now we show the other direction. Suppose that for every boolean algebra B and for every h : F → B that satisfies Γ we additionally have h(φ) = 1. Well, [·] : F → F/Γ is an L-homomorphism that satisfies Γ. Our assumption applies, therefore, and we obtain [φ] = [1], i.e., Γ ⊢ φ.
In the introduction to this section we stated compactness as one of the desirable consequences of a completeness result, but we haven't yet seen exactly how this works. Recall compactness says that if Γ is finitely satisfiable, then Γ is satisfiable. Γ is satisfiable means that there is some model that makes it true, but what are our models in this case? In the context of propositional logic, a model is simply an L-homomorphism h : F → 2. In Proposition 22 below, we'll see that satisfiable and consistent match up. Thus, recalling the second part of Lemma 12, which said that finitely consistent implies consistent, we see that compactness does hold for propositional logic.
In contexts where the intuitive semantic relation is explicated in terms of models, i.e. Γ ⊨ φ iff every model that satisfies Γ also satisfies φ, it makes sense to express the completeness result in terms of consistency and satisfiability. In situations where you have a deduction theorem (such as the present one), there is a correspondence between the matching of satisfiability and consistency on the one hand, and ⊨ and ⊢ on the other.
A deduction theorem states that Γ ∪ {φ} ⊢ ψ iff Γ ⊢ φ → ψ. In the propositional logic case, this follows from the completeness theorem above, or induction on proofs can be used. Using this (and a couple other little things) we'll show the following proposition:
Proposition 22. The following two conditions are equivalent.
1. For all Γ and φ, Γ ⊨ φ iff Γ ⊢ φ
2. For all Γ, Γ is consistent iff Γ is satisfiable
Proof. Suppose the first condition. Suppose Γ is consistent, i.e. Γ ⊬ ⊥. Thus, Γ ⊭ ⊥. So Γ is satisfiable. Now suppose Γ is not consistent, i.e., Γ ⊢ ⊥. Then Γ ⊨ ⊥, so Γ is not satisfiable.
Now we suppose the second condition and show the first. Suppose Γ ⊬ φ. Since Γ ⊢ ¬φ → ⊥ is equivalent to Γ ⊢ φ, we see by the deduction theorem that we can't have Γ ∪ {¬φ} ⊢ ⊥. I.e., Γ ∪ {¬φ} is consistent. Thus, Γ ∪ {¬φ} is satisfiable. Therefore, Γ ⊭ φ.
Now suppose Γ ⊭ φ. Then Γ ∪ {¬φ} is satisfiable and therefore consistent. So Γ ⊬ φ.
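Since a model here is just an L-homomorphism h : F → 2, the semantic consequence relation is decidable for finite Γ by enumerating the finitely many relevant homomorphisms. A small Python sketch (my own formula encoding, not anything from the notes):

```python
from itertools import product

# Formulas as nested tuples: "p", ("bot",), ("neg", t), ("and", t1, t2).
def letters(t):
    if isinstance(t, str):
        return {t}
    return set().union(*(letters(s) for s in t[1:])) if len(t) > 1 else set()

def value(t, h):
    """h maps letters to {0,1}; extend it to the unique L-homomorphism F -> 2."""
    if isinstance(t, str):
        return h[t]
    if t[0] == "bot":
        return 0
    if t[0] == "neg":
        return 1 - value(t[1], h)
    return value(t[1], h) & value(t[2], h)

def entails(gamma, phi):
    """Decide Gamma |= phi for finite Gamma by checking every h: letters -> 2."""
    ps = sorted(set().union(letters(phi), *(letters(g) for g in gamma)))
    for bits in product((0, 1), repeat=len(ps)):
        h = dict(zip(ps, bits))
        if all(value(g, h) == 1 for g in gamma) and value(phi, h) != 1:
            return False
    return True

imp = lambda a, b: ("neg", ("and", a, ("neg", b)))   # a -> b as an abbreviation
print(entails([imp("p", "q"), "p"], "q"))            # True
print(entails(["p"], ("and", "p", "q")))             # False
```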
6.2 Boolean Algebras with Operators
We will limit our discussion here to the basic modal signature with one diamond, though with slight modifications the same definitions and propositions will go through in the more general case. So our language, in this and the next few sections, will be L = {⊥, ¬, ∧, ◇}. Here ◇ is a unary operator symbol, just like ¬. An L-algebra is now a set B with a specified element ⊥ ∈ B and operators ¬ : B → B and ∧ : B² → B, as before, but now we also have an additional operator ◇ : B → B.
As before, we may let F be the free L-algebra on countably many generators Φ = {p₁, p₂, . . .}. F consists of the formulas of the language.
Full powerset algebras  Given any frame W in the basic modal language, we may obtain an L-algebra in a natural way. The underlying set of the L-algebra is the powerset of W, P(W). The boolean operators ⊥, ¬, and ∧ are defined as usual. The additional operator ◇ : P(W) → P(W) is defined as follows.
◇X := {y ∈ W | there is some x ∈ X such that Ryx}
I.e., ◇X consists of the points in W that can see something in X. We note that under this definition we have ⋃_{x∈X} ◇{x} = ◇X for every X ∈ P(W). We'll call an L-algebra which is based on a powerset (in the usual boolean way) and satisfies this property a full powerset algebra.
Full powerset algebras and frames are the same  Full powerset algebras and frames are actually in 1-1 correspondence. Given a full powerset algebra (P(W), ◇) we may recover the accessibility relation R on W² as follows.
Ryx iff y ∈ ◇{x}
We have described how to go from frames to full powerset algebras and how to go from full powerset algebras to frames, and these operations are actually inverses of each other. To facilitate showing this, let's give the two operations names. We'll denote the diamond operator on P(W) we get from an accessibility relation R of a frame W by Po R. Going the reverse direction, we'll denote the accessibility relation on W we get from a diamond operator ◇ of a full powerset algebra P(W) by Fr ◇.
To show that these operations are inverses of each other, we need to show two things. First, if W is a frame with accessibility relation R, then R = Fr(Po R). Second, if P(W) is a full powerset algebra with diamond operator ◇, then ◇ = Po(Fr ◇). Let's show the first thing first.
[Fr(Po R)]yx iff (by definition of Fr) y ∈ [Po R]{x}. This is, in turn, equivalent to Ryx by the definition of Po.
The second thing requires our assumption that ⋃_{x∈X} ◇{x} = ◇X for every X ∈ P(W):
[Po(Fr ◇)]X = {y ∈ W | ∃x ∈ X. (Fr ◇)yx}
= {y ∈ W | ∃x ∈ X. y ∈ ◇{x}}
= ⋃_{x∈X} ◇{x}
= ◇X
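Both translations are easy to compute for a finite frame. The following Python sketch (with my own encoding of frames as sets of pairs) implements Po and Fr, checks that Fr(Po R) = R on a small example, and checks that Po R really satisfies the full powerset algebra property ◇X = ⋃_{x∈X} ◇{x}.

```python
from itertools import chain, combinations

def Po(W, R):
    """Turn an accessibility relation into a diamond on P(W):
    (Po R)(X) = { y in W | there is x in X with R y x }."""
    def diamond(X):
        return frozenset(y for y in W if any((y, x) in R for x in X))
    return diamond

def Fr(W, diamond):
    """Recover the accessibility relation: R y x iff y in diamond({x})."""
    return frozenset((y, x) for y in W for x in W if y in diamond(frozenset({x})))

W = frozenset({0, 1, 2, 3})
R = frozenset({(0, 1), (1, 2), (2, 2), (3, 0), (3, 2)})   # an arbitrary small frame

diamond = Po(W, R)
assert Fr(W, diamond) == R                                 # Fr(Po R) = R

subsets = [frozenset(s) for s in chain.from_iterable(combinations(W, k) for k in range(5))]
for X in subsets:
    # the "full" property: diamond(X) is the union of the diamond({x}) for x in X
    assert diamond(X) == frozenset().union(*(diamond(frozenset({x})) for x in X))

print("Po and Fr are inverse on this example")
```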
Boolean algebras with operator  The above discussion has shown that we may identify full powerset algebras and frames. Given our success in proving a completeness theorem for propositional logic by noticing that boolean algebras are the subalgebras of powerset algebras (without the diamond), it seems reasonable to attempt a similar thing for modal logic, especially given how we've just seen that full powerset algebras and frames match up so nicely. We just need to find something to play the role the boolean algebras played for propositional logic. The things that will work are called boolean algebras with operators, or BAOs. However, in our case it's really just boolean algebras with operator (no "s"), since we're only dealing with the basic modal language with one diamond ◇.
The equational theory of boolean algebras with operator is the usual theory of boolean algebras plus two additional axioms:
1. ◇⊥ = ⊥
2. ◇(x ∨ y) = ◇x ∨ ◇y
Just as the boolean algebras are exactly the subalgebras of the powerset algebras without a diamond (the Stone representation theorem), so the boolean algebras with operator are exactly the subalgebras of the full powerset algebras. This is the content of what's called the Jónsson-Tarski theorem, which is a kind of generalization of the Stone representation theorem. We'll prove this theorem here in this subsection, and in the following subsection use it to prove a completeness theorem for the global modal logic consequence relation ⊨_g. The local consequence completeness result will be proven in the subsequent subsection using a slightly different method involving canonical models, to be introduced shortly.
Full powerset algebras and their subalgebras are BAOs  First note that both ◇⊥ = ⊥ and ◇(x ∨ y) = ◇x ∨ ◇y hold of all subalgebras of full powerset algebras. Both of course will be inherited down by subalgebras of an algebra that satisfies them, since these axioms are universal. So we just need to see that a full powerset algebra satisfies them. This follows from the assumption we place on a full powerset algebra that ⋃_{x∈X} ◇{x} = ◇X for each subset X. In detail, we have ◇∅ = ⋃_{x∈∅} ◇{x} = ∅, and
◇(X ∪ Y) = ⋃_{z∈X∪Y} ◇{z} = ⋃_{x∈X} ◇{x} ∪ ⋃_{y∈Y} ◇{y} = ◇X ∪ ◇Y
Ultrafilter Frames  To prove the Jónsson-Tarski theorem, we use a generalization of the ultrafilter extension construction called the ultrafilter frame. However, in our case we will be creating a full powerset algebra (keeping in mind the identification of frames and full powerset algebras). So how does this construction go? Well, you start with some boolean algebra with operator B. As usual we have the collection of ultrafilters on B, Uf B. The ultrafilters will be the nodes of our frame. Then, if U₁ and U₂ are two ultrafilters, we'll say that RU₁U₂ iff U₂ ⊆ {x ∈ B | ◇x ∈ U₁}. This is equivalent to: RU₁U₂ iff x ∈ U₂ implies ◇x ∈ U₁.
Nothing substantially different has changed from the case of the ultrafilter extension; the only difference is that we've allowed ourselves to start with a general BAO instead of just a frame/full powerset algebra. We may also understand the ultrafilter frame, the result of the construction, as either a frame or a full powerset algebra. We just defined it as if we were getting a frame coming out. If we had defined it explicitly in terms of getting a full powerset algebra, it would read something like this: the set we're taking the powerset of is Uf B and the operation ◇ is defined by
◇X := {U₁ ∈ Uf B | there is U₂ ∈ X such that for all x ∈ B, x ∈ U₂ implies ◇x ∈ U₁}
I.e. ◇ = Po R where R is defined as above. We'll typically denote the ultrafilter frame Uf B, whereas the ultrafilter full powerset algebra will be written either P(Uf B) or 2^(Uf B), though of course we naturally identify the frame and the full powerset algebra.
Theorem 23 (Jónsson-Tarski). An L-algebra B is a BAO iff B embeds into a full powerset algebra.
Proof. We already saw the right to left direction of this, that all subalgebras of full powerset algebras are BAOs. So we turn our attention to the other direction.
We show that B embeds into 2^(Uf B). We use the same mapping as in the Stone representation theorem, i.e. f : B → 2^(Uf B) is defined by f(b) := {U ∈ Uf B | b ∈ U}. The same argument as in the Stone representation theorem yields that f is a well-defined, injective mapping, and it preserves the boolean operations ⊥, ¬, and ∧. The only additional thing to check is that f preserves ◇. I.e., we need to check that f(◇b) = ◇f(b) for each b ∈ B.
Let U ∈ f(◇b). Then ◇b ∈ U. To show that U ∈ ◇f(b) we show that there is some ultrafilter U′ such that {b} ∪ {x | ¬◇¬x ∈ U} ⊆ U′. To find such a U′ we use the ultrafilter theorem. We show that {b} ∪ {x | ¬◇¬x ∈ U} is consistent. We note that {x | ¬◇¬x ∈ U} is closed under ∧, because the axiom ◇(x ∨ y) = ◇x ∨ ◇y yields ¬◇¬(x ∧ y) = ¬◇¬x ∧ ¬◇¬y in the presence of the BA axioms. Since U can't contain ⊥, and ◇b ∈ U, from the axiom ◇⊥ = ⊥ we see that b ≠ ⊥. Now we show that b ∧ x ≠ 0 so long as ¬◇¬x ∈ U. To see this, suppose to get a contradiction that b ∧ x = 0. Then b ≤ ¬x, and from this it follows, by the axiom ◇(x ∨ y) = ◇x ∨ ◇y, that ◇b ≤ ◇¬x. Thus, as U is upward-closed, we get that ◇¬x ∈ U, i.e. ¬◇¬x ∉ U. So {b} ∪ {x | ¬◇¬x ∈ U} is consistent, and an ultrafilter U′ extending it satisfies b ∈ U′ and RUU′, i.e. U ∈ ◇f(b).
Now let U ∈ ◇f(b). Let U′ be an ultrafilter with RUU′ and b ∈ U′. We show U ∈ f(◇b), i.e. ◇b ∈ U. Since b ∈ U′, we get ◇b ∈ U by the fact that RUU′.
It's not the case that every BAO is isomorphic to a full powerset algebra. As an example, consider the set
B := {X ⊆ N | X is finite or cofinite}
We supply B with the usual boolean operations on P(N) (noting that these operations don't take us outside of B, since boolean combinations of finite and cofinite sets are again finite or cofinite). And we define ◇ : B → B by setting ◇X = ∅ if X is finite and ◇X = N if X is cofinite. This boolean algebra satisfies the additional axioms ◇∅ = ∅ and ◇(X ∪ Y) = ◇X ∪ ◇Y, so we can call it a BAO.
We haven't presented it as a full powerset algebra, but how do we know that it's not isomorphic to one? Well, every full powerset algebra is complete in the sense that every subset X ⊆ B of the algebra has a least upper bound. This property is preserved by isomorphisms, but it fails in our example. If we let X consist of the even numbers as singletons, then of course no finite set is an upper bound, and no cofinite upper bound is least: any cofinite upper bound contains infinitely many odd numbers, and removing one of them gives a smaller cofinite upper bound.
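If one wants to double-check that this ◇ really satisfies the two BAO axioms, one can represent an element of B by a tag plus a finite set (the set itself when finite, its complement when cofinite) and test the axioms on random elements. A throwaway Python sketch (entirely my own encoding):

```python
import random

# An element of B is ("fin", A) meaning the finite set A, or ("cofin", A) meaning N \ A.
BOT = ("fin", frozenset())

def join(x, y):
    (tx, ax), (ty, ay) = x, y
    if tx == "fin" and ty == "fin":
        return ("fin", ax | ay)
    if tx == "fin":                       # finite union cofinite: shrink the removed part
        return ("cofin", ay - ax)
    if ty == "fin":
        return ("cofin", ax - ay)
    return ("cofin", ax & ay)             # cofinite union cofinite

def diamond(x):
    return BOT if x[0] == "fin" else ("cofin", frozenset())   # dia(finite) = empty, dia(cofinite) = N

random.seed(0)
def rand_elem():
    tag = random.choice(["fin", "cofin"])
    return (tag, frozenset(random.sample(range(20), random.randint(0, 5))))

assert diamond(BOT) == BOT                                      # dia(bot) = bot
for _ in range(1000):
    x, y = rand_elem(), rand_elem()
    assert diamond(join(x, y)) == join(diamond(x), diamond(y))  # dia(x or y) = dia x or dia y
print("BAO axioms hold on the sampled elements")
```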
Normal modal logics and canonical frames  We've already noted the ultrafilter extension as a notable example of the ultrafilter frame construction. However, there is another example which we will be using later. The canonical frame of a normal modal logic is the ultrafilter frame of the Lindenbaum-Tarski algebra of the normal modal logic. There's a bunch of new words here that deserve some commenting. A Lindenbaum-Tarski algebra has as underlying set equivalence classes of formulas, where the equivalence relation is given by provable equivalence in whatever logic you're talking about. In the context of modal logic, this algebra becomes a BAO where the operators are interpreted using the usual logical operators applied to representatives. To make sure this works out, you have to be looking at a logic that identifies things properly. These are exactly the assumptions of a normal modal logic. Moreover, as with the ultrafilter extension, we can make the canonical frame into the canonical model by adding in a natural definition for what should happen with the valuations.
A normal modal logic is a collection Λ of (formal) equations (so Λ ⊆ F × F) that satisfies the following closure properties:
1. BA ⊆ Λ (here BA is defined by the same equation schemas as before, except that the terms may now contain ◇, e.g. ◇p ∧ p = p ∧ ◇p ∈ BA)
2. ◇⊥ = ⊥ is in Λ
3. ◇(t₁ ∨ t₂) = ◇t₁ ∨ ◇t₂ is in Λ for all formulas t₁, t₂ ∈ F
4. Λ is closed under the equivalence relation rules: reflexivity, symmetry, and transitivity. E.g., if t₁ = t₂ ∈ Λ, then so too t₂ = t₁ ∈ Λ
5. Λ is closed under the congruence relation rules for ¬, ∧, and ◇. E.g., if t₁ = t₂ ∈ Λ, then so too ◇t₁ = ◇t₂ ∈ Λ
The smallest normal modal logic is called K, but there are others that we'll consider too. We can specify a normal modal logic by an axiom or two, with the understanding that the new axioms are actually axiom schemas, and to get the normal modal logic we close off under the operations as defined above. For example, S4 is given by the axioms p → ◇p and ◇◇p → ◇p. I.e., S4 is the smallest normal modal logic that contains
1. t → ◇t = 1 for every formula t ∈ F, and
2. ◇◇t → ◇t = 1 for every formula t ∈ F
If we recall the names we gave these axioms back in a previous section, we see that we can specify S4 by writing S4 = K + T + 4. The normal modal logic S5 is defined by S5 = K + T + 5 (recall that 5 is ◇p → □◇p). It turns out that S5 is also K + T + B + 4, where B is p → □◇p, as we'll see later.
Now let Λ be any normal modal logic. We define a congruence relation on F by setting t₁ ∼ t₂ iff t₁ = t₂ ∈ Λ. This is indeed a congruence relation exactly because Λ is closed under the equivalence relation rules and the congruence relation rules. We'll use F/Λ to denote the L-algebra we get by modding F out by ∼. Further, F/Λ is a BAO by the assumption that Λ contains BA, ◇⊥ = ⊥, and ◇(t₁ ∨ t₂) = ◇t₁ ∨ ◇t₂.
Next, as F/Λ is a BAO, we may construct the ultrafilter frame Uf F/Λ. Further, we may make this frame into a model by setting V(p) = {U | [p] ∈ U} for each p ∈ Φ. We call this model the canonical model for Λ, and similarly the frame is called the canonical frame for Λ.
One way to think about the nodes of Uf F/Λ, i.e. the ultrafilters of F/Λ, is as maximal consistent sets of (equivalence classes of) formulas with respect to the logic Λ. An ultrafilter U is consistent in the sense that if the equation t = ⊥ is in Λ, then [t] ∉ U, and it's maximal in the sense of closure under meets, upwards-closure (according to ≤), and deciding yes or no on every formula t. As for the accessibility relation, a node U can see another node U′ whenever everything that U thinks is necessary is true in U′. Equivalently, everything that is true in U′ must be considered possible by U.
6.3 Global Completeness for K
Now we will use the Jónsson-Tarski theorem to prove a completeness result for modal logic. Our approach will parallel very closely (essentially lemma for lemma) the approach used for propositional logic above. We will be finding a syntactic proof notion that matches the global modal consequence relation ⊨_g. In the next subsection we will consider the local modal consequence relation ⊨_l.
Recall that Γ ⊨_g φ iff, by definition, for every model M (based on any frame), we have M ⊨ Γ implies M ⊨ φ. Our first lemma converts this into the language of full powerset algebras. It corresponds to Lemma 13 from the propositional case.
Lemma 24. Γ ⊨_g φ iff for all full powerset algebras (P(W), ◇) and for all L-homomorphisms h : F → P(W), h(Γ) = 1 implies h(φ) = 1.
Proof. Suppose first the right-hand side and we show Γ ⊨_g φ. Let (W, R) be a frame and V̄ : F → P(W) an (extended) valuation such that V̄(γ) = W = 1 for each γ ∈ Γ. If one reviews the definition of a valuation for a frame, one sees that what is being asserted is that V̄ : F → P(W) is an L-homomorphism for the full powerset algebra (P(W), Po R). Thus, we must have V̄(φ) = 1 = W by the assumption on the right.
Now suppose that Γ ⊨_g φ. Then let (P(W), ◇) be a full powerset algebra and let h : F → P(W) satisfy Γ, i.e. h(γ) = W = 1 for every γ ∈ Γ. Consider the frame (W, Fr ◇). h is a valuation for this frame, because
h(◇t) = ◇h(t) = [Po(Fr ◇)] h(t)
Thus, by assumption we get h(φ) = 1.
Now we can use the Jónsson-Tarski theorem to re-express this in terms of BAOs.
Lemma 25. Γ ⊨_g φ iff for every BAO B and for every L-homomorphism h : F → B with h(Γ) = 1 we have h(φ) = 1.
Proof. In view of Lemma 24 and the Jónsson-Tarski theorem, we only need to check that if Γ ⊨_g φ, then for every subalgebra B of a full powerset algebra and L-homomorphism h : F → B we have h(Γ) = 1 implies h(φ) = 1. To see this, let f : B → P(W) be an injective L-homomorphism from B to some full powerset algebra P(W). Then if h : F → B satisfies Γ, so too does f ∘ h. Thus, by the assumption Γ ⊨_g φ and Lemma 24 we get f(h(φ)) = 1. Since f is injective and f(1) = 1, we get h(φ) = 1 as desired.
Our proof system is the same as the propositional one except that we use the BAO axioms instead of just BA, and we add an inference rule for ◇. As such, it is still a finite syntactic notion.
Definition 26 (Proofs). Let Γ ⊆ F. A Γ-proof is a finite sequence of equations (elements of F × F) such that each equation in the sequence is an element of BA, or ◇⊥ = ⊥, or ◇(t₁ ∨ t₂) = ◇t₁ ∨ ◇t₂ for some t₁, t₂ ∈ F, or of the form γ = 1 for some γ ∈ Γ, or obtained from previous equations in the list by one of the following rules:
t = t (for any t)
from t₁ = t₂ infer t₂ = t₁
from t₁ = t₂ and t₂ = t₃ infer t₁ = t₃
from t = t′ infer ¬t = ¬t′
from t₁ = t₁′ and t₂ = t₂′ infer t₁ ∧ t₂ = t₁′ ∧ t₂′
from t = t′ infer ◇t = ◇t′
Γ ⊢_g t₁ = t₂ means that there is some Γ-proof of t₁ = t₂, and Γ ⊢_g φ is shorthand for Γ ⊢_g φ = 1, where φ ∈ F.
Observe that for every Γ, the set of theorems of Γ, i.e. {t₁ = t₂ | Γ ⊢_g t₁ = t₂}, is a normal modal logic. One way to think about the proof system is that we start with K, the smallest normal modal logic, and then we add in Γ as model-wide assumptions. The fact that the γ ∈ Γ are added in as model-wide assumptions is represented by the fact that we're allowing the inference from t₁ = t₂ to ◇t₁ = ◇t₂ even for things that involve Γ. For example, we have p ⊢_g □p (where □p abbreviates ¬◇¬p). From p = 1 we get ¬p = ⊥ by the ¬-rule. Then we get ◇¬p = ◇⊥ = ⊥ by the ◇-rule and a BAO axiom. One more use of the ¬-rule gets us ¬◇¬p = 1.
As the theorems of each Γ form a normal modal logic, we may mod F out by these theorems as mentioned in the last subsection. The resulting Lindenbaum-Tarski algebra we call F/Γ. We of course have the L-homomorphism [·] : F → F/Γ which takes a formula t to its equivalence class [t], the collection of formulas that Γ thinks are provably equivalent to t.
Lemma 27. F/Γ is a BAO and [·] satisfies Γ.
Proof. The proof is essentially the same as Lemma 19. It's easy to check that [·] satisfies the BAO axioms and Γ, because these are provable. To show that F/Γ is a BAO we observe that the BAO axioms are closed under uniform substitution, and given any L-homomorphism h : F → F/Γ we may perform an appropriate uniform substitution (·)* and get [(·)*] = h.
Lemma 28. Let B be a boolean algebra with operator and let h : F → B be an L-homomorphism that satisfies Γ ⊆ F. Then there is a unique L-homomorphism g : F/Γ → B such that g([t]) = h(t) for every t ∈ F.
Proof. The proof is the same as the one for Lemma 20. Recall it is an induction on Γ-proofs to verify that h(t₁) = h(t₂) for every equation t₁ = t₂ which is provable from Γ.
Finally, we have our completeness theorem. The proof is the same as the propositional case.
Theorem 29. Γ ⊨_g φ iff Γ ⊢_g φ.
Proof. In view of Lemma 25, we may show that Γ ⊢_g φ iff for every boolean algebra with operator B and for every L-homomorphism h : F → B that satisfies Γ we have h(φ) = 1.
Suppose first that Γ ⊢_g φ. Let B be a boolean algebra with operator with h : F → B satisfying Γ. Then, by Lemma 28, introduce g : F/Γ → B such that g([t]) = h(t) for all t. In particular, g([φ]) = h(φ). Since Γ ⊢_g φ, we have [φ] = [1], and since g([1]) = 1, we see that h(φ) = 1.
Now we show the other direction. Suppose that for every boolean algebra with operator B and for every h : F → B that satisfies Γ we additionally have h(φ) = 1. Well, [·] : F → F/Γ is an L-homomorphism that satisfies Γ. Our assumption applies, therefore, and we obtain [φ] = [1], i.e., Γ ⊢_g φ.
As a corollary of this theorem, we note that if we let Γ = ∅, then we obtain that K is the set of valid modal equations. I.e., t₁ = t₂ ∈ K iff every node of every model satisfies t₁ ↔ t₂. The collection {φ ∈ F | φ = 1 ∈ K} is the collection of all valid modal formulas. In what follows, we will often be lazy about distinguishing this collection from K. They are essentially the same thing; there's just a technical difference between them. So we may now without bad conscience call K the modal theory of models based on any frame. Since our proof notion is properly syntactic, we obtain that this modal theory is recursively enumerable. In fact, it is also decidable, though our argument so far hasn't shown this.
To see that K is decidable, we note that if φ is not valid, then ¬φ is satisfiable at some node of some model (M, w). Then, by using similar ideas to what happened in the Lindström theorem, we may massage this model into a finite model (M′, w′) which satisfies ¬φ. Thus, we may enumerate the formulas not in K by constructing (in parallel) all finite models in all finite signatures and listing a formula once its negation has been satisfied at some node considered so far.
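The decision procedure suggested by the finite model property is easy to sketch, if not efficiently. Below is a small Python model checker for the basic modal language together with a brute-force search for a satisfying pointed model among all models with at most n worlds (my own encoding; the bound n would have to come from the filtration argument for this to be a genuine decision procedure).

```python
from itertools import product

# Formulas: "p", ("bot",), ("neg", t), ("and", t1, t2), ("dia", t).
def sat(M, w, t):
    """M = (worlds, relation, valuation); the valuation maps letters to sets of worlds."""
    worlds, R, V = M
    if isinstance(t, str):
        return w in V[t]
    if t[0] == "bot":
        return False
    if t[0] == "neg":
        return not sat(M, w, t[1])
    if t[0] == "and":
        return sat(M, w, t[1]) and sat(M, w, t[2])
    if t[0] == "dia":
        return any((w, u) in R and sat(M, u, t[1]) for u in worlds)
    raise ValueError(t)

def satisfiable_upto(t, n, letters=("p",)):
    """Search all models with at most n worlds for a world satisfying t."""
    for size in range(1, n + 1):
        worlds = list(range(size))
        pairs = [(u, v) for u in worlds for v in worlds]
        for rel_bits in product((0, 1), repeat=len(pairs)):
            R = {pr for pr, b in zip(pairs, rel_bits) if b}
            for val_bits in product((0, 1), repeat=size * len(letters)):
                V = {p: {w for w in worlds if val_bits[i * size + w]}
                     for i, p in enumerate(letters)}
                if any(sat((worlds, R, V), w, t) for w in worlds):
                    return True
    return False

box = lambda t: ("neg", ("dia", ("neg", t)))
# box p -> p is not valid: (box p) and (not p) is satisfiable in a one-world model.
print(satisfiable_upto(("and", box("p"), ("neg", "p")), 2))                      # True
# p and box bot and dia p is unsatisfiable: dia p needs a successor, box bot forbids one.
print(satisfiable_upto(("and", "p", ("and", box(("bot",)), ("dia", "p"))), 3))   # False
```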
6.4 Local Completeness for K
In this subsection, we wish to find a matching syntactic notion for the local semantic consequence relation ⊨_l instead of the global version ⊨_g. Recall that Γ ⊨_l φ iff, by definition, for every pointed model (M, w), we have (M, w) ⊨ Γ implies (M, w) ⊨ φ.
The proof system we will use is a slight variant of the ⊢_g described in Definition 26. We again start with the base K, but instead of letting hypotheses be model-wide, we only let them have smaller scope. The way this is accomplished is by not using the ◇-inference after we start introducing hypotheses from Γ.
Definition 30 (Proofs). Let Γ ⊆ F. A Γ-proof is a concatenation of two finite sequences of equations. The first sequence may use as axioms any axiom of BAO and may use all of the following inference rules:
t = t (for any t)
from t₁ = t₂ infer t₂ = t₁
from t₁ = t₂ and t₂ = t₃ infer t₁ = t₃
from t = t′ infer ¬t = ¬t′
from t₁ = t₁′ and t₂ = t₂′ infer t₁ ∧ t₂ = t₁′ ∧ t₂′
from t = t′ infer ◇t = ◇t′
The second sequence may use anything that was established in the first sequence, as well as equations of the form γ = 1 for γ ∈ Γ. As for inference rules, the second sequence may use all of them except the ◇-rule.
If there is a Γ-proof of t₁ = t₂ we write Γ ⊢_l t₁ = t₂, and we write Γ ⊢_l φ to mean Γ ⊢_l φ = 1, where φ ∈ F.
Another way to express this definition is to say that Γ ⊢_l φ means that there is some finite subset K₀ of K such that K₀ ∪ Γ ⊢_prop φ. Here ⊢_prop means we have the usual propositional proof rules, except that our atomic formulas include things of the form ◇ψ. (Of course, when we include K₀ on the left here, K₀ technically contains equations while Γ contains formulas, but hopefully this presents no confusion.)
This is a finite syntactic notion, and so if we can get a match between ⊨_l and ⊢_l then we've found a reasonable completeness theorem. To get there, we will first prove the soundness direction, and then we will use the canonical model of K to get the completeness direction.
Lemma 31. If Γ ⊢_l φ then Γ ⊨_l φ.
Proof. We show by induction on Γ-proofs that if Γ ⊢_l t₁ = t₂, then Γ ⊨_l t₁ ↔ t₂. If t₁ = t₂ occurs in the first part of the Γ-proof, then Γ actually isn't involved and t₁ = t₂ ∈ K. As such, we have t₁ ↔ t₂ = 1 ∈ K too, so t₁ ↔ t₂ is valid and in particular Γ ⊨_l t₁ ↔ t₂.
If t₁ = t₂ occurs in the second part of the Γ-proof, then it is either an equation of the form γ = 1 where γ ∈ Γ, or it is obtained from previous equations in the proof (including possibly the first part) by one of the usual propositional rules of inference, for which our inductive hypothesis holds. Consider the case where t₁ = t₂ is actually γ = 1. Then it's obvious that Γ ⊨_l γ ↔ 1. Let's consider just one of the propositional cases, say the congruence inference for ¬. In that case, all we need to observe is that if Γ ⊨_l s₁ ↔ s₂, then Γ ⊨_l ¬s₁ ↔ ¬s₂. The other cases (reflexivity, symmetry, transitivity, ∧) are similarly straightforward.
Now, to show the other half of our completeness result, that Γ ⊨_l φ implies Γ ⊢_l φ, we will actually show that if Γ is consistent, then Γ is satisfiable. In the subsection on propositional logic, the proof of Proposition 22 showed that this is sufficient. That result required a deduction theorem, but this is easy enough to furnish for ⊢_l. Recall a deduction theorem says that if Γ ∪ {φ} ⊢_l ψ, then Γ ⊢_l φ → ψ.
There are a couple of ways we could show such a deduction theorem. One way is specifically describing how we would transform the proof. One could prove by induction on Γ ∪ {φ}-proofs that if you get Γ ∪ {φ} ⊢_l t₁ = t₂, then you can get Γ ⊢_l φ → (t₁ ↔ t₂). Another approach would be to recall our recasting of ⊢_l in terms of ⊢_prop. Then we could cite the propositional deduction theorem.
Regardless, we realize that all we need to show now is that if Γ is consistent (in the sense of local proofs), then Γ is satisfiable (at some pointed model). In fact, there's just one model that we'll need to satisfy consistent sets of formulas, and that's the canonical model for K, Uf F/K. The reason this works out so nicely is because if U is an ultrafilter in Uf F/K, i.e. U is a maximal, consistent collection of formulas [φ] modulo K-provable equivalence, then U ⊨ φ iff [φ] ∈ U. This fact can be proven by induction on φ; we proved a similar fact for ultrafilter extensions already, namely that ue M, U ⊨ φ iff V̄_M(φ) ∈ U.
Lemma 32. If Γ is consistent, then Γ is satisfiable.
Proof. If Γ is consistent in the sense of local proof, then it follows that {[γ] | γ ∈ Γ} is a consistent (in the sense of boolean algebras) subset of F/K. As such, it may be extended to an ultrafilter U by the ultrafilter theorem. Then (Uf F/K, U) satisfies Γ.
From Lemma 31 and Lemma 32 we get our desired completeness result:
Theorem 33. Γ ⊨_l φ iff Γ ⊢_l φ.
So, we've managed to find matching syntactic notions for both global and local modal consequence. In the following subsections we'll focus on local (in)completeness results for extensions of K.
6.5 Completeness for S4, S5
The methods we saw in the last few subsections can be extended to some normal modal logics other than K. We've seen that K is the class of modal formulas that are valid everywhere. In a previous section on frame definability, we noted that p → ◇p (T) is valid on a frame iff that frame is reflexive, and ◇◇p → ◇p (4) similarly corresponds to transitivity. Our question here is what the theory S4 = K + T + 4 is. We've defined this collection of formulas as the smallest normal modal logic containing T and 4, so we have a syntactic description of it, but it would also be nice to have a semantic description. Why do we care about S4 in particular? Well, this system adds in two additional axioms that get it closer, perhaps, to describing "necessarily" and "possibly" as they're used in natural language. When we say "It's necessarily the case that 2 + 3 = 5", we typically mean to also imply that 2 + 3 = 5. Similarly, we don't usually find ourselves saying "It's possible that it's possible that blah"; instead we simply say "It's possible that blah". Perhaps S4 is a natural choice for modelling or describing other situations too. Regardless, S4 is good as an example of how the methods of the last few subsections can be adapted.
Now, by our earlier results, we know that the class K* of reflexive, transitive frames has the property that any model based on it must satisfy all of K + T + 4. Every frame in K* must validate K because all frames validate K, and such a frame must validate all instances of T and 4 because it's reflexive and transitive. But it's not yet clear whether the modal theory of K*, that is Th K* = {φ | K* ⊨ φ}, is the same as S4. All we can be sure of at the moment is that Th K* ⊇ S4. This is simply because the operations we close off under in getting a normal modal logic preserve frame validity. E.g., if t₁ = t₂ is valid on a frame W, then ◇t₁ = ◇t₂ is also valid on W.
However, S4 and the theory of reflexive, transitive frames actually do match up. We'll show this in a slightly more generalized way, described as follows. Let's use Γ ⊢_S4 φ to mean that there is a local proof of φ from S4 together with Γ. I.e., there is a finite subset S₀ of S4 such that S₀ ∪ Γ ⊢_prop φ. Let's use K* to denote the class of all reflexive, transitive frames. We write Γ ⊨_K* φ iff for every pointed model (M, w) based on a frame in K*, we have (M, w) ⊨ Γ implies (M, w) ⊨ φ. We will prove that Γ ⊢_S4 φ iff Γ ⊨_K* φ. This is an instance of a more general kind of (in)completeness result where you start with some normal modal logic and show that there is (or is not) a class of frames that corresponds to it in this way.
We have already seen the soundness direction of this completeness result. If S₀ ∪ Γ ⊢_prop φ, and (M, w) is some pointed model on a reflexive, transitive frame such that (M, w) ⊨ Γ, then it follows that (M, w) ⊨ S₀ ⊆ S4, and so (M, w) ⊨ φ because ⊢_prop inferences preserve node validity.
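For a finite frame, frame validity can be checked directly by quantifying over all valuations of the letters in the formula. The sketch below (my own encoding again) confirms the soundness side on a small example: a reflexive, transitive frame validates the diamond forms of T and 4, while a frame lacking these properties need not.

```python
from itertools import product

def sat(worlds, R, V, w, t):
    if isinstance(t, str):
        return w in V[t]
    if t[0] == "neg":
        return not sat(worlds, R, V, w, t[1])
    if t[0] == "and":
        return sat(worlds, R, V, w, t[1]) and sat(worlds, R, V, w, t[2])
    if t[0] == "dia":
        return any((w, u) in R and sat(worlds, R, V, u, t[1]) for u in worlds)
    raise ValueError(t)

def frame_validates(worlds, R, t, letters=("p",)):
    """t is valid on the frame iff it holds at every world under every valuation."""
    for bits in product((0, 1), repeat=len(worlds) * len(letters)):
        V = {p: {w for j, w in enumerate(worlds) if bits[i * len(worlds) + j]}
             for i, p in enumerate(letters)}
        if not all(sat(worlds, R, V, w, t) for w in worlds):
            return False
    return True

imp = lambda a, b: ("neg", ("and", a, ("neg", b)))
T = imp("p", ("dia", "p"))                          # p -> dia p
Four = imp(("dia", ("dia", "p")), ("dia", "p"))     # dia dia p -> dia p

worlds = [0, 1, 2]
R = {(w, u) for w in worlds for u in worlds if w <= u}   # a reflexive, transitive frame
print(frame_validates(worlds, R, T), frame_validates(worlds, R, Four))            # True True

R_bad = {(0, 1), (1, 2)}                                 # neither reflexive nor transitive
print(frame_validates(worlds, R_bad, T), frame_validates(worlds, R_bad, Four))    # False False
```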
To see the completeness direction we show that if Γ is S4-consistent, then Γ is satisfiable at some pointed model whose underlying frame is reflexive and transitive. Of course, Γ is satisfied by (Uf F/S4, U), where U is some ultrafilter such that γ ∈ Γ implies [γ] ∈ U, by the same argument as in the case of K; but we still need to check that Uf F/S4 has an underlying frame which is reflexive and transitive.
We check the reflexive condition first. Let U ∈ Uf F/S4. We want to show that RUU. I.e., we want to show that [φ] ∈ U implies [◇φ] ∈ U. Well, since φ → ◇φ ∈ S4, we see that [φ] ∧ [◇φ] = [φ], and so the implication [φ] ∈ U ⟹ [◇φ] ∈ U follows from the fact that U is upward-closed.
Finally, we show transitivity. Let U₁, U₂, U₃ ∈ Uf F/S4 be such that RU₁U₂ and RU₂U₃. We show that RU₁U₃. I.e., we show that [φ] ∈ U₃ implies [◇φ] ∈ U₁. Well, from RU₂U₃ we know that [◇φ] ∈ U₂, and so from RU₁U₂ we get that [◇◇φ] ∈ U₁. Then we use the fact that ◇◇φ → ◇φ ∈ S4, and we get [◇φ] ∈ U₁.
We have finished proving the following:
Proposition 34. Γ ⊢_S4 φ iff Γ ⊨_K* φ, where K* is the class of reflexive, transitive frames.
Now we will do the same thing for S5. Recall that S5 = K + T + 5, where T is as above p → ◇p and 5 is ◇p → □◇p. Recall that 5 corresponds to the first order condition on frames called right-Euclidean. This is: if x can see y and z, then y can see z. I.e. all children can see each other. I.e. ∀x∀y∀z[(Rxy ∧ Rxz) → Ryz].
The same argument as above works in this case too, except we have to check that the canonical frame for S5 is right-Euclidean. So, let U₁, U₂, U₃ ∈ Uf F/S5 be such that RU₁U₂ and RU₁U₃. We show that RU₂U₃. Let [φ] ∈ U₃. We show that [◇φ] ∈ U₂. By the RU₁U₃ assumption we get that [◇φ] ∈ U₁. Since 5 ∈ S5, we get [□◇φ] ∈ U₁. Then, by the RU₁U₂ assumption, we get [◇φ] ∈ U₂.
We then get the following proposition:
Proposition 35. Γ ⊢_S5 φ iff Γ ⊨_K* φ, where K* is the class of reflexive, right-Euclidean frames.
If we forget about modal formulas for a moment and just think about first order conditions, we may recall that reflexive, right-Euclidean frames are the same thing as reflexive, symmetric, transitive frames. The argument for this is straightforward. So we know that the class K* is both the class of reflexive, right-Euclidean frames and the class of equivalence relation frames. But this has implications for our modal formulas based on the above proposition. Consider S5′ := K + T + B + 4, where B is as usual p → □◇p, which corresponds to symmetry. If we show that the canonical frame for S5′ is symmetric (the same arguments we used for S4 show it's reflexive and transitive), then we get that S5′ = Th K* = S5.
To see that the canonical frame for S5′ is symmetric, let U₁, U₂ ∈ Uf F/S5′ be such that RU₁U₂. We show that RU₂U₁ by showing that [φ] ∈ U₁ implies [◇φ] ∈ U₂. Using B ∈ S5′ and [φ] ∈ U₁ we get [□◇φ] ∈ U₁, and so [◇φ] ∈ U₂ by RU₁U₂.
We could of course show that S5′ = S5 using syntactic proofs, but perhaps the above method is more intuitive.
Looking back at our results for S4 and S5 we might observe a suspicious similarity in all the proofs. In fact, there's a more general theorem at work here. This is called the Sahlqvist completeness theorem, which I'll mention here but we won't actually go through.
Recall from the section on frame definability that the Sahlqvist formulas are a certain (syntactically defined) class of modal formulas each of which has a first order frame correspondent. Now, the Sahlqvist completeness theorem says that if Σ is a collection of Sahlqvist formulas, K_Σ is the class of frames which validate them, and KΣ is the smallest normal modal logic containing Σ, then Γ ⊢_KΣ φ iff Γ ⊨_{K_Σ} φ. The above results for S4 and S5 are special cases, since T, 4, B, and 5 are all Sahlqvist formulas.
6.6 Incompleteness for KL
Not every normal modal logic is strongly complete with respect to some class of frames. The example we consider here is KL, which is the normal modal logic we obtain by taking the smallest normal modal logic containing the Gödel-Löb modal formula L, which you recall is ◇p → ◇(p ∧ ¬◇p). We saw that as a second order property of frames this formula corresponds to transitive, reverse well-founded frames. Reverse well-founded means that there is no infinite ascending chain x₀Rx₁Rx₂R⋯ (the xᵢ may repeat). We show in this subsection that there is no collection of frames K* such that ⊢_KL matches ⊨_K* with premises. Recall that Γ ⊨_K* φ means that for every pointed model (M, w) based on a frame in K*, we have (M, w) ⊨ Γ implies (M, w) ⊨ φ.
Proposition 36. There is no collection of frames K* such that Γ ⊢_KL φ iff Γ ⊨_K* φ.
Proof. Suppose there were such a class of frames K*. Then we know that if W ∈ K*, W is transitive and reverse well-founded. (In detail, from the assumption, as ⊢_KL L, we have ⊨_K* L. So every model on every frame in K* validates L, and so every such frame is transitive and reverse well-founded.) It follows that the collection
Δ = {◇p₁} ∪ {□(pᵢ → ◇p_{i+1}) | i ≥ 1}
is not satisfiable on any pointed model based on a frame in K* (a point satisfying Δ would start an infinite ascending R-chain of points satisfying p₁, p₂, p₃, . . .). I.e., Δ ⊨_K* ⊥. However, Δ is KL-consistent, i.e. Δ ⊬_KL ⊥.
In fact, we may assume that K* is the class of all transitive, reverse well-founded frames, since such frames validate KL. Thus, we show that Δ ⊬_KL ⊥ by showing that every finite subset Δ₀ of Δ is satisfiable at some pointed model based on a transitive, reverse well-founded frame. This is easy: we may take arbitrarily large finite initial segments of (N, <) and set the valuation of pᵢ as {i}.
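The last step can be checked concretely: the following Python sketch (my own encoding) verifies that the first m formulas of Δ are satisfied at the point 0 of the initial segment {0, . . . , m+1} of (N, <) with V(pᵢ) = {i}.

```python
def sat(worlds, R, V, w, t):
    # Formulas: ("p", i), ("neg", t), ("imp", t1, t2), ("dia", t), ("box", t).
    op = t[0]
    if op == "p":
        return w in V.get(t[1], set())
    if op == "neg":
        return not sat(worlds, R, V, w, t[1])
    if op == "imp":
        return (not sat(worlds, R, V, w, t[1])) or sat(worlds, R, V, w, t[2])
    if op == "dia":
        return any((w, u) in R and sat(worlds, R, V, u, t[1]) for u in worlds)
    if op == "box":
        return all(sat(worlds, R, V, u, t[1]) for u in worlds if (w, u) in R)
    raise ValueError(t)

def delta_prefix(m):
    """The finite subset {dia p1} plus {box(p_i -> dia p_{i+1}) | 1 <= i <= m} of Delta."""
    return [("dia", ("p", 1))] + [("box", ("imp", ("p", i), ("dia", ("p", i + 1))))
                                  for i in range(1, m + 1)]

def check(m):
    worlds = list(range(m + 2))                       # the initial segment {0, ..., m+1} of (N, <)
    R = {(u, v) for u in worlds for v in worlds if u < v}
    V = {i: {i} for i in range(1, m + 2)}             # V(p_i) = {i}
    return all(sat(worlds, R, V, 0, t) for t in delta_prefix(m))

print(all(check(m) for m in range(0, 8)))             # True: every finite prefix is satisfiable
```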
6.7 Weak Completeness for KL
Although there is no class of frames for which KL is strongly complete, there is a class of frames for which it is weakly complete. Recall that the difference between strong and weak here is that in strong completeness we may have infinitely many premises, while in weak completeness only finitely many.
Proposition 37. Let K* be the class of transitive, reverse well-founded frames. Then ⊨_K* φ iff ⊢_KL φ.
Proof. We already know the soundness direction of this. So it suffices to show that if φ is KL-consistent, then φ is K*-satisfiable. (Applying this to ¬φ gives weak completeness: if ⊬_KL φ then ¬φ is KL-consistent, hence satisfiable on a frame in K*, and so ⊭_K* φ.) Let Φ′ consist of the subformulas of φ. Let Φ := Φ′ ∪ ¬Φ′, i.e.
Φ = {ψ, ¬ψ | ψ is a subformula of φ}
Define an operation ∼ on formulas such that ∼ψ is χ if ψ is of the form ¬χ, and ¬ψ otherwise. It follows that Φ is closed under ∼. Let's call a collection U ⊆ Φ an atom if it is a maximal KL-consistent collection. I.e., U ⊬_KL ⊥ and for every ψ ∈ Φ, ψ or ∼ψ is in U. It follows that if ψ ∈ Φ and U ⊢_KL ψ, then ψ ∈ U.
Since Φ is finite, the number of atoms is also finite. We shall describe a frame whose nodes are the atoms. We will show that this frame is transitive and irreflexive, from which (being finite) it follows that it is reverse well-founded as well. Then we'll show that φ is satisfiable at a pointed model based on this frame.
If U and U′ are two atoms, we say RUU′ iff the following two conditions are met:
1. for every formula ◇χ ∈ Φ, if ◇χ ∉ U then χ ∉ U′ and ◇χ ∉ U′
2. there is some ◇χ ∈ U such that ◇χ ∉ U′
It's clear from the second clause that the resulting frame is irreflexive. So let's turn to transitivity. Let RU₁U₂ and RU₂U₃, and we'll try to show RU₁U₃. Let's show the first clause first. Let ◇χ ∉ U₁. Then ◇χ ∉ U₂ by RU₁U₂. Then by RU₂U₃ we get χ, ◇χ ∉ U₃.
Now for the second clause. We may introduce a ◇χ ∈ U₁ such that ◇χ ∉ U₂ by RU₁U₂. Then we get χ, ◇χ ∉ U₃ by the first clause applied to RU₂U₃. So ◇χ ∉ U₃, as required.
So we've seen that the frame is transitive and reverse well-founded. Now we introduce a valuation on the frame. We say that U ∈ V(p) iff p ∈ U. Under this definition, we'll show by induction on ψ that, for all ψ ∈ Φ and for all atoms U, U ⊨ ψ iff ψ ∈ U. The base case of proposition letters follows directly from our definition of the valuation. Also, since U is KL-consistent, it doesn't contain ⊥.
The boolean steps are trivial, and so we turn to the modal step. Suppose we know the result for ψ and we want to show it for ◇ψ. Suppose U ⊨ ◇ψ and ◇ψ ∈ Φ. We want ◇ψ ∈ U. We may introduce an atom U′ such that RUU′ and U′ ⊨ ψ. By the inductive hypothesis we have ψ ∈ U′. Suppose, for the sake of getting a contradiction, that ◇ψ ∉ U. Then ψ ∉ U′ by the first clause of RUU′, a contradiction.
Now suppose ◇ψ ∈ U. We want to show U ⊨ ◇ψ. We would like to find an atom U′ that contains {ψ} ∪ {∼χ, ∼◇χ | ◇χ ∈ Φ and ◇χ ∉ U} yet does not contain ◇ψ. It will then follow that RUU′ by the definition of R, and U′ ⊨ ψ by the inductive hypothesis, so that U ⊨ ◇ψ. So, we would like to show that
Ψ := {ψ, ¬◇ψ} ∪ {∼χ, ∼◇χ | ◇χ ∈ Φ and ◇χ ∉ U}
is KL-consistent. Write η for the conjunction of the formulas ∼χ ∧ ∼◇χ with ◇χ ∈ Φ and ◇χ ∉ U. It suffices to show that ψ ∧ ¬◇ψ ∧ η is not the zero element of F/KL. If it were the zero element, then we would have ⊢_KL (ψ ∧ η) → ◇ψ, i.e. ⊢_KL (ψ ∧ ¬◇ψ) → ¬η. From this it follows that ◇(ψ ∧ ¬◇ψ) ≤ ◇¬η in F/KL. Here we may make use of the axiom L, ◇ψ → ◇(ψ ∧ ¬◇ψ), to obtain that, as ◇ψ ∈ U, U ⊢_KL ◇¬η. I.e., U ⊢_KL the disjunction of the formulas ◇χ ∨ ◇◇χ with ◇χ ∈ Φ and ◇χ ∉ U. But it can syntactically be shown that L proves ◇◇χ → ◇χ (see, e.g., p. 150 of Hughes and Cresswell), so that we have U ⊢_KL the disjunction of the ◇χ with ◇χ ∉ U. Since ∼◇χ ∈ U for each such χ, this contradicts the assumption that U is KL-consistent.
So, we've seen that for every ψ ∈ Φ and for every atom U, we have U ⊨ ψ iff ψ ∈ U. It follows that if we can find some atom U with φ ∈ U, then U ⊨ φ and so φ is K*-satisfiable. The existence of such an atom follows from the fact that φ is KL-consistent: we may extend {φ} to a maximal consistent collection U′ by the ultrafilter theorem and then intersect U′ with Φ to get U.
We saw that the theory of transitive, reverse well-founded frames matches KL, but that doesn't rule out that a smaller collection of frames could match too. Indeed, the argument given above actually shows that the finite, transitive, reverse well-founded frames match. I.e., the finite strict partial orders.
6.8 Incompleteness for KtThoM
In this section we turn to an example of a normal modal logic which is not even weakly complete with respect to any class of frames. We will supply an example in the basic temporal language {F, P}. Since so far we've only considered normal modal logics in the basic modal language, let me say a few words of reassurance on the topic. The definition is the same except that we have axioms and inference rules for each modal operator. E.g., we have both F⊥ = ⊥ and P⊥ = ⊥ as axioms. Recall also that G is the box version of F and H is the box version of P.
The normal modal logic Kt is the smallest normal modal logic in the basic temporal language that contains the instances of the formulas p → GPp and p → HFp. It turns out that Kt is strongly complete with respect to the class of all frames in which the accessibility relations corresponding to P and F are converses of each other. This can be proven in the same kind of way as we did for K, S4, or S5.
KtTho is the smallest normal modal logic extending Kt that also contains the instances of:
1. (Fp ∧ Fq) → (F(p ∧ Fq) ∨ F(p ∧ q) ∨ F(Fp ∧ q))
2. Gp → Fp
3. Pp → P(p ∧ ¬Pp)
The first two of these have first order frame correspondents. The first is valid on frames which have no branching to the right. The second is valid on frames which are unbounded to the right. The third is the familiar Gödel-Löb formula for P. It's valid on the P-transitive frames that have no infinite paths going to the left.
The normal modal logic KtTho doesn't contain all formulas, as there are frames that validate it. Consider the frame (N, <, >). I.e. the nodes are 0, 1, 2, . . . and < interprets F and > interprets P in the expected way. It is very straightforward to check that this frame validates all the axioms of KtTho.
KtThoM is the smallest normal modal logic extending KtTho that also contains the instances of the McKinsey formula for F, i.e. GFp → FGp. This logic is not inconsistent. There is a model which validates all the axioms. Consider the frame (N, <, >) again, and make it into a model by setting each V(p) equal to either a finite subset or a cofinite subset (e.g., we could set V(p) = ∅ for each p). Now, it follows that V̄(ψ) is finite or cofinite for every formula ψ. To see this, note that boolean combinations keep finite/cofinite sets finite/cofinite. Also, if X ⊆ N is finite, then FX (which consists of the elements less than the greatest element of X) is finite too, and PX (which consists of the elements greater than the least element of X) is cofinite, excepting the case where X = ∅, in which case we have FX = PX = ∅. If X is cofinite, then FX = N is cofinite, and PX is cofinite too.
As we already observed, this frame validates KtTho, so this particular model does too. What about instances of the McKinsey formula? Let GFψ → FGψ be such an instance. Suppose we have n ∈ N with n ⊨ GFψ. Then, for every k > n, we have k ⊨ Fψ. Thus V̄(ψ) must be cofinite, and we may introduce an m ∈ N such that n < m and for all k > m we have k ⊨ ψ. Thus, n ⊨ FGψ.
So we see that KtThoM is not inconsistent, so it doesn't correspond to the empty class of frames. However, we'll now show that there are no frames that validate all of KtThoM. It follows that KtThoM can't be weakly complete with respect to any class of frames.
Let W be any frame that validates KtTho. Thus, we know its accessibility relations < and > are converses of each other, there is no branching to the right, the frame is right unbounded, there is no infinite path to the left, and it's transitive (both ways). We'll show that the McKinsey formula can't be validated too. Let u ∈ W, and let U = {x ∈ W | u < x}. It should seem fairly intuitive that we may introduce S ⊆ U which is both cofinal and co-cofinal, in the sense that for every x ∈ U there is an s ∈ S such that s > x and there is a y ∈ U \ S such that y > x. Anyway, we'll verify this in a moment. The point of this observation is that we may define V(p) := S. Then we have u ⊨ GFp yet u ⊭ FGp.
Now we check that we may divide U up in this manner. This is just a set-theoretical question. We note that (U, <) has the order type of a limit ordinal: it's an unbounded well-founded total order. We may identify the elements of U with the ordinals less than this order type. We may define S as the even ordinals less than the order type of U. (An ordinal α is even if there is an ordinal β such that 2 · β = α.) One can check this works: there is always a bigger odd or even ordinal.
7 Completeness for a Version of First Order Modal Logic
In this section we present one possible semantic realization of first order modal logic. Our models will consist of a frame as in propositional modal logic in the basic modal language, and will include a domain of individuals as in first order logic, but each node of the frame will make its own decisions about what atomic facts are to hold. So we are considering constant domain semantics, where each node has the same domain of individuals associated with it, though the properties of these individuals can vary from node to node. We will assume that we're working with countable first order signatures without equality or constant or function symbols. Our language has countably many variables. When evaluating the truth of a formula, we need to give ourselves not only a node to evaluate it at, but also an assignment of individuals to the variables (called a valuation). So, we'll presently be defining a notion M, w, v ⊨ φ, which says a model M and a node w and a valuation v think that a formula φ is true. Our semantic notion of consequence is: Γ ⊨ φ iff for every M, w, v we have M, w, v ⊨ Γ implies M, w, v ⊨ φ. We will prove a completeness theorem by providing a matching syntactic notion ⊢. The method we used to prove local completeness for K (and S4 and S5) will work here too, modulo certain upgrades.
7.1 Constant Domain Semantics
First we present what our formulas are. We are given some countable first order language L without constant symbols and without function symbols and without equality. I.e., it only has relation symbols of certain specified arities whose interpretations aren't fixed in advance (as equality's is). We have countably many variables, typically denoted x₁, x₂, . . .. Since there are no constants or functions, the only terms are these variables. The atomic formulas are things of the form Ry₁⋯yₙ, where R is an n-ary relation symbol and the yᵢ are variables. We also allow ⊥ as an atomic formula. The formulas are built up from the atomic formulas inductively based on the following closure conditions:
1. If φ is a formula then so are ¬φ and □φ
2. If φ₁ and φ₂ are formulas, then so is φ₁ ∧ φ₂
3. If φ is a formula and x is a variable, then ∀xφ is a formula
An L-model M consists of the following parts: (1) a frame W with accessibility relation S, (2) a nonempty set D called the domain, (3) for each n-ary relation symbol R of L and each node w ∈ W, an interpretation R^w ⊆ D^n of R. A valuation is a function v whose domain is the set of variables and whose codomain is D. I.e. v assigns an individual v(x) ∈ D to each variable x. We use v[d/x] to denote the valuation which assigns d to x but is otherwise the same as v (and may indeed be exactly the same if v(x) = d).
We may introduce a unique relation ⊨ between pointed-valuated-models (M, w, v) and formulas such that:
1. M, w, v ⊭ ⊥
2. M, w, v ⊨ Ry₁⋯yₙ iff (v(y₁), . . . , v(yₙ)) ∈ R^w, for each n-ary relation symbol R and variables y₁, . . . , yₙ
3. M, w, v ⊨ ¬φ iff M, w, v ⊭ φ
4. M, w, v ⊨ φ₁ ∧ φ₂ iff M, w, v ⊨ φ₁ and M, w, v ⊨ φ₂
5. M, w, v ⊨ □φ iff for every w′ such that Sww′ we have M, w′, v ⊨ φ
6. M, w, v ⊨ ∀xφ iff for every d ∈ D, M, w, v[d/x] ⊨ φ
If Γ is a collection of formulas, we say Γ ⊨ φ if for every pointed-valuated-model (M, w, v) we have M, w, v ⊨ Γ implies M, w, v ⊨ φ. This is our semantic consequence relation.
7.2 Proofs
We will describe first a proof system for enumerating the validities, and then add in premises propositionally. I.e., we'll syntactically define a class of formulas K+BF and then say that Γ ⊢ φ iff K+BF ∪ Γ ⊢_prop φ.
In first order modal logic, just like in first order logic, we may define what it means for a variable x to occur free in a formula at a specific location. The idea is that x doesn't occur within the scope of a quantifier ∀x. Similarly, we may define what it means for a variable to be substitutable for another variable in a formula. φ[y/x] denotes the formula obtained by replacing all free occurrences of x in φ by y. y is substitutable for x in φ means that every new y in φ[y/x] is free. I.e., when we replace x by y, the new y is not in the scope of some ∀y.
If y is substitutable for x in φ, then one may obtain that M, w, v ⊨ φ[y/x] iff M, w, v[v(y)/x] ⊨ φ. This fact is relevant to the soundness of the proof system given below. It may be proven by induction on the logical complexity of φ. Another useful fact is that if two valuations agree on the free variables occurring in φ, then they give the same truth assignment to φ. I.e., the truth of a formula only depends on its free variables. Also, if y is substitutable for x in φ and y does not occur free in φ, then for every d ∈ D, M, w, v[d/x] ⊨ φ iff M, w, v[d/y] ⊨ φ[y/x].
We may describe K+BF by giving a collection of axioms and inference rules. (If there's a little redundancy in the system, I don't mind, so long as we can save as much syntactic work as possible.) The following are axioms:
1. Any tautology of K is an axiom (we may view atomic formulas and formulas of the form ∀xψ as proposition letters in basic modal logic)
2. (∀-elim) If φ is a formula and y is substitutable for x in φ, then ∀xφ → φ[y/x] is an axiom
3. (Barcan formula, BF) For any formula φ and variable x, we have ∀x□φ → □∀xφ as an axiom
4. If z doesn't occur free in φ and is substitutable for x in φ, then ∃z(φ[z/x] → ∀xφ) is an axiom
5. If z doesn't occur free in φ, then ∀z(φ → ψ) → (φ → ∀zψ) is an axiom
The following are inference rules:
1. (Modus Ponens) From φ and φ → ψ, we may infer ψ
2. (□-gen) From φ we may infer □φ
3. (∀-intro) From φ → ψ, if x does not occur free in φ, then we may infer φ → ∀xψ
We define K+BF to be the smallest collection of formulas that contains the axioms and is closed under the rules of inference. For our syntactic proof notion to be really syntactic, we want K+BF to be recursively enumerable. This is so because all the axioms and rules above are appropriately effective. We define Γ ⊢ φ to mean that K+BF ∪ Γ ⊢_prop φ.
Since ⊢_prop is effective, our proof notion is too. I.e., if Γ is recursively enumerable, then {φ | Γ ⊢ φ} is recursively enumerable. We also of course have that if Γ ⊢ φ then there is a finite subset Γ₀ of Γ such that Γ₀ ⊢ φ. Note that since K+BF contains all propositional tautologies and is closed under modus ponens, we have K+BF ⊢ φ iff φ ∈ K+BF.
The first thing to check about our proof system is that it's sound. So let's check that if Γ ⊢ φ then Γ ⊨ φ. We'll show that K+BF consists only of valid formulas. From this it follows that if M, w, v ⊨ Γ, then M, w, v ⊨ K+BF ∪ Γ, and so M, w, v ⊨ φ under the assumption K+BF ∪ Γ ⊢_prop φ and using the fact that propositional proofs are sound. So we only need to show that K+BF is valid. We show this by induction on proofs.
First, there's checking that the axioms are valid. Consider a tautology φ of K. Then no matter what truth assignments M, w, v brings about for the atomic formulas and the formulas of the form ∀xψ, the underlying pointed model (W, w) still obeys the usual modal truth definition, and M, w, v ⊨ φ.
∀-elim is checked as follows. Let y be substitutable for x in φ. Let M, w, v ⊨ ∀xφ. Then, in particular, we have M, w, v[v(y)/x] ⊨ φ and so M, w, v ⊨ φ[y/x].
As for the Barcan formula, we observe:
M, w, v ⊨ ∀x□φ iff M, w, v[d/x] ⊨ □φ for all d ∈ D
iff M, w′, v[d/x] ⊨ φ for all w′ with Sww′ and all d ∈ D
iff M, w′, v ⊨ ∀xφ for all w′ with Sww′
iff M, w, v ⊨ □∀xφ
Now suppose z doesn't occur free in φ and is substitutable for x in φ. We show that ∃z(φ[z/x] → ∀xφ) is valid. Let M, w, v be some pointed-valuated-model. If M, w, v ⊭ ∀xφ, then introduce a d ∈ D that witnesses this. Otherwise, let d be arbitrary. Now, in the case that M, w, v ⊭ ∀xφ, we have M, w, v[d/x] ⊭ φ. By our assumptions on z, it follows that M, w, v[d/z] ⊭ φ[z/x]. So M, w, v ⊨ ∃z(φ[z/x] → ∀xφ). Now consider the case where M, w, v ⊨ ∀xφ. Then, as z doesn't occur free in ∀xφ, we have M, w, v[d/z] ⊨ ∀xφ, and so M, w, v ⊨ ∃z(φ[z/x] → ∀xφ) as desired.
Let z be a variable which doesn't occur free in φ, and let M, w, v ⊨ ∀z(φ → ψ) and M, w, v ⊨ φ. We want to show that M, w, v ⊨ ∀zψ. Let d ∈ D. We show M, w, v[d/z] ⊨ ψ. Well, we have M, w, v[d/z] ⊨ φ since z doesn't occur free in φ, and we also have M, w, v[d/z] ⊨ φ → ψ by assumption.
Now we check that the inference rules preserve validity. We skip over modus ponens and □-gen. Consider ∀-intro. Suppose that φ → ψ is valid and that x does not occur free in φ. Suppose M, w, v ⊨ φ. We want to show M, w, v ⊨ ∀xψ. So, let d ∈ D and we show M, w, v[d/x] ⊨ ψ. Since x does not occur free in φ, we have M, w, v[d/x] ⊨ φ, and since φ → ψ is valid, we have M, w, v[d/x] ⊨ ψ.
We've completed showing that the proof system is sound.
7.3 Completeness
To show that Γ ⊨ φ implies Γ ⊢ φ, it suffices to show that if Γ is consistent then it is satisfiable (as we have a deduction theorem). We will define a sort of canonical model as we did for K. Except here there are naturally a couple more things to deal with. We will define the frame to consist of certain maximally consistent sets of formulas, the domain D will be the set of variables, and the canonical valuation will be v(x) = x. The main lemma to be proven is that in this canonical model, U ⊨ φ iff φ ∈ U. This is done as usual by induction on φ. In order to deal with the ∀-step, we assume the nodes of our frame are not just maximally consistent but also have the ∃-property (to be defined shortly). This makes the ∀-step easy, but then it complicates the □-step. However the difficulties are surmountable, as follows.
A set of formulas U has the ∃-property means that for every formula φ and for every variable x, there is some variable y such that y is substitutable for x in φ and (φ[y/x] → ∀xφ) ∈ U. We think of y as a witness, if there is one, to the failure of ∀xφ.
Now, suppose that we're given a consistent collection Γ of formulas in some language L (as usual countable with no constants or functions). We won't always be able to find a maximally consistent extension U ⊇ Γ which also has the ∃-property. For example, let, in a language containing a unary R,
Γ = {¬∀xRx, Rx₁, Rx₂, Rx₃, . . .}
Then Γ is consistent but there is no extension U with the desired properties. To get around this problem, we enlarge the language by adding in countably many new variables. We let L⁺ be the language obtained from L by adding in countably many new variable symbols.
Lemma 38. If is a consistent collection of L-formulas, then may be extended to a
maximal consistent collection U of L
+
-formulas with the -property.
Proof. First we note that Γ is a consistent collection of L⁺-formulas. This is not completely
obvious (though it is obvious that if Γ is satisfiable as a collection of L⁺-formulas, then it
is satisfiable as a collection of L-formulas). Suppose Γ ⊢⁺ ⊥, where ⊢⁺ denotes provability
in the language L⁺. Then let Γ_0 be a finite subset of Γ such that Γ_0 ⊢⁺ ⊥. There is some
finite proof of ⊥ from Γ_0. In this proof you'll find only finitely many old variables
x_1, . . . , x_n and finitely many new variables y_1, . . . , y_m. As such, we may introduce
old variables x_{n+1}, . . . , x_{n+m} to replace the new variables, and we may choose these
old variables so that they're just as new to Γ_0 and to the old variables already occurring
in the proof. The resulting proof is an L-proof witnessing Γ_0 ⊢ ⊥, because none of our
axioms or rules of inference distinguish between variables which don't already appear in the
context. This contradicts the consistency of Γ as a collection of L-formulas.
Next we extend Γ, now regarded as a consistent collection of L⁺-formulas, to a consistent
collection that has the ∃-property. We may enumerate all the L⁺-formulas of the form ∀xφ.
We define by induction

1. Σ_0 = Γ

2. Σ_{n+1} = Σ_n ∪ {φ[y/x] → ∀xφ}, where ∀xφ is the (n + 1)st formula of that form, and y
   is the first new variable not occurring in Σ_n or ∀xφ.
First note that this is well-defined since, at any step n, Σ_n only contains finitely many new
variables. Note also that at each step y is substitutable for x in φ, since y does not occur
in φ. We show additionally, by induction, that Σ_n is consistent for each n. So suppose Σ_n
is consistent, and we show Σ_{n+1} is consistent. If not, then there is a finite subset A of
Σ_n such that A ⊢ ¬(φ[y/x] → ∀xφ); in particular A ⊢ φ[y/x] and A ⊢ ¬∀xφ. Thus, by
∀-intro, since y doesn't occur in A, we get A ⊢ ∀y φ[y/x]. Also, since x is substitutable for
y in φ[y/x], ∀-elim gives us the axiom ∀y φ[y/x] → (φ[y/x])[x/y], i.e. ∀y φ[y/x] → φ. Then
∀-intro gives us, as x does not occur free in ∀y φ[y/x], that ∀y φ[y/x] → ∀xφ. Hence
A ⊢ ∀xφ; together with A ⊢ ¬∀xφ this makes A inconsistent, a contradiction, since A ⊆ Σ_n
and Σ_n is consistent.
We see that ⋃_{n∈ℕ} Σ_n is consistent and has the ∃-property. Then we may extend
⋃_{n∈ℕ} Σ_n to a maximal consistent collection U using the usual methods, and then U
automatically has the ∃-property too.
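The enumeration step in the proof of Lemma 38 is purely syntactic bookkeeping, and the
following sketch (my own illustration; the tuple encoding and the names substitute and
witness_formulas are invented) just shows that bookkeeping: each stage consumes the next
universal formula and emits the witness formula φ[y/x] → ∀xφ with a brand-new variable y.
The consistency argument, which is the real content of the lemma, is of course not something
this code checks.

# Sketch of the witness-adding step of Lemma 38, using the tuple encoding from the
# earlier checker.  The new variables of L+ are 'y0', 'y1', ...; for simplicity they are
# handed out in order (the lemma instead takes the first new variable not occurring so
# far).  Since each y is brand new, it cannot be captured, so no substitutability check
# is needed.
from itertools import count

def substitute(phi, x, y):
    """Replace the free occurrences of variable x in phi by the variable y."""
    kind = phi[0]
    if kind == 'atom':
        _, R, args = phi
        return ('atom', R, tuple(y if a == x else a for a in args))
    if kind in ('not', 'box'):
        return (kind, substitute(phi[1], x, y))
    if kind == 'implies':
        return (kind, substitute(phi[1], x, y), substitute(phi[2], x, y))
    if kind == 'forall':
        _, z, body = phi
        return phi if z == x else ('forall', z, substitute(body, x, y))
    raise ValueError(kind)

def witness_formulas(universal_formulas):
    """For each formula ('forall', x, phi), yield phi[y/x] -> forall x phi,
    with y the next unused new variable (the formula added at that stage)."""
    fresh = (f'y{i}' for i in count())
    for fa in universal_formulas:
        _, x, phi = fa
        y = next(fresh)
        yield ('implies', substitute(phi, x, y), fa)

Rx = ('atom', 'R', ('x',))
print(list(witness_formulas([('forall', 'x', ('box', Rx))])))
# [('implies', ('box', ('atom', 'R', ('y0',))), ('forall', 'x', ('box', ('atom', 'R', ('x',)))))]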
So given a language L, the canonical model for L is actually a model in the language L⁺.
The nodes of the frame are the maximal consistent collections U of L⁺-formulas which have
the ∃-property. We say that a node U_1 sees another node U_2, written SU_1U_2, iff □φ ∈ U_1
implies φ ∈ U_2. The domain D is the set of variables of L⁺. The interpretation of an n-ary
relation symbol R is given as follows: R^U y_1 · · · y_n iff Ry_1 · · · y_n ∈ U. The canonical
valuation v is defined by v(x) = x.
Lemma 39. Let v be the canonical valuation for the canonical model, and let U be a node
of the canonical model. Then U, v ⊨ φ iff φ ∈ U.

Proof. This is proven by induction on φ. The atomic case follows directly from the definition
of the canonical model. The boolean cases follow from the fact that U is maximal consistent.
Let's consider the ∀-step. Our inductive hypothesis allows us to know that for all formulas
ψ with fewer logical connectives than ∀xφ (such as φ itself), we have, for every node U,
U, v ⊨ ψ iff ψ ∈ U.
First suppose that ∀xφ ∈ U. We wish to show that U, v[y/x] ⊨ φ for all y ∈ D. Let y
be given. Now, y might not be substitutable for x in φ. However, it is substitutable for
x in φ′, where φ′ is a provably equivalent alphabetic variant of φ (we may replace all the
bound instances of y in φ by some variable not occurring in φ). So ∀-elim gives us the
axiom ∀xφ′ → φ′[y/x], and the maximality of U gives us that φ′[y/x] ∈ U (note that
∀xφ′ ∈ U, since ∀xφ ∈ U and the two are provably equivalent). By the induction
hypothesis, we get U, v ⊨ φ′[y/x], and so U, v[y/x] ⊨ φ′, and so U, v[y/x] ⊨ φ.
Now suppose that ∀xφ ∉ U. Then, as U has the ∃-property and is maximal, we may
introduce a variable y which is substitutable for x in φ such that φ[y/x] → ∀xφ ∈ U; since
∀xφ ∉ U, it follows that φ[y/x] ∉ U. Thus, by the induction hypothesis, U, v ⊭ φ[y/x],
and so U, v[y/x] ⊭ φ, i.e. U, v ⊭ ∀xφ.
The ∃-property made the preceding ∀-step possible, but it makes the next argument for
the □-step a bit longer than usual.
Suppose that □ψ ∈ U. We wish to show that U, v ⊨ □ψ, i.e. that for every node U′ such
that SUU′, we have U′, v ⊨ ψ. Well, SUU′ means that □φ ∈ U implies φ ∈ U′. Thus, we
have ψ ∈ U′. By the inductive hypothesis, we get U′, v ⊨ ψ.
Now suppose □ψ ∉ U. We wish to find a node U′ such that SUU′ and ψ ∉ U′.
Letting Γ□(U) := {φ | □φ ∈ U}, it suffices to show that Γ□(U) ∪ {¬ψ} may be extended
to a consistent collection of formulas with the ∃-property (for then the usual extension to a
maximal consistent collection yields such a U′). This is a bit more tricky than the
previous lemma because Γ□(U) ∪ {¬ψ} may contain all the variables of L⁺.
We list the formulas of the form ∀xφ in some order. We define

1. Δ_0 := ¬ψ

2. Δ_{n+1} := Δ_n ∧ (φ[y/x] → ∀xφ), where ∀xφ is the (n + 1)st formula of its form and y is
   some variable substitutable for x in φ such that Γ□(U) ∪ {Δ_{n+1}} is consistent.
We need to check that this is well-defined, but once we do, we get Γ□(U) ∪ {Δ_n | n ∈ ℕ}
as a consistent extension of Γ□(U) ∪ {¬ψ} with the ∃-property.
The usual argument shows that Γ□(U) ∪ {Δ_0} is consistent, so we concentrate on the step
from n to n + 1. We assume that Γ□(U) ∪ {Δ_n} is consistent (and well-defined) and try to
show that there is a variable y, substitutable for x in φ where ∀xφ is the (n + 1)st such
formula, with Γ□(U) ∪ {Δ_n ∧ (φ[y/x] → ∀xφ)} consistent. To get a contradiction, suppose
there is no such y. Then for each variable y, either y is not substitutable for x in φ, or
there is a finite subset A_1, . . . , A_m of Γ□(U) such that

    ⊢ (A_1 ∧ · · · ∧ A_m) → ¬(Δ_n ∧ (φ[y/x] → ∀xφ))
In this case it follows (by necessitation and the K axiom) that

    ⊢ (□A_1 ∧ · · · ∧ □A_m) → □¬(Δ_n ∧ (φ[y/x] → ∀xφ))

and since the □A_i are in U, we have □¬(Δ_n ∧ (φ[y/x] → ∀xφ)) ∈ U. So for every variable
y, either it is not substitutable for x in φ, or □¬(Δ_n ∧ (φ[y/x] → ∀xφ)) ∈ U.

Let z be a variable not occurring in ∀xφ or Δ_n. Let χ be defined to be the formula

    □¬(Δ_n ∧ (φ[z/x] → ∀xφ))
Since U has the ∃-property, we may introduce a variable y such that

    χ[y/z] → ∀zχ ∈ U

and y is substitutable for z in χ. Note that

    χ[y/z] = □¬(Δ_n ∧ (φ[y/x] → ∀xφ))

since z does not occur in Δ_n or ∀xφ. We also have that y is substitutable for x in φ, since y
is substitutable for z in χ. So we must have, as already observed, χ[y/z] ∈ U, from which it
follows that ∀zχ ∈ U, i.e.

    ∀z□¬(Δ_n ∧ (φ[z/x] → ∀xφ)) ∈ U

Using the Barcan formula, we obtain

    □∀z¬(Δ_n ∧ (φ[z/x] → ∀xφ)) ∈ U

Thus,

    ∀z¬(Δ_n ∧ (φ[z/x] → ∀xφ)) ∈ Γ□(U)

Since z doesn't occur free in Δ_n, we may use the fifth type of axiom for K+BF and obtain

    Γ□(U) ⊢ Δ_n → ∀z¬(φ[z/x] → ∀xφ)

so that

    Γ□(U) ∪ {Δ_n} ⊢ ∀z¬(φ[z/x] → ∀xφ)

But, since z doesn't occur in φ, we have the axiom

    ∃z(φ[z/x] → ∀xφ)

and so we have obtained that Γ□(U) ∪ {Δ_n} is inconsistent, a contradiction.
Our completeness result follows immediately from these two lemmas. If we begin with
a consistent collection Γ of formulas in some language L, then we may find a maximal
consistent collection U of L⁺-formulas with the ∃-property that extends Γ. Then the node
U of the canonical model, with the canonical valuation, satisfies Γ. I.e., Γ is satisfiable.