(467) revision: 2008-08-02, modified: 2008-08-03
ZERO ONE LAWS FOR GRAPHS WITH EDGE
PROBABILITIES DECAYING WITH DISTANCE. PART I
SAHARON SHELAH
Abstract. This work prepares the abstract frame for analyzing the following problem. Let $G_n$ be the random graph on $[n] = \{1, \dots, n\}$ with the possible edge $\{i, j\}$ having probability $p_{|i-j|} = 1/|i-j|^{\alpha}$, $\alpha \in (0, 1)$ irrational. We prove that the zero one law (for first order logic) holds.
0. Introduction
On 0-1 laws see expository papers, e.g., Spencer [Sp]. In Luczak, Shelah [LuSh 435] the following probabilistic context was investigated. Let $\bar p = \langle p_i : i \in \mathbb{N}\rangle$ be a sequence of probabilities, i.e. real numbers in the interval $[0,1]_{\mathbb{R}}$. For each $n$ we draw a graph $G_{n,\bar p}$ with set of nodes $[n] \stackrel{\rm def}{=} \{1, \dots, n\}$; for this we make the following independent drawing: for each (unordered) pair $\{i, j\}$ of numbers from $[n]$ we draw yes/no with probabilities $p_{|i-j|}$ / $1 - p_{|i-j|}$, and let $R_n = \{\{i, j\} : i, j$ are in $[n]$ and we draw yes$\}$. We consider $R_n$ a symmetric irreflexive 2-place relation. So we have gotten a random model $\mathcal{M}^0_{n,\bar p} = ([n], R_n)$ (i.e. a graph), but we also consider the graph expanded by the successor relation, $\mathcal{M}^1_{n,\bar p} = ([n], S, R_n)$ where $S = \{(\ell, \ell+1) : \ell \in \mathbb{N}\}$ (more exactly we use $S_n = S \restriction [n]$), and we may also consider the graph expanded by the natural order on the natural numbers, $\mathcal{M}^2_{n,\bar p} = ([n], <, R_n)$. (Here we will give a little background on this structure below, but the question whether the 0-1 law holds is not discussed here.)
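The drawing just described can be sketched in a few lines; a minimal illustration (the function name and parameters are ours, not from the paper), assuming $p_i = 1/i^{\alpha}$ with $\alpha \in (0,1)$ irrational:

```python
import math
import random

def draw_graph(n, alpha, seed=None):
    """Draw M^0_{n,p}: nodes 1..n, edge {i,j} with probability 1/|i-j|^alpha."""
    rng = random.Random(seed)
    edges = set()
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            # independent yes/no drawing for each unordered pair
            if rng.random() < 1.0 / (j - i) ** alpha:
                edges.add(frozenset((i, j)))
    return edges

alpha = math.sqrt(2) / 2  # an irrational exponent in (0, 1)
R = draw_graph(30, alpha, seed=0)
# R_n is symmetric and irreflexive by construction (unordered 2-element pairs).
assert all(len(e) == 2 for e in R)
# With this p, p_1 = 1/1^alpha = 1, so every successor pair is surely an edge
# (a point the introduction returns to when distinguishing cases (A) and (B)).
assert all(frozenset((i, i + 1)) in R for i in range(1, 30))
```
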
Though we shall start by dealing generally with random models, the reader can restrict himself to the case of graphs without losing comprehensibility. In [LuSh 435] much information was gotten on when the 0-1 law holds (see Definition 1.1(1)) and when the convergence law holds (see Definition 1.1(2)), depending on conditions such as $\sum_{i \in \mathbb{N}} p_i < \infty$ and $\sum_{i \in \mathbb{N}} i p_i < \infty$. The sequences $\bar p$ considered in [LuSh 435] were allowed to be quite chaotic, and in those circumstances the theorems were shown to be the best possible,
The research was partially supported by the United States-Israel Binational Science Foundation; Publication no. 467; SAHARON (08.6.27) CHECK THE END.
e.g. counterexamples were gotten by replacing $\bar p$ by $\bar p'$ with
$$p'_j = \begin{cases} p_k & j = i_k \\ 0 & (\forall k)\, j \ne i_k. \end{cases}$$
In [Sh 463] a new version of the 0-1 law was introduced, the very weak zero one law (see 0.1(3); the $h$-variant says that the difference between the probabilities for $n$ and for $m_n$, when $|n - m_n| \le h(n)$, converges to zero), and it was proved for $\mathcal{M}^2_{n,\bar p}$ when $\sum_i p_i < \infty$ (we omit $h$ when $h(n) = 1$, $m_n = n + 1$, and investigate only the very weak 0-1 law). In [Sh 548] the very weak zero one law was proved for models with a random two place function and for ordered graphs; Boppana and Spencer [BoSp] continue this, determining the best $h$ for which this holds.
Naturally the question arises what occurs if the $p_i$'s are well behaved. As in Shelah, Spencer [ShSp 304] this leads to considering $p_i = 1/i^{\alpha}$ (independently of $n$). By the results of [LuSh 435], and (essentially) [ShSp 304], the real cases are (on the definition of $\mathcal{M}_{n,\bar p}$ see above):

(A) $\mathcal{M}^0_{n,\bar p}$ where $p_i = 1/i^{\alpha}$, $\alpha \in (0, 1)_{\mathbb{R}}$ irrational

(B) $\mathcal{M}^1_{n,\bar p}$ where $p_i = 1/i^{\alpha}$, $\alpha \in (0, 1)_{\mathbb{R}}$ irrational

(C) $\mathcal{M}^2_{n,\bar p}$ where $p_i = 1/i^{\alpha}$, $\alpha \in (1, 2)_{\mathbb{R}}$

The main aim of this work is to show that in case (A) we have the 0-1 law; also in case (B) we prove the convergence law, but at present we do not know the answer to problem (C) (actually analysis indicates that the problem is whether there is a formula $\varphi(x)$ which holds in $\mathcal{M}^2_n$ for $x$ small enough and fails for $n - x$, $x$ small enough). Here we didn't consider the linear order case. For external reasons the work is divided into two parts, the second being [Sh 517]. Note: if we let $p_i = 1/i^{\alpha}$ for $i \ge 1$, then surely $\{\ell, \ell+1\}$ is an edge (as $p_1 = 1$), so it is fine, just case (A) becomes similar to case (B). To preserve the distinction between (A) and (B) we set $p_1 = 1/2$, and consider the edge $\{i, j\}$ having probability $\frac{1}{n^{\alpha}} + \frac{1}{2^{|i-j|}}$.
So the probability basically has two parts
1) $\frac{1}{2^{|i-j|}}$: depends only on the distance, but decays fast, so the average valency it contributes is bounded.
2) $\frac{1}{n^{\alpha}}$: does not depend on the distance, is locally negligible (i.e. for any particular $i$, $j$) but has large integral. Its contribution to the valency of a node $i$ is on the average huge (still $\ll n$).
We can think of this as two kinds of edges. The edges of the sort $n^{-\alpha}$ are as in the paper [ShSp 304]. The other ones still give large probability for some $i$ to have valency with no a priori bound (though negligible compared to $n$, e.g. $\log n$). In this second context the probability arguments are simpler (getting the same model theory), but we shall not deal with it here.
Note: if we look at all the intervals $[i, i+k)$ and want to get some graph there (i.e. see on $H$ below), and the probability depends only on $k$ (or at least has a lower bound $> 0$ depending only on $k$), then the chance that for some $i$ we get this graph (by second kind edges) is $1$; essentially this behavior stops where $k \ge (\log n)^b$ for some appropriate $b > 0$ (there is no real need here to calculate it). Now for any graph $H$ on $[k]$ the probability that for a particular $i \le n - k$ the mapping $\ell \mapsto i + \ell$ embeds $H$ into $\mathcal{M}_n$ is $\ge \big(\frac{1}{k^{\alpha}}\big)^{\binom{k}{2}}$ but is $\le \big(\frac{1}{(k/3)^{\alpha}}\big)^{(k/3)^2}$ (exactly
$$\prod_{\{\ell,m\} \in J_1} \frac{1}{|\ell-m|^{\alpha}} \cdot \prod_{\{\ell,m\} \in J_2} \Big(1 - \frac{1}{|\ell-m|^{\alpha}}\Big) \cdot p_1^{|\{\ell :\, (\ell,\ell+1) \text{ is an edge}\}|} \cdot (1-p_1)^{|\{\ell :\, (\ell,\ell+1) \text{ is not an edge}\}|}$$
where $\ell, m \le k$ and $J_1 = \{\{\ell,m\} : (\ell,m)$ is an edge and $|\ell - m| > 1\}$, $J_2 = \{\{\ell,m\} : (\ell,m)$ is not an edge and $|\ell - m| > 1\}$). Hence the probability that for no $i < n/k$ does the mapping $\ell \mapsto (k \cdot i + \ell)$ embed $H$ into $\mathcal{M}_n$ is $\le \Big(1 - \big(\frac{1}{k^{\alpha}}\big)^{\binom{k}{2}}\Big)^{n/k}$. Hence if $k^{\alpha\binom{k}{2}} = n/k$, that is $\beta = n / k^{\alpha\binom{k}{2}+1}$, then this probability is $\approx e^{-\beta}$. This is because $e^{-\beta} \approx \big(1 - \frac{\beta}{n}\big)^n$. We obtain $\frac{k}{n} \approx \frac{1}{k^{\alpha\binom{k}{2}}}$. So the probability is small, i.e. $1 - e^{-\beta}$ is large, if $k \le \big(\frac{2}{\alpha}\log n\big)^{1/2}$; note that the bound for the other direction has the same order of magnitude. So with parameters, we can interpret, using a sequence of
of magnitude. So with parameters, we can interpret, using a sequence of
formulas and parameter a, quite long initial segment of the arithmetic
(see denition below). This is very unlike [ShSp 304], the irrational case,
where rst order formula ( x) really says little on x: normally it says just
that the cl
k
closure of x is x itself or something on the few elements which
are in cl
k
( x) (so the rst order sentences say not little on the model, but
inside a model the rst order formula says little). So this sound more like
the rational case of [ShSp 304]. This had seemed like a sure sign of failure
of the 0-1 law, but if one goes in this direction one nds it problematic to
dene a
0
such that with the parameter a
0
denes a maximal such initial
segment of arithmetic, or at least nd ( y) such that for random enough
/
n
, there is a
0
satisfying ( y) and if a
0
satises ( y) then with such
a parameter we can define an initial segment of arithmetic of size, say, $> \log \log \log n$.
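The approximation $e^{-\beta} \approx (1 - \beta/n)^n$ used in the estimate above can be verified numerically; a minimal check (the tolerance is ours):

```python
import math

# (1 - beta/n)^n tends to e^(-beta) as n grows; the error is of order beta^2/n.
beta = 1.5
for n in (10**3, 10**5, 10**7):
    approx = (1 - beta / n) ** n
    assert abs(approx - math.exp(-beta)) < 10 * beta**2 / n + 1e-12
```
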
To interpret an initial segment of arithmetic of size $k$ in $\mathcal{M}_n$, for $\bar\varphi$ and $\bar a_0$, means that $\bar\varphi = \langle \varphi_0(x, \bar y), \varphi_1(\bar x_0, \bar y), \varphi_2(\bar x_1, \bar y), \varphi_3(\bar x_2, \bar y)\rangle$ is a sequence of (first order) formulas, and $\bar a_0$ is a sequence of length $\lg(\bar y)$ such that: the set $\{x : \mathcal{M}_n \models \varphi_0(x, \bar a_0)\}$ has $k$ elements, say $b_0, \dots, b_{k-1}$, satisfying:
$$\mathcal{M}_n \models \varphi_1(x_0, x_1, \bar a_0) \iff \bigvee_{\ell < m < k} (x_0, x_1) = (b_\ell, b_m),$$
$$\mathcal{M}_n \models \varphi_2(x_0, x_1, x_2, \bar a_0) \iff \bigvee_{\ell_0, \ell_1, \ell_2 < k,\ \ell_2 = \ell_0 + \ell_1} (x_0, x_1, x_2) = (b_{\ell_0}, b_{\ell_1}, b_{\ell_2}),$$
$$\mathcal{M}_n \models \varphi_3(x_0, x_1, x_2, \bar a_0) \iff \bigvee_{\ell_0, \ell_1, \ell_2 < k,\ \ell_2 = \ell_0 \cdot \ell_1} (x_0, x_1, x_2) = (b_{\ell_0}, b_{\ell_1}, b_{\ell_2}).$$
But it is not a priori clear whether our first order formulas distinguish between large size and small size in such an interpretation.
Note: all this does not show why the 0-1 law holds; it just explains the situation, and shows that on the one hand we cannot prove the theory is too nice (as in [ShSp 304]), but that on the other hand this is not sufficient for failure of the 0-1 law. Still, what we say applies to both contexts, which shows that the results are robust. A nice result would be to characterize the sequences $\langle p_i : i \in \mathbb{N}\rangle$ for which the 0-1 law holds (see below).
Our idea (to show the 0-1 law) is that though the algebraic closure (suitably defined) is not bounded, it is small, and we can show that a first order formula $\varphi(\bar x)$ is equivalent (in the limit case) to one speaking on the algebraic closure of $\bar x$.
Model theoretically we do not get in the limit a first order theory which is stable and generally low in the stability hierarchy; see Baldwin, Shelah [BlSh 528] for cases with probability $n^{-\alpha}$. [...] $A \le_i B$ and $A \le_s B$ are defined in terms of the weight $w$. The intention is that the $\le_i$ there is the $\le_i$ here, etc.; thus we will have a direct characterization of the latter.
§5 contains the major probability estimates. The appropriate notion is defined, and thus the interpretations of $<_i$ and $<_s$ in the first context ($\mathcal{M}^0_n$, $p_i = \frac{1}{i^{\alpha}}$).
Several proofs are analogous to those in [ShSp 304] and [BlSh 528], so we treat them only briefly. The new point is the dependence on distance, and hence the equivalence relations.
In §6 it is shown that the $<_i$ and $<_s$ of §5 agree with the $<_i$ and $<_s$ of §1. Further, if $\mathrm{cl}^k$ is defined from the weight function as in §4, these agree with $<_i$, $<_s$ as in §2, and we prove the simple almost niceness of Definition 2.12, so the elimination of quantifiers modulo quantification on (our) algebraic closure result applies. This completes the proof of the 0-1 law for the first context. The model theoretic considerations in the proof of this version of niceness (e.g. the compactness) were less easy than I expected.
§7 deals with the changes needed for $\mathcal{M}^1_{n,\bar p}$, where only the convergence law is proved.
Note: our choice that $\mathcal{M}_n$ has set of elements $[n]$ is just for simplicity (and tradition); we could have let $\mathcal{M}_n$ have as set of elements a finite set (not even fixed) and replace $n^{\varepsilon}$ by $\|\mathcal{M}_n\|^{\varepsilon}$ [...] in Definition 1.2 is the most natural but not the unique case. The paper is essentially self contained, assuming only basic knowledge of first order logic and probability.
Notation 0.1. $\mathbb{N}$ is the set of natural numbers $\{0, 1, 2, \dots\}$,
$\mathbb{R}$ is the set of reals,
$\mathbb{Q}$ is the set of rationals,
$i, j, k, \ell, m, n, r, s, t$ are natural numbers,
$p, q$ are probabilities,
$\alpha, \beta, \gamma, \delta$ are reals,
$\varepsilon, \zeta$ are positive reals (usually quite small), and so is $c$ (for constants in inequalities),
$\mathcal{E}$ is an equivalence relation,
$M, N, A, B, C, D$ are graphs or more generally models (that is structures, finite, of a fixed finite vocabulary, for notational simplicity with predicates only if not said otherwise; the reader can restrict himself to graphs),
$|M|$ is the set of nodes or elements of $M$, so $\|M\|$ is the number of elements,
$\mathcal{M}$ denotes a random model,
$\mu$ denotes a distribution (in the probability sense),
$[n]$ is $\{1, \dots, n\}$,
$A \subseteq B$ means $A$ is a submodel of $B$, i.e. $A$ is $B$ restricted to the set of elements of $A$ (for graphs: induced subgraph)
We shall not always distinguish strictly between a model and its set of elements. If $X$ is a set of elements of $M$, $M \restriction X$ is $M$ restricted to $X$.
$a, b, c, d$ are nodes of graphs / elements of models,
$\bar a, \bar b, \bar c, \bar d$ are finite sequences of nodes / elements,
$x, y, z$ are variables,
$\bar x, \bar y, \bar z$ are finite sequences of variables,
$X, Y, Z$ are sets of elements,
$\tau$ is a vocabulary, for simplicity with predicates only (we may restrict a predicate to being symmetric and/or irreflexive (as for graphs)),
$\mathcal{K}$ is a family of models of a fixed vocabulary, usually $\tau = \tau_K$,
the vocabulary of a model $M$ is $\tau_M$,
$f, g, h$ denote functions (embeddings); $\mathrm{dom}(f)$ is the domain of $f$.
Acknowledgements: We thank John Baldwin, Shmuel Lifsches, Çiğdem Gencer and Alon Siton for helping in various ways and stages to make the paper more user friendly.
1. Weakly nice classes
We interpret here "few" by: for each $\varepsilon$, for every random enough $\mathcal{M}_n$, there are (for each parameter) $< n^{\varepsilon}$. [...]
$\sum\{\mu_n(M) : M \in \mathcal{K}_n\} = 1$, so $\mu_n$ is called a distribution and $\mathcal{M}_n$ the random model for $\mu_n$; so we restrict ourselves to finite or countable $\mathcal{K}_n$. We omit $\mu_n$ when clear from the context.
(iv) We call $(\mathcal{K}, \langle(\mathcal{K}_n, \mu_n) : n < \omega\rangle)$ a 0-1 context and denote it by $\mathfrak{K}$, and usually consider it fixed; we may forget to mention $\mathcal{K}$. So
(v) the probability of $\mathcal{M}_n \models \psi$ is $\mathrm{Prob}(\mathcal{M}_n \models \psi) = \sum\{\mu_n(M) : M \in \mathcal{K}_n,\ M \models \psi\}$.
(vi) The meaning of "for every random enough $\mathcal{M}_n$ we have $\psi$" is: $\langle\mathrm{Prob}(\mathcal{M}_n \models \psi) : n < \omega\rangle$ converges to 1; alternatively, we may write "almost surely $\mathcal{M}_n \models \psi$".
(vii) We call $\mathfrak{K}$ a 0-1 context if it is as above.
Definition 1.2. (1) The 0-1 law (for $\mathfrak{K}$) says: whenever $\psi$ is a f.o. (= first order) sentence in vocabulary $\tau$,
$\langle\mathrm{Prob}(\mathcal{M}_n \models \psi) : n < \omega\rangle$ converges to 0 or to 1.
(2) The convergence law says: whenever $\psi$ is a f.o. sentence in $\tau$,
$\langle\mathrm{Prob}(\mathcal{M}_n \models \psi) : n < \omega\rangle$ is a convergent sequence.
(3) The very weak 0-1 law says: whenever $\psi$ is a f.o. sentence in $\tau$,
$\lim_n [\mathrm{Prob}(\mathcal{M}_{n+1} \models \psi) - \mathrm{Prob}(\mathcal{M}_n \models \psi)] = 0$.
(4) The $h$-very weak 0-1 law, for $h : \mathbb{N} \to \mathbb{N} \setminus \{0\}$, says: whenever $\psi$ is a f.o. sentence in $\tau$,
$0 = \lim_n \max_{\ell, k \in [0, h(n)]} |\mathrm{Prob}(\mathcal{M}_{n+k} \models \psi) - \mathrm{Prob}(\mathcal{M}_{n+\ell} \models \psi)|$.
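Definition 1.2 can be made concrete on explicit sequences $\langle\mathrm{Prob}(\mathcal{M}_n \models \psi) : n < \omega\rangle$; the sequences below are illustrative toys, not derived from any actual $\mathcal{M}_n$:

```python
# Toy illustration of Definition 1.2 on explicit probability sequences.
def tail_oscillation(p, start):
    """Spread of the tail p[start:], a crude convergence diagnostic."""
    tail = p[start:]
    return max(tail) - min(tail)

n = 2000
p_01   = [1 - 1.0 / (i + 2) for i in range(n)]        # -> 1: 0-1 law behaviour
p_conv = [0.5 + 1.0 / (i + 2) for i in range(n)]      # -> 1/2: convergence, not 0-1
p_osc  = [0.25 if i % 2 else 0.75 for i in range(n)]  # oscillates: convergence fails

assert tail_oscillation(p_01, 1000) < 0.01 and p_01[-1] > 0.99
assert tail_oscillation(p_conv, 1000) < 0.01 and abs(p_conv[-1] - 0.5) < 0.01
assert tail_oscillation(p_osc, 1000) == 0.5
# p_osc also fails the very weak 0-1 law: |p_{n+1} - p_n| = 0.5 does not tend to 0.
```
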
Notation 1.3. $f : A \hookrightarrow B$ means: $f$ is an embedding of $A$ into $B$ (in the model theoretic sense; for graphs: an isomorphism onto the induced subgraph).
Definition 1.4. (1) Let
$$\mathcal{K}_\infty = \big\{A :\ A \text{ is a finite } \tau\text{-model and } 0 < \lim_n \sup[\mathrm{Prob}((\exists f)(f : A \hookrightarrow \mathcal{M}_n))]\big\};$$
recall (1.1(v)) that $\mathrm{Prob}((\exists f)(f : A \hookrightarrow \mathcal{M}_n)) = \sum\{\mu_n(\mathcal{M}_n) : \mathcal{M}_n \in \mathcal{K}_n$ and there is an embedding $f : A \hookrightarrow \mathcal{M}_n\}$, $n < \omega$.
Also let $T_\infty =_{df} \{\psi : \psi$ is a f.o. sentence in the vocabulary of $\mathcal{K}$ such that every random enough $\mathcal{M}_n$ satisfies it$\}$.
(2) $A \le B$ means: $A, B \in \mathcal{K}_\infty$ [...].
Also let $\mathrm{ex}(f_0, B, M) = \mathrm{ex}(f_0, A, B, M) =_{df} \{f : f$ is an embedding of $B$ into $M$ extending $f_0\}$.
(4) $A \le_s B$ means: $A \le B$ and there is no $C$ with $A <_i C \le B$.
(5) $A <_{pr} B$ means: $A <_s B$ and there is no $C$ with $A <_s C <_s B$ ($pr$ abbreviates primitive).
(6) $A <_a B$ means that $A \le B$ and, for every $\varepsilon \in \mathbb{R}^+$, for every random enough $\mathcal{M}_n$, for no $f : A \hookrightarrow \mathcal{M}_n$ do we have $n^{\varepsilon}$ pairwise disjoint extensions $g$ of $f$ satisfying $g : B \hookrightarrow \mathcal{M}_n$.
(7) $A \le^m_s B$ means $A \le B$ are from $\mathcal{K}_\infty$ and for every $X \subseteq B$ with $\le m$ elements we have $(A \cup X) \le_s (B \cup X)$.
(8) $A \le^i_{k,m} B$ means $A \le B$ are from $\mathcal{K}_\infty$ and for every $X \subseteq B$ with $\le k$ elements there is $Y$, $X \subseteq Y \subseteq B$, with $\le m$ elements such that $(A \cup Y) \le_i (B \cup Y)$.
(9) For $h : \mathbb{N} \times \mathbb{R}^+ \to \mathbb{R}^+$, we define $A \le^h_i B$ as in part (3), replacing $n^{\varepsilon}$ by $h(n, \varepsilon)$; where for $\varepsilon_1 \le \varepsilon_2$ and every $n$ large enough, $h(n, \varepsilon_1) \le h(n, \varepsilon_2)$.
(3) Why do we restrict ourselves to $\mathcal{K}_\infty$? As for $A \in \mathcal{K}_\infty$, for quite random $\mathcal{M}_n$ and $f : A \hookrightarrow \mathcal{M}_n$, the set $\mathrm{cl}^k(f(A), \mathcal{M}_n)$ may be quite large, say with $\log(n)$ elements, so it (more exactly the restriction of $\mathcal{M}_n$ to it) is not necessarily in $\mathcal{K}_\infty$ [= a copy of $B$ occurs], as $\mathcal{M}_n$ may not be random enough for $B$. Still, for statements like
$$(\forall x_1, x_2, x_3)\big(\mathrm{cl}^k(\{x_1, x_2, x_3\}) \models \psi\big)$$
the model $\mathcal{M}_n$ may be random enough. The point is that the size of $B$ could be computed only after we have $\mathcal{M}_n$.
Another way to look at it: models $M$ of $T_\infty$ [...]
(a) $\mathrm{cl}_k(A, M) = \bigcup\{B : B \subseteq M$, $B \cap A \le_i B$, and $|B| \le k\}$,
(b) $\mathrm{cl}_{k,0}(A, M) = A$,
(c) $\mathrm{cl}_{k,m+1}(A, M) = \mathrm{cl}_k(\mathrm{cl}_{k,m}(A, M), M)$.
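The iteration in clauses (b), (c) can be sketched abstractly; the one-step closure below is a toy stand-in for illustration (any one-step operation fits the iteration scheme), not the $\mathrm{cl}_k$ of the definition:

```python
def iterate_closure(step, A, m):
    """cl_{k,m}: apply the one-step closure m times (clauses (b), (c))."""
    X = frozenset(A)
    for _ in range(m):
        X = frozenset(step(X))
    return X

# Toy one-step closure on the natural numbers: add n+1 for every n in X, up to a cap.
def toy_step(X, cap=100):
    return X | {n + 1 for n in X if n + 1 <= cap}

A = {0}
assert iterate_closure(toy_step, A, 0) == frozenset({0})        # cl_{k,0}(A) = A
assert iterate_closure(toy_step, A, 3) == frozenset({0, 1, 2, 3})
# monotone in m: cl_{k,m}(A) is contained in cl_{k,m+1}(A)
assert iterate_closure(toy_step, A, 3) <= iterate_closure(toy_step, A, 4)
```
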
Observation 1.7. 1) For all $\ell, k \in \mathbb{N}$ and $\varepsilon \in \mathbb{R}^+$ we have
$$1 = \lim_n \Big[\mathrm{Prob}\big(A \subseteq \mathcal{M}_n,\ |A| \le \ell \ \Rightarrow\ |\mathrm{cl}_k(A, \mathcal{M}_n)| < n^{\varepsilon}\big)\Big].$$
2) Moreover, for every $k \in \mathbb{N}$ and $\varepsilon \in \mathbb{R}^+$, for some $\zeta \in \mathbb{R}^+$ (actually, any $\zeta < \varepsilon/(k+1)$ will do) we have
$$1 = \lim_n \Big[\mathrm{Prob}\big(A \subseteq \mathcal{M}_n,\ |A| \le n^{\zeta} \ \Rightarrow\ |\mathrm{cl}_k(A, \mathcal{M}_n)| < n^{\varepsilon}\big)\Big].$$
Remark 1.8. 1.7 is true for $\mathrm{cl}_{k,m}$ too, but we can use Claim 1.16 instead.
Definition 1.9. $\mathfrak{K} = \langle \mathcal{K}_n : n < \omega\rangle$ is weakly nice if whenever $A <_s B$ (so $A \ne B$), there is $\varepsilon \in \mathbb{R}^+$ with
$$1 = \lim_n \Big[\mathrm{Prob}\Big(\text{if } f_0 : A \hookrightarrow \mathcal{M}_n \text{ then there is } F \text{ with } |F| \ge n^{\varepsilon} \text{ and}$$
(i) $f \in F \Rightarrow f_0 \subseteq f : B \hookrightarrow \mathcal{M}_n$,
(ii) $f' \ne f'' \in F \Rightarrow \mathrm{Rang}(f') \cap \mathrm{Rang}(f'') = \mathrm{Rang}(f_0)\Big)\Big]$.
If clause (ii) holds we say the $f \in F$ are pairwise disjoint over $f_0$ or over $A$. In such circumstances we say that $\varepsilon$ witnesses $A <_s B$.
Remark 1.10. Being weakly nice means there is a gap between being pseudo
algebraic and non-pseudo algebraic (both in our sense), so we have a strong
dichotomy.
Fact 1.11. For every $A$, $B$, $C$ in $\mathcal{K}_\infty$:
(1) $A \le_i A$,
(2) $A \le_i B$, $B \le_i C \Rightarrow A \le_i C$,
(3) $A \le_s A$,
(4) if $A_1 \le B_1$, $A_2 \le B_2$, $A_1 \subseteq A_2$, $B_1 \subseteq B_2$, $B_1 \setminus A_1 = B_2 \setminus A_2$, then
$A_2 \le_s B_2 \Rightarrow A_1 \le_s B_1$ and $A_1 \le_i B_1 \Rightarrow A_2 \le_i B_2$,
(5) $A <_i B$ iff for every $C$ we have $A \le C < B \Rightarrow C <_a B$.
Proof. Easy (e.g. 1.11(5) by the $\Delta$-system argument (for a fixed size of the sets and many of them); note $|B|$ is constant). $\square_{1.11}$
Claim 1.12. If $A <_s B <_s C$ then $A <_s C$.
Proof. First proof:
If not, then for some $B'$ we have $A <_i B' \le C$. If $B' \subseteq B$ we get a contradiction to $A <_s B$, so assume $B' \not\subseteq B$ [...] $(B' \cap B) <_i B'$ [...]
Second proof: [...] where $i < i^*$, and $f_0 \subseteq f^i_1$ and $f^i_1 : B \hookrightarrow \mathcal{M}_n$, and the $f^i_1$'s are pairwise disjoint over $A$.
Now, almost surely for every $i$ we have $\{f^{i,j}_2 : j < j_i\}$ with $f^i_1 \subseteq f^{i,j}_2$ and $f^{i,j}_2 : C \hookrightarrow \mathcal{M}_n$ and, fixing $i$, the $f^{i,j}_2$'s are pairwise disjoint over $B$ and $j_i \ge n^{\varepsilon}$.
Clearly (when the above holds) for $\ell^* = n^{\varepsilon}$ we can find $\langle j_k : k \le \ell^*\rangle$ such that the $f^{k,j_k}_2$, $k < \ell^*$, avoid $\bigcup\{\mathrm{Rang}(f^{i,j_i}_2 \restriction (C \setminus B)) : i < k\}$;
at stage $k$, the number of inappropriate $j < n^{\varepsilon}$ is $\le (|C \setminus B| \cdot k + |B \setminus A|) \cdot |C|$ [...] $= |C| \cdot n^{\varepsilon}$. $\square_{1.12}$
Fact 1.13. Suppose $A \subseteq B \subseteq C$.
(1) If $A \le_i C$ then $B \le_i C$.
(2) If $A \le_s C$ then $A \le_s B$.
(3) If $A <_{pr} C$ and $A \le_s B \le_s C$ then either $B = A$ or $B = C$.
Proof. Reread the definitions. $\square_{1.13}$
Fact 1.14. (1) If $A \le_s B$ then there is some $n < \omega$ and a sequence $\langle A_l : l \le n\rangle$ such that $A = A_0 <_{pr} A_1 <_{pr} \dots <_{pr} A_n = B$ (possibly $n = 0$).
(2) If $A <_{pr} C$ and $A < B < C$ then $B <_i C$.
Proof. For proving (2), choose a maximal $B'$ such that $B \le_i B' \subseteq C$; it exists as $C$ is finite (being in $\mathcal{K}_\infty$) and as $B \le_i B$ (by 1.11(1)). It follows that if $B' < B'' \subseteq C$ then $\neg(B' \le_i B'')$, hence $B' \le_s C$. But $A <_{pr} C$, hence by Definition 1.4(5) we have $A <_s C$, so by 1.13(2) $A <_s B'$; as $A <_{pr} C$ this gives $B' = C$, so $B \le_i B' = C$ as required. Part (1) is clear as $B$ is finite (being in $\mathcal{K}_\infty$). $\square_{1.14}$
Claim 1.15. $\mathfrak{K}$ is weakly nice iff whenever $A <_{pr} B$ there is $\varepsilon \in \mathbb{R}^+$ such that
$$1 = \lim_n \Big[\mathrm{Prob}\big(\text{if } f_0 : A \hookrightarrow \mathcal{M}_n \text{ then there is } F \text{ with } |F| \ge n^{\varepsilon} \text{ and } f_1 \in F \Rightarrow f_0 \subseteq f_1 : B \hookrightarrow \mathcal{M}_n\big)\Big].$$
Proof. $\Rightarrow$ is obvious (as $A <_{pr} B$ implies $A <_s B$).
Let us prove $\Leftarrow$: we have $A \le_s B$ and by Fact 1.14(1) there is a sequence $A = A_0 <_{pr} A_1 <_{pr} \dots <_{pr} A_k = B$. The proof is by induction on $k$. The induction step for $k > 1$ is by the second proof of 1.12, and $k = 0$ is 1.11(3). So assume $k = 1$, hence $A <_{pr} B$. By Fact 1.14(2), if $A < B' < B$ then $B' <_i B$. Fix $p \in (0,1)_{\mathbb{R}}$. If $n$ is large enough then the probability of having both
(a) for every $f_0 : A \hookrightarrow \mathcal{M}_n$ there are at least $n^{\varepsilon}$ different extensions $f^i_1$ satisfying $f_0 \subseteq f^i_1 : B \hookrightarrow \mathcal{M}_n$, and
(b) for every $a \in B \setminus A$ and $f^+_0 : A \cup \{a\} \hookrightarrow \mathcal{M}_n$ there are at most $n^{\varepsilon/2}$ different extensions $f^i_2$ satisfying $f^+_0 \subseteq f^i_2 : B \hookrightarrow \mathcal{M}_n$
is $\ge 1 - p$ (for clause (b) use $A \cup \{a\} <_i B$ for every $a \in B \setminus A$, which holds by 1.14(2)). Let $f_0 : A \hookrightarrow \mathcal{M}_n$, and let $\langle f^j_1 : j < j^*\rangle$ be a maximal family of pairwise disjoint extensions of $f_0$ to an embedding of $B$ into $\mathcal{M}_n$. Let $F = \{f : f$ is an embedding of $B$ into $\mathcal{M}_n$ extending $f_0\}$. By (b) we have
$$n^{\varepsilon} \le |F| \le j^* \cdot |B \setminus A| \cdot |B \setminus A| \cdot n^{\varepsilon/2}.$$
Hence if $n$ is large enough, $j^* > n^{\varepsilon/3}$ (with probability $\ge 1 - p$), and this is enough. $\square_{1.15}$
Claim 1.16. $\mathrm{cl}_{k,m}(A, M) \subseteq \mathrm{cl}_{k^*}(A, M)$ where $k^* = k^m$.
Proof. For $\ell \le m$ let $k(\ell) = k^\ell$. For $\ell \le m$ define $A_\ell = \mathrm{cl}_{k,\ell}(A, M)$. Now if $x \in A_m$ then there is some $\ell < m$ such that $x \in A_{\ell+1} \setminus A_\ell$. Let us prove by induction on $\ell \le m$ that $x \in A_\ell \Rightarrow x \in \mathrm{cl}_{k(\ell)}(A, M)$. For $\ell = 0$ and $\ell = 1$ this is clear. If $x \in A_{\ell+1} \setminus A_\ell$ then there is $C$ with $x \in C$, $|C| \le k$ and $C \cap A_\ell <_i C$. By the induction hypothesis, for $y \in C \cap A_\ell$ we have $y \in \mathrm{cl}_{k(\ell)}(A, M)$, hence there is $C_y$ with $|C_y| \le k(\ell)$ such that $y \in C_y$ and $C_y \cap A <_i C_y$. Let $C_0 = \bigcup_{y \in C \cap A_\ell} (C_y \cap A)$, $C_1 = \bigcup_{y \in C \cap A_\ell} C_y$ and $C_2 = C_1 \cup C$. As $|C| \le k$, we get
$$|C_2| \le k(\ell) \cdot |C \cap A_\ell| + |C \setminus A_\ell| \le k(\ell) \cdot k \le k(\ell + 1),$$
so (as $x \in C_2$) it suffices to show that $C_0 \le_i C_2$, and by transitivity (i.e. by 1.11(2)) it suffices to show that $C_0 \le_i C_1$ and that $C_1 \le_i C_2$. Why $C_1 \le_i C_2$? Because $C \cap A_\ell \le_i C$ and $C \cap A_\ell \subseteq C_1$, and hence $C_1 \le_i C_1 \cup C = C_2$ by 1.11(4). Why $C_0 \le_i C_1$? Let $C \cap A_\ell = \{y_s : s < r\}$. Now $C_0 \le_i C_0 \cup C_{y_0}$ by 1.11(4), because $A \cap C_{y_0} \le_i C_{y_0}$ and $A \cap C_{y_0} \subseteq C_0$, and similarly by induction
$$C_0 \le_i C_0 \cup C_{y_0} \le_i C_0 \cup C_{y_0} \cup C_{y_1} \le_i \dots \le_i C_0 \cup \bigcup_{s<r} C_{y_s} = C_1.$$
So as $\le_i$ is transitive (1.11(2)) we are done. $\square_{1.16}$
Claim 1.17. For every $\varepsilon \in \mathbb{R}^+$ and $\ell, k, m$ we have
$$1 = \lim_n \Big[\mathrm{Prob}\big(\text{if } A \in \mathcal{K}_\infty,\ |A| \le \ell \text{ and } f : A \hookrightarrow \mathcal{M}_n, \text{ then } |\mathrm{cl}_{k,m}(f(A), \mathcal{M}_n)| < n^{\varepsilon}\big)\Big].$$
Proof. By the previous Claim 1.16, w.l.o.g. $m = 1$. This holds by Definition 1.4(3) and Definition 1.6. $\square_{1.17}$
Fact 1.18. (1) For every $A$ and $m$, $k$: for any $M \in \mathcal{K}$, if $f : A \hookrightarrow M$ then
($\alpha$) $\mathrm{cl}_{k,m}(f(A), M) \le^i_{1,k} \mathrm{cl}_{k,m+1}(f(A), M)$,
($\beta$) for some $m^* = m^*(k, m)$ we have $f(A) \le^i_{k,m^*} \mathrm{cl}_{k,m}(f(A), M)$ (we can get more),
($\gamma$) $f(A) \le_i \mathrm{cl}_{k,m}(f(A), M_n)$, or the second is not in $\mathcal{K}_\infty$.
(2) For every $m$, $k$, $\ell$, for some $r$ we have: for any $A \in \mathcal{K}_\infty$,
$$1 = \lim_n \Big[\mathrm{Prob}\big(\text{if } f : A \hookrightarrow \mathcal{M}_n \text{ then } f(A) \le^i_{\ell,r} \mathrm{cl}_{k,m}(f(A), \mathcal{M}_n)\big)\Big].$$
Remark 1.19. In our main case $\mathcal{K} = \mathcal{K}_\infty$.
Recall, for 1.18(1)($\gamma$), that $\mathrm{cl}_{k,m}(f(A), \mathcal{M}_n)$ is in general not necessarily in $\mathcal{K}_\infty$.
Proof. 1) We leave the proof of ($\alpha$) and ($\beta$) to the reader. For proving clause ($\gamma$), let $A_0 = f(A)$, and for $\ell \le m$ let $A_\ell = \mathrm{cl}_{k,\ell}(f(A), M)$, and assume $A_m \in \mathcal{K}_\infty$. [...] $A_{\ell+1} = A_\ell \cup \bigcup_{j<j_\ell} C_{\ell,j}$ with $|C_{\ell,j}| \le k$ and $A_{\ell+1} \cap C_{\ell,j} \le_i C_{\ell,j}$. It follows by 1.11(4) that $\langle A_\ell \cup \bigcup_{i<j} C_{\ell,i} : j \le j_\ell\rangle$ is $\le_i$-increasing and $A_\ell \le_i A_{\ell+1}$. By induction we get $A_0 \le_i A_m$, which is the desired conclusion.
2) Read the proofs of 1.18(1) and 1.16. $\square_{1.18}$
Remark 1.20. In a more general context the previous conclusion is part of the definition of "$\mathcal{K}$ is nice", and also [...] means: they are all submodels of $D \in \mathcal{K}$, and $C_1 \cap C_2 \subseteq B$, and for every relation symbol $R$ in $\tau$, if $\bar a \in C_1 \cup B \cup C_2$ and $R(\bar a)$ holds then $\bar a \in C_1 \cup B$ or $\bar a \in C_2 \cup B$ (possibly both). When $D$ is clear from the context we may omit it.
2. Abstract Closure Context
Here we are inside the 0-1 context, but without the $\le_i$ and $\le_s$ as defined in §1; however, $\mathrm{cl}_k$ is given. The main result is a sufficient condition for having the 0-1 law or at least convergence. We have here some amount of freedom, so we give two variants of the main result of this section, 2.16 and 2.17; we shall use 2.17. Thus on a first reading one may skip Definitions 2.8 (possible), 2.9 and 2.10, Remark 2.11 and Lemma 2.16 in favour of the alternative development in Definitions 2.12, 2.13 and 2.17. Lemma 2.15 is needed in both cases, and we have made the two independent at the price of some repetition. We want to eliminate quantifiers in a restricted sense: in the simple form we quantify only on the closure, so each $\varphi(\bar x)$ is equivalent to some $\varphi'$ in which quantifiers are over $\mathrm{cl}_{k,m}(\bar x)$; all this is for a random enough model, where $\mathrm{cl}_{k,m}$ is small, still it is not necessarily tiny. The closure does not need
to be in $\mathcal{K}_\infty$.
Context 2.1. In this context, in addition to $\mathfrak{K}$ (defined in 1.1), we have an additional basic operation cl which is a closure operation for $\mathcal{K}$ (see 2.2); so cl is in general not defined by Definition 1.6, and $\le_i$, $\le_s$, $\le_a$ are defined by Definition 2.5 and in general are not the ones defined in Definition 1.4. However, we use $\mathcal{K}_\infty$. [...] $(X, M)$.
(d) the relation $b \in \mathrm{cl}_k(A, M)$ is preserved by isomorphism.
2) We say that the closure operation cl is f.o. definable if (e) below is true (and we assume this when not said otherwise):
(e) the assertion $b \in \mathrm{cl}_k(\{a_0, \dots, a_{l-1}\}, M)$ is f.o. definable in $\mathcal{K}$; that is, there is a formula $\varphi(y, x_0, \dots, x_{l-1})$ such that if $M \in \mathcal{K}$ and $b, a_0, \dots, a_{l-1} \in M$, then $b \in \mathrm{cl}_k(\{a_0, \dots, a_{l-1}\}, M)$ iff $M \models \varphi[b, a_0, \dots, a_{l-1}]$.
3) We say cl is transitive if for every $k$, for some $m$, for every $X \subseteq M \in \mathcal{K}$ we have $\mathrm{cl}_k(\mathrm{cl}_k(X, M), M) \subseteq \mathrm{cl}_m(X, M)$.
Definition 2.3. (1) For $X \subseteq M$ and $k, m \in \mathbb{N}$ we define $\mathrm{cl}_{k,m}(X, M)$ by induction on $m$:
$\mathrm{cl}_{k,0}(X, M) = X$,
$\mathrm{cl}_{k,1}(X, M) = \mathrm{cl}_k(X, M)$,
$\mathrm{cl}_{k,m+1}(X, M) = \mathrm{cl}_{k,1}(\mathrm{cl}_{k,m}(X, M), M)$
(if we write $\mathrm{cl}_{k,m-1}(X, M)$ and $m = 0$, we mean $\mathrm{cl}_{k,0}(X, M) = X$).
(2) We say the closure operation $\mathrm{cl}_k$ is $(\ell, r)$-local when: for $M \in \mathcal{K}$, $X \subseteq M$ and $Z \subseteq M$, if $Z \subseteq \mathrm{cl}_k(X, M)$ and $|Z| \le \ell$, then for some $Y$ we have $Z \subseteq Y$, $|Y| \le r$ and $\mathrm{cl}_k(Y \cap X, M \restriction Y) = Y$.
(3) We say the closure operation cl is local if for every $k$, for some $r$, $\mathrm{cl}_k$ is $(1, r)$-local. We say that cl is simply local if $\mathrm{cl}_k$ is $(1, k)$-local for every $k$.
Remark 2.4. (1) Concerning "possible in $\mathfrak{K}$" (from Definition 2.8 below): in the main case $\mathcal{M}^0_{n,\bar p}$ it is degenerate, i.e. if $\bar a \in N \in \mathcal{K}_\infty$, $B \subseteq N$, then $(N, B, \bar a, k, m)$ is possible. But for the case with the successor relation it has a real role.
(2) Note: if $\mathrm{cl}_k$ is $(1, r)$-local and "$y \in \mathrm{cl}_k(\{x_1, \dots, x_r\}, M)$" is f.o. definable, then for every $m$, $s$, "$y \in \mathrm{cl}_{k,m}(\{x_1, \dots, x_s\}, M)$" is f.o. definable.
(3) Clearly $\mathrm{cl}_{k,m_1}(\mathrm{cl}_{k,m_2}(X, M)) = \mathrm{cl}_{k,m_1+m_2}(X, M)$, and $k_1 \le k_2 \wedge m_1 \le m_2 \Rightarrow \mathrm{cl}_{k_1,m_1}(A, M) \subseteq \mathrm{cl}_{k_2,m_2}(A, M)$.
(4) Note that if $\mathrm{cl}_k$ is $(\ell_1, r_1)$-local and $r_2 \ge m \cdot r_1$ and $\ell_2 \le m \cdot \ell_1$, then $\mathrm{cl}_k$ is $(\ell_2, r_2)$-local.
Definition 2.5 (For our 0-1 context $(\mathfrak{K}, \mathrm{cl})$ with cl as a basic operation).
(1) $A \le_i B$ if and only if $A \subseteq B \in \mathcal{K}_\infty$ [...]
(2) $\le_{pr}$, $\le^m_s$, $\le^i_{k,m}$ are as in 1.4(5), (7), (8) respectively, and $A <_a B$ means $A < B$ and $\neg(A <_s B)$.
(3) $(\mathfrak{K}, \mathrm{cl})$ is weakly nice if [...] for every $A \le C \in \mathcal{K}_\infty$, $A \subseteq C \subseteq N$, $B \subseteq N$ [...],
then $B <_i B \cup C \Rightarrow A <_i C$ (note that $\Leftarrow$ is always true).
(5) We say that $\mathrm{cl}_k$ is $r$-transparent if
$$A \le_i B \ \&\ |B| \le r \ \Rightarrow\ \mathrm{cl}_k(A, B) = B.$$
We say that cl is transparent if for every $r$, for some $k$, we have: $\mathrm{cl}_k$ is $r$-transparent. We say that cl is simply transparent if for every $k$, $\mathrm{cl}_k$ is $k$-transparent.
$^1$Smoothness is not used in [Sh 550], but the closure there has an a priori bound, so the definitions there would be problematic here. See more in [Sh:F192].
Fact 2.6. Assume $\mathfrak{K}$ is a 0-1 context (see 1.1) and cl is defined as in 1.6. Then
($\alpha$) cl is a closure operation for $\mathcal{K}_\infty$ (see Def. 2.2(1)),
($\beta$) cl is f.o. definable (for $\mathcal{K}$),
($\gamma$) $\mathrm{cl}_{k,m}$ as defined in 1.6(c) and as defined in 2.3 are equal,
($\delta$) cl is transitive,
($\varepsilon$) cl is simply local (see Def. 2.3(2),(3)),
($\zeta$) cl is transparent, in fact simply transparent,
($\eta$) $\le_i$ as defined in 2.5(1) and in 1.4 are equal,
($\theta$) if in §1 $\mathfrak{K}$ is weakly nice (see Def. 1.9), then $(\mathfrak{K}, \mathrm{cl})$ is weakly nice (see Def. 2.5(3)).
Proof. ($\beta$) [...] Let $\mathcal{B} = \{(B, \bar b) : B \in \mathcal{K}_\infty$ has $\ell + 1$ elements, $\bar b$ is a sequence of length $\ell + 1$ listing the elements of $B$ without repetitions$\}$. On $\mathcal{B}$ the relation $(B', \bar b') \cong (B'', \bar b'')$ holds if there is an isomorphism $h$ from $B'$ onto $B''$ mapping $\bar b'$ onto $\bar b''$. Now $\cong$ is an equivalence relation on $\mathcal{B}$, and $\mathcal{B}/\!\cong$ is finite. So let $\langle (B_i, \bar b_i) : i < i^* \rangle$ be a set of representatives. For $i < i^*$ let
$$\varphi_i(x_0, \dots, x_k) = \bigwedge \{\varphi(x_0, \dots, x_k) : \varphi \text{ is a basic formula (possibly with dummy variables) and } B_i \models \varphi[b_0, \dots, b_k]\}.$$
Lastly,
$$\varphi(y, x_0, \dots, x_{\ell-1}) = \bigvee_{m<\ell} y = x_m \vee \bigvee \Big\{ (\exists z_0, \dots, z_{k-1}) \Big( \bigwedge_{m<\ell} \bigvee_{t<k} x_m = z_t \wedge \varphi_i(z_0, \dots, z_{k-1}, y) \Big) :$$
$B_i$ has exactly $k + 1$ members and $B_i \restriction \{b^i_t : t < k\} \le_i B_i \Big\}$.
($\gamma$) Trivial.
($\delta$) By 1.16.
($\varepsilon$) Now we will show that $\mathrm{cl}_k$ is $(1, k)$-local: let $M$, $X \subseteq M$ be given, and $Z \subseteq \mathrm{cl}_k(X, M)$ such that $|Z| \le 1$. If $Z = \emptyset$ let $Y = \emptyset$. So assume $Z = \{y\}$. As $y \in Z \subseteq \mathrm{cl}_k(X, M)$, there is a witness set $Y$ for $y \in \mathrm{cl}_k(X, M)$, so $Y \cap X \le_i Y$ and $|Y| \le k$. As $Y \cap X \le_i Y$, clearly $\mathrm{cl}_k(X \cap Y, M \restriction Y) = Y$, and $Z = \{y\} \subseteq Y$ and $|Y| \le k$, so we are done.
($\zeta$) Trivial by the definition of cl (Def. 1.6) and of transparency (Def. 2.5(5)).
($\eta$) First assume $A \le_i B$ by Def. 2.5, and we shall prove that $A \le_i B$ by Def. 1.4. So for some $k$, $m$ we have:
($*$) for every random enough $\mathcal{M}_n$ and embedding $g : B \hookrightarrow \mathcal{M}_n$ we have $g(B) \subseteq \mathrm{cl}_{k,m}(g(A), \mathcal{M}_n)$.
Let $\varepsilon > 0$. Let $\mathcal{M}_n$ be random enough and $f : A \hookrightarrow \mathcal{M}_n$. By ($*$) and 1.16, if $g$ is an embedding of $B$ into $\mathcal{M}_n$ extending $f$, then we have $g(B) \subseteq \mathrm{cl}_{k^m}(g(A), \mathcal{M}_n)$, hence
$|\mathrm{ex}(f, B, \mathcal{M}_n)| \le |\mathrm{cl}_{k^m}(g(A), \mathcal{M}_n)|^{|B \setminus A|}$. Let $\zeta = \varepsilon/(|B \setminus A| + 1)$; now if $\mathcal{M}_n$ is random enough, then by 1.17, for every $g : B \hookrightarrow \mathcal{M}_n$ we have $|\mathrm{cl}_{k^m}(g(A), \mathcal{M}_n)| \le n^{\zeta}$, hence $|\mathrm{ex}(f, B, \mathcal{M}_n)| \le [n^{\zeta}]^{|B \setminus A|} \le n^{\varepsilon}$. As $\varepsilon > 0$ was arbitrary, we have proved that $A \le_i B$ by Def. 1.4.
Next assume $A \le_i B$ by Def. 1.4, and we shall prove that $A \le_i B$ by Def. 2.5. Choose $k = |B|$ and $m = 1$, so $\mathrm{cl}_{k,m} = \mathrm{cl}_k$. So let $\mathcal{M}_n$ be random enough, and $g : B \hookrightarrow \mathcal{M}_n$. Recall that $\mathrm{cl}_k(g(A), \mathcal{M}_n) = \bigcup\{C : C \subseteq \mathcal{M}_n$, $|C| \le k$ and $C \cap g(A) \le_i C\}$, so $g(B)$ can serve as such a $C$, hence $g(B) \subseteq \mathrm{cl}_k(g(A), \mathcal{M}_n)$.
($\theta$) We shall use clause ($\eta$) freely. First assume that $\mathfrak{K}$ is weakly nice by Def. 1.9, and we shall prove that $(\mathfrak{K}, \mathrm{cl})$ is weakly nice by Def. 2.5(3). So assume $A \le B$. We can find $C$ such that $A \le_i C \le B$ and for no $C'$, $A \le_i C' \le B$, $C \subsetneq C'$; such a $C$ exists as $A \le_i A \le B$ and $B$ is finite. By 1.11(2), for no $C'$ do we have $C <_i C' \le B$, hence $C \le_s B$ by Def. 1.4, so it is enough to prove that $C \le_s B$ by Def. 2.5(2), and w.l.o.g. $C \ne B$, so $C <_s B$. Let $k$, $m$ be given. As we are assuming that $\mathfrak{K}$ is weakly nice by Def. 1.9 and
$C <_s B$ by Def. 1.4(4), we have that there is an $\varepsilon \in \mathbb{R}^+$ such that
$$1 = \lim_n \Big[\mathrm{Prob}\Big(\text{if } f_0 : A \hookrightarrow \mathcal{M}_n \text{ then there is } F \text{ with } |F| \ge n^{\varepsilon} \text{ and}$$
(i) $f \in F \Rightarrow f_0 \subseteq f : B \hookrightarrow \mathcal{M}_n$,
(ii) $f' \ne f'' \in F \Rightarrow \mathrm{Rang}(f') \cap \mathrm{Rang}(f'') = \mathrm{Rang}(f_0)\Big)\Big]$.
As $\mathcal{M}_n$ is random enough and $f : A \hookrightarrow \mathcal{M}_n$, there is $F$ as above for $B$ with $|F| \ge n^{\varepsilon}$ [...] $|\mathrm{cl}_l(f(A), \mathcal{M}_n)|$ for $l = k^m$, and by 1.7 we have
$|\mathrm{cl}_{k^m}(f(A), \mathcal{M}_n)| < n^{\varepsilon}$, so $|\mathrm{cl}_{k,m}(f(A), \mathcal{M}_n)| < n^{\varepsilon}$.
As the sequence $\langle \mathrm{Rang}(g) \setminus \mathrm{Rang}(f) : g \in F\rangle$ lists a family of $n^{\varepsilon} > |\mathrm{cl}_{k,m}(f(A), \mathcal{M}_n)|$ pairwise disjoint subsets of $\mathcal{M}_n$, for some $g \in F$ we have: $\mathrm{Rang}(g) \setminus \mathrm{Rang}(f)$ is disjoint from $\mathrm{cl}_{k,m}(f(A), \mathcal{M}_n)$. So $g$ is as required in Def. 2.5(2); so we have finished proving $C \le_s B$ by Def. 2.5, hence we have finished proving that $(\mathfrak{K}, \mathrm{cl})$ is weakly nice.
[...] $\alpha \in (0, 1)_{\mathbb{R}}$ irrational (except $p_1 = 1/2$), and $\mathcal{M}_n$ is the random graph with edge probability $1/2$ if $n$ is odd. Now in §1, [...] for every $\varphi(\bar x)$ there are $k = k_\varphi$ and $m = m_\varphi$ such that
($*$) [...] $\varphi(\bar x)$.
Naturally enough we shall do it by induction on the quantifier depth of $\varphi$, and the non-trivial case is $\varphi(\bar x) = (\exists y)\varphi_1(\bar x, y)$, and we assume $\varphi_1(\bar x, y)$, $k_{\varphi_1}$, $m_{\varphi_1}$ are well defined. So we should analyze the situation: $\mathcal{M}_n$ is random enough, $\bar a \in {}^{\lg(\bar x)}(\mathcal{M}_n)$, $\mathcal{M}_n \models \varphi[\bar a]$, so there is $b \in \mathcal{M}_n$ such that $\mathcal{M}_n \models \varphi_1[\bar a, b]$, and we split into two cases according to the satisfaction of a suitable statement on a suitable neighbourhood of $\bar a$, i.e. $\mathrm{cl}_{k^*,m^*}(\bar a, \mathcal{M}_n)$. If $b$ belongs to a small enough neighbourhood of $\bar a$ this should be clear. If not, we would like to find a suitable situation (really a set of possible situations, with a bound on their number depending just on $\varphi$) to guarantee the existence of an element $b$ with $\mathrm{cl}_{k_1,m_1}(\bar a b, \mathcal{M}_n)$ satisfying $\varphi_1(\bar a, b)$. Now in general the $\mathrm{cl}_{k_1,m_1}$ can be of large cardinality (for us, i.e. depending on $\mathcal{M}_n$). In the nice case we are analyzing, to find such a witness $b$ outside a small neighbourhood of $\bar a$ it will suffice to look at $\mathrm{cl}_{k,m}(\bar a b, \mathcal{M}_n)$, essentially of small cardinality.
Why only essentially? As maybe $\mathrm{cl}_{k_1,m_2}(\bar a, \mathcal{M}_n)$ is already large, so what we should have is something like: $\mathrm{cl}_{k_1,m_1}(\bar a b, \mathcal{M}_n) \setminus \mathrm{cl}_{k_1,m_2}(\bar a, \mathcal{M}_n)$ can be replaced by a set of small cardinality. For this we need [...] $(B', b)_{b \in B}$ to be similar enough to $(B'', b)_{b \in B}$ (in particular when $B'$ [...]), so we need to express in such a situation something like "$B'$ exists over $B''$" (we can say such a $B$ exists by clause (b) of 2.8(4), using quantifiers on $\mathrm{cl}_{k,m}(\bar a, \mathcal{M}_n)$). Well, $B' \le_s B' \cup B_2$, where $B_2 = \mathrm{cl}_{k_1,m_2}(\bar a, \mathcal{M}_n)$, obeys a version of the addition theorem, and secondly $B'$ sits in $\mathcal{M}_n$ in a way where the closure is right. All this is carried out in Def. 2.8(4) (of "good", saying: we have a tuple in a situation which exists whenever a copy of $B$ as above exists) and 2.9 (when there are $B$ etc. as above). The proof is carried out in 2.16.
(5) In defining "good", by demanding the existence of the embedding $g : B^* \hookrightarrow \mathcal{M}_n$ extending $f : B \hookrightarrow \mathcal{M}_n$, we demand only little of $f$: it is an embedding. We may impose requirements of the form $\mathrm{cl}_{k_i,m_i}(f(B_i), \mathcal{M}_n) \subseteq f(B)$ or $\mathrm{cl}_{k_i,m_i}(f(B_i), \mathcal{M}_n) \cap f(B) = f(C_i)$ for some $B_i, C_i \subseteq B$. This makes it easier for a tuple to be good, thus giving a version of "almost nice" covering more cases. In other possible strengthenings we do not replace $B'$ by a $B''$ of bounded cardinality, but look at it as a family of possible ones, all similar in
the relevant sense. On the other hand we may prefer the simpler versions, which are pursued in 2.13, 2.17.
(6) Note that if cl^k is r-transparent and A ⊆ M ∈ K then cl^k(A, M) ⊇ ⋃{C : C ⊆ M, C ∩ A ≠ ∅ and |C| ≤ r}. [Why? If C ⊆ M, C ∩ A ≠ ∅ and |C| ≤ r, then: first, cl^k(C ∩ A, C) = C as cl^k is r-transparent; second, cl^k(C ∩ A, C) ⊆ cl^k(C ∩ A, M) by (b)(ii) of Def. 2.2(1); third, cl^k(C ∩ A, M) ⊆ cl^k(A, M) as C ∩ A ⊆ A ⊆ M, by clause (a) of Def. 2.2(1); together we are done.] Note that if cl^k is (1, r)-local we can prove the other inclusion. So obviously if (K, cl) is simply local and simply transparent (and τ_K is finite, or at least locally finite, of course), then cl is f.o. definable. If we omit "simply" we can eliminate the assumption "cl is f.o. definable" in 2.16, 2.17.
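A closure operation as in remark (6) is a monotone operation on subsets of a model, and any such operation can be computed as the fixpoint of a one-step rule. The sketch below is a loose illustration of that shape only, not the paper's cl: the names `closure` and `step`, and the toy "add a vertex adjacent to at least two current members" rule, are hypothetical.

```python
def closure(A, M, step):
    """Generic monotone fixpoint: repeatedly add step(X, M), the
    one-step additions to the current set X inside M, until
    nothing new appears. `step` is a hypothetical stand-in for
    whatever one-step rule a concrete closure operation uses."""
    X = set(A)
    while True:
        new = step(X, M) - X
        if not new:
            return X
        X |= new

# Toy instance: M is a graph given as a dict of adjacency sets; one
# step adds every vertex adjacent to at least two current members,
# so a small "rich" set meeting A ends up inside the closure.
M = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
step = lambda X, M: {v for v in M if len(M[v] & X) >= 2}
print(closure({0, 1}, M, step))
```

The point of the toy rule is only that locality (each step looks at boundedly many elements) is what makes such a closure first-order definable when it stabilizes in boundedly many rounds.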
Definition 2.8. (1) We say (N, B, B̄, k) is possible for (K, cl) if:
(a) B̄ = ⟨B_i : i < ℓg(B̄)⟩, B_i ∈ N ∩ K_∞, B ⊆ N and cl^k(B_i, N) ⊆ B_{i+1} for i < ℓg(B̄) − 1;
(b) it is not true that: for every random enough M_n, for no embedding f : N → M_n do we have: for i < ℓg(B̄) − 1, cl^k(f(B_i), M_n) ⊆ f(cl^k(B_i, N)) ∪ cl^k(f(B), M_n).
(2) If we write (N, C, B, k) we mean (N, C, ⟨B, cl^k(B, N)⟩, k).
(3) We say (N, B, ā, k, m) is possible for K if (N, B, B̄, k) is possible for K where B̄ = ⟨cl^{k,i}(ā, N) : i ≤ m⟩.
(4) We say that the tuple (B⁺, B, B_0, B_1, k, m_1, m_2) is good for (K, cl) if
(a) B ⊆ B⁺ and B_0 ⊆ B_1 ⊆ B⁺, and
(b) for every random enough M_n and embedding f : B → M_n there is an embedding g : B⁺ → M_n extending f such that:
(α) g(B⁺) ∩ cl^{k,m_2}(f(B), M_n) = f(B),
(β) cl^{k,m_1}(g(B_0), M_n) ⊆ g(B_1) ∪ cl^{k,m_2}(g(B), M_n),
(γ) M_n ↾ g(B⁺) ≤ M_n, and M_n ↾ g(B⁺) is in free amalgamation with M_n ↾ cl^{k,m_2}(f(B), M_n) over M_n ↾ f(B).
Definition 2.9. The 0-1 context K with closure cl (or the pair (K, cl), or K when cl is understood) is almost nice if it is weakly nice and
(A) the universal demand: for every k, m_0, ℓ and Δ there are m* = m*(k, m_0, ℓ, Δ) > m_0, k* = k*(k, m_0, ℓ, Δ) ≥ k and t = t(k, m_0, ℓ, Δ) such that, for every random enough M_n, we have:
if ā ∈ ^{ℓ}|M_n| and b ∈ M_n \ cl^{k*,m*}(ā, M_n),
then there are m_2 ∈ [m_0, m*] and m_1 ∈ [m_2, m*] and B ⊆ cl^{k,m_1}(ā, M_n) and B⁺ ⊆ M_n such that:
(α) |B| ≤ t and ā ⊆ B,
(β) B⁺ = [cl^{k,m_0}(āb, M_n) \ cl^{k,m_2}(B, M_n)] ∪ B, so necessarily b ∈ B⁺ and ā ⊆ B⁺; or at least: for every first order formula φ = φ(…, x_a, …)_{a∈B} of quantifier depth ≤ Δ there is B* (so b ∈ B*) and B⁺ ⊨ φ(…, a, …)_{a∈B} iff B* ⊨ φ(…, a, …)_{a∈B},
(γ) M_n ↾ B⁺ ≤ M_n, and M_n ↾ B⁺ is in free amalgamation with M_n ↾ cl^{k,m_2}(B, M_n) over M_n ↾ B,
(δ) (B⁺, B, āb, B⁺ ∩ cl^{k,m_0}(āb, M_n), k, m_0, m_2) is good for (K, cl), or at least for some B*, B** we have
(i) (B*, B, āb, B**, k, m_1, m_2) is good for (K, cl),
(ii) (B⁺, cl^{k,m_0}(āb, M_n) ∩ B⁺, b, c)_{c∈B} ≡ (B*, B**, b, c)_{c∈B},
(ε) for m ≤ m_0 we have cl^{k,m}(āb, B⁺) = B⁺ ∩ cl^{k,m}(āb, M_n).
Definition 2.10. If in Def. 2.9 above k* … .
Remark 2.11. (2) Why is necessarily b ∈ B⁺ \ B? Because b ∈ Rang(āb) ⊆ cl^{k,m_0}(āb, M_n), and cl^{k,m_2}(B, M_n) ⊆ cl^{k,m_2}(cl^{k,m_1}(ā, M_n), M_n) ⊆ cl^{k,m_1+m_2}(ā, M_n) ⊆ cl^{k,m*}(ā, M_n) ⊆ cl^{k*,m*}(ā, M_n), and b does not belong to the latter.
(3) Why do we use cl^{k,m_2}(B, M_n)? Part of our needs is that this set is definable from B without b.
(4) In clause (δ) of Definition 2.9, clause (A), there is one B* such that we have B ≤_s B* and (B⁺, c)_{c∈B} ≡ (B*, c)_{c∈B}. So we could have phrased clause (ii) of (A)(δ) in the same way as clause (β).
In our main case, also the following variant of the property applies (see
2.18 below).
Definition 2.12. 1) We say that the quadruple (N, B, ⟨B_0, B_1⟩, k) is simply good for (K, cl) if B, B_0, B_1 ⊆ N ∈ K_∞ and, for every random enough M_n and every embedding f : B → M_n, there is an embedding g : N → M_n extending f such that:
(i) g(N) ∩ cl^k(f(B), M_n) = f(B),
(ii) g(N) is in free amalgamation with cl^k(f(B), M_n) over f(B),
(iii) cl^k(g(B_0), M_n) ⊆ g(B_1) ∪ cl^k(g(B), M_n)
(natural but not used is cl^k(g(B_0), M_n) ∩ g(N) = g(cl^k(B_0, N))). If we write B_0 instead of ⟨B_0, B_1⟩, we mean B_1 = N.
2) We say that (N, B, ⟨B_0, B_1⟩, k, k′) is simply′ good if the same holds with (iii) replaced by: cl^{k′}(g(B_0), M_n) ⊆ g(B_1) ∪ cl^k(g(B), M_n).
Definition 2.13. 1) The 0-1 context with closure (K, cl) is simply almost nice if it is weakly nice and
(A) the universal demand: for every k, ℓ and Δ there are m* = m*(k, ℓ, Δ), k* = k*(k, ℓ, Δ) ≥ k and t = t(k, ℓ, Δ) such that for every random enough M_n we have:
if ā ∈ ^{ℓ}|M_n| and b ∈ M_n \ cl^{k*,m*}(ā, M_n),
then there are B ⊆ cl^{k*,m*}(ā, M_n) and B⁺ ⊆ M_n such that:
(α) |B| ≤ t, ā ⊆ B and cl^k(B, M_n) ⊆ cl^{k*,m*}(ā, M_n),
(β) B⁺ = [cl^k(āb, M_n) \ cl^k(B, M_n)] ∪ B (or at least B⁺ ⊇ [cl^k(āb, M_n) \ cl^k(B, M_n)] ∪ B),
(γ) B ≤_s B⁺ (so B⁺ ∈ K_∞), or at least: for every f.o. formula φ of quantifier depth ≤ Δ there is B* (so b ∈ B*) and
B⁺ ⊨ φ(b, …, c, …)_{c∈B} iff B* ⊨ φ(b, …, c, …)_{c∈B}
(or even, but actually equivalently, (B⁺, b, …, c, …)_{c∈B} ≡_Δ (B*, b, …, c, …)_{c∈B}),
(δ) M_n ↾ B⁺ ≤ M_n, and M_n ↾ B⁺ is in free amalgamation with M_n ↾ cl^k(B, M_n),
(ε) for some B* and b′ ∈ B* we have:
(i) (B*, B, āb′, k) is simply good,
(ii) (M_n ↾ B⁺, b, …, c, …)_{c∈B} ≡ (B*, b′, …, c, …)_{c∈B}.
2) If above always … and k ∈ N, then (B⁺, B, B*, k) is simply good.
(C) K_∞ = K (or at least if A ⊆ M ∈ K then … ∈ K_∞).
Similarly in Definition 2.9 for "nice".
Remark 2.14. 1) In 2.13(1) we can weaken the demands (and call (K, cl) simply′ almost nice): with k′ = k′(k, ℓ, Δ) ∈ N, replace in clause (α) cl^k(B, M_n) by cl^{k′}(B, M_n) and replace (ε) by
(ε)′ for some B* and b′ we have:
(i) (B*, B, āb′, k, k′) is simply′ good,
(ii) (B⁺, b, …, c, …)_{c∈B} ≡ (B*, b′, …, c, …)_{c∈B}.
The parallel change in 2.13(3) (that is, defining simply′ nice) is
(B)′ with k′ = k′ …, (B⁺, B, B*, k, k′) is simply′ good.
This does not change the conclusions, i.e. (2.13, 2.17, 2.18, 2.19).
2) We can change Definition 2.9 as we have changed Definition 2.13(1), in 2.13(3) and/or in 2.14(1).
3) If cl is transparent we can without loss of generality demand in 2.13(1)(A) that m* = m*(k, ℓ, Δ) …, as if cl^k(ā, M) ⊆ cl^{k*,m*}(ā, M) whenever ā ∈ ^{ℓ}|M|, M ∈ K, then k* will do.
4) We can omit clause (δ) in Def. 2.13(1), but it is natural. Similarly in Def. 2.9 (i.e. those omissions do not change the later claims).
Lemma 2.15 below (the addition theorem, see [?] or [Gu], and see more in [Sh 463]) is an immediate corollary of the well-known addition theorem; this is the point where Δ is used.
Lemma 2.15. For a finite vocabulary τ and a f.o. formula (in τ) φ(z̄, z̄_1, z̄_2), z̄ = ⟨z_1, …, z_s⟩, there are i* ∈ N and τ-formulas φ^1_i(z̄, z̄_1) = φ^1_{i,φ}(z̄, z̄_1), φ^2_i(z̄, z̄_2) = φ^2_{i,φ}(z̄, z̄_2) for i < i*, each of quantifier depth at most that of φ, such that: if
N_0 ⊆ N_1, N_0 ⊆ N_2, N_1 ∩ N_2 = N_0, N_1 ∪ N_2 = N, with N_1, N_2 freely amalgamated over N_0 inside N, the set of elements of N_0 is {c_1, …, c_s}, c̄ = ⟨c_1, …, c_s⟩, and
c̄_1 ∈ ^{ℓg(z̄_1)}(N_1) and c̄_2 ∈ ^{ℓg(z̄_2)}(N_2),
then:
N ⊨ φ[c̄, c̄_1, c̄_2] iff for some i < i*, N_1 ⊨ φ^1_i[c̄, c̄_1] and N_2 ⊨ φ^2_i[c̄, c̄_2].
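As a toy illustration of the lemma (our example, not from the paper), take graphs and the formula saying that two vertices on opposite sides of an amalgamation have a common neighbour:

```latex
Take $\tau=\{E\}$ (graphs), $s=1$, and
\[ \varphi(z_1,x_1,x_2) \;=\; (\exists y)\,\bigl[E(x_1,y)\wedge E(y,x_2)\bigr]. \]
Suppose $N=N_1\cup N_2$, $N_1\cap N_2=N_0=\{c_1\}$, with no edges between
$N_1\setminus N_0$ and $N_2\setminus N_0$ (free amalgamation over $N_0$).
For $a_1\in N_1\setminus N_0$ and $a_2\in N_2\setminus N_0$, any path
$a_1$--$y$--$a_2$ must have $y=c_1$, so
\[ N\models\varphi[c_1,a_1,a_2] \iff
   N_1\models E(a_1,c_1)\ \text{and}\ N_2\models E(c_1,a_2), \]
i.e.\ $\varphi$ decomposes into the single pair
$(\varphi^1,\varphi^2)=\bigl(E(x_1,z_1),\,E(z_1,x_2)\bigr)$, evaluated
separately in $N_1$ and $N_2$; in general the lemma provides a finite
disjunction of such pairs.
```

The point of the example is that the witness y can be sorted by which side of the amalgamation it falls on, which is exactly how the finite disjunction over i < i* arises.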
Main Lemma 2.16 (Context as above). Assume (K, cl) is almost nice and cl is f.o. definable.
1) Let φ(x̄) be a f.o. formula in the vocabulary τ_K. Then for some m ∈ N, k ∈ N and f.o. formula φ*(x̄) we have:
(⊛)_φ for every random enough M_n and ā ∈ ^{ℓg(x̄)}|M_n|:
M_n ⊨ φ(ā) iff M_n ↾ cl^{k,m}(ā, M_n) ⊨ φ*(ā).
2) Moreover, the number of alternations of quantifiers of φ* is bounded if, for simplicity, we consider y ∈ cl^{k,m}(x̄, M) as an atomic formula when computing the q.d. of φ*.
Proof. We shall ignore (2), which is not used and is obvious if we understand the proof below. We prove the statement in part (1) by induction on r = q.d.(φ(x̄)), and first note (by clause (e) of Def. 2.2, as y ∈ cl^{k,m}(x̄) is f.o. definable in K) that (⊛)_φ implies (⊛)⁺_φ, its variant with larger closure parameters, possibly changing φ*.
Case 1: φ atomic. Take m* = 0, k* = 0 (or whatever), so cl^{k*,m*}(ā, M_n) = ā for our k*, m*. Let φ* = φ. Now as ā ⊆ cl^{k,m}(ā, M_n) ⊆ M_n, we have M_n ⊨ φ(ā) iff M_n ↾ cl^{k,m}(ā, M_n) ⊨ φ*(ā), as required.
Case 2: φ is a Boolean combination of atomic formulas and formulas of the form ∃y φ′(x̄, y); this follows from Cases 1 and 3.
Case 3: φ(x̄) = ∃y φ_1(x̄, y). By the induction hypothesis there are k_1, m_1 and φ^1_1 witnessing (⊛)_{φ_1}; let φ^2_1 witness (⊛)⁺_{φ_1}. Let m* = m*(k_1, m_1, ℓg(x̄), Δ), k* = k*(k_1, m_1, ℓg(x̄), Δ), t = t(k_1, m_1, ℓg(x̄), Δ) be as guaranteed in Def. 2.9, with m := m* + m_1.
So it is enough to prove the following two statements:
Statement 1: There is φ*_1 such that for every random enough M_n, M_n ↾ cl^{k,m}(ā, M_n) ⊨ φ*_1(ā) iff
(⊗)_1 M_n ⊨ "there is b ∈ cl^{k,m}(ā, M_n) such that φ_1(ā, b) holds"
(i.e. b belongs to a small enough neighbourhood of ā).
Statement 2: There is φ*_2 such that for every random enough M_n, M_n ↾ cl^{k,m}(ā, M_n) ⊨ φ*_2(ā) iff
(⊗)_2 M_n ⊨ "there is b ∈ M_n \ cl^{k,m}(ā, M_n) such that φ_1(ā, b) holds"
(i.e. b is far from ā)
(note: (⊗)_1, (⊗)_2 are complementary, but it is enough that always at least one of them holds).
Note that, as y ∈ cl^{k,m} is definable and m = m* + m_1 ≥ m*, by 2.2 clause (e) we can in (⊗)_2 replace m by m*, changing φ*_2 to φ*_{2.5}.
Clearly these two statements are enough, and φ*_1(x̄) ∨ φ*_{2.5}(x̄) is as required.
Proof of statement 1:
Easy: recalling that k* ≥ k_1 by clause (A) of Def. 2.9, use the induction hypothesis, as (assuming b ∈ cl^{k_1,m*}(ā, M_n))
cl^{k_1,m_1}(āb, M_n) ⊆ cl^{k,m*+m_1}(ā, M_n) = cl^{k,m}(ā, M_n),
and the fact that the closure is sufficiently definable.
Proof of statement 2:
We will use a series of equivalent statements; ⊛_1 is (⊗)_2.
⊛_2: there are m_2 ∈ [m_1, m*], b, B, B⁺ and B* such that:
(α) b ∈ M_n, b ∉ cl^{k,m}(ā, M_n), ā ⊆ B ⊆ cl^{k_1,m_1}(ā, M_n), |B| ≤ t,
(β) B⁺ = B ∪ [cl^{k_1,m_1}(āb, M_n) \ cl^{k_1,m_2}(B, M_n)] [hence B = B⁺ ∩ cl^{k_1,m_2}(B, M_n)], and
(γ) B ≤_s B⁺ and B* = B⁺, or at least (B⁺, b, c)_{c∈B} ≡ (B*, b, c)_{c∈B} (see 2.11(4)), and
(δ) M_n ↾ B⁺ is in free amalgamation with M_n ↾ cl^{k_1,m_2}(B, M_n),
(ε) (B*, B, āb, B* ∩ cl^{k_1,m_0}(āb, M_n), k_1, m_0, m_2) is good,
(ζ) for m ≤ m_1 we have cl^{k_1,m}(āb, B⁺) = B⁺ ∩ cl^{k_1,m}(āb, M_n),
and
⊞_2: M_n ⊨ φ_1(ā, b).
(⊕)_2: ⊛_1 ≡ ⊛_2.
Why? The implication ⇐ is trivial as ⊞_2 is included in ⊛_2; the implication ⇒ holds by clause (A) in the definition of almost nice, 2.9, except b ∉ cl^{k,m}(ā, M_n), which is explicitly demanded in (⊗)_2.
⊛_3: like ⊛_2 but replacing ⊞_2 by
⊞_3: M_n ↾ cl^{k_1,m_1}(āb, M_n) ⊨ φ^1_1(ā, b).
(⊕)_3: ⊛_2 ≡ ⊛_3.
Why? By the induction hypothesis.
⊛_4: like ⊛_3 replacing ⊞_3 by
⊞_4: M_n ↾ [B⁺ ∪ cl^{k_1,m_2}(B, M_n)] ⊨ φ^2_1(ā, b).
(⊕)_4: ⊛_3 ≡ ⊛_4.
Why? By (⊛)⁺_{φ_1} in the beginning of the proof, the definition of B⁺ and the choice of φ^2_1. (Let ⊞_3 be true. As by the choice of B⁺, B above, cl^{k_1,m_1}(āb, M_n) ⊆ B⁺ ∪ cl^{k_1,m_2}(B, M_n) ⊆ M_n, we have M_n ⊨ φ_1(ā, b) iff M_n ↾ [B⁺ ∪ cl^{k_1,m_2}(B, M_n)] ⊨ φ^2_1(āb), by (⊛)⁺_{φ_1}. So (⊕)_4 holds.)
For notational simplicity we assume B ≠ ∅, and similarly assume ā is with no repetitions, and we shall apply Lemma 2.15 several times.
First, for m ≤ m_1 we apply 2.15 to the case s = t, z̄ = ⟨z_1, …, z_t⟩, z̄_1 = ⟨z^1_1, z^1_2⟩, z̄_2 empty, and the formula z^1_2 ∈ cl^{k_1,m}(z̄, z^1_1), and get i_{1,m} ∈ N and formulas θ^1_{1,m,i}(z̄, z^1_1, z^1_2) and θ^2_{1,m,i}(z̄) for i < i_{1,m}. Let u_1 = {(m, i) : m ≤ m_1, i < i_{1,m}}.
Second, for m ≤ m_1 we apply 2.15 to the case s = t, z̄_2 = ⟨z^2_1⟩, z̄_1 = ⟨z^1_1⟩, z̄ = ⟨z_1, …, z_t⟩, and the formula z^2_1 ∈ cl^{k_1,m}(z̄, z^1_1), and get i_{2,m} ∈ N and formulas θ^1_{2,m,i}(z̄, z̄_1) and θ^2_{2,m,i}(z̄, z^2_1) for i < i_{2,m}.
Let τ⁺ = τ_K ∪ {P_1, P_2}, with P_1, P_2 new unary predicates; for ψ ∈ L[τ⁺] let ψ^{[P_ℓ]} be ψ with its quantifiers restricted to P_ℓ. Let ψ* = ψ_1 ∧ ψ_2 ∧ ψ_3, where
ψ_1 := φ^2_1(z_1, …, z_{ℓg(x̄)}, z^1_1),
ψ_2 := ⋀_{m≤m_1} (∀y)[ y ∈ cl^{k_1,m}(⟨z_1, …, z_{ℓg(x̄)}⟩⌢⟨z^1_1⟩) ≡ ψ^{•,1}_{2,m}(z_1, …, z_t, z^1_1, y) ∨ ψ^{•,2}_{2,m}(z_1, …, z_t, z^1_1, y) ],
where
ψ^{•,1}_{2,m}(z_1, …, z_t, z^1_1, y) := ⋁_{i<i_{1,m}} ( θ^1_{1,m,i}(z_1, …, z_t, z^1_1, y)^{[P_1]} ∧ θ^2_{1,m,i}(z_1, …, z_t)^{[P_2]} ),
ψ^{•,2}_{2,m}(z_1, …, z_t, z^1_1, y) := ⋁_{i<i_{2,m}} ( θ^1_{2,m,i}(z_1, …, z_t, z^1_1)^{[P_1]} ∧ θ^2_{2,m,i}(z_1, …, z_t, y)^{[P_2]} ),
and let
ψ_3 := (∀y)[ P_1(y) → ( ⋁_{ℓ=1}^{t} y = z_ℓ ∨ ( y ∈ cl^{k_1,m_1}(⟨z_1, …, z_{ℓg(x̄)}⟩⌢⟨z^1_1⟩) ∧ y ∉ cl^{k_1,m_2}(z_1, …, z_{ℓg(x̄)}) ) ) ].
So we have defined ψ* = ψ*(⟨z_1, …, z_t⟩, z^1_1). We apply 2.15 once more, to φ(z̄, z̄_1, z̄_2) = ψ*(⟨z_1, …, z_t⟩, z^1_1) with z̄_1 = ⟨z^1_1⟩, and get i* and φ^1_{3,i}(z̄, z̄_1) and φ^2_{3,i}(z̄, z̄_2) as there. Let
⊛_5: like ⊛_4 but replacing ⊞_4 by
⊞_5: letting c_1, …, c_t list B, possibly with repetitions but such that ⟨c_1, …, c_{ℓg(x̄)}⟩ = ā, and letting P_1 = B⁺ and P_2 = cl^{k_1,m_2}({c_1, …, c_t}, M_n), we have
(⊙) (M_n ↾ (P_1 ∪ P_2), P_1, P_2) ⊨ ψ*[c_1, …, c_t, b] (the model is a τ⁺-model).
Now
(⊕)_5: ⊛_4 ≡ ⊛_5.
Why? Look at what the statements mean, recalling M_n ↾ P_1 = M_n ↾ B⁺ and M_n ↾ P_2.
Next let
⊛_6: like ⊛_5 but replacing ⊞_5 by
⊞_6: letting c_1, …, c_t list B, possibly with repetitions but such that ⟨c_1, …, c_{ℓg(x̄)}⟩ = ā, and letting P_1 = B⁺ and P_2 = cl^{k_1,m_2}({c_1, …, c_t}, M_n), there is i < i* such that:
(i) (M_n ↾ P_1, P_1, P_2 ∩ P_1) ⊨ φ^1_{3,i}[⟨c_1, …, c_t⟩, b],
(ii) (M_n ↾ P_2, P_1 ∩ P_2, P_2) ⊨ φ^2_{3,i}[⟨c_1, …, c_t⟩].
Now
(⊕)_6: ⊛_5 ≡ ⊛_6.
Why? By the choice of φ^1_{3,i}, φ^2_{3,i} (i < i*).
However, in the two τ⁺-models appearing in ⊞_6 the predicates P_1, P_2 are interpreted in a trivial way: as the whole universe of the model or as {c_1, …, c_t}. So let:
(a) φ^1_{4,i}(z_1, …, z_t, y) be φ^1_{3,i}(z_1, …, z_t, y) with each atomic formula of the form P_1(σ) or P_2(σ) replaced by σ = σ or ⋁_{r=1}^{t} σ = z_r, respectively,
(b) φ^2_{4,i}(z_1, …, z_t) be φ^2_{3,i}(z_1, …, z_t) with each atomic formula of the form P_1(σ) or P_2(σ) replaced by ⋁_{r=1}^{t} σ = z_r or σ = σ, respectively.
So let (recall that B* is mentioned in ⊛_2 as a replacement for B⁺):
⊛_7: like ⊛_6 but replacing ⊞_6 by
⊞_7: letting c_1, …, c_t list B, possibly with repetitions but such that ⟨c_1, …, c_{ℓg(x̄)}⟩ = ā, there is i < i* such that
(i) M_n ↾ B* ⊨ φ^1_{4,i}[⟨c_1, …, c_t⟩, b] and
(ii) M_n ↾ cl^{k_1,m_2}({c_1, …, c_t}, M_n) ⊨ φ^2_{4,i}(⟨c_1, …, c_t⟩).
(⊕)_7: ⊛_6 ≡ ⊛_7.
Why? By the choice of the φ^1_{4,i}, φ^2_{4,i} and the property of B* (stated in ⊛_2).
Let T = {(N, c_1, …, c_t) : N ∈ K_∞ with set of elements {c_1, …, c_t}}, and let ⟨(N_j, c^j_1, …, c^j_t) : j < j*⟩ list the members of T up to isomorphism. For each j < j* and i < i* choose, if possible, (N_{j,i}, c^j_1, …, c^j_t, b^j_i) such that:
(i) N_j ≤_s N_{j,i} (in K_∞),
(ii) b^j_i ∈ N_{j,i} \ N_j,
(iii) N_{j,i} ⊨ φ^1_{4,i}(⟨c^j_1, …, c^j_t⟩, b^j_i), and
(iv) (N_{j,i}, N_j, ⟨c^j_1, …, c^j_{ℓg(x̄)}⟩⌢⟨b^j_i⟩, k, m_0, m_2) is good for K.
Let
w = {(i, j) : i < i*, j < j* and (N_{j,i}, c^j_1, …, c^j_t, b^j_i) is well defined}.
Let
⊛_8: there are m_2 ≤ m* and m_1 with m* ≥ m_1 ≥ m_2, and there are b, B such that:
ā ⊆ B ⊆ cl^{k*,m_2}(ā, M_n), |B| ≤ t(k_1, m_1, ℓg(x̄)), b ∉ cl^{k*,m*}(ā, M_n), b ∈ M_n, and
⊞_8: for some c_1, …, c_t listing B such that ā = ⟨c_1, …, c_{ℓg(x̄)}⟩ there are i < i*, j < j* as in ⊞_7; let j < j* be such that (M_n ↾ B, c_1, …, c_t) ≅ (N_j, c^j_1, …, c^j_t). The main point is that B* can be replaced by N_{j,i}, so ⊛_8 is equivalent to ⊛_7 and can be expressed as a f.o. formula; this finishes the induction and so the proof of part (1). □_{2.16}
Lemma 2.17. 1) Assume (K, cl) is simply almost nice and cl is f.o. definable. Then for every f.o. formula φ(x̄) there are k and φ*(x̄) such that:
(⊙) for every random enough M_n and ā ∈ ^{ℓg(x̄)}|M_n|:
M_n ⊨ φ(ā) if and only if M_n ↾ cl^{k}(ā, M_n) ⊨ φ*(ā).
2) The number of alternations of quantifiers of φ* is bounded, as in 2.16(2).
Remark 2.18. (1) Of course we do not need to assume that the closure operation is definable; it is enough if there is a variant cl′ which is definable and such that for every k, m there are k_1, m_1, k_2, m_2 with, always, cl^{k,m}(A, M) ⊆ cl′^{k_1,m_1}(A, M) ⊆ cl^{k_2,m_2}(A, M).
(2) Similarly in 2.16 (using Def. 2.10).
(3) We can weaken "simply almost nice" as in Remark 2.14(1) and still part (1) is true, with essentially the same proof.
(4) The proof of 2.17 is somewhat simpler than the proof of 2.16.
Proof. 1) We prove the statement by induction on r = q.d.(φ(x̄)). First note (by clause (e) of 2.2)
(⊛)⁺ that (⊛)_φ implies its variant with larger closure parameters.
Cases 1 and 2 are as in the proof of 2.16.
Case 3: φ(x̄) = ∃y φ_1(x̄, y), with q.d.(φ_1) = r − 1. Let m* = m*(k_1, ℓg(x̄), Δ), k* = k*(k_1, ℓg(x̄), Δ), t = t(k_1, ℓg(x̄), Δ) (with φ^1_1 defined below), and let k** be such that:
(⊙)_1 |A| ≤ ℓg(x̄) + 1 and A ⊆ N ∈ K imply cl^{k_1}(cl^{k*,m*}(A, N), N) ⊆ cl^{k**}(A, N).
Let φ^1_1(x̄, y) be such that it witnesses (⊛)_{φ_1}, and let φ^2_1(x̄, y) be such that it witnesses (⊛)⁺_{φ_1}.
It is enough to prove the following two statements (see below):
Statement 1: There is φ*_1 such that for every random enough M_n, M_n ↾ cl^{k**}(ā, M_n) ⊨ φ*_1(ā) iff
(⊗)_1 M_n ⊨ "there is b ∈ cl^{k*,m*}(ā, M_n) such that φ_1(ā, b) holds".
Statement 2: There is φ*_2 such that for every random enough M_n, M_n ↾ cl^{k*,m*}(ā, M_n) ⊨ φ*_2(ā) iff
(⊗)_2 M_n ⊨ "there is b ∈ M_n \ cl^{k*,m*}(ā, M_n) such that φ_1(ā, b) holds"
(note: (⊗)_1, (⊗)_2 are complementary, but it is enough that always at least one holds).
Note that, as y ∈ cl^{k*,m*} is definable, we can in (⊗)_2 replace cl^{k*,m*} by cl^{k**}, changing φ*_2 to φ*_{2.5} (just as from (⊛) we have deduced (⊛)⁺).
Clearly these two statements are enough, as φ*_1(x̄) ∨ φ*_{2.5}(x̄) is as required.
Proof of statement 1:
Easy: if b ∈ cl^{k*,m*}(ā, M_n) then cl^{k_1}(āb, M_n) ⊆ cl^{k_1}(cl^{k*,m*}(ā, M_n), M_n) ⊆ cl^{k**}(ā, M_n) by (⊙)_1; hence, by the induction hypothesis and by the fact that the closure is sufficiently definable, the existence of such a b satisfying φ^1_1(ā, y) can be expressed in M_n ↾ cl^{k**}(ā, M_n).
Proof of statement 2:
We will use a series of equivalent statements; ⊛_1 is (⊗)_2.
⊛_2: there are b, B and B⁺, B* such that:
(α) b ∈ M_n, b ∉ cl^{k*,m*}(ā, M_n),
(β) ā ⊆ B ⊆ cl^{k*,m*}(ā, M_n); moreover cl^{k_1}(B, M_n) ⊆ cl^{k*,m*}(ā, M_n), and |B| ≤ t,
(γ) B⁺ = B ∪ [cl^{k_1}(āb, M_n) \ cl^{k_1}(B, M_n)], and
(δ) B ≤_s B* and: B* = B⁺ or just (B⁺, b, c)_{c∈B} ≡ (B*, b, c)_{c∈B} (see 2.11(4)), and
(ε) M_n ↾ B⁺ is in free amalgamation with M_n ↾ cl^{k_1}(B, M_n) (and so B = B⁺ ∩ cl^{k_1}(B, M_n)), and
(ζ) (B*, B, āb, k_1) is simply good,
(η) cl^{k_1}(āb, B⁺) = B⁺ ∩ cl^{k_1}(āb, M_n); actually this follows from clauses (γ), (ε),
and
⊞_2: M_n ⊨ φ_1(ā, b).
(⊕)_2: ⊛_1 ≡ ⊛_2.
Why? The implication ⇐ is trivial as ⊞_2 is included in ⊛_2; the implication ⇒ holds by clause (A) in the definition 2.13 of simply almost nice.
⊛_3: like ⊛_2 but replacing ⊞_2 by
⊞_3: M_n ↾ cl^{k_1}(āb, M_n) ⊨ φ^1_1(ā, b).
(⊕)_3: ⊛_2 ≡ ⊛_3.
Why? By the induction hypothesis and our choices.
⊛_4: like ⊛_3 replacing ⊞_3 by
⊞_4: M_n ↾ [B⁺ ∪ cl^{k_1}(B, M_n)] ⊨ φ^2_1(ā, b).
(⊕)_4: ⊛_3 ≡ ⊛_4.
Why? By (⊛)⁺_{φ_1} in the beginning of the proof, the requirements on B⁺ and the choice of φ^2_1.
For notational simplicity we assume B ≠ ∅, and similarly assume ā has no repetitions, and apply Lemma 2.15 with the vocabulary τ_K to the case s = t, z̄_2 empty, z̄_1 = ⟨z^1_1⟩, z̄ = ⟨z_1, …, z_t⟩, and φ(z̄, z̄_1, z̄_2) = φ^2_1(⟨z_1, …, z_{ℓg(x̄)}⟩, z^1_1), and get i*, φ^1_i(z̄, z̄_1) and φ^2_i(z̄) for i < i* as there; in particular the quantifier depth of φ^1_i, φ^2_i for i < i* is ≤ q.d.(φ^2_1).
Next let
⊛_5: like ⊛_4 but replacing ⊞_4 by
⊞_5: letting c_1, …, c_t list B, possibly with repetitions but such that ⟨c_1, …, c_{ℓg(x̄)}⟩ = ā, there is i < i* such that:
(i) B* ⊨ φ^1_i[⟨c_1, …, c_t⟩, b],
(ii) cl^{k_1}(B, M_n) ⊨ φ^2_i[⟨c_1, …, c_t⟩].
Now
(⊕)_5: ⊛_4 ≡ ⊛_5.
Why? By the choice of φ^1_i, φ^2_i for i < i*, so by Lemma 2.15.
Let T = {(N, c_1, …, c_t) : N ∈ K_∞ with set of elements {c_1, …, c_t}}, and let ⟨(N_j, c^j_1, …, c^j_t) : j < j*⟩ list T up to isomorphism. For each j < j* and i < i* choose, if possible, (N_{j,i}, c^j_1, …, c^j_t, b^j_i) such that:
(i) N_j ≤_s N_{j,i} (in K_∞),
(ii) b^j_i ∈ N_{j,i} \ N_j,
(iii) N_{j,i} ⊨ φ^1_i(⟨c^j_1, …, c^j_t⟩, b^j_i), and
(iv) (N_{j,i}, ⟨c^j_1, …, c^j_t⟩, ⟨c^j_1, …, c^j_{ℓg(x̄)}⟩⌢⟨b^j_i⟩, k_1) is simply good for K.
Let
w = {(i, j) : i < i*, j < j* and (N_{j,i}, c^j_1, …, c^j_t, b^j_i) is well defined}.
Let
⊛_6: like ⊛_5, replacing ⊞_5 by
⊞_6: like ⊞_5, adding:
(iii) for some j, (i, j) ∈ w and (B, c_1, …, c_t) ≅ (N_j, c^j_1, …, c^j_t).
(⊕)_6: ⊛_5 ≡ ⊛_6.
Why? By the definition of w.
Let
⊛_7: there is B such that: b ∈ M_n, ā ⊆ B ⊆ cl^{k*,m*}(ā, M_n), cl^{k_1}(B, M_n) ⊆ cl^{k*,m*}(ā, M_n), |B| ≤ t, and
⊞_7: for some c_1, …, c_t listing B such that ā = ⟨c_1, …, c_{ℓg(x̄)}⟩ there are i < i*, j < j* as in ⊞_6; let j < j* be such that (M_n ↾ B, c_1, …, c_t) ≅ (N_j, c^j_1, …, c^j_t). The main point is that B* is as in ⊛_2 (and, when possible, B* = B⁺).
For proving ⊛_7 ≡ ⊛_6 use the definition of simply good tuples in Definition 2.12(1).
We have now finished, as ⊛_7 can be expressed as a f.o. formula straightforwardly. So we have carried out the induction on the quantifier depth, thus finishing the proof.
2) Similar. □_{2.17}
Conclusion 2.19. (1) Assume (K, cl) is almost nice or simply almost nice, and cl is f.o. definable.
Then: K satisfies the 0-1 law iff for every k, m we have
(⊛)_{k,m} ⟨M_n ↾ cl^{k,m}(∅, M_n) : n < ω⟩ satisfies the 0-1 law.
(2) Similarly with convergence and the very weak 0-1 law.
Proof. 1) We first prove the "only if" direction. There is a f.o. formula φ(x) such that for every random enough M_n, φ(x) defines cl^{k,m}(∅, M_n). Hence for every f.o. sentence ψ there is a f.o. sentence ψ′ such that, for every model M,
M ↾ {a : M ⊨ φ[a]} ⊨ ψ iff M ⊨ ψ′.
Now for every random enough M_n we have: for a ∈ M_n, M_n ⊨ φ[a] iff a ∈ cl^{k,m}(∅, M_n); hence together
M_n ⊨ ψ′ iff M_n ↾ cl^{k,m}(∅, M_n) ⊨ ψ.
As we are assuming that K satisfies the 0-1 law, for some truth value t, for every random enough M_n,
(M_n ⊨ ψ′) ≡ t, hence (as required)
(M_n ↾ cl^{k,m}(∅, M_n) ⊨ ψ) ≡ t.
The other direction is similar, by the main lemma 2.16 when (K, cl) is almost nice, by 2.17 when (K, cl) is simply almost nice.
2) Similar, so left to the reader. □_{2.19}
Definition 2.20. (1) The tuple (N, b̄, Θ(x̄), ⟨B_0, B_1⟩, k, k_1) is simply⁺ good for (K, cl) if: B_0 ⊆ B_1 ⊆ N ∈ K_∞, cl^k(B_0, N) ⊆ B_1, b̄ ∈ ^{ℓg(x̄)}N, Θ(x̄) a f.o. formula and k, k_1 ∈ N, and: for every random enough M_n and every b̄′ ∈ ^{ℓg(x̄)}(M_n) such that M_n ↾ cl^{k_1}(b̄′, M_n) ⊨ Θ(b̄′), letting B′ = Rang(b̄′), there is an embedding g : N → M_n with g(b̄) = b̄′ such that:
(ii) g(N) ∩ cl^{k_1}(b̄′, M_n) = B′,
(iii) g(N) is in free amalgamation with cl^{k_1}(b̄′, M_n) over B′,
(iv) cl^k(g(B_0), M_n) ⊆ g(B_1) ∪ cl^{k_1}(B′, M_n).
(2) We may write B_0 instead of ⟨B_0, B_1⟩ if B_1 = N.
(3) We say "normally simply⁺ good" if in addition cl^k(g(B_0), M_n) ∩ g(N) = g(cl^k(B_0, N)) ∪ B′.
Definition 2.21. The 0-1 context with closure (K, cl) is (normally) simply⁺ almost nice if it is weakly nice and: for every k, ℓ and Δ there are m* = m*(k, ℓ, Δ), k* = k*(k, ℓ, Δ), t = t(k, ℓ, Δ), k_0 = k_0(k, ℓ, Δ), k_1 = k_1(k, ℓ, Δ) such that for every random enough M_n: if ā ∈ ^{ℓ}|M_n| and b ∈ M_n \ cl^{k*,m*}(ā, M_n), then there are B ⊆ cl^{k*,m*}(ā, M_n) and B⁺ ⊆ M_n such that:
(α) |B| ≤ t, ā ⊆ B, cl^{k_1}(B, M_n) ⊆ cl^{k*,m*}(ā, M_n), and
(β) B⁺ ⊇ B ∪ [cl^k(āb, M_n) \ cl^{k_1}(B, M_n)],
(γ) B ≤_s B⁺ (so B⁺ ∈ K_∞), or at least there is B* with (B⁺, b, c)_{c∈B} ≡ (B*, b, c)_{c∈B},
(δ) M_n ↾ B⁺ ≤ M_n, and M_n ↾ B⁺ is in free amalgamation with M_n ↾ cl^{k_1}(B, M_n),
(ε) letting c̄ list the elements of B and
Θ(x̄) = {φ(x̄) : M_n ↾ cl^{k_1}(c̄, M_n) ⊨ φ(c̄) and q.d.(φ(x̄)) ≤ k_0},
we have: (M_n ↾ B⁺, c̄, Θ(x̄), āb, k, k_1) is (normally) simply⁺ good, or at least for some B*, b′ we have
(i) (B*, c̄, Θ(x̄), āb′, k, k_1) is (normally) simply⁺ good,
(ii) (B⁺, b, c)_{c∈B} ≡ (B*, b′, c)_{c∈B}.
Remark 2.22. We may restrict Θ, e.g. demand that it is in … (most natural in the cases we have).
Claim 2.23. In 2.17 we can replace "simply" by "simply⁺", i.e.:
1) Assume (K, cl) is simply⁺ almost nice and cl is f.o. definable. Then for every f.o. formula φ(x̄) there are k and φ*(x̄) such that: for every random enough M_n and ā ∈ ^{ℓg(x̄)}|M_n|
(⊙) M_n ⊨ φ(ā) if and only if M_n ↾ cl^{k}(ā, M_n) ⊨ φ*(ā).
2) We have … .
Conclusion 2.24. (1) Assume that the 0-1 context with closure (K, cl) is (normally) simply⁺ almost nice … cl^{k*,m*}(ā, M_n) as a free amalgamation over some B small enough (with an a priori bound depending only on ℓg(ā) and k; there C = cl^k(B, M_n)). Now this basis B of the free amalgamation is included in cl^{k*,m*}(ā, M_n), so it is without elements from cl^{k,m}(āb, M_n) \ cl^{k*,m*}(ā, M_n).
Suppose we allow this, and first we deal with the case where M_n is a graph. Hence a member d of cl^{k,m}(āb, M_n) may code a subset of cl^{k*,m*}(ā, M_n): the set
{c ∈ cl^{k*,m*}(ā, M_n) : the pair {c, d} is an edge}.
So though we are interested in f.o. formulas φ(x̄) speaking on M_n, we are drawn into having monadic quantification over subsets of |M_n|. Still, possibly cl^{k,m+1}(ā, M_n) is not larger than cl^{k,m}(ā, M_n).
However, there is a big difference between the monadic case (e.g. graphs, where the relations coded on cl^{k*,m*}(ā, M_n) by members of cl^k(āb, M_n) are monadic) and the more general case. For monadic logic, addition theorems like 2.15 are known, but those are false for second order logic.
So we have good enough reason to separate the two cases. For readability we choose here to generalize the "simply almost nice with K = K_∞" case only.
Context 3.1. As in §2 for (K, cl).
Definition 3.2. 1) The 0-1 context with a closure operation (K, cl) is s.m.a. (simply monadically almost) nice if it is weakly nice, K = K_∞, cl is transitive, smooth, local, transparent (see Definitions 2.3(3), 2.5(2),(3) and 2.9(4),(5)) and
(A) for every k and ℓ there are r = r(k, ℓ), k* = k*(k, ℓ) and t_1 = t_1(k, ℓ), t_2 = t_2(k, ℓ) such that, for every M_n random enough, we have:
if ā ∈ ^{ℓ}(M_n), b ∈ M_n, cl^{k}(āb, M_n) ⊈ cl^{k*}(ā, M_n),
then there are B⁺, B_1, B_2 such that:
(α) ā ⊆ B_1 and cl^r(B_1, M_n) ⊆ cl^{k*}(ā, M_n) and |B_1| ≤ t_1,
(β) B_1 ⊆ B_2, B_2 ∩ cl^r(B_1, M_n) = B_1, |B_2| ≤ t_2, b ∈ B_2,
(γ) B⁺ ⊇ [cl^k(āb, M_n) \ cl^r(B_1, M_n)] ∪ B_2, and B_1 ≤_s B⁺ and cl^k(āb, M_n) ⊆ B⁺ (hence cl^k(āb, B⁺) = cl^k(āb, M_n)),
(δ) M_n ↾ B⁺ ≤ M_n, and M_n ↾ B⁺ is in free amalgamation with M_n ↾ (B_2 ∪ cl^r(B_1, M_n)) (also here this is the relation of being in free amalgamation),
(ε) if Q is a predicate from τ_K and M_n ⊨ Q(c̄), Rang(c̄) ⊆ cl^r(B_1, M_n) ∪ B_2, then: Rang(c̄) ⊆ B_2 \ B_1, or Rang(c̄) ∩ B_2 has at most one member; if this holds we say B_2 is monadic over cl^r(B_1, M_n) inside M_n,
(ζ) (B⁺, B_1, B_2, ā, b, k, r) is m.good (see below; "m" stands for monadically), so clearly B⁺ ∈ K_∞.
2) We say (B⁺, B_1, B_2, ā, b, k, r) is m.good when: B⁺, B_1, B_2 ∈ K_∞, B_1 ∪ B_2 ⊆ B⁺, ā ⊆ B_1, b ∈ B_2, and for every random enough M_n, every f : B_1 → M_n, every C_1 ∈ K_∞ such that M_n ↾ cl^r(f(B_1), M_n) ⊆ C_1, and every f⁺ : B_2 → C_1 extending f such that C_1 = f⁺(B_2) ∪ cl^r(f(B_1), M_n) and f⁺(B_2) is monadic over cl^r(f(B_1), M_n) inside C_1 (see above, but not necessarily C_1 ⊆ M_n), there are g⁺ : C_1 → M_n and g : B⁺ → M_n such that g ↾ B_2 = (g⁺ ∘ f⁺) ↾ B_2 and
g(B⁺) is in free amalgamation with g⁺(C_1) over g(B_2), and cl^k(g(āb), M_n) ⊆ g(B⁺) ∪ cl^r(g(B_1), M_n).
3) Assume E ⊆ {(C, B_1, B_2) : B_1 ⊆ B_2 ⊆ C ∈ K} is closed under isomorphism. We say B_2 is E-over D inside N if B_2 ⊆ N ∈ K, D ⊆ N and (N ↾ (B_2 ∪ D), B_2 ∩ D, B_2) ∈ E.
4) We say (B⁺, B_1, B_2, ā, b, k, r) is E-good when: B⁺, B_1, B_2 ∈ K_∞, B_1 ∪ B_2 ⊆ B⁺, ā ⊆ B_1, b ∈ B_2, and for every random enough M_n, every f : B_1 → M_n, every C_1 ∈ K_∞ such that M_n ↾ cl^r(f(B_1), M_n) ⊆ C_1, and every f⁺ : B_2 → C_1 extending f such that C_1 = f⁺(B_2) ∪ cl^r(f(B_1), M_n) and f⁺(B_2) is E-over cl^r(f(B_1), M_n) inside C_1 (see above, but not necessarily C_1 ⊆ M_n), there are g⁺ : C_1 → M_n and g : B⁺ → M_n such that g ↾ B_2 = (g⁺ ∘ f⁺) ↾ B_2 and
g(B⁺) is in free amalgamation with g⁺(C_1) over g(B_2), and cl^k(g(āb), M_n) ⊆ g(B⁺) ∪ cl^r(g(B_1), M_n).
5) We say K is s.E.a. nice if in 3.2(1) we replace clauses (ε), (ζ) by
(ε)′ B_2 is E-over cl^r(B_1, M_n) inside M_n,
(ζ)′ (B⁺, B_1, B_2, ā, b, k, r) is E-good.
6) We say E is monadic if it is as in part (3) and (C, B_1, B_2) ∈ E implies: for ā ∈ Q^C, Rang(ā) ⊆ B_2 \ B_1 or |Rang(ā) ∩ B_2| ≤ 1.
7) We say E as in 3.2(3) is simply monadic if it is monadic and, for any B_1 ⊆ B_2 ∈ K, letting
Γ_{B_2} = { (φ(y, b̄)) : b̄ ∈ B_2 with no repetitions, φ(y, x̄) an atomic formula, each variable actually appearing },
we have: the class
{ (D, R_{φ(y,b̄)}, c)_{φ(y,b̄) ∈ Γ_{B_2}, c ∈ B_1} : D ∈ K, B_1 ⊆ D, R_{φ(y,b̄)} is a subset of D \ B_1, and there are C_1, f such that: (C_1, B_1, B_2) ∈ E, D ⊆ C_1 ∈ K, f : B_2 → C_1, f(B_2) ∩ D = B_1, f ↾ B_1 = id_{B_1}, and for φ(y, b̄) ∈ Γ_{B_2} we have R_{φ(y,b̄)} = {d ∈ D \ B_1 : C_1 ⊨ φ[d, f(b̄)]} }
is definable by a monadic formula⁵.
8) We say that cl is monadically definable for K if for each k, letting x̄ = x̄_{B_1∪B_2}, … B_1 ⊆ B_2, B_1 ∪ B_2 ⊆ C.
Lemma 3.3. Assume (K, cl) is s.E.a. nice, E is simply monadic, and cl is f.o. definable, or at least monadically definable (see 3.2(8)). Then for every f.o. formula φ(x̄) there are k and a monadic formula φ*(x̄) such that:
(⊛)_{φ(x̄)} for every random enough M_n, for every ā ∈ ^{ℓg(x̄)}|M_n| we have
M_n ⊨ φ(ā) ⇔ M_n ↾ cl^k(ā, M_n) ⊨ φ*(ā).
Discussion 3.4. Some of the assumptions of 3.3 are open to manipulation; others are essential.
1) As said above, "monadic" is needed in order to use an addition theorem (see 3.5); the price of removing it is high: essentially, above, after finding the copy g(B_2) realizing the required type over cl^k(B_2, M_n), we need to find g(B⁺), or a replacement like B* …; not only the small formulas satisfied by (B_2, b)_{b∈B_1} are important, but also e.g. the answer to questions such as whether a set equals cl^r(B_1, M_n).
It is natural to demand that all possibilities for the set of small formulas in second order logic satisfied by B⁺ ∪ cl^r(B_1, M_n) occur, so this may include cases where B ≤_s …: if Y ⊆ B⁺ ∪ cl^r(B_1, M_n) and Y ⊄ B⁺ ∪ B_2, Y ⊄ cl^r(B_1, M_n) ∪ B_1, then Y is not s-connected; that is, for some Y_1, Y_2 we have Y = Y_1 ∪ Y_2, |Y_1 ∩ Y_2| ≤ s and M_n ↾ Y_1, M_n ↾ Y_2 are freely amalgamated over M_n ↾ (Y_1 ∩ Y_2).
In this case we can allow e.g. quantification on 2-place relations R such that M_n ↾ Dom(R) is s-connected.
⁵We can restrict ourselves to the cases C = cl^k(B, C).
2) If E is monadic but not simply monadic, not much is changed: we should allow new quantifiers in φ*. Let C_1 <^{E}_{B} C_2 if B ⊆ C_1 ⊆ C_2 and (C_2, B, B ∪ (C_2 \ C_1)) ∈ E. We want the quantifier to say, for (C_1, R_{φ(y,b̄)}, c)_{φ(y,b̄),c∈B}, that it codes C_2 with C_1 <^{E}_{B} C_2, where Γ = Γ_{B∪(C_2\C_1)}; but then the logic should be defined such that we would be able to iterate.
The situation is similar to the case that in §2 we have: cl is definable, or at least monadically definable.
3) In 3.3 we essentially demand
(⊛) for each t, for random enough M_n, for every B ⊆ M_n with |B| ≤ t:
if M_n ↾ cl^k(ā, M_n) <^{E}_{ā} C then C is embeddable into M_n over cl^k(ā, M_n).
Of course we need this just for a dense set of such C's, dense in the sense that a monadic sentence is satisfied, just like the use of B* in 2.12. That is, we may replace clause (ζ) of Definition 3.2(1)(A) by
(ζ)′ there is B* such that (B⁺, c, b)_{c∈B_2} ≡ (B*, c, b)_{c∈B_2} and (B*, B_1, B_2, ā, k) is m.good (and … in the main case).
4) As we have done in 2.16(2), 2.17(2), we can add that the number of alternations of quantifiers of φ and the number of (possible) alternations of monadic quantifiers of φ* are bounded.
5) … we flip the pairs (i, j), (i′, j′) with probability p^n_{i,j,i′,j′}; the flippings are independent, and finally for i′ < j′, (i′, j′) … . For our case let (α ∈ (0, 1)_R irrational):
Distribution 1:
p^n_{i,j} = p_{|i−j|} = 1/|i−j|^α when |i−j| > 1, and 1/2 if |i−j| = 1,
and p^n_{i,j,i′,j′} = 1/2^{|i−i′|+|j−j′|};
Distribution 2:
p^n_{i,j} is as above and
p^n_{i,j,i′,j′} = 1/2^{|i−i′|+|j−j′|} if i = i′ or j = j′, and 0 otherwise.
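The marginal edge probabilities shared by both distributions are easy to sample directly; the sketch below draws the graph G_{n,p} with p_{|i−j|} = 1/|i−j|^α as in the introduction and Distribution 1. The helper `sample_graph` is our illustrative name, not from the paper, and the flipping of pairs of pairs is not modeled.

```python
import random

def sample_graph(n, alpha, seed=None):
    """Draw the random graph on [n] = {1, ..., n}: each pair {i, j}
    is an edge independently with probability 1/|i-j|^alpha
    (and 1/2 when |i-j| = 1), the marginal of Distribution 1."""
    rng = random.Random(seed)
    edges = set()
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            d = j - i
            p = 0.5 if d == 1 else d ** (-alpha)
            if rng.random() < p:
                edges.add((i, j))
    return edges

# Nearby vertices are linked often, distant ones rarely; with
# alpha irrational in (0, 1) this is the paper's Distribution 1
# marginal.
g = sample_graph(200, alpha=0.51, seed=0)
print(len(g))
```

Note the slow decay: for α < 1 the expected degree of a middle vertex grows with n, which is what makes the neighbourhood/closure analysis above nontrivial.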
Now distribution 2 seems to give us an example as in Lemma 3.3; distribution 1 fits the non-monadic case. Distribution 1 will give us, for some pre-edges (i, j), a lot of edges in the neighbourhood of it; of course for the average pre-edge there will be few. This gives us a lot of extensions in that neighbourhood. We may wonder whether actually the 0-1 law holds. It is intuitively clear that for distribution 2 the answer is yes, for distribution 1 the answer is no.
6) Why, for distribution 1 from (5), should the 0-1 law fail (in fact fail badly)? It seems to me that for distribution 1 we can find A ⊆ B such that for every random enough M_n, for some f : A → M_n, the number of g : B → M_n extending f is quite large, and on the set of such g we can interpret an initial segment N_f of arithmetic, even with f(A) a segment and N_f in its neighbourhood. The problem is to compare such N_{f_1}, N_{f_2} with possibly distinct parameters, which can be done using a path of pre-edges from f_1(A) to f_2(A). But this requires further thought.
The case of distribution 2 should be similar to this paper.
We intend to return to this.
7) If E is trivial, then the claim above becomes (a variant of) the main claims in section 2 (the variant fulfills promises there).
Proof of 3.3:
This proof is similar to that of Lemma 2.16 and 2.17. We say in the
claim that
( x) or
( x), k
witness ()
( x)
. We prove the statement by
induction on q.d.(( x)) and rst note (by clause (d) of Denition 2.2) that
()
( x)
=()
+
( x)
where
[ a].
Case 1: Let ( x) be an atomic formula. Trivial.
Case 2: ( x) a Boolean combination of atomic formulas and formulas ( x)
of the form y
( x, y),
( x,y)
holds.
Clearly follows by case 3 and case 1.
Case 3: ( x) = (y)
1
( x, y). Let k
1
,
1
be a witness for ()
( x)
of
3.3 and let k
1
2
1
be witness for ()
+
1
( x)
holds for it (for
1
). Let r =
r(k
1
, g( x)), k
= k
(k
1
, g( x)), t
1
= t
1
(k
1
, g( x)) and t
2
= t
2
(k
1
, g x)
be as in Denition 3.2(1)(A), more exactly its 3.2(4) variant. Let k
be k
.
It is enough to prove the following two statements:
Statement 1: There is
1
( a, /
n
) [=
1
( a)
()
1
/
n
[=there is b satisfying cl
k
1
( ab, /
n
) cl
k
( a, /
n
) such
that
1
( a, b) holds.
Statement 2: There is $\psi_2$ such that for every random enough $\mathcal{M}_n$: $\mathcal{M}_n \models \psi_2(\bar a)$ iff $(*)_2$ holds, where
$(*)_2$: $\mathcal{M}_n \models$ "there is $b$ satisfying $\mathrm{cl}^{k_1}(\bar a b, \mathcal{M}_n) \nsubseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$ such that $\varphi_1(\bar a, b)$ holds"
(note: $(*)_1$ and $(*)_2$ are complementary, but it is enough that always at least one holds).
Note that, as $y \in \mathrm{cl}^{k}$, we can in $(*)_2$ replace $\mathrm{cl}^{k}$ by $\mathrm{cl}^{k_1}$, changing $\psi_2$ to $\psi_{2.5}$, and similarly in $(*)_1$ replace $\mathrm{cl}^{k}$ by $\mathrm{cl}^{k_1}$, changing $\psi_1$ to $\psi_{1.5}$.
Clearly these two statements are enough, and $\psi_{1.5}(\bar x) \lor \psi_{2.5}(\bar x)$ is as required.
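Schematically (our reading of the case split; the exact closure superscripts are as fixed above in Case 3), the two statements split the existential quantifier according to where the witness lands:

```latex
% Sketch of the case split behind Statements 1 and 2 (our reading):
% a witness b for (exists y) phi_1(bar a, y) either keeps the closure
% of bar a b inside the closure of bar a, or takes it outside.
\[
\mathcal{M}_n \models (\exists y)\,\varphi_1(\bar a, y)
\iff
\underbrace{(*)_1}_{\mathrm{cl}^{k_1}(\bar a b,\,\mathcal{M}_n)\,\subseteq\,\mathrm{cl}^{k}(\bar a,\,\mathcal{M}_n)}
\ \text{ or }\
\underbrace{(*)_2}_{\mathrm{cl}^{k_1}(\bar a b,\,\mathcal{M}_n)\,\nsubseteq\,\mathrm{cl}^{k}(\bar a,\,\mathcal{M}_n)}
\]
```

Statement 1 makes the first case first-order definable (by $\psi_{1.5}$) and Statement 2 does the same for the second (by $\psi_{2.5}$); their disjunction is the required formula.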
Proof of Statement 1:
Easily, by the induction hypothesis and by the fact that the closure is sufficiently definable.
Proof of Statement 2:
We will use a series of equivalent statements.
$\circledast_1$ is $(*)_2$.
$\circledast_2$: there are $b$ and $B^*, B_1, B_2$ such that:
$b \in \mathcal{M}_n$, $\mathrm{cl}^{k_1}(\bar a b, \mathcal{M}_n) \nsubseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$, $\bar a \subseteq B_1 \subseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$, $\mathrm{cl}^{r}(B_1, \mathcal{M}_n) \subseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$, $|B_1| \le t_1$, $|B_2| \le t_2$, $B_1 \cup B_2 \subseteq B^*$, $b \in B^*$, $B^* \setminus B_2$ is disjoint to $\mathrm{cl}^{r}(B_1, \mathcal{M}_n)$, $B_1 \le_s B^*$, $B^* \cap \mathrm{cl}^{r}(B_1, \mathcal{M}_n) \subseteq B_2$, and $\mathrm{cl}^{k_1}(\bar a b, \mathcal{M}_n) \subseteq B^*$ (hence $\mathrm{cl}^{k_1}(\bar a b, B^*) = \mathrm{cl}^{k_1}(\bar a b, \mathcal{M}_n)$) and $(B^*, B_1, B_2, \bar a, b, k, r)$ is E-good and
(2) $\mathcal{M}_n \models \varphi_1(\bar a, b)$.
$(*)^2$: $\circledast_1 \Leftrightarrow \circledast_2$.
Why? The implication $\Leftarrow$ is trivial; the implication $\Rightarrow$ holds by clause (A) of Definition 3.2.
$\circledast_3$: like $\circledast_2$ but replacing (2) by
(3) $\mathcal{M}_n \restriction \mathrm{cl}^{k_1}(\bar a b, \mathcal{M}_n) \models \psi_1(\bar a, b)$.
$(*)^3$: $\circledast_2 \Leftrightarrow \circledast_3$.
Why? By the induction hypothesis, i.e. the choice of $k_1$.
$\circledast_4$: like $\circledast_3$ but replacing (3) by
(4) $\mathcal{M}_n \restriction [B^* \cup \mathrm{cl}^{k_1}(B_1, \mathcal{M}_n)] \models \psi^2_1(\bar a, b)$.
$(*)^4$: $\circledast_3 \Leftrightarrow \circledast_4$.
Why? By $(*)^+_{\varphi_1(\bar x)}$ being witnessed by $\psi^2_1, k_1$ (see the beginning of the proof) and the definition of $B_1$.
Apply the "simply monadic" property of $E$ (Definition 3.2(4)) to $B^*$ and to the formula $\vartheta(\bar z, \bar z_1, \bar z_2)$ arising from $\psi^2_1(z_1, \ldots, z_{\ell g(\bar x)}, z^1_1)$, and get $i^*$ and formulas $\varphi^1_i(\bar z, \bar z_1)$ and $\varphi^2_i(\bar z)$ for $i < i^*$ as there.
Next let
$\circledast_5$: like $\circledast_4$ but replacing (4) by
(5) letting $c_1, \ldots, c_{t_2}$ list $B_2$, possibly with repetitions but such that $\{c_1, \ldots, c_{t_1}\} = B_1$ and $(c_1, \ldots, c_{\ell g(\bar x)}) = \bar a$, there is $i < i^*$ such that:
(i) $B^* \models \varphi^1_i[(c_1, \ldots, c_{t_2}), b]$,
(ii) $\mathcal{M}_n \restriction (B_2 \cup \mathrm{cl}^{k}(B_1, \mathcal{M}_n)) \models \varphi^2_i[(c_1, \ldots, c_{t_2})]$.
Now
$(*)^5$: $\circledast_4 \Leftrightarrow \circledast_5$.
Why? By the choice of $\varphi^1_i, \varphi^2_i$ ($i < i^*$).
Let $\langle (N_j, c^j_1, \ldots, c^j_{t_2}) : j < j^* \rangle$ list the relevant pairs $(N, c_1, \ldots, c_{t_2})$ with $N \in \mathcal{K}$, and for each $j < j^*$ and $i < i^*$ choose, if possible, $(N_{j,i}, c^j_1, \ldots, c^j_{t_2}, b^j_i)$ such that:
(i) $N_j \le_s N_{j,i}$ (in $\mathcal{K}$),
(ii) $b^j_i \in N_{j,i} \setminus N_j$,
(iii) $N_{j,i} \models \varphi^1_i[(c^j_1, \ldots, c^j_{t_2}), b^j_i]$, and
(iv) $(N_{j,i}, (c^j_1, \ldots, c^j_{t_1}), (c^j_1, \ldots, c^j_{t_2}), (c^j_1, \ldots, c^j_{\ell g(\bar x)}), b^j_i, k)$ is E-good.
Let
$w = \{(i, j) : i < i^*, j < j^*$ and $(N_{j,i}, c^j_1, \ldots, c^j_{t_2}, b^j_i)$ is well defined$\}$.
Let $\Delta = \{\vartheta(y, \bar x) : \vartheta$ is a basic formula, $\bar x \subseteq \{x_1, \ldots, x_{t_2}\}\}$.
As $E$ is simply monadic (see Definition 3.2(4)) we have, for some monadic formulas $\varphi^3_i$:
(*) if $d_1, \ldots, d_{t_1} \in C \subseteq \mathcal{M}$, letting $\Delta =_{\mathrm{df}} \{\vartheta(y, \ldots, x_{i(\ell)}, \ldots)_{\ell < \ell(*)} : \vartheta$ an atomic formula for $\mathcal{K}$ in which every variable actually appears and $i(\ell) \in \{1, \ldots, t_2\}\}$, the following are equivalent:
(a) there are subsets $R$ ...
$\circledast_6$: there are $b, B_1$ such that: $b \in \mathcal{M}_n$, $\mathrm{cl}^{k_1}(\bar a b, \mathcal{M}_n) \nsubseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$, $\bar a \subseteq B_1 \subseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$, $\mathrm{cl}^{r}(B_1, \mathcal{M}_n) \subseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$, $|B_1| \le t_1(k_1, \ell g(\bar x))$, and
(6) for some $c_1, \ldots, c_{t_1}$ listing $B_1$ such that $\bar a = (c_1, \ldots, c_{\ell g(\bar x)})$ there are $i < i^*$, $j < j^*$ such that:
(i) $(\mathcal{M}_n \restriction B_1, c_1, \ldots, c_{t_1}) \cong (N_j, c^j_1, \ldots, c^j_{t_1})$, i.e. the mapping $c^j_1 \mapsto c_1$, $c^j_2 \mapsto c_2, \ldots$ embeds $N_j$ into $\mathcal{M}_n$,
(ii) $\mathcal{M}_n \restriction \mathrm{cl}^{k_1}(B_1, \mathcal{M}_n) \models \varphi^3_i(c_1, \ldots, c_{t_1})$.
$(*)^6$: $\circledast_5 \Leftrightarrow \circledast_6$.
Why? For proving $\circledast_5 \Rightarrow \circledast_6$, let $c_1, \ldots, c_{t_2}$ as well as $i < i^*$ be as in $\circledast_5$, and let $j < j^*$ be such that $(\mathcal{M}_n \restriction B_1, c_1, \ldots, c_{t_1}) \cong (N_j, c^j_1, \ldots, c^j_{t_1})$. A main point is that $B^* \subseteq N$ and there are monadic formulas $\varphi^1_i(\bar z, \bar z_1) = \varphi^1_{i,\vartheta}(\bar z, \bar z_1)$, $\varphi^2_i(\bar z, \bar z_2) = \varphi^2_{i,\vartheta}(\bar z, \bar z_2)$ for $i < i^*$, each of quantifier depth at most that of $\vartheta$, such that:
if $N_1 \subseteq N$, $N_0 \subseteq N_2$, $N_1 \cap N_2 = N_0$, $N_1 \cup N_2 = N$, the set of elements of $N_0$ is $\{c_1, \ldots, c_s\}$, $\bar c = (c_1, \ldots, c_s)$, $\bar c^1 \in {}^{\ell g(\bar z_1)}(N_1)$ and $\bar c^2 \in {}^{\ell g(\bar z_2)}(N_2)$,
then $N \models \vartheta[\bar c, \bar c^1, \bar c^2]$ iff for some $i < i^*$, $N_1 \models \varphi^1_i[\bar c, \bar c^1]$ and $N_2 \models \varphi^2_i[\bar c, \bar c^2]$.
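In sum (our reading of the labels), the proof of Statement 2 runs through a chain of equivalent statements, each step justified as indicated:

```latex
% Chain of equivalent statements in the proof of Statement 2,
% with the justification of each step in the comments below.
\[
(*)_2 \;=\; \circledast_1
\;\overset{(*)^2}{\Longleftrightarrow}\; \circledast_2
\;\overset{(*)^3}{\Longleftrightarrow}\; \circledast_3
\;\overset{(*)^4}{\Longleftrightarrow}\; \circledast_4
\;\overset{(*)^5}{\Longleftrightarrow}\; \circledast_5
\;\overset{(*)^6}{\Longleftrightarrow}\; \circledast_6
\]
% (*)^2: clause (A) of Definition 3.2
% (*)^3: induction hypothesis, i.e. the choice of k_1
% (*)^4: (*)^+ witnessed by psi^2_1, k_1
% (*)^5: choice of phi^1_i, phi^2_i
% (*)^6: E simply monadic (Definition 3.2(4))
```

The point of the final form $\circledast_6$ is that it speaks only about $B_1 \subseteq \mathrm{cl}^{k}(\bar a, \mathcal{M}_n)$ and formulas evaluated on closures, which is what makes it expressible by the required formula $\psi_2(\bar x)$.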
References
[Bl96] John T. Baldwin. Near model completeness and 0-1 laws. Preprint, 1996.
[BlSh 528] John T. Baldwin and Saharon Shelah. Randomness and semigenericity. Transactions of the American Mathematical Society, 349:1359-1376, 1997. math.LO/9607226.
[BoSp] Ravi B. Boppana and Joel Spencer. Smoothness laws for random ordered graphs. In Logic and Random Structures (New Brunswick, NJ, 1995), volume 33 of DIMACS Ser. Discrete Math. Theoret. Comput. Sci., pages 15-32. American Mathematical Society, Providence, Rhode Island, 1997.
[Gu] Yuri Gurevich. Monadic second-order theories. In J. Barwise and S. Feferman, editors, Model-Theoretic Logics, Perspectives in Mathematical Logic, chapter XIII, pages 479-506. Springer-Verlag, New York, 1985.
[LuSh 435] Tomasz Luczak and Saharon Shelah. Convergence in homogeneous random graphs. Random Structures & Algorithms, 6:371-391, 1995. math.LO/9501221.
[Sh 550] Saharon Shelah. 0-1 laws. Preprint. math.LO/9804154.
[Sh 637] Saharon Shelah. 0-1 laws: putting together two contexts randomly. In preparation.
[Sh:F192] Saharon Shelah. Lecture notes on 0-1 laws, October 1995, Rutgers.
[Sh 581] Saharon Shelah. When the 0-1 law holds for $G_{n,\bar p}$, $\bar p$ monotonic. In preparation.
[Sh 463] Saharon Shelah. On the very weak 0-1 law for random graphs with orders. Journal of Logic and Computation, 6:137-159, 1996. math.LO/9507221.
[Sh 548] Saharon Shelah. Very weak zero-one law for random graphs with order and random binary functions. Random Structures & Algorithms, 9:351-358, 1996. math.LO/9606230.
[Sh 517] Saharon Shelah. Zero-one laws for graphs with edge probabilities decaying with distance. Part II. Fundamenta Mathematicae, 185:211-245, 2005. math.LO/0404239.
[ShSp 304] Saharon Shelah and Joel Spencer. Zero-one laws for sparse random graphs. Journal of the American Mathematical Society, 1:97-115, 1988.
[Sp] Joel Spencer. Survey/expository paper: zero-one laws with variable probabilities. Journal of Symbolic Logic, 58:1-14, 1993.
Institute of Mathematics, The Hebrew University of Jerusalem, 91904 Jerusalem,
Israel, and Department of Mathematics, Rutgers University, New Brunswick,
NJ 08854, USA
E-mail address: shelah@math.huji.ac.il
URL: http://www.math.rutgers.edu/shelah