
Step-Indexed Logical Relations

Robert Harper

Spring 2022

1 Introduction
The computability method, aka the theory of logical relations, gives meaning to proofs and programs by associating behavioral properties with types in such a way that well-typed terms enjoy the properties associated with their types. The “logical” part of the theory refers to the canonical choice of interpretation of a compound type in terms of the interpretations of its constituent types. For example, the interpretation of a product type is, in a sense dependent on the setting, the product of the interpretations of its components (see Harper (2022b) for a fuller development of this idea).
The method works well for “pure” type theories, but is less effective in the presence of computational effects. Control effects expressible by continuations are manageable by making the context explicit (see Harper (2022a)). Partiality (the possibility of non-terminating expressions that have no value) poses more significant challenges. For example, the language PCF introduces partiality through a fixed point operation that allows for recursive definitions that might, when executed, loop forever. The language FPC, which adds general recursive types (without any positivity restriction) with which fixed points are definable, likewise demands partiality. Extending further to store effects raises additional complications.
This note is concerned with partiality arising in its own right via a fixed point operation, and
with unrestricted recursive types, which give rise to such fixed points en passant. The method
of step-indexing is, at a high level, based on a “compactness” property of evaluation of complete
programs stating that the computation of an answer can only rely on some finite number of unrollings
of any recursive computation within it. That is, the same outcome can be achieved by bounding
recursive computations within it to some finite depth determined by the computation itself. Thus,
for example, the computation of the factorial of n requires only n recursive calls, so that if recursion is cut off at depth n or greater, the same answer is returned as if it were not artificially constrained.
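
To make the idea concrete, here is a small OCaml sketch (an illustration, not part of the original note) of a fuel-bounded variant of the factorial function. The bounded version agrees with the unbounded one whenever the fuel covers the recursion depth actually used, and reports failure otherwise.

(* Ordinary factorial, for comparison. *)
let rec fact n = if n = 0 then 1 else n * fact (n - 1)

(* Fuel-bounded factorial: at most `fuel` unrollings of the recursion are
   permitted; None signals that the bound was exhausted. *)
let rec fact_bounded fuel n =
  if n = 0 then Some 1
  else if fuel = 0 then None                  (* out of unrollings *)
  else
    match fact_bounded (fuel - 1) (n - 1) with
    | None -> None
    | Some r -> Some (n * r)

let () =
  assert (fact_bounded 5 5 = Some (fact 5));  (* enough fuel: same answer *)
  assert (fact_bounded 4 5 = None)            (* too little fuel: cut off *)
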
Putting this idea into practice in the definition of logical relations requires indexing the relations by the “steps remaining” to determine the outcome of a computation. With no steps remaining, the computability conditions are trivially true: they cannot be refuted, because doing so would require a computation step that cannot be taken. Otherwise the conditions are formulated as usual, albeit up to the limitation of the remaining number of steps. The fundamental theorem states that well-typed terms are well-behaved with respect to all finite numbers of steps, and that is all that is required in any terminating computation.

⟨⟩ ∈val^(n) 1  ⟺  (true)
⟨V1, V2⟩ ∈val^(n) A1 × A2  ⟺  V1 ∈val^(n) A1 and V2 ∈val^(n) A2
λ(x. M) ∈val^(n) A1 ⇀ A2  ⟺  ∀k < n, if V1 ∈val^(k) A1 then [V1/x]M ∈exp^(k) A2
M ∈exp^(n) A  ⟺  ∀k < n, if M ↦^(k) V val then V ∈val^(n−k) A
γ ∈ctx^(n) Γ  ⟺  γ(x) ∈val^(n) A_x (for each Γ ⊢ x : A_x)
Γ ≫^(n) M ∈ A  ⟺  if γ ∈ctx^(n) Γ then γ̂(M) ∈exp^(n) A
Γ ≫ M ∈ A  ⟺  ∀n ≥ 0, Γ ≫^(n) M ∈ A

Figure 1: Step-Indexed Logical Predicate for Simple Types

2 Step-Indexing for Partiality


The original method of step-indexing was introduced by Appel and McAllester, and developed fully by Ahmed, on whose work this account is based (see Ahmed (2006) for history and related work). For the sake of simplicity only unary relations (predicates) are considered, but the reader is warned that the extension to binary relations is not trivial, due to fundamental asymmetries in the definition in that case.
The logical predicates V ∈val^(n) A, for V a closed value of type A and n ≥ 0, and M ∈exp^(n) A, for M a closed term of type A, are defined in Figure 1. The former states that the closed value V is a computable element of type A at stage n; the latter states that the closed term M is a computable computation of type A insofar as can be determined within n steps of computation. Note that M ∈exp^(0) A for any closed term M : A, and that V ∈val^(0) A for any closed value V : A. Moreover, if M diverges (has no value), then M ∈exp^(n) A for all n ≥ 0.
The relations are defined Kripke-style (with worlds being natural numbers) such that the following anti-monotonicity property holds.

Lemma 1 (Anti-Monotonicity). If m ≤ n, then V ∈val^(n) A implies V ∈val^(m) A, and M ∈exp^(n) A implies M ∈exp^(m) A.

Proof. Informally, having fewer remaining steps imposes fewer restrictions on being computable at a type. The quantification over k < n in the clause for the function type is there to ensure this closure property.

Exercise 1. Prove Lemma 1 by induction on the structure of the type A.


The first objective is to prove the fundamental theorem for the partial function type. The introductory form for A1 ⇀ A2 is the self-referential abstraction fun_{A1,A2}(f, x.M) with statics

    Γ, f : A1 ⇀ A2, x : A1 ⊢ M2 : A2
    ──────────────────────────────────── (fun)
    Γ ⊢ fun_{A1,A2}(f, x.M2) : A1 ⇀ A2

and dynamics

    ap(fun_{A1,A2}(f, x.M2), V1) ↦ [fun_{A1,A2}(f, x.M2), V1/f, x]M2.
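
For reference, the following OCaml sketch (purely illustrative; the constructor names Var, Fun, and Ap are assumptions, not the note’s notation) renders this abstract syntax and the unrolling transition. Substitution is simplified by assuming that only closed terms are substituted, so no capture can occur.

(* Illustrative abstract syntax for the partial-function fragment. *)
type tm =
  | Var of string
  | Fun of string * string * tm   (* fun(f, x. M): f is the self-reference *)
  | Ap of tm * tm

(* In this fragment the only closed values are abstractions. *)
let is_value (m : tm) : bool =
  match m with Fun _ -> true | _ -> false

(* Substitute the closed term v for the variable x in m. *)
let rec subst (v : tm) (x : string) (m : tm) : tm =
  match m with
  | Var y -> if y = x then v else m
  | Fun (f, y, body) ->
      if f = x || y = x then m else Fun (f, y, subst v x body)
  | Ap (m1, m2) -> Ap (subst v x m1, subst v x m2)

(* The unrolling step ap(fun(f, x. M), V) ↦ [fun(f, x. M), V / f, x] M. *)
let step_unroll (m : tm) : tm option =
  match m with
  | Ap ((Fun (f, x, body) as fn), v) when is_value v ->
      Some (subst fn f (subst v x body))
  | _ -> None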

Theorem 2. If Γ ⊢ M : A, then Γ ≫ M ∈ A.

Proof. By induction on typing, with each case being proved by induction on the stage n ≥ 0, assuming the premises for all n ≥ 0 in each case. The base case is always trivial, and is omitted, with the understanding that n > 0 unless specified otherwise. To lighten notation the substitution instance γ̂(M) of M by γ is written M̂ whenever γ is clear from context.
For variables the result is immediate, by assumption. The product types are left as exercises.

1. Suppose Γ ⊢ ap(M1, M2) : A because Γ ⊢ M1 : A2 ⇀ A and Γ ⊢ M2 : A2. Thus, by induction hypothesis,

   (a) Γ ≫ M1 ∈ A2 ⇀ A, and
   (b) Γ ≫ M2 ∈ A2.

   Suppose that n > 0, the case for n = 0 being trivial, and that γ ∈ctx^(n) Γ, with the intent to show that ap(M̂1, M̂2) ∈exp^(n) A. To this end fix k < n and suppose that ap(M̂1, M̂2) ↦^(k) V, for some value V. By the definition of transition this sequence is structured as follows:

   ap(M̂1, M̂2) ↦^(k1) ap(V1, M̂2)
               ↦^(k2) ap(V1, V2)
               ↦ [V1, V2/f, x]M
               ↦^(k3) V

   where

   (a) V1 = fun_{A2,A}(f, x.M),
   (b) M̂1 ↦^(k1) V1,
   (c) M̂2 ↦^(k2) V2, and
   (d) k = k1 + k2 + 1 + k3, and each ki < k < n.

Specializing the inductive hypotheses to the given n,

   (a) M̂1 ∈exp^(n) A2 ⇀ A, and
   (b) M̂2 ∈exp^(n) A2,

   it follows that V1 ∈val^(n−k1) A2 ⇀ A and V2 ∈val^(n−k2) A2, whence by anti-monotonicity V2 ∈val^(n−k1−k2−1) A2. Since n − k1 − k2 − 1 < n − k1, the definition of computability at the partial function type gives [V1, V2/f, x]M ∈exp^(n−k1−k2−1) A. Noting that k3 < n − k1 − k2 − 1, it follows that V ∈val^(n−k1−k2−1−k3) A, which is to say V ∈val^(n−k) A, as desired.



2. Suppose that Γ ⊢ fun_{A1,A2}(f, x.M2) : A1 ⇀ A2, because Γ, f : A1 ⇀ A2, x : A1 ⊢ M2 : A2. The (outer) inductive hypothesis states that

   ∀m ≥ 0, Γ, f : A1 ⇀ A2, x : A1 ≫^(m) M2 ∈ A2.

   Writing F for the recursive abstraction, the goal is to prove

   ∀n ≥ 0, Γ ≫^(n) F ∈ A1 ⇀ A2.

   Proceed by induction on n. The base case is trivially true, by definition. Assume the statement for n, and, towards proving it for n + 1, suppose that γ ∈ctx^(n+1) Γ; it suffices to show that F̂ ∈exp^(n+1) A1 ⇀ A2. Because F̂ is a value, it suffices to show F̂ ∈val^(n+1) A1 ⇀ A2. Suppose that k < n + 1, and V1 ∈val^(k) A1. To prove [F̂, V1/f, x]M̂2 ∈exp^(k) A2, proceed by cases according to whether k < n or k = n:

   (a) Assume k < n. By downwards closure γ ∈ctx^(n) Γ, and hence by the inner inductive hypothesis F̂ ∈exp^(n) A1 ⇀ A2, which is to say F̂ ∈val^(n) A1 ⇀ A2. But then by the definition of computability at partial function type, [F̂, V1/f, x]M̂2 ∈exp^(k) A2.

   (b) Otherwise k = n, and the goal is to show that [F̂, V1/f, x]M̂2 ∈exp^(n) A2. By downwards closure γ ∈ctx^(n) Γ and by the inner inductive hypothesis F̂ ∈val^(n) A1 ⇀ A2, and so by the outer inductive hypothesis, taking m = n, [F̂, V1/f, x]M̂2 ∈exp^(n) A2, as desired.

Exercise 2. Prove the remaining cases of the fundamental theorem for nullary and binary product
types.
Exercise 3. Extend the fundamental theorem to nullary and binary sum types.

3 Step-Indexing for Recursive Types


The ad hoc treatment of recursive function abstractions in the preceding section serves to illustrate
the mechanics of the method. A more general formulation is in terms of (unrestricted) recursive
types, with which recursive functions may be defined (see, for example, the account of “self types”
in Harper (2016)).
Recursive types, by their nature, are self-referential. This poses a difficulty for the computability
method, which defines a type-indexed family of sets of computable elements by induction on the
structure of their type. A naive definition of computability that is sufficient for the fundamental
theorem illustrates the difficulty:
fold_{X.A}(V) ∈val rec(X.A)  ⟺  V ∈val [rec(X.A)/X]A    (???)

The intended inductive structure breaks down; step-indexing provides a solution (Dreyer et al.,
2011). Specifically, to account for recursive types add the following clause to the definition of the
logical relation given in Figure 1:
fold_{X.A}(V) ∈val^(n+1) rec(X.A)  ⟺  V ∈val^(n) [rec(X.A)/X]A



That is, the values of a recursive type at stage n + 1 are defined in terms of the values of its unfolding at stage n. (At stage 0 the relation is defined to be trivially true, there being no “fuel” remaining with which to impose behavioral conditions.) Thus the definition of computability is well-founded: the value relation for a type at a given stage refers either to the values of an arbitrary type at a “future” stage (one with less “fuel”), or to the computable values of the constituent types of that type at the same stage.
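
For a first-order instance the well-foundedness can be seen concretely. The following OCaml sketch (illustrative only; it assumes the evident clauses for sums from Exercise 3) decides the value predicate for nat ≜ rec(X. 1 + X) by recursion on the stage, the clause for fold consulting the unfolding only at a strictly smaller stage.

(* Closed values of the type nat ≜ rec(X. 1 + X). *)
type value =
  | Triv              (* ⟨⟩ : 1 *)
  | Inl of value      (* left injection into a sum *)
  | Inr of value      (* right injection into a sum *)
  | Fold of value     (* fold_{X.1+X}(V) *)

(* V ∈val^(n) nat, by recursion on the stage n: trivially true at stage 0,
   while at stage n+1 the fold clause refers to the unfolding 1 + nat at stage n. *)
let rec in_val_nat (n : int) (v : value) : bool =
  if n = 0 then true
  else
    match v with
    | Fold (Inl Triv) -> true                  (* zero: fold(inl ⟨⟩) *)
    | Fold (Inr v') -> in_val_nat (n - 1) v'   (* successor, checked one stage lower *)
    | _ -> false

(* The numeral 2 inhabits the predicate at every stage. *)
let two = Fold (Inr (Fold (Inr (Fold (Inl Triv)))))
let () = assert (in_val_nat 5 two)
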
Exercise 4. Check that the anti-monotonicity requirement (with respect to stage) continues to hold
for the type interpretation extended with recursive types.
The statement of the fundamental theorem remains as before, as does the proof, with the addition
of two cases for the introduction and elimination rules for recursive types.
Theorem 3. If Γ ⊢ M : A, then Γ ≫ M ∈ A.
Proof. By induction on typing, there being two rules to consider governing recursive types. Let R
be the recursive type rec(X.A) for some X and A.
The introduction rule for R states that Γ ⊢ fold_{X.A}(M) : R follows from Γ ⊢ M : [R/X]A. By induction Γ ≫ M ∈ [R/X]A, and the goal is to show that Γ ≫ fold_{X.A}(M) ∈ R. Suppose n ≥ 0 and γ ∈ctx^(n) Γ. Writing M̂ for γ̂(M), the goal is to show fold_{X.A}(M̂) ∈exp^(n) R. The case of n = 0 being trivial, assume that n > 0, and suppose that k < n is such that fold_{X.A}(M̂) ↦^(k) V for some value V, with the goal to show V ∈val^(n−k) R. By definition of the (by-value) dynamics V is fold_{X.A}(V′) for some value V′ such that M̂ ↦^(k) V′. By the inductive hypothesis M̂ ∈exp^(n) [R/X]A, and so V′ ∈val^(n−k) [R/X]A. By downwards closure V′ ∈val^(n−k−1) [R/X]A, from which the desired result follows by definition.
The elimination rule for R states that Γ ⊢ unfold(M) : [R/X]A follows from Γ ⊢ M : R. By induction Γ ≫ M ∈ R; the goal is to show Γ ≫ unfold(M) ∈ [R/X]A. Fix n ≥ 0 and γ ∈ctx^(n) Γ, and suppose that unfold(M̂) ↦^(k) V for some k < n. The goal is to show that V ∈val^(n−k) [R/X]A. The evaluation to a value must take the form

unfold(M̂) ↦^(k−1) unfold(fold_{X.A}(V)) ↦ V,

where k > 0 and M̂ ↦^(k−1) fold_{X.A}(V). Applying the inductive hypothesis at stage n, and noting that k − 1 < n, it follows that fold_{X.A}(V) ∈val^(n−(k−1)) R, and hence V ∈val^(n−k) [R/X]A, as desired.

With recursive types in hand there is of course no need to treat recursive functions as a special construct, for one may define fun_{A1,A2}(f, x.M) in terms of recursive types as follows:

A1 ⇀rec A2 ≜ rec(X. X ⇀ A1 ⇀ A2)
fun_{A1,A2}(f, x.M) ≜ fold_{X.X⇀A1⇀A2}(λ_{A1⇀rec A2}(F. λ_{A1}(x. [λ_{A1}(y. funap(F, y))/f]M)))
funap(V, V1) ≜ ap(ap(unfold(V), V), V1)

The type A1 ⇀rec A2 serves to distinguish recursively defined functions from the underlying partial function type, so that the encoding can be given in isolation. Equivalently, one could define A1 ⇀rec A2 to be self(A1 ⇀ A2), the type of self-referential partial functions with the given domain and range. The type self(A) is the recursive type rec(X.X ⇀ A), with introductory form self_A(x.M) and eliminatory form unself(M), as in PFPL.



⟨⟩ ∈val^(n) 1  ⟺  (true)
⟨V1, V2⟩ ∈val^(n) A1 × A2  ⟺  V1 ∈val^(n) A1 and V2 ∈val^(n) A2
λ(x. M) ∈val^(n) A1 ⇀ A2  ⟺  ∀k < n, if V1 ∈val^(k) A1 then [V1/x]M ∈exp^(k) A2
fold_{X.A}(V) ∈val^(n+1) rec(X.A)  ⟺  V ∈val^(n) [rec(X.A)/X]A
K ∈stk^(n) A  ⟺  ∀k ≤ n, V ∈val^(k) A implies K ⊥ V
M ∈exp^(n) A  ⟺  ∀k ≤ n, K ∈stk^(k) A implies K ⊥ M

Figure 2: Biorthogonal Step-Indexed Logical Predicate for Simple and Recursive Types

Exercise 5. Replay the verification that recursive functions inhabit the relation when defined as
above in terms of recursive types.

4 Bi-Orthogonal Step-Indexing
The concept of biorthogonality in logical relations, a re-formulation that makes use of a control stack
much as described in Harper (2016). Informally, a stack K may be considered to be “orthogonal” to
a term M in the sense that, when juxtaposed to form a machine state, they successfully compute
an answer: K ⊥ M iff K ▷ M 7−−→∗ α, where α is either yes or no. The terminology is suggested
by an analogy with linear algebra in which the formation of the machine state corresponds to an
“inner product” that evaluates to a “scalar”, the final outcome of a complete computation.3
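
To fix intuitions, here is a toy OCaml sketch (not from the note; the miniature language and names are invented) of machine states K ▷ M and of the check that K ⊥ M, in the sense that running the state yields an answer. The fuel bound reflects that, in the step-indexed setting, only finitely many transitions are ever consulted.

(* A toy language whose complete computations produce the answers yes or no. *)
type tm = Yes | No | Seq of tm * tm        (* Seq(m1, m2): run m1, discard, run m2 *)
type frame = SeqWith of tm                 (* pending right-hand side of a Seq *)
type stack = frame list
type state = stack * tm                    (* the state K ▷ M *)

(* One transition of the machine; None means the state is final. *)
let step ((k, m) : state) : state option =
  match m, k with
  | Seq (m1, m2), _ -> Some (SeqWith m2 :: k, m1)     (* push a frame *)
  | (Yes | No), SeqWith m2 :: k' -> Some (k', m2)     (* return to the pending frame *)
  | (Yes | No), [] -> None                            (* an answer on the empty stack *)

(* K ⊥ M within the given number of steps: K ▷ M reaches an answer. *)
let rec orthogonal (fuel : int) ((k, m) : state) : bool =
  match step (k, m) with
  | None -> (match m with Yes | No -> k = [] | Seq _ -> false)
  | Some s' -> fuel > 0 && orthogonal (fuel - 1) s'

let () = assert (orthogonal 10 ([], Seq (Yes, No)))
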
The definitions given in Figure 1 may be re-formulated using stacks as in Figure 2. Computability for values has much the same form as that given earlier, but the computability of expressions is re-formulated using stacks, and the computability of stacks is as given in the figure. Put simply, a computable expression is one that “behaves properly” on all stacks, and a computable stack is one that “behaves properly” on all values, in both cases taking account of step-indexing.
Exercise 6. Formulate and prove the fundamental theorem for the logical predicates as defined in
Figure 2. Hint: Refer to the statics of control stacks as given in Harper (2016), extended with
recursive types for the typing of (closed) stacks.

References
Amal Ahmed. Step-indexed syntactic logical relations for recursive and quantified types. In Peter
Sestoft, editor, ESOP 2006, 2006.
Derek Dreyer, Amal Ahmed, and Lars Birkedal. Logical step-indexed logical relations. Logical Methods in Computer Science, 7:1–37, 2011. doi: 10.2168/LMCS-7. URL www.lmcs-online.org.


Robert Harper. Practical Foundations for Programming Languages. Cambridge University Press,
Cambridge, England, Second edition, 2016.

Robert Harper. Continuations, aka contradictions, aka contexts, aka stacks. Unpublished lecture note, February 2022a. URL https://www.cs.cmu.edu/~rwh/courses/atpl/pdfs/tlc-cont.pdf.

Robert Harper. Tait computability. Unpublished lecture note, Spring 2022b. URL https://www.cs.cmu.edu/~rwh/courses/atpl/pdfs/tait-comp.pdf.

Andrew M. Pitts. Step-indexed biorthogonality: a tutorial example. In Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2010.

