
Bruce K. Driver

Math 247A (Topics in Analysis, W 2009): Rough Path Theory

March 11, 2009    File: rpaths.tex

Contents

Part I  Rough Path Analysis

1  From Feynman Heuristics to Brownian Motion
   1.1  Construction and basic properties of Brownian motion

2  p-Variations and Controls
   2.1  Computing V_p(x)
   2.2  Brownian Motion in the Rough
   2.3  The Bounded Variation Obstruction
   2.4  Controls
   2.5  Banach Space Structures

3  The Bounded Variation Theory
   3.1  Integration Theory
   3.2  The Fundamental Theorem of Calculus
   3.3  Calculus Bounds
   3.4  Bounded Variation Ordinary Differential Equations
   3.5  Some Linear ODE Results
        3.5.1  Bone Yard

4  Banach Space p-variation results
   4.0.2  Proof of Theorem 4.10

5  Young's Integration Theory
   5.1  Additive (Almost) Rough Paths
   5.2  Young's ODE
   5.3  An a priori Bound
   5.4  Some p-Variation Estimates
   5.5  An Existence Theorem
   5.6  Continuous dependence on the Data
   5.7  Towards Rougher Paths

6  Rough Paths with 2 ≤ p < 3
   6.1  Tensor Norms
   6.2  Algebraic Preliminaries
   6.3  The Geometric Subgroup
   6.4  Characterizations of Algebraic Multiplicative Functionals

7  Homogeneous Metrics
   7.1  Lie group p-variation results
   7.2  Homogeneous Metrics on G(V) and G_geo(V)
   7.3  Carnot-Caratheodory Distance

8  Rough Path Integrals
   8.1  Almost Multiplicative Functionals
   8.2  Path Integration along Rough Paths
   8.3  Spaces of Integrands
   8.4  Appendix on Taylor's Theorem

9  Rough ODE
   9.1  Local Existence and Uniqueness
   9.2  A priori Bounds

10  Some Open Problems

11  Remarks from Terry Lyons

References


Part I

Rough Path Analysis

Here are a few suggested references for this course: [12, 15, 1]. The latter two references are downloadable if you are logging into MathSciNet through your UCSD account. For a proof that all finite p-variation paths have some extension to a rough path see [14] and also [6, Theorem 9.12 and Remark 9.13]. For other perspectives on the theory, see [3] and Gubinelli [7, 8]; the papers [9, 4, 7] also look interesting. A recent paper which deals with global existence issues for non-bounded vector fields is Lejay [11].

1 From Feynman Heuristics to Brownian Motion

In the physics literature one often finds the following informal expression,
\[
d\mu_T(\omega) = \frac{1}{Z(T)}\, e^{-\frac12 \int_0^T |\dot\omega(\tau)|^2\,d\tau}\, \mathcal{D}_T\omega \quad\text{for } \omega\in W_T, \tag{1.1}
\]
where $W_T$ is the set of continuous paths, $\omega:[0,T]\to\mathbb{R}$ (or $\mathbb{R}^d$), such that $\omega(0)=0$,
\[
\mathcal{D}_T\omega = \prod_{0<t\le T} m\,(d\omega(t)) \quad (m \text{ is Lebesgue measure here}),
\]
and $Z(T)$ is a normalization constant such that $\mu_T(W_T)=1$.

We begin by giving meaning to this expression. For $0\le s\le t\le T$, let
\[
E_{[s,t]}(\omega) := \int_s^t |\omega'(\tau)|^2\,d\tau. \tag{1.2}
\]
If we decompose $\omega(\tau)$ as $\bar\omega(\tau)+\gamma(\tau)$ where
\[
\bar\omega(\tau) := \omega(s) + \frac{\tau-s}{t-s}\,(\omega(t)-\omega(s)) \quad\text{and}\quad \gamma(\tau) := \omega(\tau)-\bar\omega(\tau),
\]
then we have $\bar\omega'(\tau) = \frac{\omega(t)-\omega(s)}{t-s}$, $\gamma(s)=\gamma(t)=0$, and hence
\[
\int_s^t \bar\omega'(\tau)\cdot\gamma'(\tau)\,d\tau = \frac{\omega(t)-\omega(s)}{t-s}\cdot(\gamma(t)-\gamma(s)) = 0.
\]
Thus it follows that
\[
E_{[s,t]}(\omega) = E_{[s,t]}(\bar\omega) + E_{[s,t]}(\gamma) = \frac{|\omega(t)-\omega(s)|^2}{t-s} + E_{[s,t]}(\gamma).
\]
Thus if $f(\omega) = F\big(\omega|_{[0,s]},\omega(t)\big)$, we will have,
\[
F\big(\omega|_{[0,s]},\omega(t)\big)\, e^{-\frac12 E_t(\omega)}
= F\big(\omega|_{[0,s]},\omega(t)\big)\, e^{-\frac12\big[E_s(\omega) + \frac{|\omega(t)-\omega(s)|^2}{t-s} + E_{[s,t]}(\gamma)\big]}.
\]
Multiplying this equation by $\frac{1}{Z_t}\,\mathcal{D}_{[0,s]}\omega\,d\omega(t)$ and integrating the result then implies,
\[
\int_{W_t} F\big(\omega|_{[0,s]},\omega(t)\big)\,d\mu_t(\omega)
= \frac{1}{Z_t}\int_{W_t} F\big(\omega|_{[0,s]},\omega(t)\big)\, e^{-\frac12 E_t(\omega)}\,\mathcal{D}_t\omega
= \frac{1}{Z_t}\int_{W_t} F\big(\omega|_{[0,s]},\omega(t)\big)\, e^{-\frac12[E_s(\omega)+E_{[s,t]}(\omega)]}\,\mathcal{D}_t\omega,
\]
and now fixing $\omega|_{[0,s]}$ and $\omega(t)$ and then doing the integral over $\omega|_{(s,t)}$ implies,
\[
\int F\big(\omega|_{[0,s]},\omega(t)\big)\, e^{-\frac12[E_s(\omega)+E_{[s,t]}(\omega)]}\,\mathcal{D}_{(s,t)}\omega
= C(s,t)\, e^{-\frac12 E_s(\omega)}\, F\big(\omega|_{[0,s]},\omega(t)\big)\, e^{-\frac12\frac{|\omega(t)-\omega(s)|^2}{t-s}}.
\]
Therefore,
\[
\int_{W_t} F\big(\omega|_{[0,s]},\omega(t)\big)\,d\mu_t(\omega)
= \frac{C(s,t)}{Z_t}\int_{W_s}\int_{\mathbb{R}^d} F(\omega,y)\, e^{-\frac12\frac{|y-\omega(s)|^2}{t-s}}\,dy\,d\mu_s(\omega).
\]
Taking $F\equiv 1$ in this equation then implies,
\[
1 = \int_{W_s}\int_{\mathbb{R}^d}\frac{C(s,t)}{Z_t}\, e^{-\frac12\frac{|y-\omega(s)|^2}{t-s}}\,dy\,d\mu_s(\omega)
= \int_{W_s}\Big[\frac{C(s,t)}{Z_t}\,(2\pi(t-s))^{d/2}\Big]\,d\mu_s(\omega)
= \frac{C(s,t)}{Z_t}\,(2\pi(t-s))^{d/2}.
\]
Thus the heuristic expression in Eq. (1.1) leads to the following Markov property for $\mu_t$, namely.

Proposition 1.1 (Heuristic). Suppose that $F:W_s\times\mathbb{R}^d\to\mathbb{R}$ is a reasonable function, then for any $t\ge s$ we have
\[
\int_{W_t} F\big(\omega|_{[0,s]},\omega(t)\big)\,d\mu_t(\omega)
= \int_{W_s}\int_{\mathbb{R}^d} F(\omega,y)\, p_{t-s}(\omega(s),y)\,dy\,d\mu_s(\omega), \tag{1.3}
\]
where
\[
p_s(x,y) := \Big(\frac{1}{2\pi s}\Big)^{d/2} e^{-\frac12\frac{|y-x|^2}{s}}.
\]

Corollary 1.2 (Heuristic). If $0=s_0<s_1<s_2<\dots<s_n=T$ and $f:\big(\mathbb{R}^d\big)^n\to\mathbb{R}$ is a reasonable function, then
\[
\int_{W_T} f(\omega(s_1),\dots,\omega(s_n))\,d\mu_T(\omega)
= \int_{(\mathbb{R}^d)^n} f(y_1,\dots,y_n)\prod_{i=1}^n p_{s_i-s_{i-1}}(y_{i-1},y_i)\,dy_i, \tag{1.4}
\]
where by convention, $y_0=0$.

Theorem 1.3 (Wiener 1923). For all $T>0$ there exists a unique probability measure, $\mu_T$, on $W_T$, such that Eq. (1.4) holds for all $n$ and all bounded measurable $f:\big(\mathbb{R}^d\big)^n\to\mathbb{R}$.

Definition 1.4. Let $B_t(\omega):=\omega(t)$. Then $\{B_t\}_{0\le t\le T}$ as a process on $(W_T,\mu_T)$ is called Brownian motion. We further write $\mathbb{E}f$ for $\int_{W_T} f(\omega)\,d\mu_T(\omega)$.

The following lemma is useful for computational purposes involving Brownian motion and follows readily from Eq. (1.4).

Lemma 1.5. Suppose that $0=s_0<s_1<s_2<\dots<s_n=t$ and $f_i:\mathbb{R}^d\to\mathbb{R}$ are reasonable functions, then
\[
\mathbb{E}\Big[\prod_{i=1}^n f_i\big(B_{s_i}-B_{s_{i-1}}\big)\Big] = \prod_{i=1}^n \mathbb{E}\big[f_i\big(B_{s_i}-B_{s_{i-1}}\big)\big], \tag{1.5}
\]
\[
\mathbb{E}[f(B_t-B_s)] = \mathbb{E}[f(B_{t-s})], \tag{1.6}
\]
and
\[
\mathbb{E}[f(B_t)] = \mathbb{E}\big[f\big(\sqrt{t}\,B_1\big)\big]. \tag{1.7}
\]

As an example let us observe that
\[
\mathbb{E}B_t = \int y\,p_t(y)\,dy = 0, \qquad
\mathbb{E}B_t^2 = t\,\mathbb{E}B_1^2 = t\int y^2 p_1(y)\,dy = t\cdot 1,
\]
and for $s<t$,
\[
\mathbb{E}[B_tB_s] = \mathbb{E}[(B_t-B_s)B_s] + \mathbb{E}B_s^2 = \mathbb{E}(B_t-B_s)\cdot\mathbb{E}B_s + s = s,
\]
and
\[
\mathbb{E}[|B_t-B_s|^p] = |t-s|^{p/2}\,\mathbb{E}[|B_1|^p] = C_p\,|t-s|^{p/2}. \tag{1.8}
\]

1.1 Construction and basic properties of Brownian motion

In this section we sketch one method of constructing Wiener measure or, equivalently, Brownian motion. We begin with the existence of a measure $\bar\mu_T$ on $\bar W_T := \prod_{0\le s\le T}\bar{\mathbb{R}}$ which satisfies Eq. (1.4), where $\bar{\mathbb{R}}$ is a compactification of $\mathbb{R}$, for example the one point compactification, so that $\bar{\mathbb{R}}\cong S^1$.

Theorem 1.6 (Kolmogorov's Existence Theorem). There exists a probability measure, $\bar\mu_T$, on $\bar W_T$ such that Eq. (1.4) holds.

Proof. For a function $F(\omega):= f(\omega(s_1),\dots,\omega(s_n))$ where $f\in C\big(\bar{\mathbb{R}}^n,\mathbb{R}\big)$, define
\[
I(F) := \int_{\mathbb{R}^n} f(y_1,\dots,y_n)\prod_{i=1}^n p_{s_i-s_{i-1}}(y_{i-1},y_i)\,dy_i.
\]
Using the semi-group property,
\[
\int_{\mathbb{R}^d} p_t(x,y)\,p_s(y,z)\,dy = p_{s+t}(x,z),
\]
along with the fact that $\int_{\mathbb{R}^d} p_t(x,y)\,dy = 1$ for all $t>0$, one shows that $I(F)$ is well defined independently of how we represent $F$ as a finitely based continuous function.

By Tychonoff's Theorem $\bar W_T$ is a compact Hausdorff space. By the Stone--Weierstrass Theorem, the finitely based continuous functions are dense inside of $C(\bar W_T)$. Since $|I(F)|\le\|F\|_\infty$ for all finitely based continuous functions, we may extend $I$ uniquely to a positive continuous linear functional on $C(\bar W_T)$. An application of the Riesz--Markov theorem now gives the existence of the desired measure, $\bar\mu_T$.

Theorem 1.7 (Kolmogorov's Continuity Criteria). Suppose that $(\Omega,\mathcal{F},P)$ is a probability space and $\tilde X_t:\Omega\to S$ is a process for $t\in[0,T]$ where $(S,\rho)$ is a complete metric space. Assume there exist positive constants, $\gamma$, $\varepsilon$, and $C$, such that
\[
\mathbb{E}\big[\rho\big(\tilde X_t,\tilde X_s\big)^\gamma\big] \le C\,|t-s|^{1+\varepsilon} \tag{1.9}
\]
for all $s,t\in[0,T]$. Then for any $\alpha\in(0,\varepsilon/\gamma)$ there is a modification, $X$, of $\tilde X$ (i.e. $P\big(X_t=\tilde X_t\big)=1$ for all $t$) which is $\alpha$-Hölder continuous. Moreover, there is a random variable $K_\alpha$ such that
\[
\rho(X_t,X_s) \le K_\alpha\,|t-s|^\alpha \quad\text{for all } s,t\in[0,T] \tag{1.10}
\]
and $\mathbb{E}K_\alpha^p<\infty$ for all $p<\infty$.

Corollary 1.8. Let $\tilde B_t:\bar W_T\to\bar{\mathbb{R}}$ be the projection map, $\tilde B_t(\omega)=\omega(t)$. Then there is a modification, $\{B_t\}$, of $\{\tilde B_t\}$ for which $t\mapsto B_t$ is $\alpha$-Hölder continuous $\bar\mu_T$-almost surely for any $\alpha\in(0,1/2)$.

Proof. Applying Theorem 1.7 with $\gamma:=p$ and $\varepsilon:=p/2-1$ for any $p\in(2,\infty)$ shows there is a modification $\{B_t\}_{t\ge0}$ of $\{\tilde B_t\}$ which is almost surely $\alpha$-Hölder continuous for any
\[
\alpha\in(0,\varepsilon/\gamma) = \Big(0,\frac{p/2-1}{p}\Big) = (0,\,1/2-1/p).
\]
Letting $p\uparrow\infty$ shows that $\{B_t\}_{t\ge0}$ is almost surely $\alpha$-Hölder continuous for all $\alpha<1/2$.

We will see shortly that these Brownian paths are very rough. Before we do this we will pause to develop a quantitative measurement of roughness of a continuous path.
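The following is a small numerical sketch, not part of the original notes, illustrating the construction just described: a Brownian path is sampled through its independent Gaussian increments (Lemma 1.5 / Corollary 1.2), and the moment identities $\mathbb{E}B_t^2=t$ and $\mathbb{E}|B_t-B_s|^p=C_p|t-s|^{p/2}$ of Eq. (1.8) are checked by Monte Carlo. All names and parameter values are illustrative choices.

```python
import numpy as np

def sample_brownian_paths(n_paths, times, rng):
    """Sample Brownian paths at the given times via independent Gaussian increments."""
    dt = np.diff(times, prepend=0.0)                       # increment lengths, starting from t = 0
    dB = rng.standard_normal((n_paths, len(times))) * np.sqrt(dt)
    return np.cumsum(dB, axis=1)                           # B_{t_i} = sum of the first i increments

rng = np.random.default_rng(0)
T, n = 1.0, 400
times = np.linspace(0.0, T, n + 1)[1:]                     # grid 0 < t_1 < ... < t_n = T
B = sample_brownian_paths(20_000, times, rng)

s_idx, t_idx, p = 99, 399, 4.0                             # two grid times and a moment order
s, t = times[s_idx], times[t_idx]
print("E[B_t^2] ~", B[:, t_idx].var(), " vs t =", t)       # should be close to t
incr = B[:, t_idx] - B[:, s_idx]
Cp = (np.abs(rng.standard_normal(200_000)) ** p).mean()    # C_p = E|B_1|^p = E|N(0,1)|^p
print("E|B_t-B_s|^p ~", (np.abs(incr) ** p).mean(),
      " vs C_p|t-s|^{p/2} =", Cp * (t - s) ** (p / 2))
```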

2 p-Variations and Controls

Let $(E,d)$ be a metric space which will usually be assumed to be complete.

Definition 2.1. Let $0\le a<b<\infty$. Given a partition $\Pi:=\{a=t_0<t_1<\dots<t_n=b\}$ of $[a,b]$ and a function $Z\in C([a,b],E)$, let $(t_i)_- := t_{i-1}$ and $(t_i)_+ := t_{i+1}$, with the convention that $t_{-1}:=t_0=a$ and $t_{n+1}:=t_n=b$. Furthermore for $1\le p<\infty$ let
\[
V_p(Z:\Pi) := \Big(\sum_{j=1}^n d^p\big(Z_{t_j},Z_{t_{j-1}}\big)\Big)^{1/p}
= \Big(\sum_{t\in\Pi} d^p\big(Z_t,Z_{t_-}\big)\Big)^{1/p}. \tag{2.1}
\]
Furthermore, let $\mathcal{P}(a,b)$ denote the collection of partitions of $[a,b]$. Also let $\mathrm{mesh}(\Pi) = |\Pi| := \max_{t\in\Pi}|t-t_-|$ be the mesh of the partition, $\Pi$.

Definition 2.2. Let $0\le a<b<\infty$ and $Z\in C([a,b],E)$. For $1\le p<\infty$, the $p$-variation of $Z$ is
\[
V_p(Z) := \sup_{\Pi\in\mathcal{P}(a,b)} V_p(Z:\Pi)
= \sup_{\Pi\in\mathcal{P}(a,b)}\Big(\sum_{j=1}^n d^p\big(Z_{t_j},Z_{t_{j-1}}\big)\Big)^{1/p}. \tag{2.2}
\]
Moreover if $Z\in C([0,T],E)$ and $0\le a\le b\le T$, we let
\[
\omega_{Z,p}(a,b) := V_p^p\big(Z|_{[a,b]}\big) = \sup_{\Pi\in\mathcal{P}(a,b)}\sum_{j=1}^n d^p\big(Z_{t_j},Z_{t_{j-1}}\big). \tag{2.3}
\]

Remark 2.3. We can define $V_p(Z)$ for $p\in(0,1)$ as well but this is not so interesting. Indeed if $0\le s\le T$ and $\Pi\in\mathcal{P}(0,T)$ is a partition such that $s\in\Pi$, then
\begin{align*}
d(Z(s),Z(0)) &\le \sum_{t\in\Pi} d(Z(t),Z(t_-)) = \sum_{t\in\Pi} d^{1-p}(Z(t),Z(t_-))\,d^p(Z(t),Z(t_-)) \\
&\le \max_{t\in\Pi} d^{1-p}(Z(t),Z(t_-))\cdot V_p^p(Z:\Pi)
\le \max_{t\in\Pi} d^{1-p}(Z(t),Z(t_-))\cdot V_p^p(Z).
\end{align*}
Using the uniform continuity of $Z$ (or of $d(Z(s),Z(t))$ if you wish) we know that $\lim_{|\Pi|\to0}\max_{t\in\Pi} d^{1-p}(Z(t),Z(t_-)) = 0$ and hence that
\[
d(Z(s),Z(0)) \le \lim_{|\Pi|\to0}\max_{t\in\Pi} d^{1-p}(Z(t),Z(t_-))\,V_p^p(Z) = 0.
\]
Thus we may conclude $Z(s)=Z(0)$, i.e. $Z$ must be constant.

Lemma 2.4. Let $\{a_i>0\}_{i=1}^n$. Then
\[
\|a\|_p := \Big(\sum_{i=1}^n a_i^p\Big)^{1/p}
\]
is decreasing in $p$ and
\[
\varphi(p) := \ln\Big(\sum_{i=1}^n a_i^p\Big)
\]
is convex in $p$.

Proof. Let $f(i)=a_i$ and $\mu(\{i\})=1$ be counting measure so that
\[
\sum_{i=1}^n a_i^p = \mu(f^p) \quad\text{and}\quad \varphi(p) = \ln\mu(f^p).
\]
Using $\frac{d}{dp}f^p = f^p\ln f$, it follows that
\[
\varphi'(p) = \frac{\mu(f^p\ln f)}{\mu(f^p)}
\quad\text{and}\quad
\varphi''(p) = \frac{\mu\big(f^p\ln^2 f\big)}{\mu(f^p)} - \Big(\frac{\mu(f^p\ln f)}{\mu(f^p)}\Big)^2.
\]
Thus if we let $\mathbb{E}_pX := \mu(f^pX)/\mu(f^p)$, we have shown $\varphi'(p)=\mathbb{E}_p[\ln f]$ and
\[
\varphi''(p) = \mathbb{E}_p\big[\ln^2 f\big] - \big(\mathbb{E}_p[\ln f]\big)^2 = \mathrm{Var}_p(\ln f)\ge 0,
\]
which shows that $\varphi$ is convex in $p$.

Now let us show that $\|f\|_p$ is decreasing in $p$. To this end we compute,
\begin{align*}
\frac{d}{dp}\ln\|f\|_p &= \frac{d}{dp}\Big[\frac1p\varphi(p)\Big] = \frac1p\varphi'(p) - \frac1{p^2}\varphi(p) \\
&= \frac1{p^2\mu(f^p)}\big[p\,\mu(f^p\ln f) - \mu(f^p)\ln\mu(f^p)\big]
= \frac1{p^2\mu(f^p)}\big[\mu(f^p\ln f^p) - \mu(f^p)\ln\mu(f^p)\big] \\
&= \frac1{p^2\mu(f^p)}\,\mu\Big(f^p\ln\frac{f^p}{\mu(f^p)}\Big).
\end{align*}
Up to now our computation has been fairly general. The point where $\mu$ being counting measure comes in is that in this case $\mu(f^p)\ge f^p$ everywhere and therefore $\ln\frac{f^p}{\mu(f^p)}\le 0$, and therefore $\frac{d}{dp}\ln\|f\|_p\le 0$ as desired.

Alternative proof that $\|f\|_p$ is decreasing in $p$. If we let $q=p+r$, then
\[
\|a\|_q^q = \sum_{j=1}^n a_j^{p+r} \le \Big(\max_j a_j\Big)^r\sum_{j=1}^n a_j^p \le \|a\|_p^r\,\|a\|_p^p = \|a\|_p^q,
\]
wherein we have used,
\[
\max_j a_j = \Big(\max_j a_j^p\Big)^{1/p} \le \Big(\sum_{j=1}^n a_j^p\Big)^{1/p} = \|a\|_p.
\]

Remark 2.5. It is not too hard to see that the convexity of $\varphi$ is equivalent to the interpolation inequality,
\[
\|f\|_{p_s} \le \|f\|_{p_0}^{1-s}\,\|f\|_{p_1}^{s},
\]
where $0\le s\le 1$, $1\le p_0,p_1\le\infty$, and
\[
\frac{1}{p_s} := (1-s)\frac{1}{p_0} + s\,\frac{1}{p_1}.
\]
This interpolation inequality may be proved via Hölder's inequality.

Corollary 2.6. The function $V_p(Z)$ is a decreasing function of $p$ and $\ln V_p^p(Z)$ is a convex function of $p$ where they are finite. Moreover, for all $p_0\ge1$,
\[
\lim_{p\downarrow p_0} V_p(Z) = V_{p_0}(Z), \tag{2.4}
\]
and $p\mapsto V_p(Z)$ is continuous on the set of $p$'s where $V_p(Z)$ is finite.

Proof. Given Lemma 2.4, it suffices to prove Eq. (2.4) and the continuity assertion on $p\mapsto V_p(Z)$. Since $p\mapsto V_p(Z)$ is a decreasing function, we know that $\lim_{p\downarrow p_0}V_p(Z)$ and $\lim_{p\uparrow p_0}V_p(Z)$ always exist and also that $\lim_{p\downarrow p_0}V_p(Z) = \sup_{p>p_0}\sup_\Pi V_p(Z:\Pi)$. Therefore,
\[
\lim_{p\downarrow p_0} V_p(Z) = \sup_{p>p_0}\sup_\Pi V_p(Z:\Pi) = \sup_\Pi\sup_{p>p_0} V_p(Z:\Pi) = \sup_\Pi V_{p_0}(Z:\Pi) = V_{p_0}(Z),
\]
which proves Eq. (2.4). The continuity of $V_p(Z) = \exp\big(\frac1p\ln V_p^p(Z)\big)$ follows directly from the fact that $\ln V_p^p(Z)$ is convex in $p$ and that convex functions are continuous (where finite).

Here is a proof for this case. Let $\psi(p):=\ln V_p^p(Z)$, let $1\le p_0<p_1$ be such that $V_{p_0}(Z)<\infty$, and set $p_s:=(1-s)p_0+sp_1$; then
\[
\psi(p_s) \le (1-s)\psi(p_0) + s\,\psi(p_1).
\]
Letting $s\uparrow1$ then implies $p_s\uparrow p_1$ and $\psi(p_1-)\le\psi(p_1)$, i.e. $V_{p_1-}\le V_{p_1}\le V_{p_1-}$. Therefore $V_{p_1-}=V_{p_1}$, and this along with Eq. (2.4) proves the continuity of $p\mapsto V_p(Z)$.

2.1 Computing V_p(x)

How do we actually compute $V_p(x):=V_p(x;0,T)$ for a given path $x\in C([0,T],\mathbb{R})$, even a very simple one? Suppose $x$ is piecewise linear, with corners at the points $0=s_0<s_1<\dots<s_m=T$. Intuitively it would seem that the $p$-variation should be given by choosing the corners to be the partition points. That is, if $S=\{s_0,\dots,s_m\}$ is the partition of corner points, we might think that $V_p(x)=V_p(x;S)$. Well, first we would have to leave out any corner which is not a local extremum (because of Lemma 2.8 below). But even then, this is not generally true, as is seen in Example 2.9 below.

Lemma 2.7. For all $a,b\ge0$ and $p\ge1$,
\[
(a+b)^p \ge a^p + b^p \tag{2.5}
\]
and the inequality is strict if $a,b>0$ and $p>1$.

Proof. Observe that $(a+b)^p\ge a^p+b^p$ happens iff
\[
\Big(\frac{a}{a+b}\Big)^p + \Big(\frac{b}{a+b}\Big)^p \le 1,
\]
which obviously holds since
\[
\Big(\frac{a}{a+b}\Big)^p + \Big(\frac{b}{a+b}\Big)^p \le \frac{a}{a+b} + \frac{b}{a+b} = 1.
\]
Moreover the latter inequality is strict if $a,b>0$ and $p>1$.
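Before turning to Lemma 2.8, here is a small computational sketch of Definition 2.2 for a real-valued, piecewise-linear path. It is an illustration only (not part of the original notes; all function names are made up): $V_p(x:\Pi)$ is evaluated on a partition, and $V_p(x)$ is approximated by brute force over all sub-partitions of the sample points. With the values below it reproduces the phenomenon of Example 2.9, where the corner-point partition fails to be optimal.

```python
from itertools import combinations

def Vp_on_partition(x, partition, p):
    """V_p(x : pi) = (sum over consecutive partition points of |increment|^p)^(1/p)."""
    return sum(abs(x[partition[j]] - x[partition[j - 1]]) ** p
               for j in range(1, len(partition))) ** (1.0 / p)

def Vp_bruteforce(x, p):
    """sup of V_p(x : pi) over all partitions built from the sample indices 0..len(x)-1."""
    n = len(x) - 1
    best = Vp_on_partition(x, [0, n], p)
    for k in range(1, n):
        for interior in combinations(range(1, n), k):
            best = max(best, Vp_on_partition(x, [0, *interior, n], p))
    return best

# a piecewise-linear path in the spirit of Example 2.9: values at its corner points
eps = 0.05
x = [0.0, 0.5 + eps, 0.5 - eps, 1.0]     # corners s_0 < s_1 < s_2 < s_3
p = 2.0
print("V_p(x : corners) =", Vp_on_partition(x, [0, 1, 2, 3], p))
print("V_p(x : {0,T})   =", Vp_on_partition(x, [0, 3], p))
print("V_p(x) (brute)   =", Vp_bruteforce(x, p))
```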

Lemma 2.8. Let $x$ be a path, and $D=\{t_0,\dots,t_n\}$ be a partition. Suppose $x$ is monotone increasing (decreasing) on $[t_{i-1},t_{i+1}]$. Then if $D'=D\setminus\{t_i\}$, $V_p(x:D')\ge V_p(x:D)$. If $x$ is strictly increasing and $p>1$, the inequality is strict.

Proof. From Eq. (2.5) it follows that
\begin{align*}
V_p^p(x:D') - V_p^p(x:D)
&= (x(t_{i+1})-x(t_{i-1}))^p - (x(t_{i+1})-x(t_i))^p - (x(t_i)-x(t_{i-1}))^p \\
&= \big(\Delta_{t_i}x + \Delta_{t_{i+1}}x\big)^p - \big(\Delta_{t_i}x\big)^p - \big(\Delta_{t_{i+1}}x\big)^p \ge 0,
\end{align*}
and the inequality is strict if $\Delta_{t_i}x>0$, $\Delta_{t_{i+1}}x>0$ and $p>1$.

In other words, on any monotone increasing segment, we should not include any intermediate points, because they can only hurt us.

Example 2.9. Consider a path like the following (figure omitted): starting from $0$ it rises to $\frac12+\epsilon$, falls by $2\epsilon$, and then rises to $1$. If we partition $[0,T]$ at the corner points, then
\[
V_p^p(x:S) = \big(\tfrac12+\epsilon\big)^p + (2\epsilon)^p + \big(\tfrac12+\epsilon\big)^p \approx 2\big(\tfrac12\big)^p < 1
\]
by taking $\epsilon$ small. On the other hand, taking the trivial partition $D=\{0,T\}$, $V_p(x:D)=1$, so $V_p(x:S)<1\le V_p(x)$, and in this case using all of the local minima and maxima does not maximize the $p$-variation.

The clean proof of the following theorem is due to Thomas Laetsch.

Theorem 2.10. Suppose $x:[0,T]\to\mathbb{R}$ has only finitely many local extrema in $(0,T)$, located at $\{s_1<\dots<s_{n-1}\}$. Then
\[
V_p(x) = \sup\{V_p(x:D) : \{0,T\}\subset D\subset S\},
\]
where $S=\{0=s_0<s_1<\dots<s_n=T\}$.

Proof. Let $D=\{0=t_0<t_1<\dots<t_r=T\}\in\mathcal{P}(0,T)$ be an arbitrary partition of $[0,T]$. We are going to prove by induction that there is a partition $\Gamma\subset S$ such that $V_p(x:D)\le V_p(x:\Gamma)$. The proof will be by induction on $n:=\#(D\setminus S)$. If $n=0$ there is nothing to prove. So let us now suppose that the theorem holds at some level $n\ge0$ and suppose that $\#(D\setminus S)=n+1$. Let $1\le k<r$ be chosen so that $t_k\in D\setminus S$. If $x(t_k)$ is between $x(t_{k-1})$ and $x(t_{k+1})$ (i.e. $(x(t_{k-1}),x(t_k),x(t_{k+1}))$ is a monotonic triple), then according to Lemma 2.8 we will have $V_p(x:D)\le V_p(x:D\setminus\{t_k\})$ and since $\#[(D\setminus\{t_k\})\setminus S]=n$, the induction hypothesis implies there exists a partition $\Gamma\subset S$ such that
\[
V_p(x:D) \le V_p(x:D\setminus\{t_k\}) \le V_p(x:\Gamma).
\]
Hence we may now assume that either $x(t_k)<\min(x(t_{k-1}),x(t_{k+1}))$ or $x(t_k)>\max(x(t_{k-1}),x(t_{k+1}))$. In the first case we let $t_k^*\in(t_{k-1},t_{k+1})$ be a point where $x|_{[t_{k-1},t_{k+1}]}$ has a minimum and in the second let $t_k^*\in(t_{k-1},t_{k+1})$ be a point where $x|_{[t_{k-1},t_{k+1}]}$ has a maximum. In either case if $D^*:=(D\setminus\{t_k\})\cup\{t_k^*\}$ we will have $V_p(x:D)\le V_p(x:D^*)$ and $\#(D^*\setminus S)=n$. So again the induction hypothesis implies there exists a partition $\Gamma\subset S$ such that
\[
V_p(x:D) \le V_p(x:D^*) \le V_p(x:\Gamma).
\]
From these considerations it follows that
\[
V_p(x:D) \le \sup\{V_p(x:\Gamma) : \Gamma\in\mathcal{P}(0,T)\text{ s.t. }\Gamma\subset S\}
\]
and therefore
\[
V_p(x) = \sup\{V_p(x:D) : D\in\mathcal{P}(0,T)\}
\le \sup\{V_p(x:\Gamma) : \Gamma\in\mathcal{P}(0,T)\text{ s.t. }\Gamma\subset S\} \le V_p(x).
\]

Let us now suppose that $x$ is (say) monotone increasing (not strictly) on $[s_0,s_1]$, monotone decreasing on $[s_1,s_2]$, and so on. Thus $s_0,s_2,\dots$ are local minima, and $s_1,s_3,\dots$ are local maxima. (If you want the reverse, just replace $x$ with $-x$, which of course has the same $p$-variation.)

Definition 2.11. Say that $s\in[0,T]$ is a forward maximum for $x$ if $x(s)\ge x(t)$ for all $t\ge s$. Similarly, $s$ is a forward minimum if $x(s)\le x(t)$ for all $t\ge s$.

Definition 2.12. Suppose $x$ is piecewise monotone, as above, with extrema $\{s_0,s_1,\dots\}$. Suppose further that $s_2,s_4,\dots$ are not only local minima but also forward minima, and that $s_1,s_3,\dots$ are both local and forward maxima. Then we will say that $x$ is jog-free.

Note that $s_0=0$ does not have to be a forward extremum. This is in order to admit a path with $x(0)=0$ which can change signs.

Remark 2.13. Here is another way to state the jog-free condition. Let $x$ be piecewise monotone with extrema $s_0,s_1,\dots$. Let $\delta_i=|x(s_{i+1})-x(s_i)|$. Then $x$ is jog-free iff $\delta_1\ge\delta_2\ge\dots$. The idea is that the oscillations are shrinking. (Notice that we don't need $\delta_0\ge\delta_1$; this is because $s_0=0$ is not required to be a forward extremum.)

Remark 2.14. It is also okay if $s_1,s_2,\dots$ are backwards extrema; this corresponds to the oscillations getting larger. Just reverse time, replacing $x(t)$ by $x(T-t)$, which again doesn't change the $p$-variation. Note that if the $\delta_i$ are as above, this corresponds to having $\delta_0\le\delta_1\le\delta_2\le\dots$ (note that $\delta_0$ is included now, but $\delta_{m-1}$ would not be). This case seems less useful, however.

Lemma 2.15. Let $x$ be jog-free with extrema $s_0,\dots,s_m$. Let $D=\{t_0,\dots,t_n\}$ be any partition not containing all the $s_j$. Then there is some $s_j\notin D$ such that if $D'=D\cup\{s_j\}$, then $V_p(x:D')\ge V_p(x:D)$.

Proof. Let $s_j$ be the first extremum not contained in $D$ (note $s_0=0\in D$ already, so $j$ is at least $1$ and $s_j$ is also a forward extremum). Let $t_i$ be the last element of $D$ less than $s_j$. Note that $s_{j-1}\le t_i<s_j<t_{i+1}$.

Now $x$ is monotone on $[s_{j-1},s_j]$; say WLOG it is monotone increasing, so that $s_j$ is a local maximum and also a forward maximum. Since $t_i\in[s_{j-1},s_j]$, where $x$ is monotone increasing, $x(s_j)\ge x(t_i)$. And since $s_j$ is a forward maximum, $x(s_j)\ge x(t_{i+1})$. Therefore we have
\[
x(s_j)-x(t_i) \ge x(t_{i+1})-x(t_i), \qquad
x(s_j)-x(t_{i+1}) \ge x(t_i)-x(t_{i+1}).
\]
One of the quantities on the right is equal to $|x(t_{i+1})-x(t_i)|$, and so it follows that
\[
|x(s_j)-x(t_i)|^p + |x(s_j)-x(t_{i+1})|^p \ge |x(t_{i+1})-x(t_i)|^p,
\]
since one of the terms on the left is already at least the term on the right. This shows that $V_p^p(x:D')\ge V_p^p(x:D)$.

In other words, we should definitely include the extreme points, because they can only help. Putting these together yields the desired result.

Proposition 2.16. If $x$ is jog-free with extrema $S=\{s_0,\dots,s_m\}$, then $V_p(x)=V_p(x:S)=\big(\sum_i\delta_i^p\big)^{1/p}$.

Proof. Fix $\epsilon>0$, and let $D$ be a partition such that $V_p(x:D)\ge V_p(x)-\epsilon$. By repeatedly applying Lemma 2.15, we can add the points of $S$ to $D$ one by one (in some order), and only increase the $p$-variation. So $V_p(x:D\cup S)\ge V_p(x:D)$. Now, if $t\in D\setminus S$, it is inside some interval $[s_j,s_{j+1}]$ on which $x$ is monotone, and so by Lemma 2.8 $t$ can be removed from $D\cup S$ to increase the $p$-variation. Removing all such points one by one (in any order), we find that $V_p(x:S)\ge V_p(x:D\cup S)$. Thus we have $V_p(x:S)\ge V_p(x:D)\ge V_p(x)-\epsilon$; since $\epsilon$ was arbitrary we are done.

Notice that we only considered the case of jog-free paths with only finitely many extrema. Of course, in order to get infinite $p$-variation for any $p$ we would need infinitely many extrema. Let's just check that the analogous result holds there.
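Before stating the infinite-oscillation analogue (Proposition 2.17 below), here is a quick numerical check of Proposition 2.16. It is an illustration only, not part of the original notes, and all names are invented: for a jog-free piecewise-linear path with non-increasing oscillation sizes, the brute-force $p$-variation over the corner points agrees with $\big(\sum_j\delta_j^p\big)^{1/p}$.

```python
from itertools import combinations

def Vp_pow(x, idx, p):
    """V_p(x : pi)^p summed over consecutive points of the index partition idx."""
    return sum(abs(x[idx[j]] - x[idx[j - 1]]) ** p for j in range(1, len(idx)))

def Vp_pow_bruteforce(x, p):
    """max of V_p(x : pi)^p over all partitions built from the sample indices."""
    n = len(x) - 1
    best = Vp_pow(x, [0, n], p)
    for k in range(1, n):
        for interior in combinations(range(1, n), k):
            best = max(best, Vp_pow(x, [0, *interior, n], p))
    return best

# a jog-free path: alternating up/down moves with non-increasing oscillation sizes
deltas = [1.0, 0.7, 0.5, 0.3, 0.2]
x, sign = [0.0], 1.0
for d in deltas:
    x.append(x[-1] + sign * d)
    sign = -sign

p = 2.5
print("brute-force V_p^p =", Vp_pow_bruteforce(x, p))
print("sum_j delta_j^p   =", sum(d ** p for d in deltas))
```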

Proposition 2.17. Suppose we have a sequence $s_0,s_1,\dots$ increasing to $T$, where $x$ is alternately monotone increasing and decreasing on the intervals $[s_j,s_{j+1}]$. Suppose also that the $s_j$ are forward extrema for $x$. Letting $\delta_j=|x(s_{j+1})-x(s_j)|$ as before, we have
\[
V_p(x) = \Big(\sum_{j=0}^\infty \delta_j^p\Big)^{1/p}.
\]
Here is an example (figure omitted). Actually, the extreme points $s_j$ can converge to some earlier time than $T$, but $x$ will have to be constant after that time.

Proof. For any $m$, we have $\sum_{j=0}^m\delta_j^p = V_p^p(x:D)$ for $D=\{s_0,\dots,s_{m+1}\}$, so $V_p(x)^p\ge\sum_{j=0}^m\delta_j^p$. Passing to the limit, $V_p(x)^p\ge\sum_{j=0}^\infty\delta_j^p$.

For the reverse inequality, let $D=\{0=t_0,t_1,\dots,t_n=T\}$ be a partition with $V_p(x:D)\ge V_p(x)-\epsilon$. Choose $m$ so large that $s_m>t_{n-1}$. Let $S=\{s_0,\dots,s_m,T\}$; then by the same argument as in Proposition 2.16 we find that $V_p(x:S)\ge V_p(x:D)$. (Previously, the only way we used the assumption that $S$ contained all extrema $s_j$ was in order to have every $t_i\in D\setminus S$ contained in some monotone interval $[s_j,s_{j+1}]$. That is still the case here; we just take enough $s_j$'s to ensure that we can surround each $t_i$. We do not need to surround $t_n=T$, since it is already in $S$.)

But $V_p^p(x:S) = \sum_{j=0}^{m-1}\delta_j^p + |x(T)-x(s_m)|^p \le \sum_{j=0}^\infty\delta_j^p$, and so we have that
\[
\Big(\sum_{j=0}^\infty\delta_j^p\Big)^{1/p} \ge V_p(x:D) \ge V_p(x)-\epsilon.
\]
$\epsilon$ was arbitrary and we are done.

2.2 Brownian Motion in the Rough

Corollary 2.18. For all $p>2$ and $T<\infty$, $V_p\big(B|_{[0,T]}\big)<\infty$ a.s. (We will see later that $V_p\big(B|_{[0,T]}\big)=\infty$ a.s. for all $p\le 2$.)

Proof. By Corollary 1.8, there exists $K_p<\infty$ a.s. such that
\[
|B_t-B_s| \le K_p\,|t-s|^{1/p} \quad\text{for all } 0\le s,t\le T. \tag{2.6}
\]
Thus we have
\[
\sum_i |\Delta_iB|^p \le \sum_i\big(K_p|t_i-t_{i-1}|^{1/p}\big)^p = K_p^p\sum_i|t_i-t_{i-1}| = K_p^pT,
\]
and therefore $V_p^p\big(B|_{[0,T]}\big)\le K_p^pT<\infty$ a.s.

Proposition 2.19 (Quadratic Variation). Let $\{\Pi_m\}_{m=1}^\infty$ be a sequence of partitions of $[0,T]$ such that $\lim_{m\to\infty}|\Pi_m|=0$ and define $Q_m:=V_2^2(B:\Pi_m)$. Then
\[
\lim_{m\to\infty}\mathbb{E}\big[(Q_m-T)^2\big]=0 \tag{2.7}
\]
and if $\sum_{m=1}^\infty\mathrm{mesh}(\Pi_m)<\infty$ then $\lim_{m\to\infty}Q_m=T$ a.s. This result is often abbreviated by writing $dB_t^2=dt$.

Proof. Let $N$ be an $N(0,1)$ random variable, $\Delta t:=t-t_-$, $\Delta_tB:=B_t-B_{t_-}$, and observe that $\Delta_tB\overset{d}{=}\sqrt{\Delta t}\,N$. Thus we have,
\[
\mathbb{E}Q_m = \sum_{t\in\Pi_m}\mathbb{E}(\Delta_tB)^2 = \sum_{t\in\Pi_m}\Delta t = T.
\]
Let us define
\[
\mathrm{Cov}(A,B) := \mathbb{E}[AB]-\mathbb{E}A\cdot\mathbb{E}B \quad\text{and}\quad
\mathrm{Var}(A) := \mathrm{Cov}(A,A) = \mathbb{E}A^2-(\mathbb{E}A)^2 = \mathbb{E}(A-\mathbb{E}A)^2,
\]
and observe that
\[
\mathrm{Var}\Big(\sum_{i=1}^nA_i\Big) = \sum_{i=1}^n\mathrm{Var}(A_i) + \sum_{i\ne j}\mathrm{Cov}(A_i,A_j).
\]
As the increments $\{\Delta_tB\}_{t\in\Pi_m}$ are independent, so that $\mathrm{Cov}\big((\Delta_tB)^2,(\Delta_sB)^2\big)=0$ if $s\ne t$, we may use the above computation to conclude,
\begin{align*}
\mathrm{Var}(Q_m) &= \sum_{t\in\Pi_m}\mathrm{Var}\big((\Delta_tB)^2\big) = \sum_{t\in\Pi_m}\mathrm{Var}\big(\Delta t\,N^2\big)
= \mathrm{Var}(N^2)\sum_{t\in\Pi_m}(\Delta t)^2 \\
&\le \mathrm{Var}(N^2)\,|\Pi_m|\sum_{t\in\Pi_m}\Delta t
= T\,\mathrm{Var}(N^2)\,|\Pi_m| \to 0 \text{ as } m\to\infty.
\end{align*}
(By explicit Gaussian integral computations, $\mathrm{Var}(N^2)=\mathbb{E}N^4-\big(\mathbb{E}N^2\big)^2=3-1=2<\infty$.)

Thus we have shown
\[
\lim_{m\to\infty}\mathbb{E}\big[(Q_m-T)^2\big] = \lim_{m\to\infty}\mathbb{E}\big[(Q_m-\mathbb{E}Q_m)^2\big] = \lim_{m\to\infty}\mathrm{Var}(Q_m)=0.
\]
If $\sum_{m=1}^\infty|\Pi_m|<\infty$, then
\[
\mathbb{E}\Big[\sum_{m=1}^\infty(Q_m-T)^2\Big] = \sum_{m=1}^\infty\mathbb{E}(Q_m-T)^2 = \sum_{m=1}^\infty\mathrm{Var}(Q_m)
\le \mathrm{Var}(N^2)\,T\sum_{m=1}^\infty\mathrm{mesh}(\Pi_m) < \infty,
\]
from which it follows that $\sum_{m=1}^\infty(Q_m-T)^2<\infty$ a.s. In particular $(Q_m-T)\to0$ almost surely.
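A short simulation (an illustration, not part of the original notes) of Proposition 2.19: along refining dyadic partitions, the quadratic variation $Q_m=V_2^2(B:\Pi_m)$ of a sampled Brownian path concentrates at $T$, while the partition sums $\sum|\Delta_tB|^p$ shrink for $p=3$ and blow up for $p=1.5$, consistent with Corollary 2.18, Corollary 2.21 and Fact 2.22 below.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0
n_max = 2 ** 16
# one Brownian path sampled on the finest dyadic grid via Gaussian increments
dB = rng.standard_normal(n_max) * np.sqrt(T / n_max)
B = np.concatenate(([0.0], np.cumsum(dB)))

for level in (6, 10, 14, 16):
    step = 2 ** (16 - level)
    incr = np.diff(B[::step])                 # increments over the dyadic partition Pi_m
    print(f"level={level:2d}  Q_m = {np.sum(incr**2):.4f}"
          f"   sum|dB|^3 = {np.sum(np.abs(incr)**3):.4f}"
          f"   sum|dB|^1.5 = {np.sum(np.abs(incr)**1.5):.2f}")
```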

Proposition 2.20. If $p>q\ge1$ and $V_q(Z)<\infty$, then $\lim_{|\Pi|\to0}V_p(Z:\Pi)=0$.

Proof. Let $\Pi\in\mathcal{P}(0,T)$, then
\begin{align*}
V_p^p(Z:\Pi) &= \sum_{t\in\Pi}d^p(Z(t),Z(t_-)) = \sum_{t\in\Pi}d^{p-q}(Z(t),Z(t_-))\,d^q(Z(t),Z(t_-)) \\
&\le \max_{t\in\Pi}d^{p-q}(Z(t),Z(t_-))\sum_{t\in\Pi}d^q(Z(t),Z(t_-))
\le \max_{t\in\Pi}d^{p-q}(Z(t),Z(t_-))\,V_q^q(Z:\Pi) \\
&\le \max_{t\in\Pi}d^{p-q}(Z(t),Z(t_-))\,V_q^q(Z).
\end{align*}
Thus, by the uniform continuity of $Z|_{[0,T]}$ we have
\[
\limsup_{|\Pi|\to0}V_p^p(Z:\Pi) \le \limsup_{|\Pi|\to0}\max_{t\in\Pi}d^{p-q}(Z(t),Z(t_-))\cdot V_q^q(Z)=0.
\]

Corollary 2.21. If $p<2$, then $V_p\big(B|_{[0,T]}\big)=\infty$ a.s.

Proof. Choose partitions, $\{\Pi_m\}$, of $[0,T]$ such that $\lim_{m\to\infty}Q_m=T$ a.s. where $Q_m:=V_2^2(B:\Pi_m)$, and let $\Omega_0:=\{\lim_{m\to\infty}Q_m=T\}$ so that $P(\Omega_0)=1$. If $V_p\big(B|_{[0,T]}\big)(\omega)<\infty$ for some $\omega$, then by Proposition 2.20,
\[
\lim_{m\to\infty}Q_m(\omega) = \lim_{m\to\infty}V_2^2(B(\omega):\Pi_m)=0,
\]
and hence $\omega\notin\Omega_0$, i.e. $V_p\big(B|_{[0,T]}\big)(\omega)<\infty$ implies $\omega\in\Omega_0^c$. Therefore on $\Omega_0$ we have $V_p\big(B|_{[0,T]}\big)(\omega)=\infty$ and hence
\[
P\big(V_p\big(B|_{[0,T]}\big)=\infty\big) \ge P(\Omega_0)=1.
\]

Fact 2.22. If $\{B_t\}_{t\ge0}$ is a Brownian motion, then
\[
P(V_p(B)<\infty) = \begin{cases} 1 & \text{if } p>2, \\ 0 & \text{if } p\le2. \end{cases}
\]
See for example [17, Exercise 1.14 on p. 36].

Corollary 2.23 (Roughness of Brownian Paths). A Brownian motion, $\{B_t\}_{t\ge0}$, is not almost surely $\alpha$-Hölder continuous for any $\alpha>1/2$.

Proof. According to Proposition 2.19 we may choose partitions, $\Pi_m$, such that $\mathrm{mesh}(\Pi_m)\to0$ and $Q_m\to T$ a.s. If $B$ were $\alpha$-Hölder continuous for some $\alpha>1/2$, then
\[
Q_m = \sum_{t\in\Pi_m}(\Delta_tB)^2 \le C^2\sum_{t\in\Pi_m}(\Delta t)^{2\alpha}
\le C^2\max_{t\in\Pi_m}[\Delta t]^{2\alpha-1}\sum_{t\in\Pi_m}\Delta t
\le C^2\,[|\Pi_m|]^{2\alpha-1}\,T \to 0 \text{ as } m\to\infty,
\]
which contradicts the fact that $Q_m\to T$ as $m\to\infty$.

2.3 The Bounded Variation Obstruction

Proposition 2.24. Suppose that $Z(t)$ is a real continuous function such that $Z_0=0$ for simplicity. Define
\[
\int_0^T f(\tau)\,dZ(\tau) := -\int_0^T f'(\tau)\,Z(\tau)\,d\tau + f(\tau)Z(\tau)\big|_0^T
\]
whenever $f$ is a $C^1$ function. If there exists $C<\infty$ such that
\[
\Big|\int_0^T f(\tau)\,dZ(\tau)\Big| \le C\max_{0\le\tau\le T}|f(\tau)|, \tag{2.8}
\]
then $V_1(Z)<\infty$ (see Definition 2.2 above) and the best possible choice for $C$ in Eq. (2.8) is $V_1(Z)$.

Proof. Let $\Pi:=\{0=t_0<t_1<\dots<t_n=T\}$ be a partition of $[0,T]$, $\{\lambda_k\}_{k=1}^n\subset\mathbb{R}$, and $f(t):=\lambda_1 1_{\{0\}}+\sum_{k=1}^n\lambda_k 1_{(t_{k-1},t_k]}$. Choose $f_m(t)$ in $C^1([0,T],\mathbb{R})$ well approximating $f(t)$ as in Figure 2.3. It then is fairly easy to show,

\[
\lim_{m\to\infty}\int_0^T f_m'(\tau)\,Z(\tau)\,d\tau = \sum_{k=1}^{n-1}(\lambda_{k+1}-\lambda_k)\,Z(t_k),
\]
and therefore,
\[
\lim_{m\to\infty}\int_0^T f_m(t)\,dZ(t) = -\sum_{k=1}^{n-1}(\lambda_{k+1}-\lambda_k)Z(t_k) + \lambda_nZ(t_n) - \lambda_1Z(t_0)
= \sum_{k=1}^n\lambda_k\big(Z(t_k)-Z(t_{k-1})\big).
\]
Therefore we have,
\[
\Big|\sum_{k=1}^n\lambda_k\big(Z(t_k)-Z(t_{k-1})\big)\Big| = \lim_{m\to\infty}\Big|\int_0^T f_m(\tau)\,dZ(\tau)\Big|
\le C\limsup_{m\to\infty}\max_{0\le\tau\le T}|f_m(\tau)| = C\max_k|\lambda_k|.
\]
Taking $\lambda_k=\mathrm{sgn}(Z(t_k)-Z(t_{k-1}))$ for each $k$ then shows $\sum_{k=1}^n|Z(t_k)-Z(t_{k-1})|\le C$. Since this holds for any partition $\Pi$, it follows that $V_1(Z)\le C$.

If $V_1(Z)<\infty$, then
\[
\int_0^T f(t)\,dZ(t) = \int_0^T f(t)\,d\mu_Z(t) \quad\text{for all } f\in C^1,
\]
where $\mu_Z$ is the Lebesgue--Stieltjes measure associated to $Z$; this identity follows from integration by parts for such finite variation functions. Consequently,
\[
\Big|\int_0^T f(t)\,dZ(t)\Big| = \Big|\int_0^T f(t)\,d\mu_Z(t)\Big|
\le \int_0^T|f(t)|\,d\|\mu_Z\|(t)
\le \max_{0\le\tau\le T}|f(\tau)|\cdot\|\mu_Z\|([0,T]) = V_1(Z)\max_{0\le\tau\le T}|f(\tau)|.
\]
Therefore $C$ can be taken to be $V_1(Z)$ in Eq. (2.8) and hence $V_1(Z)$ is the best possible constant to use in this equation.

Combining Fact 2.22 with Proposition 2.24 explains why we are going to have trouble defining $\int_0^t f_s\,dB_s$ when $B$ is a Brownian motion. However, one might hope to use Young's integral in this setting.

Theorem 2.25 (L. C. Young 1936). Suppose that $p,q>0$ with $\frac1p+\frac1q=:\theta>1$. Then there exists a constant, $C(\theta)<\infty$, such that
\[
\Big|\int_0^T f(t)\,dZ(t)\Big| \le C(\theta)\,\big(\|f\|_\infty + V_q(f)\big)\,V_p(Z)
\]
for all $f\in C^1$. Thus if $V_p(Z)<\infty$ the integral extends to those $f\in C([0,T])$ such that $V_q(f)<\infty$.

Unfortunately, Young's integral is still not sufficiently general to allow us to solve the typical SDE that we would like to consider. For example, consider the simple SDE,
\[
\dot y(t) = B(t)\,\dot B(t) \quad\text{with } y(0)=0.
\]
The solution to this equation should be,
\[
y(T) = \int_0^T B(t)\,dB(t),
\]
which still does not make sense as a Young's integral when $B$ is a Brownian motion because for any $p>2$, $\frac1p+\frac1p=:\theta<1$. For more on this point of view see the very interesting work of Terry Lyons on rough path analysis, [13].
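The sketch below is an illustration only (not the author's code): it computes the left-endpoint Riemann--Stieltjes sums $\sum f(t_{k-1})\,(Z(t_k)-Z(t_{k-1}))$ on refining grids. For the smooth test functions chosen here the sums stabilize; Young's theorem extends this convergence to any $f$, $Z$ of finite $q$- and $p$-variation with $1/p+1/q>1$, while Brownian $Z$ with $f=Z$ falls exactly in the excluded regime discussed above.

```python
import numpy as np

def rs_sum(f_vals, z_vals):
    """Left-endpoint Riemann-Stieltjes sum: sum f(t_{k-1}) (Z(t_k) - Z(t_{k-1}))."""
    return float(np.sum(f_vals[:-1] * np.diff(z_vals)))

T = 1.0
for n in (2 ** 6, 2 ** 10, 2 ** 14):
    t = np.linspace(0.0, T, n + 1)
    f = np.sin(t)                 # integrand of finite q-variation (here smooth)
    Z = np.cos(3.0 * t)           # integrator of finite p-variation (here smooth)
    print(f"n={n:6d}  RS sum = {rs_sum(f, Z):.6f}")
# the sums stabilize as the mesh shrinks; the limit is the classical Stieltjes integral
```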

2.4 Controls

Notation 2.26 (Controls). Let
\[
\Delta = \{(s,t) : 0\le s\le t\le T\}.
\]
A control is a continuous function $\omega:\Delta\to[0,\infty)$ such that

1. $\omega(t,t)=0$ for all $t\in[0,T]$, and
2. $\omega$ is super-additive, i.e., for all $s\le t\le v$ we have
\[
\omega(s,t) + \omega(t,v) \le \omega(s,v). \tag{2.9}
\]

Remark 2.27. If $\omega$ is a control then $\omega(s,t)$ is increasing in $t$ and decreasing in $s$ for $(s,t)\in\Delta$. For example if $s\le\sigma\le t$, then $\omega(s,\sigma)+\omega(\sigma,t)\le\omega(s,t)$ and therefore $\omega(\sigma,t)\le\omega(s,t)$. Similarly if $s\le t\le\tau$, then $\omega(s,t)+\omega(t,\tau)\le\omega(s,\tau)$ and therefore $\omega(s,t)\le\omega(s,\tau)$.

Lemma 2.28. If $\omega$ is a control and $\psi\in C([0,\infty)\to[0,\infty))$ such that $\psi(0)=0$ and $\psi$ is convex and increasing (footnote 1), then $\psi\circ\omega$ is also a control.

(Footnote 1: The assumption that $\psi$ is increasing is redundant here: since we are assuming $\psi''\ge0$ and we may deduce that $\psi'(0)\ge0$, it follows that $\psi'(x)\ge0$ for all $x$. This assertion also follows from Eq. (2.11).)

Proof. We must show $\psi\circ\omega$ is still superadditive, and this boils down to showing that if $0\le a,b,c$ with $a+b\le c$, then
\[
\psi(a)+\psi(b) \le \psi(c).
\]
As $\psi$ is increasing, it suffices to show,
\[
\psi(a)+\psi(b) \le \psi(a+b). \tag{2.10}
\]
Making use of the convexity of $\psi$, we have,
\[
\psi(b) = \psi\Big(\frac{a}{a+b}\cdot0 + \frac{b}{a+b}\,(a+b)\Big)
\le \frac{a}{a+b}\,\psi(0) + \frac{b}{a+b}\,\psi(a+b) = \frac{b}{a+b}\,\psi(a+b), \tag{2.11}
\]
and interchanging the roles of $a$ and $b$ gives,
\[
\psi(a) \le \frac{a}{a+b}\,\psi(a+b).
\]
Adding these last two inequalities then proves Eq. (2.10).

Example 2.29. Suppose that $u(t)$ is any increasing continuous function of $t$, then $\omega(s,t):=u(t)-u(s)$ is a control which is in fact additive, i.e.
\[
\omega(s,t)+\omega(t,v) = \omega(s,v) \quad\text{for all } s\le t\le v.
\]
So for example $\omega(s,t)=t-s$ is an additive control and for any $p\ge1$, $\omega(s,t)=(t-s)^p$ or, more generally, $\omega(s,t)=(u(t)-u(s))^p$ is a control.

Lemma 2.30. Suppose that $\omega$ is a control, $p\in[1,\infty)$, and $Z\in C([0,T],E)$ is a function satisfying,
\[
d(Z_s,Z_t) \le \omega(s,t)^{1/p} \quad\text{for all } (s,t)\in\Delta,
\]
then $V_p^p(Z)\le\omega(0,T)<\infty$. More generally,
\[
\omega_{p,Z}(s,t) := V_p^p\big(Z|_{[s,t]}\big) \le \omega(s,t) \quad\text{for all } (s,t)\in\Delta.
\]

Proof. Let $(s,t)\in\Delta$ and $\Pi\in\mathcal{P}([s,t])$, then using the superadditivity of $\omega$ we find
\[
V_p^p\big(Z|_{[s,t]}:\Pi\big) = \sum_{\tau\in\Pi}d^p\big(Z_\tau,Z_{\tau_-}\big) \le \sum_{\tau\in\Pi}\omega(\tau_-,\tau) \le \omega(s,t).
\]
Therefore,
\[
\omega_{p,Z}(s,t) := V_p^p\big(Z|_{[s,t]}\big) = \sup_{\Pi\in\mathcal{P}([s,t])}V_p^p\big(Z|_{[s,t]}:\Pi\big) \le \omega(s,t).
\]

Notation 2.31. Given $o\in E$ and $p\in[1,\infty)$, let
\[
C_p([0,T],E) := \{Z\in C([0,T],E) : V_p(Z)<\infty\} \quad\text{and}\quad
C_{0,p}([0,T],E) := \{Z\in C_p([0,T],E) : Z(0)=o\}.
\]

Theorem 2.32. Let $\Lambda:\Delta\to[0,\infty)$ be a function and define,
\[
\omega(s,t) := \omega_\Lambda(s,t) := \sup_{\Pi\in\mathcal{P}(s,t)}V_1(\Lambda:\Pi), \tag{2.12}
\]
where for any $\Pi\in\mathcal{P}(s,t)$,
\[
V_1(\Lambda:\Pi) = \sum_{\tau\in\Pi}\Lambda(\tau_-,\tau). \tag{2.13}
\]
We now assume:

1. $\Lambda$ is continuous,
2. $\Lambda(t,t)=0$ for all $t\in[0,T]$ (this condition is redundant since the next condition would fail if it were violated), and
3. $V_1(\Lambda) := \omega(0,T) := \sup_{\Pi\in\mathcal{P}(0,T)}V_1(\Lambda:\Pi)<\infty$.

Under these assumptions, $\omega:\Delta\to[0,\infty)$ is a control.

We will give the proof of Theorem 2.32 after Lemma 2.37 below.

Corollary 2.33 (The variation control). Let $p\in[1,\infty)$ and suppose that $Z\in C_p([0,T],E)$. Then $\omega_{Z,p}:\Delta\to[0,\infty)$ defined in Eq. (2.3) is a control satisfying $d(Z(s),Z(t))\le\omega_{Z,p}(s,t)^{1/p}$ for all $(s,t)\in\Delta$.

Proof. Apply Theorem 2.32 with $\Lambda(s,t):=d^p(Z(s),Z(t))$ and observe that with this definition, $\omega_{Z,p}=\omega_\Lambda$.

Lemma 2.34. Let $\Lambda:\Delta\to[0,\infty)$ satisfy the hypotheses in Theorem 2.32; then $\omega=\omega_\Lambda$ (defined in Eq. (2.12)) is superadditive.

Proof. If $0\le u\le s\le v\le T$ and $\Pi_1\in\mathcal{P}(u,s)$, $\Pi_2\in\mathcal{P}(s,v)$, then $\Pi_1\cup\Pi_2\in\mathcal{P}(u,v)$. Thus we have,
\[
V_1(\Lambda:\Pi_1)+V_1(\Lambda:\Pi_2) = V_1(\Lambda:\Pi_1\cup\Pi_2) \le \omega(u,v).
\]
Taking the supremum over all $\Pi_1$ and $\Pi_2$ then implies,
\[
\omega(u,s)+\omega(s,v) \le \omega(u,v) \quad\text{for all } u\le s\le v,
\]
i.e. $\omega$ is superadditive.
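A small numerical check, not part of the original notes, of the superadditivity just established (Lemma 2.34 / Corollary 2.33): for a sampled piecewise-linear path, the brute-force $p$-variation control computed on the sample points satisfies $\omega(s,t)+\omega(t,u)\le\omega(s,u)$. All names below are illustrative.

```python
from itertools import combinations

def omega_p(x, i, j, p):
    """Brute-force omega_{x,p}(t_i, t_j): sup over partitions of the sample points i..j
    of sum |increment|^p (Definition 2.2 / Eq. (2.3), restricted to the sample grid)."""
    pts = list(range(i, j + 1))
    best = abs(x[j] - x[i]) ** p
    for k in range(1, len(pts) - 1):
        for interior in combinations(pts[1:-1], k):
            part = [i, *interior, j]
            best = max(best, sum(abs(x[part[m]] - x[part[m - 1]]) ** p
                                 for m in range(1, len(part))))
    return best

x = [0.0, 0.9, 0.3, 1.1, 0.6, 1.4, 1.0]   # an oscillating piecewise-linear path
p = 2.0
s, t, u = 0, 3, 6
lhs = omega_p(x, s, t, p) + omega_p(x, t, u, p)
rhs = omega_p(x, s, u, p)
print(f"omega(s,t)+omega(t,u) = {lhs:.4f}  <=  omega(s,u) = {rhs:.4f}")
```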

Lemma 2.35. Let $Z\in C_p([0,T],E)$ for some $p\in[1,\infty)$ and let $\omega:=\omega_{Z,p}:\Delta\to[0,\infty)$ be defined as in Eq. (2.3). Then $\omega$ is superadditive. Furthermore if $p=1$, $\omega$ is additive, i.e. equality holds in Eq. (2.9).

Proof. The superadditivity of $\omega_{Z,p}$ follows from Lemma 2.34 since $\omega_{Z,p}(s,t)=\omega_\Lambda(s,t)$ where $\Lambda(s,t):=d^p(Z(s),Z(t))$. In the case $p=1$, it is easily seen using the triangle inequality that if $\Pi_1,\Pi_2\in\mathcal{P}(s,t)$ and $\Pi_1\subset\Pi_2$, then $V_1(Z:\Pi_1)\le V_1(Z:\Pi_2)$. Thus in computing the supremum of $V_1(Z:\Pi)$ over all partitions in $\mathcal{P}(s,t)$ it never hurts to add more points to a partition. Using this remark it is easy to show,
\begin{align*}
\omega(u,s)+\omega(s,v) &= \sup_{\Pi_1\in\mathcal{P}(u,s),\,\Pi_2\in\mathcal{P}(s,v)}\big[V_1(Z:\Pi_1)+V_1(Z:\Pi_2)\big]
= \sup_{\Pi_1\in\mathcal{P}(u,s),\,\Pi_2\in\mathcal{P}(s,v)}V_1(Z:\Pi_1\cup\Pi_2) \\
&= \sup_{\Pi\in\mathcal{P}(u,v)}V_1(Z:\Pi) = \omega(u,v),
\end{align*}
as desired.

Lemma 2.36. Let $\Lambda:\Delta\to[0,\infty)$ and $\omega=\omega_\Lambda$ be as in Theorem 2.32. Further suppose $(a,b)\in\Delta$, $\Pi\in\mathcal{P}(a,b)$, and let
\[
\varepsilon := \omega(a,b) - V_1(\Lambda:\Pi) \ge 0.
\]
Then for any $\Pi_0\in\mathcal{P}(a,b)$ with $\Pi\subset\Pi_0$, we have
\[
\sum_{t\in\Pi_0}\big[\omega(t_-,t) - V_1(\Lambda:\Pi\cap[t_-,t])\big] \le \varepsilon. \tag{2.14}
\]
In particular, if $\alpha,\beta\in\Pi_0$ with $\alpha\le\beta$, then
\[
\omega(\alpha,\beta) \le V_1(\Lambda:\Pi\cap[\alpha,\beta]) + \varepsilon. \tag{2.15}
\]

Proof. Equation (2.14) is a simple consequence of the superadditivity of $\omega$ (Lemma 2.34) and the identity,
\[
\sum_{t\in\Pi_0}V_1(\Lambda:\Pi\cap[t_-,t]) = V_1(\Lambda:\Pi),
\]
where $t_-:=t_-(\Pi_0)$. Indeed, using these properties we find,
\[
\sum_{t\in\Pi_0}\big[\omega(t_-,t) - V_1(\Lambda:\Pi\cap[t_-,t])\big]
= \sum_{t\in\Pi_0}\omega(t_-,t) - V_1(\Lambda:\Pi)
\le \omega(a,b) - V_1(\Lambda:\Pi) = \varepsilon.
\]

Lemma 2.37. Suppose that $\Lambda:\Delta\to[0,\infty)$ is a continuous function such that $\Lambda(t,t)=0$ for all $t\in[0,T]$ and $\varepsilon>0$ is given. Then there exists $\delta>0$ such that, for every $\Pi\subset[0,T]$ and $u\in[0,T]$ such that $\mathrm{dist}(u,\Pi)<\delta$ we have,
\[
|V_1(\Lambda:\Pi) - V_1(\Lambda:\Pi\cup\{u\})| < \varepsilon.
\]

Proof. By the uniform continuity of $\Lambda$, there exists $\delta>0$ such that $|\Lambda(s,t)-\Lambda(u,v)|<\varepsilon/2$ provided $|(s,t)-(u,v)|<\delta$. Suppose that $\Pi=\{t_0<t_1<\dots<t_n\}\subset[0,T]$ and $u\in[0,T]$ are such that $\mathrm{dist}(u,\Pi)<\delta$. There are now three cases to consider: $u\in(t_0,t_n)$, $u<t_0$, and $u>t_n$. In the first case, suppose that $t_{i-1}<u<t_i$ and (for the sake of definiteness) that $|t_i-u|<\delta$; then
\[
|V_1(\Lambda:\Pi) - V_1(\Lambda:\Pi\cup\{u\})| = |\Lambda(t_{i-1},t_i) - \Lambda(t_{i-1},u) - \Lambda(u,t_i)|
\le |\Lambda(t_{i-1},t_i) - \Lambda(t_{i-1},u)| + |\Lambda(u,t_i) - \Lambda(t_i,t_i)| < \varepsilon.
\]
The second and third cases are similar. For example if $u<t_0$, we will have,
\[
|V_1(\Lambda:\Pi\cup\{u\}) - V_1(\Lambda:\Pi)| = \Lambda(u,t_0) = \Lambda(u,t_0) - \Lambda(t_0,t_0) < \varepsilon/2.
\]

With these lemmas as preparation we are now ready to complete the proof of Theorem 2.32.

Proof (of Theorem 2.32). Let $\omega(s,t):=\omega_\Lambda(s,t)$ be as in Theorem 2.32. It is clear from the definition of $\omega$ that $\omega(t,t)=0$ for all $t$, and we have already seen in Lemma 2.34 that $\omega$ is superadditive. So to finish the proof we must show $\omega$ is continuous.

Using Remark 2.27, we know that $\omega(s,t)$ is increasing in $t$ and decreasing in $s$, and therefore
\[
\omega(u+,v-) := \lim_{s\downarrow u,\,t\uparrow v}\omega(s,t) \quad\text{and}\quad \omega(u-,v+) := \lim_{s\uparrow u,\,t\downarrow v}\omega(s,t)
\]
exist and satisfy,
\[
\omega(u+,v-) \le \omega(u,v) \le \omega(u-,v+). \tag{2.16}
\]
The main crux of the continuity proof is to show that the inequalities in Eq. (2.16) are all equalities.

1. Suppose that $\varepsilon>0$ is given and $\delta>0$ is chosen as in Lemma 2.37, and suppose that $u<s<t<v$ with $|s-u|<\delta$ and $|v-t|<\delta$. Further let $\Pi\in\mathcal{P}(u,v)$ be a partition of $[u,v]$; then according to Lemma 2.37,
\begin{align*}
V_1(\Lambda:\Pi) &\le V_1(\Lambda:\Pi\cup\{s,t\}) + 2\varepsilon
= \Lambda(u,s) + \Lambda(t,v) + V_1(\Lambda:\Pi\cap[s,t]\cup\{s,t\}) + 2\varepsilon \\
&\le \Lambda(u,s) + \Lambda(t,v) + \omega(s,t) + 2\varepsilon.
\end{align*}
Letting $s\downarrow u$ and $t\uparrow v$ in this inequality shows,

\[
V_1(\Lambda:\Pi) \le \omega(u+,v-) + 2\varepsilon,
\]
and then taking the supremum over $\Pi\in\mathcal{P}(u,v)$ and then letting $\varepsilon\downarrow0$ shows $\omega(u,v)\le\omega(u+,v-)$. Combining this with the first inequality in Eq. (2.16) shows $\omega(u+,v-)=\omega(u,v)$.

2. We will now show $\omega(u,v)=\omega(u-,v+)$ by showing $\omega(u-,v+)\le\omega(u,v)$. Let $\varepsilon>0$ and $\delta>0$ be as in Lemma 2.37 and suppose that $s<u$ and $t>v$ with $|u-s|<\delta$ and $|t-v|<\delta$. Let us now choose a partition $\Pi\in\mathcal{P}(s,t)$ such that
\[
\omega(s,t) \le V_1(\Lambda:\Pi) + \varepsilon.
\]
Then applying Lemma 2.37 gives,
\[
\omega(s,t) \le V_1(\Lambda:\Pi_1) + 3\varepsilon,
\]
where $\Pi_1=\Pi\cup\{u,v\}$. As above, let $u_-$ and $v_+$ be the elements in $\Pi_1$ just before $u$ and just after $v$ respectively. An application of Lemma 2.36 then shows,
\begin{align*}
\omega(u-,v+) \le \omega(u_-,v_+) &\le V_1(\Lambda:\Pi_1\cap[u_-,v_+]) + 3\varepsilon \\
&= V_1(\Lambda:\Pi_1\cap[u,v]) + \Lambda(u_-,u) + \Lambda(v,v_+) + 3\varepsilon
\le \omega(u,v) + 5\varepsilon.
\end{align*}
As $\varepsilon>0$ was arbitrary we may conclude $\omega(u-,v+)\le\omega(u,v)$, which completes the proof that $\omega(u-,v+)=\omega(u,v)$.

I now claim all the other limiting directions follow easily from what we have proved. For example,
\[
\omega(u,v) = \omega(u+,v-) \le \liminf_{s\downarrow u,\,t\downarrow v}\omega(s,t) \le \limsup_{s\downarrow u,\,t\downarrow v}\omega(s,t) \le \omega(u-,v+) = \omega(u,v),
\]
which shows $\omega(u+,v+)=\omega(u,v)$, and similarly
\[
\omega(u,v) = \omega(u+,v-) \le \liminf_{s\uparrow u,\,t\uparrow v}\omega(s,t) \le \limsup_{s\uparrow u,\,t\uparrow v}\omega(s,t) \le \omega(u-,v+) = \omega(u,v),
\]
which shows $\omega(u-,v-)=\omega(u,v)$. Hence $\omega$ is continuous, so that $\omega$ is a control.

Proposition 2.38 (See [6, Proposition 5.15 on p. 83]). Let $(E,d)$ be a metric space, and let $x:[0,T]\to E$ be a continuous path. Then $x$ is of finite $p$-variation if and only if there exists a continuous increasing (i.e. non-decreasing) function $h:[0,T]\to\big[0,V_p^p(x)\big]$ and a $1/p$-Hölder path $g:\big[0,V_p^p(x)\big]\to E$ such that $x=g\circ h$. More explicitly we have,
\[
d(g(v),g(u)) \le |v-u|^{1/p} \quad\text{for all } u,v\in\big[0,V_p^p(x)\big]. \tag{2.17}
\]

Proof. Let $\omega(s,t):=\omega_{p,x}(s,t)=V_p^p\big(x|_{[s,t]}\big)$ be the control associated to $x$ and define $h(t):=\omega(0,t)$. Observe that $h$ is increasing and for $0\le s\le t\le T$ that $h(s)+\omega(s,t)\le h(t)$, i.e.
\[
\omega(s,t) \le h(t)-h(s) \quad\text{for all } 0\le s\le t\le T.
\]
Let $g:[0,h(T)]\to E$ be defined by $g(h(t)):=x(t)$. This is well defined since if $s\le t$ and $h(s)=h(t)$, then $\omega(s,t)=0$ and hence $x|_{[s,t]}$ is constant and in particular $x(s)=x(t)$. Moreover it now follows for $s<t$ such that $u:=h(s)<h(t)=:v$, that
\[
d^p(g(v),g(u)) = d^p(g(h(t)),g(h(s))) = d^p(x(t),x(s)) \le \omega(s,t) \le h(t)-h(s) = v-u,
\]
from which Eq. (2.17) easily follows.

2.5 Banach Space Structures

This section needs more work and may be moved later.

To put a metric on Hölder spaces seems to require some extra structure on the metric space, $E$. What is of interest here is the case $E=G$ is a group with a left (right) invariant metric, $d$. In this case suppose that we consider $p$-variation paths, $x$ and $y$, starting at $e\in G$, in which case we define,
\[
d_{p\text{-var}}(x,y) := \sup_{\Pi\in\mathcal{P}(0,T)}\Big(\sum_{t\in\Pi}d^p(\Delta_tx,\Delta_ty)\Big)^{1/p},
\]
where $\Delta_tx:=x_{t_-}^{-1}x_t$ for all $t\in\Pi$. The claim is that this should now be a complete metric space.

Lemma 2.39. $\big(C_{0,p}(\Delta,T^{(n)}(V)),d_p\big)$ is a metric space.

Proof. For each fixed partition $D$ and each $1\le i\le\lfloor p\rfloor$, we have that
\[
v_D^i(X) := \Big(\sum_{\ell=1}^r\big\|X^i_{t_{\ell-1}t_\ell}\big\|^{p/i}\Big)^{i/p}
\]
is a semi-norm on $C_{0,p}(\Delta,T^{(n)}(V))$ and in particular satisfies the triangle inequality. Moreover,
\[
v_D^i(X+Y) \le v_D^i(X)+v_D^i(Y) \le \sup_{D'}v_{D'}^i(X)+\sup_{D'}v_{D'}^i(Y),
\]
and therefore
\[
\sup_Dv_D^i(X+Y) \le \sup_Dv_D^i(X)+\sup_Dv_D^i(Y),
\]
which shows $\sup_Dv_D^i(X)$ still satisfies the triangle inequality (i.e., the supremum of a family of semi-norms is a semi-norm). Thus we have that
\[
d_p(X) := \max_{1\le i\le\lfloor p\rfloor}\sup_Dv_D^i(X)
\]
is also a semi-norm on $C_{0,p}(\Delta,T^{(n)}(V))$. Thus $d_p(X,Y):=d_p(X-Y)$ satisfies the triangle inequality. Moreover, $d_p(X,Y)=0$ implies that
\[
\big\|X^i_{st}-Y^i_{st}\big\|^{p/i} = 0 \quad\text{for } 1\le i\le\lfloor p\rfloor
\]
and $(s,t)\in\Delta$, i.e., $X^i=Y^i$ for all $1\le i\le\lfloor p\rfloor$, and we have verified that $d_p(X,Y)$ is a metric.
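To close the chapter, here is an illustrative sketch of the reparametrization of Proposition 2.38 above. It is not from the original notes and uses made-up names; the path is piecewise linear and everything is computed on its sample grid: $h(t):=\omega_{p,x}(0,t)$ is evaluated, and the bound of Eq. (2.17), $|x(t)-x(s)|^p\le h(t)-h(s)$, i.e. that $g(h(t)):=x(t)$ is $1/p$-Hölder with constant 1, is checked numerically.

```python
from itertools import combinations

def vpp(x, i, j, p):
    """omega_{x,p}(t_i, t_j): brute-force sup over partitions of the sample points i..j."""
    best = abs(x[j] - x[i]) ** p
    interior_pts = range(i + 1, j)
    for k in range(0, max(j - i, 1)):
        for interior in combinations(interior_pts, k):
            part = [i, *interior, j]
            best = max(best, sum(abs(x[part[m]] - x[part[m - 1]]) ** p
                                 for m in range(1, len(part))))
    return best

x = [0.0, 0.8, 0.2, 0.9, 0.5, 1.2]
p = 2.0
h = [vpp(x, 0, j, p) for j in range(len(x))]   # h(t_j) = omega_{p,x}(0, t_j), nondecreasing
# g(h(t_j)) := x(t_j); check the 1/p-Hoelder bound of Eq. (2.17) on the sample points
worst = max(abs(x[j] - x[i]) ** p / (h[j] - h[i])
            for i in range(len(x)) for j in range(i + 1, len(x)) if h[j] > h[i])
print("h =", [round(v, 3) for v in h])
print("max |x(t)-x(s)|^p / (h(t)-h(s)) =", round(worst, 4), "(should be <= 1)")
```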

3 The Bounded Variation Theory

3.1 Integration Theory

Let $T\in(0,\infty)$ be fixed,
\[
\mathcal{S} := \{(a,b] : 0\le a\le b\le T\}\cup\{[0,b]\subset\mathbb{R} : 0\le b\le T\}. \tag{3.1}
\]
Further let $\mathcal{A}$ be the algebra generated by $\mathcal{S}$. Since $\mathcal{S}$ is an elementary family of sets, $\mathcal{A}$ may be described as the collection of sets which are finite disjoint unions of subsets from $\mathcal{S}$. Given any function $Z:[0,T]\to V$, with $V$ being a vector space, define $\mu_Z:\mathcal{S}\to V$ via,
\[
\mu_Z((a,b]) := Z_b-Z_a \quad\text{and}\quad \mu_Z([0,b]) := Z_b-Z_0, \qquad 0\le a\le b\le T.
\]
With this definition we are asserting that $\mu_Z(\{0\})=0$. Another common choice is to take $\mu_Z(\{0\})=Z_0$, which would be implemented by taking $\mu_Z([0,b])=Z_b$ instead of $Z_b-Z_0$.

Lemma 3.1. $\mu_Z$ is finitely additive on $\mathcal{S}$ and hence extends to a finitely additive measure on $\mathcal{A}$.

Proof. See Chapter ?? and in particular make the minor necessary modifications to Examples ??, ??, and Proposition ??.

Let $W$ be another vector space and $f:[0,T]\to\mathrm{End}(V,W)$ be an $\mathcal{A}$-simple function, i.e. $f([0,T])$ is a finite set and $f^{-1}(\lambda)\in\mathcal{A}$ for all $\lambda\in\mathrm{End}(V,W)$. For such functions we define,
\[
\int_{[0,T]}f(t)\,dZ(t) := \int_{[0,T]}f\,d\mu_Z := \sum_{\lambda\in\mathrm{End}(V,W)}\lambda\,\mu_Z(f=\lambda)\ \in W. \tag{3.2}
\]
The basic linearity properties of this integral are explained in Proposition ??. For later purposes, it will be useful to have the following substitution formula at our disposal.

Theorem 3.2 (Substitution formula). Suppose that $f$ and $Z$ are as above and $Y_t=\int_{[0,t]}f\,d\mu_Z\in W$. Further suppose that $g:\mathbb{R}_+\to\mathrm{End}(W,U)$ is another $\mathcal{A}$-simple function with finite support. Then
\[
\int_{[0,T]}g\,d\mu_Y = \int_{[0,T]}gf\,d\mu_Z.
\]

Proof. By definition of these finitely additive integrals,
\[
\mu_Y((a,b]) = Y_b-Y_a = \int_{[0,b]}f\,d\mu_Z - \int_{[0,a]}f\,d\mu_Z
= \int\big(1_{[0,b]}-1_{[0,a]}\big)f\,d\mu_Z = \int 1_{(a,b]}f\,d\mu_Z.
\]
Therefore, it follows by the finite additivity of $\mu_Y$ and linearity that
\[
\mu_Y(A) = \int 1_Af\,d\mu_Z \quad\text{for all } A\in\mathcal{A}.
\]
Therefore,
\[
\int_{[0,T]}g\,d\mu_Y = \sum_{\lambda\in\mathrm{End}(W,U)}\lambda\,\mu_Y(g=\lambda)
= \sum_{\lambda\in\mathrm{End}(W,U)}\lambda\int_{[0,T]}1_{\{g=\lambda\}}f\,d\mu_Z
= \int_{[0,T]}gf\,d\mu_Z,
\]
as desired.

Let us observe that
\[
\Big\|\int_{[0,T]}f(t)\,dZ_t\Big\| \le \sum_{\lambda\in\mathrm{End}(V,W)}\|\lambda\|\,\|\mu_Z(f=\lambda)\|.
\]
Let us now define,
\[
\|\mu_Z\|((a,b]) := V_1\big(Z|_{[a,b]}\big)
= \sup\Big\{\sum_{j=1}^n\big\|Z_{t_j}-Z_{t_{j-1}}\big\| : a=t_0<t_1<\dots<t_n=b \text{ and } n\in\mathbb{N}\Big\}
\]
to be the variation measure associated to $\mu_Z$.

Lemma 3.3. If $\|\mu_Z\|((0,T])<\infty$, then $\|\mu_Z\|$ is a finitely additive measure on $\mathcal{S}$ and hence extends to a finitely additive measure on $\mathcal{A}$. Moreover for all $A\in\mathcal{A}$ we have,
\[
\|\mu_Z(A)\| \le \|\mu_Z\|(A). \tag{3.3}
\]

Proof. The additivity on $\mathcal{S}$ was already verified in Lemma 2.34. Here is the proof again for the sake of convenience.

Suppose that $\Pi=\{a=t_0<t_1<\dots<t_n=b\}$, $s\in(t_{l-1},t_l)$ for some $l$, and $\Pi':=\Pi\cup\{s\}$. Then
\begin{align*}
\|\mu_Z\|_\Pi((a,b]) &:= \sum_{j=1}^n\big\|Z_{t_j}-Z_{t_{j-1}}\big\|
= \sum_{j=1,\,j\ne l}^n\big\|Z_{t_j}-Z_{t_{j-1}}\big\| + \big\|Z_{t_l}-Z_{t_{l-1}}\big\| \\
&\le \sum_{j=1,\,j\ne l}^n\big\|Z_{t_j}-Z_{t_{j-1}}\big\| + \|Z_{t_l}-Z_s\| + \big\|Z_s-Z_{t_{l-1}}\big\|
= \|\mu_Z\|_{\Pi'}((a,b]) \le \|\mu_Z\|((a,s]) + \|\mu_Z\|((s,b]).
\end{align*}
Hence it follows that
\[
\|\mu_Z\|((a,b]) = \sup_\Pi\|\mu_Z\|_\Pi((a,b]) \le \|\mu_Z\|((a,s]) + \|\mu_Z\|((s,b]).
\]
Conversely if $\Pi_1$ is a partition of $(a,s]$ and $\Pi_2$ is a partition of $(s,b]$, then $\Pi:=\Pi_1\cup\Pi_2$ is a partition of $(a,b]$. Therefore,
\[
\|\mu_Z\|_{\Pi_1}((a,s]) + \|\mu_Z\|_{\Pi_2}((s,b]) = \|\mu_Z\|_\Pi((a,b]) \le \|\mu_Z\|((a,b]),
\]
and therefore,
\[
\|\mu_Z\|((a,s]) + \|\mu_Z\|((s,b]) \le \|\mu_Z\|((a,b]).
\]
Lastly if $A\in\mathcal{A}$, then $A$ is the disjoint union of intervals, $J_i$, from $\mathcal{S}$ and we have,
\[
\|\mu_Z(A)\| = \Big\|\sum_i\mu_Z(J_i)\Big\| \le \sum_i\|\mu_Z(J_i)\| \le \sum_i\|\mu_Z\|(J_i) = \|\mu_Z\|(A).
\]

Corollary 3.4. If $Z$ has finite variation on $[0,T]$, then we have
\[
\Big\|\int_{[0,T]}f(t)\,dZ_t\Big\| \le \int_{[0,T]}\|f(\tau)\|\,\|\mu_Z\|(d\tau) \le \|f\|_\infty\,\|\mu_Z\|([0,T]).
\]

Proof. Simply observe that $\|\mu_Z(A)\|\le\|\mu_Z\|(A)$ for all $A\in\mathcal{A}$ and hence from Eq. (3.2) and the bound in Eq. (3.3) we have
\[
\Big\|\int_{[0,T]}f(t)\,dZ_t\Big\| \le \sum_{\lambda\in\mathrm{End}(V,W)}\|\lambda\|\,\|\mu_Z(f=\lambda)\|
\le \sum_{\lambda\in\mathrm{End}(V,W)}\|\lambda\|\,\|\mu_Z\|(f=\lambda)
= \int_{[0,T]}\|f(\tau)\|\,\|\mu_Z\|(d\tau) \le \|f\|_\infty\,\|\mu_Z\|([0,T]).
\]

Notation 3.5. In the future we will often write $\|dZ\|$ for $d\|\mu_Z\|$.

Theorem 3.6. If $V$ and $W$ are Banach spaces and $V_1(Z)=\|\mu_Z\|([0,T])<\infty$, we may extend the integral, $\int_{[0,T]}f(t)\,dZ_t$, by continuity to all functions which are in the uniform closure of the $\mathcal{A}$-simple functions. In fact we may extend the integral to the $L^1(\|\mu_Z\|)$ closure of the $\mathcal{A}$-simple functions. In particular, if $f:[0,T]\to\mathrm{Hom}(V,W)$ is a continuous function,
\[
\int_{[0,T]}f(t)\,dZ(t) = \lim_{|\Pi|\to0}\sum_{t\in\Pi}f(t_-)\,(Z(t)-Z(t_-)). \tag{3.4}
\]

Proof. These results are elementary soft analysis except possibly for the last assertion, the statement in Eq. (3.4). To prove this, to any partition $\Pi\in\mathcal{P}(0,T)$, let
\[
f_\Pi := \sum_{t\in\Pi}f(t_-)\,1_{(t_-,t]} + f(0)\,1_{\{0\}},
\]
in which case,
\[
\sum_{t\in\Pi}f(t_-)\,(Z(t)-Z(t_-)) = \int_{[0,T]}f_\Pi(t)\,dZ(t).
\]
This completes the proof since $f_\Pi\to f$ uniformly on $[0,T]$ as $|\Pi|\to0$ by the uniform continuity of $f$.
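A small numerical sketch (not part of the notes; names are illustrative) of the approximation in Eq. (3.4): for a finite-variation path $Z$ and continuous $f$, the left-endpoint sums $\sum f(t_-)(Z(t)-Z(t_-))$ converge as the mesh shrinks. Here $Z$ is smooth, so the limit is the classical $\int_0^T f(t)\,Z'(t)\,dt$.

```python
import numpy as np

def stieltjes_sum(f, Z, times):
    """Sum over the partition of f(t_-) (Z(t) - Z(t_-)), as in Eq. (3.4)."""
    fv, Zv = f(times), Z(times)
    return float(np.sum(fv[:-1] * np.diff(Zv)))

f = lambda t: np.exp(-t)            # continuous integrand f : [0,T] -> R
Z = lambda t: t ** 2                # finite-variation (here smooth, increasing) integrator
T = 1.0
for n in (10, 100, 1000, 10000):
    times = np.linspace(0.0, T, n + 1)
    print(f"mesh = {T/n:.0e}   sum = {stieltjes_sum(f, Z, times):.6f}")
# limit: integral of exp(-t) * 2t dt on [0,1] = 2 - 4/e, approximately 0.528482
```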

Theorem 3.7 (Substitution formula II). Let $Z:[0,T]\to V$ be a finite variation process, let $f:[0,T]\to\mathrm{End}(V,W)$ and $g:[0,T]\to\mathrm{End}(W,U)$ be continuous maps, and define,
\[
Y_t = \int_{[0,t]}f\,dZ \in W.
\]
Then $Y$ is a continuous finite variation process and the following substitution formula holds,
\[
\int_{[0,T]}g\,dY = \int_{[0,T]}gf\,dZ. \tag{3.5}
\]
In short, $dY=f\,dZ$.

Proof. First off observe that
\[
\|Y_t-Y_s\| \le \int_s^t\|f\|\,\|dZ\| =: \omega(s,t),
\]
where the right side is a continuous control. This follows from the fact that $\|dZ\|$ is a continuous measure. Therefore
\[
V_1(Y) \le \int_0^T\|f\|\,\|dZ\| < \infty.
\]
If $g=\lambda\,1_{(a,b]}$ with $\lambda\in\mathrm{End}(W,U)$, then
\[
\int_{[0,T]}g\,dY = \lambda\,(Y_b-Y_a) = \lambda\int_{(a,b]}f\,dZ = \int_{(a,b]}\lambda f\,dZ = \int_{[0,T]}gf\,dZ.
\]
Thus Eq. (3.5) holds for all $\mathcal{A}$-simple functions and hence also for all uniform limits of simple functions. In particular this includes all continuous $g:[0,T]\to\mathrm{End}(W,U)$.

Remark 3.8. Suppose we keep the same hypotheses as in Theorem 3.7 but now take $Y_t:=\int_t^Tf\,dZ$ instead. In this case we have,
\[
\int_{[0,T]}g\,dY = -\int_{[0,T]}gf\,dZ.
\]
To prove this just observe that $Y_t=W_T-W_t$ where $W_t:=\int_0^tf\,dZ$. It is now easy to see that
\[
dY_t = d(-W_t) = -dW_t = -f\,dZ,
\]
and the claim follows.

3.2 The Fundamental Theorem of Calculus

As above, let $V$ and $W$ be Banach spaces and $0\le a<b\le T$.

Proposition 3.9. Suppose that $f:[a,b]\to V$ is a continuous function such that $\dot f(t)$ exists and is equal to zero for $t\in(a,b)$. Then $f$ is constant.

Proof. First Proof. For $\ell\in V^*$, we have $f_\ell:=\ell\circ f:[a,b]\to\mathbb{R}$ with $\dot f_\ell(t)=0$ for all $t\in(a,b)$. Therefore by the mean value theorem, it follows that $f_\ell(t)$ is constant, i.e. $\ell(f(t)-f(a))=0$ for all $t\in[a,b]$. Since $\ell\in V^*$ is arbitrary, it follows from the Hahn--Banach theorem that $f(t)-f(a)=0$, i.e. $f(t)=f(a)$ independent of $t$.

Second Proof (without Hahn--Banach). Let $\varepsilon>0$ and $\alpha\in(a,b)$ be given. (We will later let $\varepsilon\downarrow0$.) By the definition of the derivative, for all $\tau\in(a,b)$ there exists $\delta_\tau>0$ such that
\[
\|f(t)-f(\tau)\| = \big\|f(t)-f(\tau)-\dot f(\tau)(t-\tau)\big\| \le \varepsilon\,|t-\tau| \quad\text{if } |t-\tau|<\delta_\tau. \tag{3.6}
\]
Let
\[
A = \{t\in[\alpha,b] : \|f(t)-f(\alpha)\|\le\varepsilon(t-\alpha)\} \tag{3.7}
\]
and let $t_0$ be the least upper bound for $A$. We will now use a standard argument which is sometimes referred to as continuous induction to show $t_0=b$. Eq. (3.6) with $\tau=\alpha$ shows $t_0>\alpha$, and a simple continuity argument shows $t_0\in A$, i.e.
\[
\|f(t_0)-f(\alpha)\| \le \varepsilon(t_0-\alpha). \tag{3.8}
\]
For the sake of contradiction, suppose that $t_0<b$. By Eqs. (3.6) and (3.8),
\[
\|f(t)-f(\alpha)\| \le \|f(t)-f(t_0)\| + \|f(t_0)-f(\alpha)\|
\le \varepsilon(t_0-\alpha) + \varepsilon(t-t_0) = \varepsilon(t-\alpha)
\]
for $0\le t-t_0<\delta_{t_0}$, which violates the definition of $t_0$ being an upper bound. Thus we have shown $b\in A$ and hence
\[
\|f(b)-f(\alpha)\| \le \varepsilon(b-\alpha).
\]
Since $\varepsilon>0$ was arbitrary we may let $\varepsilon\downarrow0$ in the last equation to conclude $f(b)=f(\alpha)$. Since $\alpha\in(a,b)$ was arbitrary it follows that $f(b)=f(\alpha)$ for all $\alpha\in(a,b]$ and then by continuity for all $\alpha\in[a,b]$, i.e. $f$ is constant.

Theorem 3.10 (Fundamental Theorem of Calculus). Suppose that $f\in C([a,b],V)$. Then

1. $\frac{d}{dt}\int_a^tf(\tau)\,d\tau = f(t)$ for all $t\in(a,b)$.
2. Now assume that $F\in C([a,b],V)$, $F$ is continuously differentiable on $(a,b)$ (i.e. $\dot F(t)$ exists and is continuous for $t\in(a,b)$) and $\dot F$ extends to a continuous function on $[a,b]$ which is still denoted by $\dot F$. Then
\[
\int_a^b\dot F(t)\,dt = F(b)-F(a). \tag{3.9}
\]

24

3 The Bounded Variation Theory

where (h) := max [t,t+h] k(f ( ) f (t))k. Combining this with a similar computation when h < 0 shows, for all h R sufficiently small, that

Z
Z t

t+h


f ( )d
f ( )d f (t)h |h|(h),


a
a
where now (h) := max [t|h|,t+|h|] k(f ( ) f (t))k. By continuity of f at t,
Rt
d
(h) 0 and hence dt
f ( ) d exists and is equal to f (t).
a
Rt
For the second item, set G(t) := a F ( ) d F (t). Then G is continuous

and G(t)
= 0 for all t (a, b) by item 1. An application of Proposition 3.9 shows
Rb
G is a constant and in particular G(b) = G(a), i.e. a F ( ) d F (b) = F (a).
Alternative proof of Eq. (3.9). It is easy to show
$$\int_a^b \frac{d}{dt}(\ell \circ F)(t)\,dt = \int_a^b \ell\,(\dot F(t))\,dt = \ell\Big( \int_a^b \dot F(t)\,dt \Big) \text{ for all } \ell \in V^*.$$
Moreover by the real variable fundamental theorem of calculus we have,
$$\int_a^b \frac{d}{dt}(\ell \circ F)(t)\,dt = \ell \circ F(b) - \ell \circ F(a) \text{ for all } \ell \in V^*.$$
Combining the last two equations implies,
$$\ell\Big( \int_a^b \dot F(t)\,dt - F(b) + F(a) \Big) = 0 \text{ for all } \ell \in V^*.$$
Equation (3.9) now follows from these identities after an application of the Hahn--Banach theorem.

Corollary 3.11 (Mean Value Inequality). Suppose that $f : [a,b] \to V$ is a continuous function such that $\dot f(t)$ exists for $t \in (a,b)$ and $\dot f$ extends to a continuous function on $[a,b]$. Then
$$\|f(b) - f(a)\| \le \int_a^b \|\dot f(t)\|\,dt \le (b-a)\,\|\dot f\|_\infty. \qquad (3.10)$$

Proof. By the fundamental theorem of calculus, $f(b) - f(a) = \int_a^b \dot f(t)\,dt$ and then (by the triangle inequality for integrals)
$$\|f(b) - f(a)\| = \Big\| \int_a^b \dot f(t)\,dt \Big\| \le \int_a^b \|\dot f(t)\|\,dt \le \int_a^b \|\dot f\|_\infty\,dt = (b-a)\,\|\dot f\|_\infty.$$

Corollary 3.12 (Change of Variable Formula). Suppose that $f \in C([a,b],V)$ and $T : [c,d] \to (a,b)$ is a continuous function such that $T(s)$ is continuously differentiable for $s \in (c,d)$ and $T'(s)$ extends to a continuous function on $[c,d]$. Then
$$\int_c^d f(T(s))\,T'(s)\,ds = \int_{T(c)}^{T(d)} f(t)\,dt.$$

Proof. For $t \in (a,b)$ define $F(t) := \int_{T(c)}^t f(\tau)\,d\tau$. Then $F \in C^1((a,b),V)$ and by the fundamental theorem of calculus and the chain rule,
$$\frac{d}{ds} F(T(s)) = F'(T(s))\,T'(s) = f(T(s))\,T'(s).$$
Integrating this equation on $s \in [c,d]$ and using the fundamental theorem of calculus again gives
$$\int_c^d f(T(s))\,T'(s)\,ds = F(T(d)) - F(T(c)) = \int_{T(c)}^{T(d)} f(t)\,dt.$$

Exercise 3.1 (Fundamental Theorem of Calculus II). Prove the fundamental theorem of calculus in this context. That is, if $f : V \to W$ is a $C^1$ function and $\{Z_t\}_{t \ge 0}$ is a $V$-valued function of locally bounded variation, then for all $0 \le a < b \le T$,
$$f(Z_b) - f(Z_a) = \int_a^b f'(Z_\tau)\,dZ_\tau := \int_{[a,b]} f'(Z_\tau)\,dZ_\tau,$$
where $f'(z) \in \mathrm{End}(V,W)$ is defined by, $f'(z)\,v := \frac{d}{dt}\big|_0 f(z + tv)$. In particular it follows that $f(Z(t))$ has finite variation and
$$df(Z(t)) = f'(Z(t))\,dZ(t).$$
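The identity of Exercise 3.1 is easy to test numerically when $Z$ is piecewise smooth. The following Python sketch is not part of the original notes: the particular path Z, the map f and the partition size are arbitrary choices made purely for illustration. It compares the Riemann--Stieltjes sum $\sum f'(Z_{t_{k-1}})(Z_{t_k} - Z_{t_{k-1}})$ over a fine partition with $f(Z_b) - f(Z_a)$.

import numpy as np

# A smooth (hence bounded variation) R^2-valued path Z and a C^1 map f : R^2 -> R (illustrative choices).
def Z(t):
    return np.array([np.cos(t), np.sin(2 * t)])

def f(z):
    return z[0] ** 2 + np.exp(z[1])

def fprime(z):          # f'(z) acting as a linear functional; here simply the gradient
    return np.array([2 * z[0], np.exp(z[1])])

a, b, N = 0.0, 1.0, 100_000
ts = np.linspace(a, b, N + 1)
rs_sum = sum(fprime(Z(s)) @ (Z(t) - Z(s)) for s, t in zip(ts[:-1], ts[1:]))
print(rs_sum, f(Z(b)) - f(Z(a)))   # the two numbers agree up to an O(1/N) error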


Solution to Exercise (3.1). Let $\Pi \in \mathcal{P}(0,T)$. Then by a telescoping series argument,
$$f(Z_b) - f(Z_a) = \sum_{t \in \Pi} \Delta_t f(Z)$$
where
$$\Delta_t f(Z) = f(Z_t) - f(Z_{t_-}) = f(Z_{t_-} + \Delta_t Z) - f(Z_{t_-}) = \int_0^1 f'(Z_{t_-} + s\,\Delta_t Z)\,\Delta_t Z\,ds = f'(Z_{t_-})\,\Delta_t Z + \varepsilon_t\,\Delta_t Z$$
and
$$\varepsilon_t := \int_0^1 \big[ f'(Z_{t_-} + s\,\Delta_t Z) - f'(Z_{t_-}) \big]\,ds.$$
Thus we have,
$$f(Z_b) - f(Z_a) = \sum_{t \in \Pi} f'(Z_{t_-})\,\Delta_t Z + \delta_\Pi, \qquad (3.11)$$
where $\delta_\Pi := \sum_{t \in \Pi} \varepsilon_t\,\Delta_t Z$. Since,
$$\Big\| \sum_{t \in \Pi} \varepsilon_t\,\Delta_t Z \Big\| \le \sum_{t \in \Pi} \|\varepsilon_t\|\,\|\Delta_t Z\| \le \max_{t \in \Pi}\|\varepsilon_t\| \sum_{t \in \Pi} \|\Delta_t Z\| \le \max_{t \in \Pi}\|\varepsilon_t\| \cdot V_1(Z),$$
and $g(s,\tau,t) := \|f'(Z_\tau + s\,(Z_t - Z_\tau)) - f'(Z_\tau)\|$ is a continuous function in $s \in [0,1]$ and $\tau, t \in [0,T]$ with $g(s,t,t) = 0$ for all $s$ and $t$, it follows by uniform continuity arguments that $g(s,\tau,t)$ is small whenever $|t - \tau|$ is small. Therefore, $\lim_{|\Pi| \to 0} \max_{t \in \Pi}\|\varepsilon_t\| = 0$. Moreover, again by a uniform continuity argument, $f'(Z_{t_-}) \to f'(Z_t)$ uniformly as $|\Pi| \to 0$. Thus we may pass to the limit as $|\Pi| \to 0$ in Eq. (3.11) to complete the proof.

3.3 Calculus Bounds

For the exercises to follow we suppose that $\mu$ is a positive finite measure on $\big([0,\infty), \mathcal{B}_{[0,\infty)}\big)$ such that $\mu$ is continuous, i.e. $\mu(\{s\}) = 0$ for all $s \in [0,\infty)$. We will further write,
$$\int_0^t f(s)\,d\mu(s) := \int_{[0,t]} f(s)\,d\mu(s) = \int_{(0,t)} f(s)\,d\mu(s),$$
wherein the second equality holds since $\mu$ is continuous. Although it is not necessary, you may use Exercise 3.1 with $Z_t := \mu([0,t])$ to solve the following problems.

Exercise 3.2. Show for all $0 \le a < b < \infty$ and $n \in \mathbb{N}$ that
$$h_n(b) := \int_{a \le s_1 \le s_2 \le \dots \le s_n \le b} d\mu(s_1)\dots d\mu(s_n) = \frac{\mu([a,b])^n}{n!}. \qquad (3.12)$$

Solution to Exercise (3.2). First solution. Let us observe that $h(t) := h_1(t) = \mu([a,t])$ and $h_n(t)$ satisfies the recursive relation,
$$h_{n+1}(t) := \int_a^t h_n(s)\,d\mu(s) = \int_a^t h_n(s)\,dh(s) \text{ for all } t \ge a.$$
Now let $H_n(t) := \frac{1}{n!}\,h^n(t)$; an application of Exercise 3.1 with $f(x) = x^{n+1}/(n+1)!$ implies,
$$H_{n+1}(t) = H_{n+1}(t) - H_{n+1}(a) = \int_a^t f'(h(\tau))\,dh(\tau) = \int_a^t H_n(\tau)\,dh(\tau)$$
and therefore it follows that $H_n(t) = h_n(t)$ for all $t \ge a$ and $n \in \mathbb{N}$.
Second solution. If $i \ne j$, it follows by Fubini's theorem that
$$\mu^n\big(\{(s_1,\dots,s_n) \in [a,b]^n : s_i = s_j\}\big) = \mu([a,b])^{n-2}\int_{[a,b]^2} 1_{s_i = s_j}\,d\mu(s_i)\,d\mu(s_j) = \mu([a,b])^{n-2}\int \mu(\{s_j\})\,d\mu(s_j) = 0.$$
From this observation it follows that
$$1_{[a,b]^n}(s_1,\dots,s_n) = \sum_{\sigma \in S_n} 1_{a \le s_{\sigma 1} \le s_{\sigma 2} \le \dots \le s_{\sigma n} \le b} \quad \mu^n\text{-a.e.,}$$
where $\sigma$ ranges over the permutations, $S_n$, of $\{1,2,\dots,n\}$. Integrating this equation with respect to $\mu^n$ and then using Fubini's theorem gives,
$$\mu([a,b])^n = \mu^n([a,b]^n) = \sum_{\sigma \in S_n} \int 1_{a \le s_{\sigma 1} \le \dots \le s_{\sigma n} \le b}\,d\mu^n(s) = \sum_{\sigma \in S_n} \int_{a \le s_1 \le \dots \le s_n \le b} d\mu(s_1)\dots d\mu(s_n) = n!\int_{a \le s_1 \le \dots \le s_n \le b} d\mu(s_1)\dots d\mu(s_n).$$
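As a sanity check of Eq. (3.12), one can estimate the simplex integral by Monte Carlo. The sketch below is an illustration only (not from the notes); it assumes $\mu$ is Lebesgue measure on $[a,b]$, so that $\mu([a,b]) = b - a$, and the sample size is an arbitrary choice.

import math
import numpy as np

# Monte Carlo check of Exercise 3.2 for mu = Lebesgue measure on [a, b]:
# the integral over {a <= s_1 <= ... <= s_n <= b} of dmu^n equals (b - a)^n / n!.
rng = np.random.default_rng(0)
a, b, n, N = 0.0, 2.0, 4, 200_000
samples = rng.uniform(a, b, size=(N, n))
ordered_fraction = np.mean(np.all(np.diff(samples, axis=1) >= 0, axis=1))
print(ordered_fraction * (b - a) ** n)      # Monte Carlo estimate of h_n(b)
print((b - a) ** n / math.factorial(n))     # exact value mu([a,b])^n / n!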

Exercise 3.3 (Gronwall's Lemma). If $\varepsilon(t)$ and $f(t)$ are continuous nonnegative functions such that
$$f(t) \le \varepsilon(t) + \int_0^t f(\tau)\,d\mu(\tau), \qquad (3.13)$$

then
$$f(t) \le \varepsilon(t) + \int_0^t e^{\mu([\tau,t])}\,\varepsilon(\tau)\,d\mu(\tau). \qquad (3.14)$$
If we further assume that $\varepsilon$ is increasing, then
$$f(t) \le \varepsilon(t)\,e^{\mu([0,t])}. \qquad (3.15)$$

Solution to Exercise (3.3). Feeding Eq. (3.13) back into itself implies
$$f(t) \le \varepsilon(t) + \int_0^t \Big[ \varepsilon(\tau) + \int_0^\tau f(s)\,d\mu(s) \Big]\,d\mu(\tau) = \varepsilon(t) + \int_{0 \le s_1 \le t} \varepsilon(s_1)\,d\mu(s_1) + \int_{0 \le s_2 \le s_1 \le t} f(s_2)\,d\mu(s_2)\,d\mu(s_1)$$
$$\le \varepsilon(t) + \int_{0 \le s_1 \le t} \varepsilon(s_1)\,d\mu(s_1) + \int_{0 \le s_2 \le s_1 \le t} \varepsilon(s_2)\,d\mu(s_1)\,d\mu(s_2) + \int_{0 \le s_3 \le s_2 \le s_1 \le t} f(s_3)\,d\mu(s_1)\,d\mu(s_2)\,d\mu(s_3).$$
Continuing in this manner inductively shows,
$$f(t) \le \varepsilon(t) + \sum_{k=1}^N \int_{0 \le s_k \le \dots \le s_1 \le t} \varepsilon(s_k)\,d\mu(s_1)\dots d\mu(s_k) + R_N(t) \qquad (3.16)$$
where, using Exercise 3.2,
$$R_N(t) := \int_{0 \le s_{N+1} \le \dots \le s_1 \le t} f(s_{N+1})\,d\mu(s_1)\dots d\mu(s_{N+1}) \le \max_{0 \le s \le t} f(s)\,\frac{\mu([0,t])^{N+1}}{(N+1)!} \to 0 \text{ as } N \to \infty.$$
So passing to the limit in Eq. (3.16) and again making use of Exercise 3.2 shows,
$$f(t) \le \varepsilon(t) + \sum_{k=1}^\infty \int_{0 \le s_k \le \dots \le s_1 \le t} \varepsilon(s_k)\,d\mu(s_1)\dots d\mu(s_k) \qquad (3.17)$$
$$= \varepsilon(t) + \sum_{k=1}^\infty \int_0^t \varepsilon(s_k)\,\frac{\mu([s_k,t])^{k-1}}{(k-1)!}\,d\mu(s_k) = \varepsilon(t) + \int_0^t e^{\mu([\tau,t])}\,\varepsilon(\tau)\,d\mu(\tau).$$
If we further assume that $\varepsilon$ is increasing, then from Eq. (3.17) and Exercise 3.2 we have
$$f(t) \le \varepsilon(t) + \varepsilon(t)\sum_{k=1}^\infty \int_{0 \le s_k \le \dots \le s_1 \le t} d\mu(s_1)\dots d\mu(s_k) = \varepsilon(t) + \varepsilon(t)\sum_{k=1}^\infty \frac{\mu([0,t])^k}{k!} = \varepsilon(t)\,e^{\mu([0,t])}.$$
Alternatively, if we let $Z_t := \mu([0,t])$, then
$$\int_0^t e^{Z_t - Z_\tau}\,dZ_\tau = -e^{Z_t - Z_\tau}\Big|_{\tau=0}^{\tau=t} = e^{Z_t} - 1.$$
Therefore,
$$f(t) \le \varepsilon(t) + \varepsilon(t)\big( e^{Z_t} - 1 \big) = \varepsilon(t)\,e^{Z_t}.$$

Exercise 3.4. Suppose that $\{\varepsilon_n(t)\}_{n=0}^\infty$ is a sequence of non-negative continuous functions such that
$$\varepsilon_{n+1}(t) \le \int_0^t \varepsilon_n(\tau)\,d\mu(\tau) \text{ for all } n \ge 0 \qquad (3.18)$$
and $\varepsilon(t) = \max_{0 \le \tau \le t} \varepsilon_0(\tau)$. Show
$$\varepsilon_n(t) \le \varepsilon(t)\,\frac{\mu([0,t])^n}{n!} \text{ for all } n \ge 0.$$

Solution to Exercise (3.4). By iteration of Eq. (3.18) we find,
$$\varepsilon_1(t) \le \int_0^t \varepsilon_0(\tau)\,d\mu(\tau) \le \varepsilon(t)\int_{0 \le s_1 \le t} d\mu(s_1),$$
$$\varepsilon_2(t) \le \int_0^t \varepsilon_1(s_2)\,d\mu(s_2) \le \varepsilon(t)\int_{0 \le s_2 \le s_1 \le t} d\mu(s_1)\,d\mu(s_2),$$
and continuing inductively,
$$\varepsilon_n(t) \le \varepsilon(t)\int_{0 \le s_n \le \dots \le s_1 \le t} d\mu(s_1)\dots d\mu(s_n).$$
The result now follows directly from Exercise 3.2.
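A minimal numerical illustration of Gronwall's estimate (3.15): the script below is not from the notes and assumes $\mu$ is Lebesgue measure and $\varepsilon(t) \equiv 1$ (both arbitrary choices). It builds the extremal $f$ satisfying $f(t) = 1 + \int_0^t f(s)\,ds$ on a grid and compares it with the bound $\varepsilon(t)\,e^{\mu([0,t])} = e^t$.

import numpy as np

# Discrete Gronwall check with mu = Lebesgue measure on [0, T] and eps(t) = 1:
# build f with f(t) = 1 + int_0^t f(s) ds on a grid and compare with e^t.
T, N = 2.0, 2000
dt = T / N
t = np.linspace(0.0, T, N + 1)
f = np.empty(N + 1)
f[0] = 1.0
for k in range(N):
    f[k + 1] = 1.0 + np.sum(f[: k + 1]) * dt   # left-endpoint approximation of the integral
bound = np.exp(t)                              # eps(t) * exp(mu([0, t])) with eps = 1
print(bool(np.all(f <= bound + 1e-12)), f[-1], bound[-1])   # f stays below the Gronwall bound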

3.4 Bounded Variation Ordinary Differential Equations

In this section we begin by reviewing some of the basic theory of ordinary differential equations (O.D.E.s for short). Throughout this chapter we will let $X$ and $Y$ be Banach spaces, $U \subset_o Y$ an open subset of $Y$, $y_0 \in U$, $x : [0,T] \to X$ a continuous process of bounded variation, and $F : [0,T] \times U \to \mathrm{End}(X,Y)$ a continuous function. (We will make further assumptions on $F$ as we need them.) Our goal here is to investigate the ordinary differential equation,
$$\dot y(t) = F(t, y(t))\,\dot x(t) \text{ with } y(0) = y_0 \in U. \qquad (3.19)$$
Since $x$ is only of bounded variation, to make sense of this equation we will interpret it in its integrated form,
$$y(t) = y_0 + \int_0^t F(\tau, y(\tau))\,dx(\tau). \qquad (3.20)$$

Proposition 3.13 (Continuous dependence on the data). Suppose that $G : [0,T] \times U \to \mathrm{End}(X,Y)$ is another continuous function, $z : [0,T] \to X$ is another continuous function with bounded variation, and $w : [0,T] \to U$ satisfies the differential equation,
$$w(t) = w_0 + \int_0^t G(\tau, w(\tau))\,dz(\tau) \qquad (3.21)$$
for some $w_0 \in U$. Further assume there exists a continuous function, $K(t) \ge 0$, such that $F$ satisfies the Lipschitz condition,
$$\|F(t,y) - F(t,w)\| \le K(t)\,\|y - w\| \text{ for all } 0 \le t \le T \text{ and } y, w \in U. \qquad (3.22)$$
Then
$$\|y(t) - w(t)\| \le \varepsilon(t)\exp\Big( \int_0^t K(\tau)\,\|dx(\tau)\| \Big), \qquad (3.23)$$
where
$$\varepsilon(t) := \|y_0 - w_0\| + \int_0^t \|F(\tau,w(\tau)) - G(\tau,w(\tau))\|\,\|dx(\tau)\| + \int_0^t \|G(\tau,w(\tau))\|\,\|d(x-z)(\tau)\|. \qquad (3.24)$$

Proof. Let $\delta(t) := y(t) - w(t)$, so that $y = w + \delta$. We then have,
$$\delta(t) = y_0 - w_0 + \int_0^t F(\tau, y(\tau))\,dx(\tau) - \int_0^t G(\tau, w(\tau))\,dz(\tau)$$
$$= y_0 - w_0 + \int_0^t F(\tau, w(\tau) + \delta(\tau))\,dx(\tau) - \int_0^t G(\tau, w(\tau))\,dz(\tau)$$
$$= y_0 - w_0 + \int_0^t [F(\tau,w(\tau)) - G(\tau,w(\tau))]\,dx(\tau) + \int_0^t G(\tau,w(\tau))\,d(x-z)(\tau) + \int_0^t [F(\tau, w(\tau) + \delta(\tau)) - F(\tau, w(\tau))]\,dx(\tau).$$
Crashing through this identity with norms shows,
$$\|\delta(t)\| \le \varepsilon(t) + \int_0^t K(\tau)\,\|\delta(\tau)\|\,\|dx(\tau)\|,$$
where $\varepsilon(t)$ is given in Eq. (3.24). The estimate in Eq. (3.23) is now a consequence of this inequality and Exercise 3.3 with $d\mu(\tau) := K(\tau)\,\|dx(\tau)\|$.

Corollary 3.14 (Uniqueness of solutions). If $F$ satisfies the Lipschitz hypothesis in Eq. (3.22), then there is at most one solution to the ODE in Eq. (3.20).

Proof. Simply apply Proposition 3.13 with $F = G$, $y_0 = w_0$, and $x = z$. In this case $\varepsilon \equiv 0$ and the result follows.

Proposition 3.15 (An a priori growth bound). Suppose that $U = Y$, $T = \infty$, and there are continuous functions, $a(t) \ge 0$ and $b(t) \ge 0$, such that
$$\|F(t,y)\| \le a(t) + b(t)\,\|y\| \text{ for all } t \ge 0 \text{ and } y \in Y.$$
Then
$$\|y(t)\| \le \Big( \|y_0\| + \int_0^t a(\tau)\,d\lambda(\tau) \Big)\exp\Big( \int_0^t b(\tau)\,d\lambda(\tau) \Big), \qquad (3.25)$$
where
$$\lambda(t) := \nu_{x,1}(0,t), \qquad (3.26)$$
the $1$-variation of $x$ on $[0,t]$.

Proof. From Eq. (3.20) we have,
$$\|y(t)\| \le \|y_0\| + \int_0^t \|F(\tau, y(\tau))\|\,d\lambda(\tau) \le \|y_0\| + \int_0^t (a(\tau) + b(\tau)\,\|y(\tau)\|)\,d\lambda(\tau) = \varepsilon(t) + \int_0^t \|y(\tau)\|\,d\mu(\tau),$$


where
$$\varepsilon(t) := \|y_0\| + \int_0^t a(\tau)\,d\lambda(\tau) \text{ and } d\mu(\tau) := b(\tau)\,d\lambda(\tau).$$
Hence we may apply Exercise 3.3 to learn $\|y(t)\| \le \varepsilon(t)\,e^{\mu([0,t])}$, which is the same as Eq. (3.25).
Theorem 3.16 (Global Existence). Let us now suppose $U = X$ and $F$ satisfies the Lipschitz hypothesis in Eq. (3.22). Then there is a unique solution, $y(t)$, to the ODE in Eq. (3.20).

Proof. We will use the standard method of Picard iterates. Namely let $y_0(t) \in W$ be any continuous function and then define $y_n(t)$ inductively by,
$$y_{n+1}(t) := y_0 + \int_0^t F(\tau, y_n(\tau))\,dx(\tau). \qquad (3.27)$$
Then from our assumptions and the definition of $y_n(t)$, we find for $n \ge 1$ that
$$\|y_{n+1}(t) - y_n(t)\| = \Big\| \int_0^t F(\tau, y_n(\tau))\,dx(\tau) - \int_0^t F(\tau, y_{n-1}(\tau))\,dx(\tau) \Big\| \le \int_0^t \|F(\tau, y_n(\tau)) - F(\tau, y_{n-1}(\tau))\|\,\|dx(\tau)\| \le \int_0^t K(\tau)\,\|y_n(\tau) - y_{n-1}(\tau)\|\,\|dx(\tau)\|.$$
Since,
$$\|y_1(t) - y_0(t)\| = \Big\| y_0 + \int_0^t F(\tau, y_0(\tau))\,dx(\tau) - y_0(t) \Big\| \le \max_{0 \le \tau \le t}\|y_0(\tau) - y_0\| + \int_0^t \|F(\tau, y_0(\tau))\|\,\|dx(\tau)\| =: \varepsilon(t),$$
it follows by an application of Exercise 3.4 (with $d\mu(\tau) := K(\tau)\,\|dx(\tau)\|$) that
$$\varepsilon_n(t) := \|y_{n+1}(t) - y_n(t)\| \le \varepsilon(t)\Big( \int_0^t K(\tau)\,\|dx(\tau)\| \Big)^n / n!. \qquad (3.28)$$
Since the right side of this equation is increasing in $t$, we may conclude by summing Eq. (3.28) that
$$\sum_{n=0}^\infty \sup_{0 \le t \le T}\|y_{n+1}(t) - y_n(t)\| \le \varepsilon(T)\,e^{\int_0^T K(\tau)\,\|dx(\tau)\|} < \infty.$$
Therefore, it follows that $y_n(t)$ is uniformly convergent on compact subsets of $[0,\infty)$ and therefore $y(t) := \lim_{n \to \infty} y_n(t)$ exists and is a continuous function. Moreover, we may now pass to the limit in Eq. (3.27) to learn this function $y$ satisfies Eq. (3.20). Indeed,
$$\Big\| \int_0^t F(\tau, y_n(\tau))\,dx(\tau) - \int_0^t F(\tau, y(\tau))\,dx(\tau) \Big\| \le \int_0^t \|F(\tau, y_n(\tau)) - F(\tau, y(\tau))\|\,\|dx(\tau)\| \le \int_0^t K(\tau)\,\|y_n(\tau) - y(\tau)\|\,\|dx(\tau)\| \le \sup_{0 \le \tau \le t}\|y_n(\tau) - y(\tau)\| \int_0^t K(\tau)\,\|dx(\tau)\| \to 0 \text{ as } n \to \infty.$$

Remark 3.17 (Independence of initial guess). In the above proof, we were allowed to choose $y_0(t)$ as we pleased. In all cases we ended up with a solution to the ODE which we already knew to be unique if it existed. Therefore all initial guesses give rise to the same solution. This can also be seen directly. Indeed, if $z_0(t)$ is another continuous path in $W$ and $z_n(t)$ is defined inductively by,
$$z_{n+1}(t) := y_0 + \int_0^t F(\tau, z_n(\tau))\,dx(\tau) \text{ for } n \ge 0,$$
then
$$z_{n+1}(t) - y_{n+1}(t) = \int_0^t [F(\tau, z_n(\tau)) - F(\tau, y_n(\tau))]\,dx(\tau)$$
and therefore,
$$\|z_{n+1}(t) - y_{n+1}(t)\| \le \int_0^t \|F(\tau, z_n(\tau)) - F(\tau, y_n(\tau))\|\,\|dx(\tau)\| \le \int_0^t K(\tau)\,\|z_n(\tau) - y_n(\tau)\|\,\|dx(\tau)\|.$$
Thus it follows from Exercise 3.4 that
$$\|z_n(t) - y_n(t)\| \le \frac{1}{n!}\Big( \int_0^t K(\tau)\,\|dx(\tau)\| \Big)^n \max_{0 \le \tau \le t}\|z_0(\tau) - y_0(\tau)\| \to 0 \text{ as } n \to \infty.$$
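When $x$ is piecewise $C^1$ the Picard scheme (3.27) can be carried out numerically, since $dx(\tau) = \dot x(\tau)\,d\tau$ and each integral becomes an ordinary Riemann sum. The sketch below is my own illustration rather than code from the notes; the data F(t, y) = sin(y), the driving path x and the grid size are assumptions made only for the example. Successive iterates are seen to converge in the supremum norm, as the estimate (3.28) predicts.

import numpy as np

# Picard iterates for y(t) = y0 + int_0^t F(tau, y(tau)) dx(tau)
# with the assumed data F(t, y) = sin(y), x(t) = t + 0.5*sin(3 t), y0 = 1.
T, N = 1.0, 2000
t = np.linspace(0.0, T, N + 1)
x = t + 0.5 * np.sin(3 * t)
dx = np.diff(x)
y0 = 1.0

def picard_step(y):
    # y_{n+1}(t_k) = y0 + sum_{j<k} F(t_j, y_n(t_j)) * (x_{j+1} - x_j)
    increments = np.sin(y[:-1]) * dx
    return y0 + np.concatenate(([0.0], np.cumsum(increments)))

y = np.full(N + 1, y0)                    # initial guess y_0(t) = y0
for n in range(8):
    y_new = picard_step(y)
    print(n, np.max(np.abs(y_new - y)))   # sup-norm distance between consecutive iterates
    y = y_new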

3.5 Some Linear ODE Results

In this section we wish to consider linear ODEs of the form,

$$y(t) = \int_0^t dx(\tau)\,y(\tau) + f(t) \qquad (3.29)$$
where $x(t) \in \mathrm{End}(W)$ and $f(t) \in W$ are finite variation paths. To put this in the form considered above, let $V := \mathrm{End}(W)$ and define, for $y \in W$, $F(y) : V \times W \to W$ by,
$$F(y)(x,f) := xy + f \text{ for all } (x,f) \in V \times W = \mathrm{End}(W) \times W.$$
Then the above equation may be written as,
$$y(t) = f(0) + \int_0^t F(y(\tau))\,d(x,f)(\tau).$$
Notice that
$$\|[F(y) - F(y')](x,f)\| = \|x\,(y - y')\| \le \|x\|\,\|y - y'\|$$
and therefore,
$$\|F(y) - F(y')\| \le \|y - y'\|,$$
where we use any reasonable norm on $V \times W$, for example $\|(x,w)\| := \|x\| + \|w\|$ or $\|(x,w)\| := \max(\|x\|,\|w\|)$. Thus the theory we have developed above guarantees that Eq. (3.29) has a unique solution which we can construct via the method of Picard iterates.

Theorem 3.18. The unique solution to Eq. (3.29) is given by
$$y(t) = f(t) + \sum_{n=1}^\infty \int_{0 \le \tau_1 \le \dots \le \tau_n \le t} dx(\tau_n)\,dx(\tau_{n-1})\dots dx(\tau_1)\,f(\tau_1).$$
More generally if $0 \le s \le t \le T$, then the unique solution to
$$y(t) = \int_s^t dx(\tau)\,y(\tau) + f(t) \text{ for } s \le t \le T \qquad (3.30)$$
is given by
$$y(t) = f(t) + \sum_{n=1}^\infty \int_{s \le \tau_1 \le \dots \le \tau_n \le t} dx(\tau_n)\,dx(\tau_{n-1})\dots dx(\tau_1)\,f(\tau_1). \qquad (3.31)$$

Proof. Let us first find the formula for $y(t)$. To this end, let
$$(Ay)(t) := \int_s^t dx(\tau)\,y(\tau).$$
Then Eq. (3.29) may be written as $y - Ay = f$ or equivalently as,
$$(I - A)\,y = f.$$
Thus the solution to this equation should be given by,
$$y = (I - A)^{-1} f = \sum_{n=0}^\infty A^n f. \qquad (3.32)$$
But
$$(A^n f)(t) = \int_s^t dx(\tau_n)\,(A^{n-1}f)(\tau_n) = \int_s^t dx(\tau_n)\int_s^{\tau_n} dx(\tau_{n-1})\,(A^{n-2}f)(\tau_{n-1}) = \dots = \int_{s \le \tau_1 \le \dots \le \tau_n \le t} dx(\tau_n)\,dx(\tau_{n-1})\,dx(\tau_{n-2})\dots dx(\tau_1)\,f(\tau_1) \qquad (3.33)$$
and therefore Eq. (3.31) now follows from Eq. (3.32) and (3.33).
For those not happy with this argument one may use Picard iterates instead. So we begin by setting $y_0(t) = f(t)$ and then define $y_n(t)$ inductively by,
$$y_{n+1}(t) = f(s) + \int_s^t F(y_n(\tau))\,d(x,f)(\tau) = f(s) + \int_s^t [dx(\tau)\,y_n(\tau) + df(\tau)] = \int_s^t dx(\tau)\,y_n(\tau) + f(t).$$
Therefore,
$$y_1(t) = \int_s^t dx(\tau_1)\,f(\tau_1) + f(t)$$
and
$$y_2(t) = \int_s^t dx(\tau_2)\,y_1(\tau_2) + f(t) = \int_s^t dx(\tau_2)\Big[ \int_s^{\tau_2} dx(\tau_1)\,f(\tau_1) + f(\tau_2) \Big] + f(t) = \int_{s \le \tau_1 \le \tau_2 \le t} dx(\tau_2)\,dx(\tau_1)\,f(\tau_1) + \int_s^t dx(\tau_2)\,f(\tau_2) + f(t)$$

and likewise,
$$y_3(t) = \int_{s \le \tau_1 \le \tau_2 \le \tau_3 \le t} dx(\tau_3)\,dx(\tau_2)\,dx(\tau_1)\,f(\tau_1) + \int_{s \le \tau_1 \le \tau_2 \le t} dx(\tau_2)\,dx(\tau_1)\,f(\tau_1) + \int_s^t dx(\tau_2)\,f(\tau_2) + f(t).$$
So by induction it follows that
$$y_n(t) = \sum_{k=1}^n \int_{s \le \tau_1 \le \dots \le \tau_k \le t} dx(\tau_k)\dots dx(\tau_1)\,f(\tau_1) + f(t).$$
Letting $n \to \infty$, making use of the fact that
$$\Big\| \int_{s \le \tau_1 \le \dots \le \tau_n \le t} dx(\tau_n)\dots dx(\tau_1) \Big\| \le \int_{s \le \tau_1 \le \dots \le \tau_n \le t} \|dx(\tau_n)\|\dots\|dx(\tau_1)\| = \frac{1}{n!}\Big( \int_s^t \|dx\| \Big)^n, \qquad (3.34)$$
we find as before, that
$$y(t) = \lim_{n \to \infty} y_n(t) = f(t) + \sum_{k=1}^\infty \int_{s \le \tau_1 \le \dots \le \tau_k \le t} dx(\tau_k)\dots dx(\tau_1)\,f(\tau_1).$$

Definition 3.19. For $0 \le s \le t \le T$, let $T_0(t,s) := I$,
$$T_n^x(t,s) = \int_{s \le \tau_1 \le \dots \le \tau_n \le t} dx(\tau_n)\,dx(\tau_{n-1})\dots dx(\tau_1), \qquad (3.35)$$
and
$$T^x(t,s) = \sum_{n=0}^\infty T_n(t,s) = I + \sum_{n=1}^\infty \int_{s \le \tau_1 \le \dots \le \tau_n \le t} dx(\tau_n)\,dx(\tau_{n-1})\dots dx(\tau_1). \qquad (3.36)$$

Example 3.20. Suppose that $x(t) = tA$ where $A \in \mathrm{End}(V)$, then
$$\int_{s \le \tau_1 \le \dots \le \tau_n \le t} dx(\tau_n)\dots dx(\tau_1) = A^n \int_{s \le \tau_1 \le \dots \le \tau_n \le t} d\tau_n\dots d\tau_1 = \frac{(t-s)^n}{n!}\,A^n$$
and therefore we may conclude in this case that $T^x(t,s) = e^{(t-s)A}$ where
$$e^{tA} := \sum_{n=0}^\infty \frac{t^n}{n!}\,A^n.$$
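Example 3.20 can also be checked numerically: discretizing $T(dt,s) = dx(t)\,T(t,s)$ for $x(t) = tA$ as a product of factors $(I + A\,dt)$ reproduces $e^{(t-s)A}$. The sketch below is an illustration only and not part of the notes; the matrix A, the interval and the step count are arbitrary choices.

import numpy as np

# For x(t) = t*A, compare the discretized solution of T(dt, s) = dx(t) T(t, s)
# with a truncation of the series e^{(t-s)A} = sum_n ((t-s)^n / n!) A^n.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])   # an arbitrary 2x2 matrix (assumption)
s, t, N = 0.0, 1.5, 20_000
dt = (t - s) / N
I = np.eye(2)

T_disc = I.copy()
for _ in range(N):
    T_disc = (I + dt * A) @ T_disc         # left multiplication by I + dx, cf. Eq. (3.37)

T_series, term = I.copy(), I.copy()
for n in range(1, 30):
    term = term @ ((t - s) * A) / n        # ((t-s) A)^n / n!
    T_series = T_series + term

print(np.max(np.abs(T_disc - T_series)))   # small, of order dt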

Theorem 3.21 (Duhamel's principle I). As a function of $t \in [s,T]$ or $s \in [0,t]$, $T(t,s)$ is of bounded variation and $T(t,s) := T^x(t,s)$ satisfies the ordinary differential equations,
$$T(dt,s) = dx(t)\,T(t,s) \text{ with } T(s,s) = I \text{ (in } t \ge s\text{)}, \qquad (3.37)$$
and
$$T(t,ds) = -T(t,s)\,dx(s) \text{ with } T(t,t) = I \text{ (in } 0 \le s \le t\text{)}. \qquad (3.38)$$
Moreover, $T$ obeys the semi-group property¹,
$$T(t,s)\,T(s,u) = T(t,u) \text{ for all } 0 \le u \le s \le t \le T,$$
and the solution to Eq. (3.30) is given by
$$y(t) = f(t) - \int_s^t T(t,d\tau)\,f(\tau). \qquad (3.39)$$
In particular when $f(t) = y_0$ is a constant we have,
$$y(t) = y_0 - T(t,\tau)\,y_0\big|_{\tau=s}^{\tau=t} = T(t,s)\,y_0. \qquad (3.40)$$

¹ This is a key algebraic identity that we must demand in the rough path theory to come later.

Proof. 1. One may directly conclude that $T(t,s)$ solves Eq. (3.37) by applying Theorem 3.18 with $y(t)$ and $f(t)$ now taking values in $\mathrm{End}(V)$ with $f(t) \equiv I$. Then Theorem 3.18 asserts the solution to $dy(t) = dx(t)\,y(t)$ with $y(s) = I$ is given by $T^x(t,s)$ with $T^x(t,s)$ as in Eq. (3.36). Alternatively it is possible to use the definition of $T^x(t,s)$ in Eq. (3.36) to give a direct proof that Eq. (3.37) holds. We will carry out this style of proof for Eq. (3.38) and leave the similar proof of Eq. (3.37) to the reader if they so desire to do it.
2. Proof of the semi-group property. Simply observe that both $t \mapsto T(t,s)\,T(s,u)$ and $t \mapsto T(t,u)$ solve the same differential equation, namely,
$$dy(t) = dx(t)\,y(t) \text{ with } y(s) = T(s,u) \in \mathrm{End}(V),$$
hence by our uniqueness results we know that $T(t,s)\,T(s,u) = T(t,u)$.
3. Proof of Eq. (3.38). Let $T_n(t,s) := T_n^x(t,s)$ and observe that
$$T_n(t,s) = \int_s^t T_{n-1}(t,\sigma)\,dx(\sigma) \text{ for } n \ge 1. \qquad (3.41)$$

Thus if let
T (N ) (t, s) :=

N
X

N
X

Tn (t, s) = I +

n=0

N Z
X
n=1

Z
=I+

n=1

Tn1 (t, ) dx ()

1
tN
X

Z
Tn (t, ) dx () = I +

s n=0

Corollary 3.22 (Duhamels principle II). Equation (3.39) may also be expressed as,
Z t
y (t) = T x (t, s) f (s) +
T x (t, ) df ( )
(3.43)
s

Tn (t, s) ,

it follows that
T (N ) (t, s) = I +

31

T (N 1) (t, ) dx () . (3.42)

which is one of the standard forms of Duhamels principle. In words it says,




solution to the homogeneous eq.
y (t) =
dy (t) = dx (t) y (t) with y (s) = f (s)

Z t
solution to the homogeneous eq.
+
.
dy (t) = dx (t) y (t) with y ( ) = df ( )
s
Proof. This follows from Eq. (3.39) by integration by parts (you should
modify Exercise 3.5 below as necessary);
Z t
y (t) = f (t) T x (t, ) f ( ) | =t
+
T x (t, ) df ( )
=s
s
Z t
= T x (t, s) f (s) +
T x (t, ) df ( ) .

We already now that T (N ) (t, s) T (t, s) uniformly in (t, s) which also follows
from the estimate in Eq. (3.34) as well. Passing to the limit in Eq. (3.42) as
N then implies,
Z t
T (t, s) = I +
T (t, ) dx ()

(3.44)

which is the integrated form of Eq. (3.38) owing to the fundamental theorem
of calculus which asserts that
Z t
ds
T (t, ) dx () = T (t, s) dx (s) .
s

4. Proof of Eq. (3.39). From Eq. (3.31) and Eq. (3.41) which reads in
differential form as, Tn (t, d) = Tn1 (t, ) dx () , we have,
y (t) = f (t)
= f (t)
= f (t)

Z
X

n=1 s
Z t
X
n=1 s
Z tX

Tn (t, d) f ()
Tn1 (t, ) dx () f ()

Exercise 3.5 (Product Rule). Suppose that V is a Banach space and x :


[0, T ] End (V ) and y : [0, T ] End (V ) are continuous finite 1 variation
paths. Show for all 0 s < t T that,
Z t
Z t
x (t) y (t) x (s) y (s) =
dx ( ) y ( ) +
x ( ) dy ( ) .
(3.45)
s

Alternatively, one may interpret this an an integration by parts formula;


Z t
Z t
=t
x ( ) dy ( ) = x ( ) y ( ) | =s
dx ( ) y ( )
s

Solution to Exercise (3.5). For P (s, t) we have,


X
x (t) y (t) x (s) y (s) =
(x () y ())

Tn1 (t, ) dx () f ()

s n=1
Z t

= f (t)

[(x ( ) + x) (y ( ) + y) x ( ) y ( )]

T (t, d) f () .

[x ( ) y + ( x) y ( ) + ( x) y] . (3.46)

The last term is easy to estimate as,




X
X
X


k xk k yk max k xk
k yk
( x) y




Exercise 3.8 (Abelian Case). Suppose that [x (s) , x (t)] = 0 for all 0 s, t
T, show
T x (t, s) = e(x(t)x(s)) .
(3.49)

max k xk V1 (y) 0 as || 0.

So passing to the limit as || 0 in Eq. (3.46) gives Eq. (3.45).


Exercise 3.6 (Inverses). Let V be a Banach space and x : [0, T ] End (V )
be a continuous finite 1 variation paths. Further suppose that S (t, s)
End (V ) is the unique solution to,

Solution to Exercise (3.8). By replacing x (t) by x (t) x (s) if necessary,


we may assume that x (s) = 0. Then by the product rule and the assumed
commutativity,
n1
x (t)
xn (t)
d
= dx (t)
n!
(n 1)!
or in integral form,
xn (t)
=
n!

S (dt, s) = S (t, s) dx (t) with S (s, s) = I End (V ) .


Show
x

T x (dt, s)

= T x (dt, s)

may be described as the


1

dx (t) with T x (s, s)

= I.

dx ( )
s

x ( )
(n 1)!

(t) satisfies the same recursion relations as Tnx (t, s) .


which shows that
n
Thus we may conclude that Tnx (t, s) = (x (t) x (s)) /n! and thus,

S (t, s) T (t, s) = I = T (t, s) S (t, s) for all 0 s t T,


1

n1

1 n
n! x

that is to say T x (t, s) is invertible and T x (t, s)


unique solution to the ODE,

(3.47)

Solution to Exercise (3.6). Using the product rule we find,

T (t, s) =

X
1
n
(x (t) x (s)) = e(x(t)x(s))
n!
n=0

as desired.

dt [S (t, s) T x (t, s)] = S (t, s) dx (t) T x (t, s) + S (t, s) dx (t) T x (t, s) = 0.

Exercise 3.9 (Abelian Factorization Property). Suppose that V is a Banach space and x : [0, T ] End (V ) and y : [0, T ] End (V ) are continuous
finite 1 variation paths such that [x (s) , y (t)] = 0 for all 0 s, t T, then

Since S (s, s) T x (s, s) = I, it follows that S (t, s) T x (t, s) = I for all 0 s


t T.
For opposite product, let g (t) := T x (t, s) S (t, s) End (V ) so that

T x+y (s, t) = T x (s, t) T y (s, t) for all 0 s t T.

dt [g (t)] = dt [T x (t, s) S (t, s)] = T x (t, s) S (t, s) dx (t) + dx (t) T x (t, s) S (t, s)
= dx (t) g (t) g (t) dx (t) with g (s) = I.
Observe that g (t) I solves this ODE and therefore by uniqueness of solutions to such linear ODE we may conclude that g (t) must be equal to I, i.e.
T x (t, s) S (t, s) = I.
As usual we say that A, B End (V ) commute if
0 = [A, B] := AB BA.

(3.50)

Hint: show both sides satisfy the same ordinary differential equations see the
next problem.
Exercise 3.10 (General Factorization Property). Suppose that V is a Banach space and x : [0, T ] End (V ) and y : [0, T ] End (V ) are continuous
finite 1 variation paths. Show
T x+y (s, t) = T x (s, t) T z (s, t) ,
where
Z

(3.48)

z (t) :=

T x (s, )

dy ( ) T x (s, )

Exercise 3.7 (Commute). Suppose that V is a Banach space and x : [0, T ]


End (V ) is a continuous finite 1 variation paths and f : [0, T ] End (V )
is continuous. If A End (V ) commutes with {x (t) , f (t) : 0 t T } , then
Rt
A commutes with s f ( ) dx ( ) for all 0 s t T. Also show that
[A, T x (s, t)] = 0 for all 0 s t T.

Hint: see the hint for Exercise 3.9.


Solution to Exercise (3.10). Let $g(t) := T^x(s,t)^{-1}\,T^{x+y}(s,t)$. Then making use of Exercise 3.6 we have,


1

dg (t) = T x (s, t)

(dx (t) + dy (t)) T x+y (s, t) T x (s, t)

dx (t) T x+y (s, t)

33

Show t X (s, t) is the unique solution to the ODE,

x+y

= T (s, t) dy (t) T
(s, t)


1
1
= T x (s, t) dy (t) T x (s, t) T x (s, t) T x+y (s, t)

X (s, dt) = X (s, t) dx (t) with X (s, s) = 1


and that

= dz (t) g (t) with g (s) = I.


X (s, t) X (t, u) = X (s, u) for all 0 s t u T.
1

Remark 3.23. If g (t) End (V ) is a C 1 path such that g (t)


1
for all t, then t g (t) is invertible and

is invertible
Solution to Exercise (3.12). This can be deduced from what we have already done. In order to do this, let y (t) := Rx(t) End (B) so that for a B,

d
1
1
1
g (t) = g (t) g (t) g (t) .
dt

T y (t, s) a = a +

Exercise 3.11. Suppose that g (t) Aut (V ) is a continuous finite variation


1
path. Show g (t) Aut (V ) is again a continuous path with finite variation
and that
1
1
1
dg (t) = g (t) dg (t) g (t) .
(3.51)
Hint: recall that the invertible elements, Aut (V ) End (V ) , is an open set
and that Aut (V ) 3 g g 1 Aut (V ) is a smooth map.

=a+

dy (n ) . . . dy (1 ) a

n=1 s1 n t
Z
X
n=1

adx (1 ) . . . dx (n )

s1 n t

= aX (s, t) .
Therefore, taking a = 1, we find,

Solution to Exercise (3.11). Let V (t) := g (t)


which is again a finite
variation path by the fundamental theorem of calculus and the fact that
Aut (V ) 3 g g 1 Aut (V ) is a smooth map. Moreover we know that
V (t) g (t) = I for all t and therefore by the product rule (dV ) g +V dg = dI = 0.
Making use of the substitution formula we then find,
Z t
Z t
Z t
1
1
V (t) = V (0)+
dV ( ) =
dV ( ) g ( ) g ( ) =
V ( ) dg ( ) g ( ) .

Z
X

X (s, dt) = T y (dt, s) 1 = dy (t) T y (t, s) 1 = dy (t) X (s, t) = X (s, t) dx (t)


with X (s, s) = T (s, s) 1 = 1. Moreover we also have,
X (s, t) X (t, u) = T y (u, t) X (s, t) = T y (u, t) T y (t, s) 1 = T y (u, s) 1 = X (s, u) .
Alternatively: one can just check all the statements as we did for T (t, s) .
The main point is that if g (t) solves dg (t) = g (t) dx (t) , then ag (t) also solves
the same equation.

Replacing V (t) by g (t)

in this equation then shows,


Z t
1
1
1
g (t) g (0) =
g ( ) dg ( ) g ( )

Remark 3.24. Let R or C as the case may be and define,


X (s, t) := X x (s, t) and Xn (s, t) := Xnx (s, t) = n Xn (s, t) .

Then the identity in Eq. (3.52) becomes,

which is the integrated form of Eq. (3.51).

Exercise 3.12. Suppose now that B is a Banach algebra and x (t) B is a


continuous finite variation path. Let
X (s, t) := X x (s, t) := 1 +

n Xn (s, u) = X (s, u) = X (s, t) X (t, u)

n=0

Xnx (s, t) ,

n=1

where
Xnx (s, t) :=

(3.52)

Z
dx (1 ) . . . dx (n )

X
k,l=0

k l Xk (s, t) Xl (t, u)

n=0

Xk (s, t) Xl (t, u)

k+l=n

s1 n t


from which we conclude,


Xn (s, u) =

n
X

3.5.1 Bone Yard


Xk (s, t) Xnk (t, u) for n = 0, 1, 2, . . .

(3.53)

k=0

Terry. Lyons refers the identities in Eq. (3.53) as Chens identities. These identities may be also be deduced directly by looking at the multiple integral expressions defining Xk (s, t) .

Proof. but we can check this easily directly as well. From Eq. (3.41) we conclude,
Z t
kTn (t, s)k
kTn1 (t, )k kdx ()k
s

By the fundamental theorem of calculus we have, which we write symbolically


as,
Z t
Tn+1 (t, s) =
Tn (t, ) dx ()
s

Summing this equation upon n shows,


N
X
n=0

and hence one learn inductively that Tn (t, s) is a finite variation process in t,
Z
kTn (t, s)k
kdx (n )k kdx (n1 )k kdx (n2 )k . . . kdx (1 )k
s1 n t
Z t
n

1
=
n!

kdxk

and

V1 Tn (, s) |[s,T ]

kdx ( )k kTn1 (, s)k


s

1
(n 1)!

Z
s

n1
1
kdxk
kdx ( )k =
n!

!n
kdxk

In particular it follows that

V1 Tn (, s) |[s,T ] exp

!
kdxk

1.

n=1

P
Hence we learn that n=0 Tn (t, s) converges uniformly to T (t, s) so that T (t, s)
is continuous. Moreover, if P (s, T ) , then
X

kT (t, s) T (t , s)k

X
X

kTn (t, s) Tn (t , s)k

n=1 t


V1 Tn (, s) |[s,T ] < .

n=1


Hence it follows from this that


!

kdxk

V1 T (, s) |[s,T ] exp

1 < .

Similarly we have,


N


X
X
X



Tn (t , s)
V1 Tn (, s) |[s,T ]
T (t, s)


n=0

n=N +1

and therefore,
"
V1

T (, s)

N
X

#
Tn (, s) |[s,T ]

!
0 as N .

n=0

It is now a simple matter to see that for any continuous f,


Z

N Z
X

f ( ) T (d, s) = lim

n=0

f ( ) Tn (d, s) =

Z
X
n=0

f ( ) Tn (d, s) .

In particular,
T (t, s) T (s, s) =

[Tn (t, s)
n=0
Z t
X

Tn (s, s)]

#
 "Z t

X
dx ( ) Tn1 (t, s) =
dx ( )
Tn1 (, s)

n=1
Z t

dx ( ) T (, s) .
s

which is the precise version of Eq. (3.37).

n=1

4 Banach Space p-variation results

In this chapter, suppose that $V$ is a Banach space and assume $x \in C([0,T] \to V)$ with $x(0) = 0$ for simplicity. We continue the notation used in Chapter 2. In particular we have,
$$V_p(x : \Pi) := \Big( \sum_{t \in \Pi} \|x_t - x_{t_-}\|^p \Big)^{1/p} = \Big( \sum_{t \in \Pi} \|\Delta_t x\|^p \Big)^{1/p}$$
and
$$V_p(x) := \sup_{\Pi \in \mathcal{P}(0,T)} V_p(x : \Pi).$$

Lemma 4.1. Suppose that $\Pi \in \mathcal{P}(s,t)$ and $a \in \Pi \cap (s,t)$, then
$$V_p^p(x, \Pi \setminus \{a\}) \le 2^{p-1}\,V_p^p(x, \Pi). \qquad (4.1)$$

Proof. Since
$$\|x(a_+) - x(a_-)\|^p \le [\|x(a_+) - x(a)\| + \|x(a) - x(a_-)\|]^p \le 2^{p-1}\big( \|x(a_+) - x(a)\|^p + \|x(a) - x(a_-)\|^p \big) = 2^{p-1}\,V_p^p(x : \Pi \cap [a_-, a_+])$$
and
$$V_p^p(x, \Pi \setminus \{a\}) = V_p^p(x : \Pi \cap [0, a_-]) + \|x(a_+) - x(a_-)\|^p + V_p^p(x : \Pi \cap [a_+, T]),$$
we have,
$$V_p^p(x, \Pi \setminus \{a\}) \le V_p^p(x : \Pi \cap [0, a_-]) + 2^{p-1}\,V_p^p(x : \Pi \cap [a_-, a_+]) + V_p^p(x : \Pi \cap [a_+, T]) \le 2^{p-1}\,V_p^p(x, \Pi).$$

Corollary 4.2. As above, let $\omega(s,t) := \omega_{x,p}(s,t) := V_p^p\big(x|_{[s,t]}\big)$ be the control associated to $x$. Then for all $0 \le s < u < t \le T$, we have
$$\omega(s,u) + \omega(u,t) \le \omega(s,t) \le 2^{p-1}[\omega(s,u) + \omega(u,t)]. \qquad (4.2)$$
The second inequality shows that if $x|_{[s,u]}$ and $x|_{[u,t]}$ both have finite $p$-variation, then $x|_{[s,t]}$ has finite $p$-variation.

Proof. The first inequality is the superadditivity property of the control that we have already proved in Lemma 2.35. For the second inequality, let $\Pi \in \mathcal{P}(s,t)$. If $u \in \Pi$ we have,
$$V_p^p(x, \Pi) = V_p^p(x, \Pi \cap [s,u]) + V_p^p(x, \Pi \cap [u,t]) \le \omega(s,u) + \omega(u,t). \qquad (4.3)$$
On the other hand if $u \notin \Pi$ we have, using Eq. (4.3) with $\Pi$ replaced by $\Pi \cup \{u\}$ and Lemma 4.1,
$$V_p^p(x, \Pi) \le 2^{p-1}\,V_p^p(x, \Pi \cup \{u\}) \le 2^{p-1}[\omega(s,u) + \omega(u,t)].$$
Thus for any $\Pi \in \mathcal{P}(s,t)$ we may conclude that
$$V_p^p(x, \Pi) \le 2^{p-1}[\omega(s,u) + \omega(u,t)].$$
Taking the supremum of this inequality over $\Pi \in \mathcal{P}(s,t)$ then gives the desired result.
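The quantities $V_p(x : \Pi)$ are straightforward to evaluate for a concrete path. The sketch below is an illustration and not part of the notes; the random-walk sample path, the exponent p and the dyadic partitions are arbitrary choices made to show how $V_p(x : \Pi)$ behaves under refinement.

import numpy as np

def Vp(x, idx, p):
    # V_p(x : Pi) over the partition of [0, T] given by the index set idx
    return np.sum(np.abs(np.diff(x[idx])) ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
N = 2 ** 12
x = np.concatenate(([0.0], np.cumsum(rng.standard_normal(N)))) / np.sqrt(N)  # a rough path with x(0) = 0
p = 2.5
for level in range(1, 8):
    idx = np.arange(0, N + 1, N // 2 ** level)   # dyadic partition indices
    print(level, Vp(x, idx, p))                  # V_p(x : Pi) over successively finer partitions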
These results may be significantly improved upon. For example, we have the following proposition whose proof we leave to the interested reader, who may wish to consult Lemma 4.13 below.

Proposition 4.3. If $\Pi, \Pi' \in \mathcal{P}(s,t)$ with
$$\Pi = \{s =: \tau_0 < \tau_1 < \dots < \tau_n = t\} \subset \Pi',$$
then
$$V_p^p(x, \Pi) \le \sum_{\tau \in \Pi} \#\big(\Pi' \cap (\tau_-, \tau]\big)^{p-1}\,V_p^p\big(x, \Pi' \cap (\tau_-, \tau]\big) \qquad (4.4)$$
$$\le k^{p-1}\,V_p^p(x, \Pi'), \qquad (4.5)$$
where
$$k := \max\{\#\big(\Pi' \cap (\tau_{i-1}, \tau_i]\big) : i = 1, 2, \dots, n\}. \qquad (4.6)$$

Proof. This follows by the same methods used in the proof of Lemma 4.1 or Lemma 4.13 below. The point is that,


p



X
X


p

x
Vpp (x, ) =
k xk =
s



s 0 ( , ]
X
X
p1
p

# ( 0 ( , ])
ks xk

2. For C, Vp (x : ) = || Vp (x : ) and therefore Vp (x) =


|| Vp (x) .
3. For x, y C ([0, T ] V ) ,

!1/p

s 0 ( , ]

Vp (x + y : ) =

k p1 Vpp (x; 0 ( , ]) = k p1 Vpp (x; 0 ) .

kt x + t yk

kt xk

Corollary 4.4. Suppose that P (s, t) , then


X
p1
Vpp (x : [s, t]) # ( (s, t])
Vpp (x : [ , ]) .

(kt xk + kt yk)

(4.7)

Proof. Let P (s, t) and 0 := , then k defined in Eq. (4.6) is no


greater than # ( (s, t]) and therefore,

kt yk

= Vp (x : ) + Vp (y : )

and therefore,

Vpp (x, 0 )

(4.8)

Hence it follows from this triangle inequality and items 1. and 2. that
C0,p ([0, T ] , V ) is a linear space and that Vp () is a norm on C0,p ([0, T ] , V ) .
4. To finish the proof we must now show (C0,p ([0, T ] , V ) , Vp ()) is a

complete space. So suppose that {xn }n=1 C0,p ([0, T ] , V ) is a Cauchy sequence. Then by Eq. (4.10) we know that xn converges uniformly to some
x C ([0, T ] V ) . Moreover, for any partition, P (0, T ) we have
Vp (x xn : ) Vp (x xm : ) + Vp (xm xn : )
Vp (x xm : ) + Vp (xm xn ) .

while
Vpp (x, 0 [ , ])

Vp (x + y) Vp (x) + Vp (y) .

and in particular this show that if Vp (x : [ , ]) < for all , then


Vp (x : [s, t]) < .

p1

!1/p
X

Vp (x) + Vp (y) ,

Vpp (x, ) # ( (s, t])

!1/p

Vpp (x, 0 ) =

!1/p
X

Vpp (x : [ , ]) .

(4.9)

Taking the limit of this equation as m the shows,

So combining these two inequalities and then taking the supremum over
P (s, t) gives Eq. (4.7).

Vp (x xn : ) lim inf (Vp (x xm : ) + Vp (xm xn ))


m

= lim inf Vp (xm xn ) .

Definition 4.5. The normalized space of p variation is

C0,p ([0, T ] , V ) := {x C ([0, T ] V ) : x (0) = 0 and Vp (x) < } .

We may now take the supremum over P (0, T ) to learn,

Proposition 4.6. The space C0,p ([0, T ] , V ) is a linear space and Vp () is a


Banach norm on this space.

p 1/p

Vp (x) .

As t [0, T ] was arbitrary it follows that


kxku := max kx (t)k Vp (x) .
0tT

So by the triangle inequality, Vp (x) Vp (x xn ) + Vp (xn ) < for sufficiently


large n so that x C0,p ([0, T ] , V ) and Vp (x xn ) 0 as n .

Proof. 1. If t [0, T ] we may take := {0, t, T } to learn that


kx (t)k = kx (t) x (0)k (kx (t) x (0)k + kX (T ) x (t)k )

Vp (x xn ) lim inf Vp (xm xn ) 0 as n .

Proposition 4.7 (Interpolation). Suppose that x C0,p ([0, T ] V ) and


q > p, then
1p/q p/q
Vq (x) 21p/q kxku
Vp (x) .
(4.11)

(4.10)
Proof. Let P (0, T ) , then

In particular if Vp (x) = 0 then x = 0.



Vqq (x : ) =

kt xk =

max kt xk
t

qp

kt xk

kt xk

39

t
qp

qp

kt xk 2 kxku

Vpp (x) .

Taking the supremum over P (0, T ) and then taking the q th roots of both
sides gives the result.
Notation 4.8 For x C ([0, T ] V ) and P (0, T ) , let x (t) be the
piecwise linear path defined by
x (t) := x (t ) +

(t t )
t x for all t [0, T ] ,
t+ t

see Figures 4.1 and 4.2.


Fig. 4.2. Here = {0 = t0 < t1 < < t6 = T } and x is indicated by the red
piecewise linear path.

We will give the proof of this theorem at the end of this section.
Corollary 4.11. Suppose
x C0,p ([0, T ] V ) and q (p, ) . Then

lim||0 Vq x x = 0, i.e. x x as || 0 in C0,q for any q > p.
Proof. According to Proposition 4.7 and Theorem 4.10,

Fig. 4.1. Here = {0 = s0 < s1 < < s7 = T } and should be x. The red lines
indicate the image of x .

Proposition 4.9. For each x C ([0, T ] V ) , x x uniformly in t as


|| 0.


qp p


V p x x
Vqq x x 2 x x u

qp 
p
2 x x u
Vp (x) + Vp x

qp p1  p

2 x x u
2
Vp (x) + Vpp x

qp p1 

2 x x u
2
1 + 3p1 Vpp (x) .
The latter expression goes to zero because of Proposition 4.9.
We refer the reader to in [5] for more results in this vain. In particular, as a
corollary of Theorem 23 and 24 one sees that the finite variation paths are not
dense in C0,p ([0, T ] V ) .
Notation 4.12 Suppose that , 0 P (0, T ) the we let

Proof. This is an easy consequence of the uniform continuity of x on the


compact interval, [0, T ] .
Theorem 4.10. If x C0,p ([0, T ] V ) and P (0, T ) , then

Vp x 311/p Vp (x) .


= (, 0 ) = {t (0, T ] : (t , t) 6= }
S := S (, 0 ) = t {t , t} ,
see Figure 4.3 below for an example.

(4.12)


wherein the last equality we have used,


n
X

api

n
X

i=1

Fig. 4.3. In this figure the red xs correspond to and the vertical hash marks
correspond to . The green circles indicate the points which make up S.

Lemma 4.13. If x C ([0, T ] V ) , , 0 P (0, T ) , and S = S (, 0 ) as


in Notation 4.12, then

(This is a special case of Proposition 4.3 above.)


Proof. For concreteness, let us consider the scenario in Figure 4.3. The
difference between Vp (x : 0 ) and Vp (x : 0 S) comes from the terms indicated by the orange brackets above the line. For example consider the u2
u3 contribution to Vpp (x : 0 ) versus the terms in Vpp (x : 0 S) involving
u2 < t2 < t6 < u3 . We have,
p

kx (u3 ) x (u2 )k (kx (u3 ) x (t2 )k + kx (t6 ) x (t2 )k + kx (u3 ) x (t6 )k)
p

3p1 (kx (u3 ) x (t2 )k + kx (t6 ) x (t2 )k + kx (u3 ) x (t6 )k ) .


Similar results hold for the other terms. In some case we only get two terms
with 3p1 being replaced by 2p1 and where no points are squeezed between
the neighbors of 0 , the corresponding terms are the same in both Vp (x : 0 )
and Vp (x : 0 S) . Nevertheless, if we use the crude factor of 3p1 in all cases
we arrive at the inequality in Eq. (4.13).
Lemma 4.14. Suppose that x (t) = a + tb for some a, b , then for
P (u, v) we have
Vp (x : ) Vp (x : {u, v}) .
(4.14)

Proof. Again let us consider the scenario in Figure 4.3. Let := 0 S,


then



Vpp x : 0 S = Vpp x : [0, t1 ] + Vpp x : [t1 , t2 ]


+ Vpp x : [t2 , t6 ] + Vpp x : [t6 , t7 ]


+ Vpp x : [t7 , t9 ] + Vpp x : [t9 , t10 ]

+ Vpp x : [t10 , T ] .

(4.13)

ai

i=1

Lemma 4.15. Let x C ([0, T ] V ) , , 0 P (0, T ) , and S = S (, 0 )


be as in Lemma 4.13. Then as in Notation 4.12, then


Vp x : 0 S Vp x : S {0, T } = Vp (x : S {0, T }) Vp (x) .
(4.15)

Vpp (x : 0 ) 3p1 Vpp (x : 0 S) .

!p

Since x is linear on each of the intervals [ti , ti+1 ] , we may apply Lemma 4.14
to find,



Vpp x : [t1 , t2 ] +Vpp x : [t6 , t7 ] + Vpp x : [t9 , t10 ]



Vpp x : {t1 , t2 } + Vpp x : {t6 , t7 } + Vpp x : {t9 , t10 }
and for the remaining terms we have,

 

Vpp x : [0, t1 ] + Vpp x : [t2 , t6 ] 
+Vpp x : [t7 , t9 ] + Vpp x : [t10 , T ]
 


Vpp x : {0, t1 } + Vpp x : {t2 , t6 } 
=
+Vpp x : {t7 , t9 } + Vpp x : {t10 , T }
which gives the inequality in Eq. (4.15).

with the inequality being strict if p > 1 and is a strict refinement of {u, v} .
Proof. Here we have,
X
X
p
p
Vpp (x : ) =
kx (t) x (t )k =
k(t t ) bk
t
p

= kbk


(t t ) kbk (v u) = Vp (x : {u, v})


4.0.2 Proof of Theorem 4.10


We are now in a position to prove Theorem 4.10.
Proof. Let , 0 P (0, T ) . Then by Lemmas 4.13 and Lemma 4.15,


Vp x : 0 311/p Vp x : 0 S Vp (x) .


Taking the supremum over all 0 P (0, T ) then gives the estimate in Eq.
(4.12).
For an alternative proof of Theorem 4.10 the reader is referred to [5] and [6,
Chapter 5] where controls are used to prove these results. Most of these results
go over to the case where V is replaced by a complete metric space (E, d) which
is also geodesic. That is if a, b E there should be a path : [0, 1] E such
that (0) = a, (t) = b and d ( (t) , (s)) = |t s| d (a, b) for all s, t [0, 1] .
More invariantly put, there should be a path : [0, 1] E such that if ` (t) :=
d (a, (t)) , then d ( (t) , (s)) = |` (t) ` (s)| d (a, b) for all s, t [0, 1] .

5 Young's Integration Theory

Theorem 2.24 above shows that if we insist upon integrating all continuous functions, $f : [0,T] \to \mathbb{R}$, then the integrator, $x$, must be of finite variation. This suggests that if we want to allow for rougher integrators, $x$, then we must in turn require the integrand to be smoother. Young's integral, see [18], is a result along these lines. Our first goal is to prove some integral bounds.
In this section we will assume that $V$ and $W$ are Banach spaces and $g : [0,T] \to V$ and $f : [0,T] \to \mathrm{End}(V,W)$ are continuous functions. Later we will assume that $V_p(f) + V_q(g) < \infty$ where $p, q \ge 1$ with $\theta := 1/p + 1/q > 1$.

Notation 5.1 Given a partition $\Pi \in \mathcal{P}(s,t)$, let
$$S_\Pi(f,g) := \sum_{\tau \in \Pi} f(\tau_-)\,\Delta_\tau g.$$
So in more detail, if
$$\Pi = \{s = \tau_0 < \tau_1 < \dots < \tau_r = t\}$$
then
$$S_\Pi(f,g) := \sum_{l=1}^r f(\tau_{l-1})\,\Delta_{\tau_l} g = f(\tau_0)\,\Delta_{\tau_1} g + \dots + f(\tau_{r-1})\,\Delta_{\tau_r} g. \qquad (5.1)$$

Suppose that $\Pi \in \mathcal{P}(s,t)$ with $\#(\Pi) := r$, where $\#(\Pi)$ denotes the number of elements in $\Pi$ minus one. Let us further suppose that we have chosen $\Pi_i \in \mathcal{P}(s,t)$ such that $\Pi_1 \subset \Pi_2 \subset \dots \subset \Pi_{r-1} \subset \Pi_r := \Pi$ and $\#(\Pi_i) = i$ for each $i$. Then
$$S_\Pi(f,g) - f(s)(g(t) - g(s)) = \sum_{i=2}^r \big[ S_{\Pi_i}(f,g) - S_{\Pi_{i-1}}(f,g) \big]$$
and thus we find the estimate,
$$\|S_\Pi(f,g) - f(s)(g(t) - g(s))\| \le \sum_{i=2}^r \big\| S_{\Pi_i}(f,g) - S_{\Pi_{i-1}}(f,g) \big\|.$$
To get the best result from this procedure we should choose the sequence, $\{\Pi_i\}$, so as to minimize our estimate for $\|S_{\Pi_i}(f,g) - S_{\Pi_{i-1}}(f,g)\|$ at each step along the way. The next lemma is a key ingredient in this procedure.

Lemma 5.2 (A key identity). Suppose that $\Pi \in \mathcal{P}(s,t)$ and $u \in \Pi \cap (s,t)$. Then
$$S_\Pi(f,g) - S_{\Pi \setminus \{u\}}(f,g) = \Delta_u f\,\Delta_{u_+} g \qquad (5.2)$$
where
$$\Delta_u f\,\Delta_{u_+} g = [f(u) - f(u_-)]\,(g(u_+) - g(u)).$$

Proof. All terms in $S_\Pi(f,g)$ and $S_{\Pi \setminus \{u\}}(f,g)$ are the same except for those involving the intervals between $u_-$ and $u_+$. Therefore we have,
$$S_{\Pi \setminus \{u\}}(f,g) - S_\Pi(f,g) = f(u_-)[g(u_+) - g(u_-)] - \big[ f(u_-)\,\Delta_u g + f(u)\,\Delta_{u_+} g \big] = f(u_-)\big[ \Delta_u g + \Delta_{u_+} g \big] - f(u_-)\,\Delta_u g - f(u)\,\Delta_{u_+} g = [f(u_-) - f(u)]\,\Delta_{u_+} g.$$
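Both the Riemann--Stieltjes sum (5.1) and the one-point removal identity (5.2) are easy to verify numerically for smooth scalar data. The following sketch is an illustration only and not part of the notes; the functions f, g and the partition are arbitrary choices.

import numpy as np

def S(f, g, pts):
    # Riemann-Stieltjes sum S_Pi(f, g) = sum f(tau_{l-1}) (g(tau_l) - g(tau_{l-1}))
    pts = np.asarray(pts)
    return np.sum(f(pts[:-1]) * np.diff(g(pts)))

f = np.sin                       # scalar-valued f and g, assumed for simplicity
g = lambda t: t ** 2

pts = np.linspace(0.0, 1.0, 11)  # a partition of [0, 1]
k = 5                            # remove the interior point u = pts[k]
pts_minus = np.delete(pts, k)

lhs = S(f, g, pts) - S(f, g, pts_minus)
u_minus, u, u_plus = pts[k - 1], pts[k], pts[k + 1]
rhs = (f(u) - f(u_minus)) * (g(u_plus) - g(u))
print(lhs, rhs)                  # equal up to rounding, cf. Eq. (5.2)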

Lemma 5.3. Suppose that p, q (0, ) , := p1 + 1q , and ai , bi 0 for i =


1, 2, . . . , n, then
 
1
min ai bi
kakp kbkq
(5.3)
1in
n
Pn
1/p
where kakp := ( i=1 api ) .
Proof. Let r := 1/, s := p/r, and t := q/r, then 1/s+1/t = 1 and therefore
by Holders inequality,
r
1/s
rt 1/t
1/s
1/t
kf gkr = hf r g r i hf rs i
g
= hf p i hg q i
which is to say,
kf gkr kf kp kgkq .

(5.4)

We also have
min xi =

1in

min xri
i

1/r

1X r
x
n i=1 i

!1/r
=

 1/r
1
kxkr .
n

Taking xi = ai bi in this inequality and then using Eq. (5.4) implies,


min ai bi

1in

 1/r
 1/r
1
1
ka b kr
kakp kbkq .
n
n

where
r () :=

Recalling that 1/r = completes the proof of Eq. (5.3).


Alternatively we could use the following form of the geometric arithmetic
mean inequality.
Proposition 5.4 (Geometric-Arithmetic Mean Type Inequalities). If
n
n
{al }l=1 is a sequence
non-negative numbers, {l }l=1 is a sequence of positive
Pof
n
numbers such that l=1 l = 1, and p > 0 is given, then
1
a
1

n
. . . a
n

n
X

Consequently,


kS (f, g)k kf (s)k kg (t) g (s)k + r () Vp f |[s,t] Vq g|[s,t]



kf (s)k + r () Vp f |[s,t] Vq g|[s,t] .

(5.6)

Proof. Without loss of generality we may assume that ai > 0 for all i. By
Jensens inequality
!
n
n
n
X
X
X
1
n
a1 . . . an = exp
i ln ai
i exp (ln ai ) =
i ai .
(5.7)
i=1

i=1

Replacing ai by api in this inequality and then taking the pth root gives the
inequality in Eq. (5.5). (Remark: if p 1, Eq. (5.5) follows from Eq. (5.7) by
an application of Holders inequality.) Making use of Eq. (5.6), we find again
that
1/n

n
1X p
a
n i=1 i

!1/p

1/n

= [a1 . . . an ]
n
1X q
b
n i=1 i

!1/q

 
1
=
kakp kbkq .
n


1
r1

1/p

p
k f k

1/q

q
+ g

(s,t)

(s,t)



Vp f |[s,t] Vq g|[s,t] .

Continuing this way inductively, we find i P (s, t) such that 1 2


r1 r := and # (i ) = i for each i and






1
S (f, g) S (f, g)
Vp f |[s,t] Vq g|[s,t] .
i
i1
i1
Thus using
S (f, g) f (s) (g (t) g (s)) =

1/n

[b1 . . . bn ]

1
r1

Thus letting r1 := \ {u} , we have







S (f, g) Sr1 (f, g) = u f u+ g ku f k u+ g




1

Vp f |[s,t] Vq g|[s,t] .
r1

r
X


Si (f, g) Si1 (f, g)
i=2

and the triangle inequality we learn that


r
X


S (f, g) S (f, g)
kS (f, g) f (s) (g (t) g (s))k
i
i1
i=2


r 
X


1

Vp f |[s,t] Vq g|[s,t]
i1
i=2


r1
X


1
Vp f |[s,t] Vq g|[s,t]
=
i
i=1


= r () Vp f |[s,t] Vq g|[s,t] .

Proposition 5.5 (Young-Love Inequality). Let P (s, t) and r =


# () 1. Then r :=If p, q (0, ) and := p1 + q 1 , then


kS (f, g) f (s) (g (t) g (s))k r () Vp f |[s,t] Vq g|[s,t]
(5.8)


() Vp f |[s,t] Vq g|[s,t] ,
(5.9)
Page:

(5.12)

Proof. Let r := and choose u r (s, t) such that






ku f k u+ g = min k f k + g

(5.5)

Moreover, Eq. (5.3) is valid.

min ai bi [a1 b1 . . . an bn ]

(5.11)

(s,t)

!1/p
i api

and in particular by taking i = 1/n for all i we have,


!1/p
n
1X p
1/n
[a1 . . . an ]

a
.
n i=1 i

1in

(5.10)

l=1

l=1

i=1

i=1

r1

X
X
1
1
and

()
=

()
:=
.

l
l


Z

Definition 5.6. Given a function, X : V and a partition, P (s, t)


with (s, t) , let
!1/p
Vp (X : ) :=

X

X , p

As usual we also let


sup

Vp (X : ) .

If Xs,t = x(t) x (s) for some x : [0, T ] V, then Vp (X : ) = Vp (x : )


and Vp X|[s,t] = Vp x|[s,t] .
Lemma 5.7. If 0 u < v T and

(5.16)

[kf (0)k + [1 + ()] Vp (f )] Vq (g) .

(5.17)

More generally,
Z






Vq
f dg |[s,t] f |[s,t] u + () Vp f |[s,t] Vq g|[s,t]
0



kf (s)k + [1 + ()] Vp f |[s,t] Vq g|[s,t] .

(5.13)


wherein we have used (s, t) := Vqq g|[s,t] is a control for the last inequality.
Taking the supremum over P (0, T ) then implies,
Vq (X) () Vp (f ) Vq (g) .


Vqq g|[ , ] q () Vpq (f ) Vqq (g) ,

(5.14)

Proof. If P (u, v) , we have,


X
q
Vqq (Y : ) =
kf ( ) (g ( ) g ( ))k

(5.19)

q () Vpq (f )



Vq Y |[u,v] f |[u,v] u Vq g|[u,v] .

(5.18)

Then according to Proposition 5.12 for any partition, P (0, T ) ,


Z
q

X
X

q


X , =
Vqq (X : ) =
f dg f ( ) (g ( ) g ( ))





X



q () Vpq f |[ , ] Vqq g|[ , ]

then

(5.20)

Using the identity,


q

kf ( )k k(g ( ) g ( ))k

f dg = Xst + Yst ,


q X

q
q
f |[u,v] u
k(g ( ) g ( ))k = f |[u,v] u Vqq (g : ) .

The result follows by taking the supremum over P (u, v) .


Corollary 5.8. Suppose that Vp (f ) < and g has finite variation, then for
any q [1, ) with := 1/p + 1/q > 1 we have,
Z t






f dg f (s) (g (t) g (s))
(5.15)

() Vp f |[s,t] Vq g|[s,t] .

the triangle inequality, Eq. (5.20), and Lemma 5.7, gives


Z

Vq
f dg = Vq (X + Y ) Vq (X) + Vq (Y )
0

[kf ku + () Vp (f )] Vq (g) ,
which is Eq. (5.16). Equation (5.17) is an easy consequence of Eq. (5.16) and
the simple estimate,
kf (t)k kf (t) f (0)k + kf (0)k Vp (f ) + kf (0)k .

(5.21)

The estimates in Eqs. (5.18) and (5.19) follow by the same techniques or a
simple reparameterization argument.

and

Page:

[kf ku + () Vp (f )] Vq (g)

P(s,t)

Yst := f (s) (g (t) g (s)) for all (s, t) ,

Proof. Inequality (5.15) follows from Eq. (5.9) upon letting || 0. For
the remaining inequalities let Yst be as in Eq. (5.13) and define,
Z t
Z t
Xs,t :=
f dg f (s) (g (t) g (s)) =
f dg Ys,t .


Vp X|[s,t] :=

45


f dg

Vq

Young gives examples showing that Eq. (5.9) fails if one only assume that
p1 +q 1 = 1. See Theorem 4.26 on p. 33 of Dudley 98 [2] and Young (1936) [18]
Young constant is not as good as the one in [2].


Theorem 5.9. If Vp (g) < and Vq (f ) < with := 1/p + 1/q > 1, then
Z

Z
n

f dgn exists

f dg := lim

Therefore letting || 0 in this inequality implies,



Z t
h
h
 i


i

Vp f |[s,t] Vq (g gn ) |[s,t]

2
kf
(s)k
+
1
+

f dg S (f, g)
lim sup

(5.22)

||0

where {gn } is any sequence of finite variation paths1 such that Vp (g gn ) 0


for all q > q with := 1/p + 1/
q > 1. This limit satisfies;
Z t






f dg f (s) (g (t) g (s))

() Vp f |[s,t] Vq g|[s,t] for all (s, t) ,

which proves Eq. (5.24).


Lemma 5.10. Suppose Vp (g) < , Vq (f ) < with := 1/p + 1/q > 1. Let
{n } P (s, t) and suppose that for each t n we are given cn (t) [t , t] .
Then
Z t
X
X
f dg = lim
f (cn (t)) t g = lim
f (cn (t)) (g (t) g (t )) .

(5.23)
f dg is a bilinear form in f and g, the estimates in Eqs. (5.18) and (5.19)
s
continue to hold, and
Z t
f dg :=
lim
S (f, g) .
(5.24)
Rt



(s, t) := Vpp g|[s,t] + Vqq f |[s,t] ,

Proof. From Eq. (5.19),


Z


Z


Z
Vq
f dgn
f dgm |[s,t] = Vq
f d (gn gm ) |[s,t]
0
0
0



kf (s)k + [1 + ()] Vp f |[s,t] Vq (gn gm ) |[s,t]

so that is a control. We then have,






X
X



[f (cn (t)) f (t )] t g
f (cn (t)) t g Sn (f, g) =




tn
tn
X

k[f (cn (t)) f (t )] t gk


tn

which tends to 0 as n . Therefore the limit in Eq. (5.22). Moreover, passing


to the limit in Eq. (5.15) shows,
Z t






() Vp f |[s,t] Vq g|[s,t] .
f
dg

f
(s)
(g
(t)

g
(s))

46

job:

rpaths

1/p

(t , cn (t))

1/p

(t , t)

1/q

(t , t)
1/q

(t , t)

(t , t)

tn
1

sup (t , t)
tn

(t , t)

tn
1

sup (t , t)

(0, T ) .

(5.25)

tn

Since (t, t) = 0 for all 0 t T and : [0, ) is uniformly continuous


on , the last expression tends to zero as n .
Alternate Proof. We can avoid the use the control, , here by making use
of Holders inequality instead. To see this, let q 0 := q/ (q 1) be the conjugate
exponent to q. Letting,

For exmaple, according to Corollary 4.11 , we can take gn := g n where n


P (s, t) with |n | 0.

Page:

tn

kf (cn (t)) f (t )k kt gk

tn

We may now let q q to get the estimate in Eq. (5.23). This estimate gives those
in Eqs. (5.18) and (5.19). The independence of the limit on the approximating
sequence and the resulting bilinearity statement is left to the reader.
If P (s, t) it follows from the estimates in Eq. (5.23) and Eq. (5.12) that
Z t
Z t
Z t

Z t









f dg S (f, g)
f dg
f dgn +
f dgn S (f, gn )


+ kS (f, gn ) S (f, g)k
h
h
 i
i

2 kf (s)k + 1 + Vp f |[s,t] Vq (g gn ) |[s,t]

Z t



+
f dgn S (f, gn )
.

X
tn

tn

Proof. Let

P(s,t) with ||0

tn

n :=

macro:

svmonob.cls

max

|ts||n |

kf (t) f (s)k

q 0 p
p

0 as n ,

date/time:

11-Mar-2009/12:03

5 Youngs Integration Theory

we find,
X

kf (cn (t)) f (t )k kt gk

tn

q0

!1/q0

!1/q

kf (cn (t)) f (t )k

tn

!1/q0
X

Vq (g : )

kf (cn (t)) f (t )k

Exercise 5.1 (Product Rule). Suppose that V is a Banach space and p, q > 0
such that := p1 + 1q > 1, x C ([0, T ] End (V )) with Vq (x) < and
y C ([0, T ] End (V )) with Vp (y) < . Show for all 0 s < t T that,
Z t
Z t
x (t) y (t) x (s) y (s) =
dx ( ) y ( ) +
x ( ) dy ( ) ,
(5.26)

kt gk

tn

tn

47

wherein the integrals are to be interpreted as Youngs integrals.


Solution to Exercise (5.1). For P (s, t) ,
X
x (t) y (t) x (s) y (s) =
(xy)

n Vqp/q (f ) Vq (g) 0 as n .

[(x ( ) + x) (y ( ) + y) x ( ) y ( )]

f dg =

and therefore,
Z t

XZ

f dg =

XZ

d
f () g

f dg

S (f, g) =

Taking the limit at || 0, the terms corresponding to the first two summands
Rt
Rt
converge P
to s x ( ) dy ( ) and s dx ( ) y ( ) respectively. So it suffices to show,
lim||0 ( x) y = 0. However for every > 0 we have,
X
X

1
k xk k yk max k xk
k xk
k yk

d
.
[f () f ( )] g

sup ( , )

(s, t) 0 as || 0.

This observation proves Eq. (5.24) since, by definition,


t

Z
f dg =

Taking norms of this equation and using the obvious inequalities implies,
Z t
XZ


d



f
dg

S
(f,
g)
kf () f ( )k k gk

s

XZ
1/p
1/q d

( , ) ( , )

=
( , )

[x ( ) y + ( x) y ( ) + ( x) y] .

XZ

Remark 5.11. The same methods used to prove the estimate in Eq. (5.25) allows
to give another proof of Eq. (5.24). Observe that

lim

P(s,t), ||0

max k xk

job:

k yk

p
p1

where p0 =
or equivalently, p10 + p1 = 1. As p1 + 1q = > 1 it follows that
0
q < p and therefore we may choose > 0 such that p0 (1 ) = q. For this
we then have,
X



k xk k yk max k xk Vq(1) x|[s,t] Vp y|[s,t] 0 as || 0.




Alternatively: let (s, t) := Vqq x|[s,t] + Vpp y|[s,t] which is a control on
[0, T ] . We then have,
X
X
X
1/q
1/p

k xk k yk
( , ) ( , )
=
( , )

max ( , )

f dg .

rpaths

!1/p
X



(1)

= max k xk Vp0 (1) x|[s,t] Vp y|[s,t] ,

macro:

svmonob.cls

( , )

47

!1/p0

max ( , )

Page:

k xk

p0 (1)

(s, t) 0 as || 0.

date/time:

(5.27)

11-Mar-2009/12:03

48

5 Youngs Integration Theory


n

Exercise 5.2. Find conditions on {xi }i=1 so as to be able to prove a product


rule for x1 (t) . . . xn (t) .
Lemma 5.12. If F : V W is a Lipschitz function with Lip constant K,
then
Vp (F (Z )) KVp (Z) .
(5.28)

=K


Vpp (Z

: ) K

Vpp

 0


f Zt + st Z f 0 Zt ds.

Thus we have,
f (Zb ) f (Za ) =

:=
Letting (s, t) :=

f 0 (Z ) dZ :=

a
0

f 0 (Z ) dZ ,

where f (z) End (V, W ) is defined by, f (z) v :=


it follows that f (Z (t)) has finite p variation and

d
dt |0 f

t t Z.

Z|[s,t] , we have,

0


f Zt + st Z f 0 Zt ds

K
0

1
1/p
s kt Zk ds K (t , t)
2

and hence
X
X
1/p
1/p

kt Zk K
(t , t) (t , t)
t
2
t
t
K X

(t , t) 0 as || 0
2

k k

(5.29)

[a,b]
0

(Z) .

Exercise 5.3 (Fundamental Theorem of Calculus II). Prove the fundamental theorem of calculus in this context. That is; if f : V W be a C 1
function such that f 0 is Lipschitz and {Zt }t0 is a continuous V valued
function such that Vp (Z) < for some p (1, 2). Then Vp (f 0 (Z )) < and
for all 0 a < b T,
b

(5.30)

Vpp


f 0 Zt t Z +

Therefore taking the supremum over all P (0, T ) gives Eq. (5.28).

f (Zb ) f (Za ) =

where

Proof. For P (0, T ) , we have


X
X
p
p
kF (Z ( )) F (Z ( ))k K p
kZ ( ) Z ( )k

t :=

(z + tv) . In particular

df (Z (t)) = f 0 (Z (t)) dZ (t) .


The integrals in Eq. (5.29) are to be interpreted as Youngs integrals.

as we saw in Eq. (5.27). Thus letting || 0 in Eq. (5.30) completes the proof.
Exercise 5.4. See what you can say about a substitution formula in this case.
Namely, suppose that
Z t
y (t) =
f (s) dx (s)
0

Solution to Exercise (5.3). Let P (0, T ) . Because of Lemma 5.12 and


the assumption that p (1, 2) (so that 1/p + 1/p = 2/p =: > 1), we see that
the integral in Eq. (5.29) is well defined as a Youngs integral. By a telescoping
series argument,
X
f (Zb ) f (Za ) =
t f (Z )

as a Youngs integral. Find conditions so that


Z t
Z t
g (t) dy (t) =
g (s) f (s) dx (s) .
0

5.1 Additive (Almost) Rough Paths

where



t f (Z ) = f (Zt ) f Zt = f Zt + t Z f Zt
Z 1


=
f 0 Zt + st Z t Z ds = f 0 Zt t Z +
t t Z
0

Remark 5.13. For an alternate approach to this section, see [3].


Notation 5.14 Suppose that is a partition of [u, v] , i.e. a finite subset of
[u, v] which contains both u and v. For u s < t v, let
[s,t] = {s, t} [s, t] .

and

Typically we will write

49

Then


f (s) (g (t) g (s)) f (s) (g (u) g (s))


kXst Xsu Xut k =

f (u) (g (t) g (u))

[s,t] = {s = t0 < t1 < < tr = t}


where

k(f (s) f (u)) (g (t) g (u))k


kf (s) f (u)k kg (t) g (u)k


1/p
1/q

Vp f |[s,t] Vq g|[s,t] (s, t) (s, t)


= (s, t) .

(s, t) =: {t1 < t2 < < tr1 }.


Notation 5.15 Given a function, X : V and a partition,
= {s = t0 < t1 < < tr = t} ,

Thus Xst is a A.A.F.

of [s, t] , let
X() :=

X , =

r
X

Notation 5.19 Suppose that X : V is a any function and [0, T ] is


a finite set and (s, t) . Then define,
X
X ()s,t :=
X , .

Xti1 ,ti .

i=1

Furthermore, given a partition, , of [0, T ] and (s, t) let


X
X()st := X([s,t] ) =
X , .

[s,t]{s,t}

The following lemma explains the reason for introducing this notation.

[s,t]

Definition 5.16. As usual, let := {(s, t) : 0 s t T } and p 1. We


say that a function, X : V has finite p - variation if X is continuous,
Xt,t = 02 for all t [0, T ] , and

Lemma 5.20. Suppose that X : V is a continuous function such


that Xt,t = 0 for all t [0, T ] and there exists n P (0, T ) such that
limn |n | = 0 and
Yst := lim X (n )s,t exists for (s, t) .
n

!1/p
Vp (X) :=

sup

X

Xt ,t p
V

< .

Then Yst is an additive functional.

P(0,T ) t

Proof. Suppose that 0 s < u < t T, then


h
i
Ysu + Yut = lim X (n )s,u + X (n )u,t = lim X (n {u})s,t .

Definition 5.17. Let > 1. A almost additive functional (A.A.F.) is


a function X : V of finite p -variation such that there exists a control, ,
C < such that
kXst Xsu Xut k C(s, t) for all 0 s u t T.

(5.31)






X (n {u})s,t X (n )s,t = Xu ,u + Xu,u+ Xu ,u




Xu ,u + Xu,u+ + Xu ,u 3 n
where
n := max {kXs,t k : (s, t) 3 |t s| |n |} .

Xst := f (s) (g (t) g (s)) ,

As n 0 by the uniform continuity of Xs,t , we see that

and (s, t) be the control defined by

lim X (n {u})s,t = lim X (n )s,t = Yst



(s, t) := Vpp f |[s,t] + Vqq g|[s,t] .
2

This is redundant since Vp (X) < can only happen if Xt,t = 0 for all t.


(5.32)

If u n we have X (n {u})s,t = X (n )s,t while if u


/ n , then

If Eq. (5.31) holds for some > 1 and control , we say X is an (, p)


almost additive functional.
Example 5.18. Suppose that Vp (f ) + Vq (g) < with := 1/p + 1/q > 1,


which combined with Eq. (5.32) shows Yst is additive.



Lemma 5.21. If X : V is a (, ) almost additive functional, then there


is at most one additive functional, Y : V, such that

Proposition 5.23. Suppose that X : V is an (, ) almost additive


functional. Then for any partition, P (s, t) , we have

kX () Xs,t k () (s, t) .

kYst Xst k C (s, t) for all (s, t)


for some C < .

Proof. For any (s, t) , let ( ) := \ { } and observe that



kX () X ( ( ))k = X ,+ X , X,+ ( , + ) .
(5.34)

Proof. If Z : V is another such additive functional. Then Ust :=


Yst Zst is an additive functional such that

Thus making use of Lemma 5.22 implies,

kUst k = kYst Zst k kYst Xst k + kXst Zst k 2C (s, t) .


Therefore if P (s, t) , we have


X
X
X


U , 2C
kUst k =
U ,
( , ) 0 as || 0.


Lemma 5.22. Suppose = {s = t0 < < tr = t} with r 2 and is a


control. Then there exists j {1, 2, . . . , r 1} such that


2
(tj1 , tj+1 ) min
, 1 (s, t)
(5.33)
r1
Proof. If r = 2, we take j = 1 in which case (tj1 , tj ) = (s, t) and Eq.
(5.33) clearly holds. Now suppose r 3. Considering these intervals two at a
time, by super-additivity, we have,

min

(s,t)

kX () X ( ( ))k

min

(s,t)

( , + ) min


2
, 1 (s, t).
|| 2

Thus we remove points from (s, t) so as to minimize the error to eventually


learn,

||2
X 1
(s, t) () (s, t) .
kX () Xs,t k
k
k=1

Theorem 5.24. Let X : V be a continuous (, ) almost additive functional and denote a partition of [0, T ] . Then
Yst := lim X ()st exists uniformly in (s, t) .
||0

(5.35)

Moreover, Y : V is a continuous additive functional and Yst is the (unique)


additive functional such that

(t2k , t2k+2 ) 12k+2r = (t0 , t2 ) + (t2 , t4 ) + (t4 , t6 ) + (s, t)

kYst Xst k C (s, t) for all (s, t)

(5.36)

k=0

for some C < . In fact according to Proposition 5.23 we know that C may be
chosen to be () .

and

Proof. Suppose that , 0 P (s, t) with 0 and for > 0 let

(t2k1 , t2k+1 ) 12k+1r = (t1 , t3 ) + (t3 , t5 ) + (t5 , t7 ) + (s, t).

k=0

() := max 1 (, ) .

Adding these two equations then dividing by r 1 shows


r1

1 X
2
(tj1 , tj+1 )
(s, t)
r 1 j=1
r1

| |

(Observe that () 0 as 0.) Then making use of Proposition 5.23 we find,

from which it follows that (5.33) holds for some j.




X 



0
0
X ( ) , X ,
kX ( ) X ()k =




X

X ( 0 ) , X ,

5.2 Youngs ODE


Now suppose that 1 < p < 2, so that := p1 + p1 = 2/p > 1. Also let V and W be
Banach spaces, f : W End (V, W ) be a Lipschitz function, x Cp ([0, T ] , V )
and consider the ODE,

()

( , )

y (t) = f (y (t)) x (t) with y (0) = y0 .

()

max

| |||

= () (||)

(, )
max

| |||

51

Definition 5.26. We say that a function, y : [0, T ] V, solves Eq. (5.38) if


y Cp ([0, T ] , W ) and y satisfies the integral equation,
Z t
y (t) = y0 +
f (y ( )) dx ( ) ,
(5.39)

( , )

(, )

(5.38)

(s, t) .

Now let 1 , 2 P (0, T ) be arbitrary and apply the previous inequality with
0 = [1 2 ][s,t] and being either [1 ][s,t] or [2 ][s,t] to find,
k[X (1 ) X (2 )]st k k[X (1 ) X (1 2 )]st k
+ k[X (1 2 ) X (2 )]st k
() (s, t) [ (|1 |) + (|2 |)]
() (0, T ) [ (|1 |) + (|2 |)] .

where the latter integral is a the Young integral.


Recall from Lemma 5.12 that
Vp (f (y)) kVp (y) ,

(5.40)

where k is the Lipschitz constant for f and hence the integral in Eq. (5.39) is
well defined. In order to consider existence, uniqueness, and continuity in the
driving path x of Eq. (5.46) we will need a few more facts about p variations.

Therefore,

5.3 An a priori Bound

max k[X (1 ) X (2 )]st k () (0, T ) [ (|1 |) + (|2 |)]

(s,t)

which tends to zero as |1 | , |2 | 0. This proves Eq. (5.35). The remaining


assertions of the theorem were already proved in Lemma 5.20 and Lemma 5.21.
Corollary 5.25. Let X : Δ → V be a continuous (α, ω) almost additive functional of finite p variation. Then the unique associated additive functional, Y, of Theorem 5.24 is also of finite p variation and

  V_p(Y) ≤ V_p(X) + C ω(0, T)^α.    (5.37)

Proof. By the triangle inequality,

  V_p(Y) ≤ V_p(Y − X) + V_p(X),

and using Eq. (5.36), for any Π ∈ P(0, T),

  V_p^p(Y − X : Π) ≤ C^p Σ_{τ∈Π} ω(τ, τ₊)^{αp} ≤ C^p ω(0, T)^{αp−1} Σ_{τ∈Π} ω(τ, τ₊) ≤ C^p ω(0, T)^{αp}.

Hence it follows that V_p(Y − X) ≤ C ω(0, T)^α.   ∎

5.2 Young's ODE

Now suppose that 1 < p < 2, so that α := 1/p + 1/p = 2/p > 1. Also let V and W be Banach spaces, f : W → End(V, W) be a Lipschitz function, x ∈ C_p([0, T], V), and consider the ODE,

  ẏ(t) = f(y(t)) ẋ(t) with y(0) = y₀.    (5.38)

Definition 5.26. We say that a function, y : [0, T] → W, solves Eq. (5.38) if y ∈ C_p([0, T], W) and y satisfies the integral equation,

  y(t) = y₀ + ∫₀ᵗ f(y(τ)) dx(τ),    (5.39)

where the latter integral is the Young integral.

Recall from Lemma 5.12 that

  V_p(f(y)) ≤ k V_p(y),    (5.40)

where k is the Lipschitz constant for f, and hence the integral in Eq. (5.39) is well defined. In order to consider existence, uniqueness, and continuity in the driving path x of Eq. (5.39), we will need a few more facts about p variations.

5.3 An a priori Bound

Before going on to existence, uniqueness and the continuous dependence on the initial condition and the driving noise for Eq. (5.39), we will pause to prove an a priori bound on the solution to Eq. (5.39) which is valid under less restrictive conditions on f.

Proposition 5.27 (Discrete Gronwall Inequalities). Suppose that u_i, α_i, β_i ≥ 0 satisfy

  u_{i+1} ≤ α_i u_i + β_i,

then

  u_n ≤ α_{n−1} ⋯ α_1 α_0 u_0 + Σ_{k=0}^{n−1} ( ∏_{j=k+1}^{n−1} α_j ) β_k.    (5.41)

Moreover if α_i = α is constant (so that u_{i+1} ≤ α u_i + β_i), then this reduces to

  u_n ≤ α^n u_0 + Σ_{i=0}^{n−1} α^{n−1−i} β_i    (5.42)

      ≤ α^n u_0 + α^{n−1} Σ_{i=0}^{n−1} β_i  if α ≥ 1.    (5.43)


Proof. Let (s, t) := Vpp (x : [s, t]) be the control associated to x and define

If we further assume i = is constant (so that ui+1 ui + ), then




1 n
n
un u0 +
.
(5.44)
1
If we further assume that > 1 , then

un n u0 +


.

(5.45)

Proof. The inequality in Eq. (5.41) is proved inductively as


u1 0 u0 + 0
u2 1 u1 + 1 1 (0 u0 + 0 ) + 1 = 1 0 u0 + 1 0 + 1
u3 2 u2 + 2 2 (1 0 u0 + 1 0 + 1 ) + 2 = 2 1 0 u0 + 2 1 0 + 2 1 + 2 ,
etc.
Since the special case where i = is the most important case to us, let us
give another proof for this case. If we let vi := i ui , then
vi+1 = (i+1) ui+1 (i+1) (ui + i ) = vi + (i+1) i

= (p) := 1 + (2/p) .
If y solves Eq. (5.39), then
Z t
y (t) = y (s) +
f (y ( )) dx ( ) for all 0 s t T.
s

So by By Corollary 5.8 (with p = q) along with Eq. (5.40), we learn that


Z

Vp (y : [s, t]) = Vp
f (y) dx : [s, t]
s

[kf (y (s))k + Vp (f (y) : [s, t])] Vp (x : [s, t])


1/p

[kf (y (s))k + kVp (y : [s, t])] (s, t)

or equivalently, with c := k,


1/p
1/p
1 c (s, t)
Vp (y : [s, t]) kf (y (s))k (s, t) .
Therefore it follows that

which is to say,
vi+1 vi

(i+1)

i .

1/p

Vp (y : [s, t]) 2 (s, t)

1/p

kf (y (s))k if (s, t)

1/2c.

(5.47)

Summing this expression on i implies,

un u0 = vn v0 =

n1
X

(vi+1 vi )

n1
X

Since

(i+1)

and

i=0

i=0

kf (y)k kf (y) f (0)k + kf (0)k k kyk + kf (0)k

ky (t)k ky (s)k + Vp (y : [s, t]) ,

which upon solving for un gives Eq. (5.42). When i = is constant, we use
n1
X

i=0

n1i

n1
X

it follows from Eq. (5.47) that

n 1
=
=
1
i=0
i

1/p

Vp (y : [s, t]) 2 (s, t) [k ky (s)k + kf (0)k]


1
1/p
1/p
1/2c
ky (s)k + 2 (s, t) kf (0)k if (s, t)

in Eq. (5.42) to learn,


un n u0 +

n 1
1 n
= n u0 +

1
1



ky (t)k

where k is the Lipschitz constant for f.



(5.49)

and

which is Eq. (5.44).
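The discrete Gronwall bound is elementary but easy to misremember, so here is a short Python check (illustrative only; the random coefficients and the initial value are hypothetical) which builds a sequence satisfying u_{i+1} = α_i u_i + β_i with equality and compares u_n against the right-hand side of Eq. (5.41).

    import random
    random.seed(1)

    n = 12
    alpha = [1.0 + random.random() for _ in range(n)]   # alpha_i >= 1
    beta  = [random.random() for _ in range(n)]         # beta_i >= 0
    u = [0.7]
    for i in range(n):                                  # u_{i+1} = alpha_i u_i + beta_i
        u.append(alpha[i] * u[-1] + beta[i])

    def prod(xs):
        out = 1.0
        for x in xs:
            out *= x
        return out

    # right-hand side of Eq. (5.41): (prod of alpha) u_0 + sum_k beta_k prod_{j>k} alpha_j
    bound = prod(alpha) * u[0] + sum(beta[k] * prod(alpha[k + 1:]) for k in range(n))
    print(u[n], bound, u[n] <= bound + 1e-9)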


Theorem 5.28 (A priori Bound). Let 1 < p < 2, f (y) be a Lipschitz function, and x C ([0, T ] V ) be a path such that Vp (x) < . Then there exists
C (p) < such that for all solutions to Eq. (5.39),

p p
p
p
Vpp (y) C (p) eC(p)k Vp (x) ky0 k + kf (0)k Vpp (x)
(5.46)

(5.48)

1
1+

1/p

ky (s)k + 2 (s, t)

1/p

kf (0)k if (s, t)

1/2c.

(5.50)

In order to make use of this


result, let h (t) := (0, t) . Write h (T ) =

1 p
1 p
+ r where 0 r < 2c
and then choose 0 = t0 < t1 < t2 < < tn
n 2c

1 p
tn+1 := T such that h (ti ) = i 2c
for 0 i n. We then have
 p
1
(ti , ti+1 ) h (ti+1 ) h (ti ) =
for 0 i n.
2c

Therefore we may conclude from Eq. (5.50) that




1
1/p
ky (ti+1 )k 1 +
ky (ti )k + 2 (ti , ti+1 ) kf (0)k for i = 0, 1, 2 . . . , n

Finally, observing that n


have,

2c

p

1
p
p
p
ky (ti )k + 22p1 (ti , ti+1 ) kf (0)k
ky (ti+1 )k 2p1 1 +

Therefore by an application of the discrete Gronwall inequality in Eq. (5.43) we


have,
p

ip

ky (ti )k 4 ky0 k + 4

(i1)p

kf (0)k

(ky0 k + kf (0)k (0, T )) ,

i1
X

Lemma 5.29. Suppose that f Cp ([0, T ] , End (V, W )) and x Cp ([0, T ] , V ) ,


then (f x) (t) = f (t) x (t) is in Cp ([0, T ] , W ) and
Vp (f x) 2 [kf ku Vp (x) + kxku Vp (f )] .

(tl , tl+1 )

kt (f x)k = k(f + t f ) (x + t x) f x k
= kf t x + t f x + t f t xk
1
kf ku kt xk + kxku kt f k + (kt f t xk + kt f t xk)
2
1
kf ku kt xk + kxku kt f k + (2 kf ku kt xk + 2 kxku kt f k)
2
2 (kf ku kt xk + kxku kt f k) .

ky (T )k 4(n+1)p ky0 k + 4np kf (0)k (0, T ) .


Going back to Eq. (5.49) we have,
 p
1
p
p
1/p
p
p1
Vp (y : [s, t]) 2
ky (s)k + 22p1 (s, t) kf (0)k if (s, t)
1/2c

and therefore,
 p
1
p
p
(y : [ti , ti+1 ]) 2
ky (ti )k + 22p1 (ti , ti+1 ) kf (0)k

 p h
i
1
p
p
p1
2
4ip ky0 k + 4(i1)p kf (0)k (0, ti )

Therefore it follows that

p1

+ 22p1 (ti , ti+1 ) kf (0)k


p

C (p) (ky0 k + kf (0)k (0, T )) 4ip .


Summing this result on i and making use of Corollary 4.4 then implies,

(5.51)

Proof. Let P (0, T ) , t and f := f (t ) and x := x (t ) , then

and in particular it follows that

n
X

(0,T )

((2c) (0, T ) + 1)

5.4 Some p Variation Estimates

l=0
p

p1

(0,T )+1)p

which is Eq. (5.46).

4ip ky0 k + 4(i1)p kf (0)k (0, ti )

Vpp (y : [0, T ]) (n + 1)

(2e) ky (ti )k + 22p1 (ti , ti+1 ) kf (0)k


p
p
4p (ky (ti )k + (ti , ti+1 ) kf (0)k ) .

h (T ) = (0, T ) so that n (2c) (0, T ) we

C (p) eC(p)k

Vpp (y : [0, T ]) C (p) (ky0 k + kf (0)k (0, T )) 4((2c)

and hence that,

Vpp


1 p

Vp (f x : ) 2 kf ku Vp (x : ) + 2 kxku Vp (f : )
from which the result follows.
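The product estimate of Lemma 5.29 can be checked numerically on any fixed partition, since that is exactly what the displayed inequality controls (the true p-variation is a supremum over partitions, which we do not attempt to compute here). In the Python sketch below the scalar paths f(t) = cos 3t and x(t) = sin 5t, the grid, and the exponent p = 2.5 are hypothetical choices.

    import math
    p = 2.5
    ts = [k / 200 for k in range(201)]
    f = [math.cos(3 * t) for t in ts]
    x = [math.sin(5 * t) for t in ts]

    def Vp(path):      # p-variation of a sampled path along this fixed partition
        return sum(abs(path[k + 1] - path[k]) ** p for k in range(len(path) - 1)) ** (1 / p)

    fu, xu = max(abs(v) for v in f), max(abs(v) for v in x)
    fx = [a * b for a, b in zip(f, x)]
    lhs = Vp(fx)
    rhs = 2 * (fu * Vp(x) + xu * Vp(f))
    print(lhs, rhs, lhs <= rhs)     # the partition-wise form of Eq. (5.51)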
Theorem 5.30. Suppose that W and Z are Banach spaces and
f
C 2 (W Z) with f 0 C 1 (W End (W, Z)) and f 00
C (W End (W, End (W, Z))) both being bounded functions. If y0 , y1
Cp ([0, T ] W ) , then
Vp (f (y1 ) f (y0 )) 2 kf 0 ku Vp (y1 y0 )+kf 00 ku [Vp (y0 ) + Vp (y1 )] ky1 y0 ku .
(5.52)

Vpp (y : [ti , ti+1 ])

i=0

4(n+1)p 1
41
p
p
p1
(n+1)p
C (p) (ky0 k + kf (0)k (0, T )) 4
(n + 1)
.
(n + 1)

p1

Proof. Let

C (p) (ky0 k + kf (0)k (0, T ))

k := kf 0 ku ,

M := kf 00 ku ,

and for the moment suppose that y0 , y1 are elements of W. Letting


p1

ys := y0 + s (y1 y0 ) = (1 s) y0 + sy1 ,
we have, by the fundamental theorem of calculus,
Z 1
Z 1
d
f (y1 )f (y0 ) =
f (ys ) ds =
f 0 (ys ) (y1 y0 ) ds. =: F (y0 , y1 ) (y1 y0 )
0 ds
0
Thus if we define,
1

f 0 (ys ) ds,

F (y0 , y1 ) :=

kt F (y0 , y1 )k = kF (y0 (t) , y1 (t)) F (y0 (t ) , y1 (t ))k


1
M [kt y0 k + kt y1 k] ,
2
we learn that

1
M [Vp (y0 ) + Vp (y1 )] .
2
Putting this all together gives the result in Eq. (5.52).
Vp (F (y0 , y1 ))

(5.53)

5.5 An Existence Theorem

then
f (y1 ) f (y0 ) = F (y0 , y1 ) (y1 y0 ) .

(5.54)

Let us observe that if y0 and y1 are two more such points in W, then
Z 1

Z 1


0
0

kF (y0 , y1 ) F (
y0 , y1 )k =
f (ys ) ds
f (
ys ) ds

0
0

Z 1


[f 0 (ys ) f 0 (
ys )] ds
=


Z

0
1

kf 0 (ys ) f 0 (
ys )k ds M

We are now prepared to prove our basic existence, uniqueness, and continuous
dependence on data theorem for Eq. (5.39).
Theorem 5.31 (Local Existence of Solutions). Let p (1, 2) and :=
1 + (2/p) < . Suppose that f : W End (V, W ) is a C 2 function such that f 0 and f 00 are both bounded functions. Then there exists
0 = 0 (kf 0 ku , kf 00 ku , p, kf (y0 )k) such that for all x Cp ([0, T ] V ) with
Vp (x) 0 , there exists a solution to Eq. (5.39).

kys ys k ds

Proof. Let

WT := {y Cp ([0, T ] , W ) : y (0) = y0 } .

((1 s) ky0 y0 k + s ky1 y1 k) ds

M
0

1
= M [ky0 y0 k + ky1 y1 k] .
2
So in summary we have,
f (y1 ) f (y0 ) = F (y0 , y1 ) (y1 y0 )
where F : W W Z is bounded Lipschitz function satisfying,
kF ku k and
1
kF (y0 , y1 ) F (
y0 , y1 )k M [ky0 y0 k + ky1 y1 k] .
2
Therefore
Vp (f (y1 ) f (y0 )) = Vp (F (y0 , y1 ) (y1 y0 ))
2 [kVp (y1 y0 ) + ky1 y0 ku Vp (F (y0 , y1 ))] .

Then by Proposition 4.6, we know (WT , ) is a complete metric space where


(y, z) := Vp (y z) for all For y, z WT .
We now define S : WT WT via,
Z t
S (y) (t) := y0 +
f (y) dx.
0

Our goal is to show that S is a contraction and then apply the contraction
mapping principle to deduce the result. For this to work we are going to have
to restrict our attention to some ball, C , about the constant path y0 and at the
same time shrink T in such a way that S (C ) C and S|C is a contraction.
We now carry out the details.
First off if y C WT , then
Vp (S (y)) [kf (y0 )k + Vp (f (y))] Vp (x) [kf (y0 )k + kf 0 ku Vp (y)] Vp (x)
kf (y0 )k Vp (x) + kf 0 ku Vp (x) .
(5.55)
Secondly, if y, z C WT , then

Since,

Z
[S (y) S (z)] (t) :=

(f (y) f (z)) dx
0


Therefore, making use of the fact that f (y) = f (z) at t = 0, we find that
Vp (S (y) S (z)) Vp (f (y) f (z)) Vp (x)
{2 kf 0 ku Vp (y z) + kf 00 ku [Vp (z) + Vp (y)] ky zku } Vp (x)
Vp (x) {2 kf 0 ku + kf 00 ku [Vp (z) + Vp (y)]} Vp (y z)
2Vp (x) {kf 0 ku + kf 00 ku } Vp (y z) .
(5.56)

Thus we need to take

So in order for this scheme to work we must require, for some (0, 1) , that
0

(kf (y0 )k + kf ku ) Vp (x) and


2Vp (x) {kf 0 ku + kf 00 ku } .

Lemma 5.32. Let a, A, and B be positive constants and (0, 1) . Then there
exists 0 (, a, A, B) > 0 such that for 0 , there exists (0, ) such that
(a + As) and 2A + 2Bs for 0 s .

(5.59)

Proof. Our goal is to satisfy Eq. (5.59) while allowing for to be essentially
as large as possible. The worst case scenarios in these inequalities is when s =
and in this case the inequalities state,
a
2A

1 A
2B
provided A < 1.
Letting M := 1/ we may rewrite the condition on as M > A and
M 2A
a

M A
2B

0 := min A1 ,

(5.57)
(5.58)

But according to Lemma 5.32 with := Vp (x) , a = kf (y0 )k , A := kf 0 ku


and B := kf 00 ku , all of this can be achieved if Vp (x) = 0 (, a, A, B) .
So under this assumption on Vp (x) and with the choice of in Lemma 5.32,
S : C C is a contraction and hence the result follows by the contraction
mapping principle.
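The contraction-mapping scheme in the proof is easy to imitate numerically when the driving path is smooth, in which case all the integrals are ordinary Riemann-Stieltjes integrals. The Python sketch below is only an illustration: the linear vector field f(y) = y and the driver x(t) = sin t are hypothetical choices for which the exact solution is y(t) = y₀ e^{x(t) − x(0)} (compare Lemma 5.37 below), and it simply iterates the map S(y)(t) = y₀ + ∫₀ᵗ f(y) dx on a grid.

    import math

    N, T, y0 = 2000, 1.0, 1.0
    ts = [T * k / N for k in range(N + 1)]
    xs = [math.sin(t) for t in ts]           # hypothetical smooth driver x(t) = sin t
    f = lambda y: y

    def S(y):
        """One Picard step: S(y)(t) = y0 + int_0^t f(y) dx via left-point sums."""
        out = [y0]
        for k in range(N):
            out.append(out[-1] + f(y[k]) * (xs[k + 1] - xs[k]))
        return out

    y = [y0] * (N + 1)                       # start the iteration from the constant path
    for _ in range(8):
        y = S(y)
    exact = [y0 * math.exp(x - xs[0]) for x in xs]
    print(max(abs(a - b) for a, b in zip(y, exact)))

After a handful of iterations the remaining error is dominated by the left-point discretization of the integral, not by the fixed-point iteration itself.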

2 +
2

s
A+

1
2
2
aB A
2+
A +2
.

Corollary 5.33 (Global Existence). Suppose that f : W End (V, W ) is


a C 2 function such that f 0 and f 00 are both bounded functions. Then for all
x Cp ([0, T ] V ) , there exists a solution to Eq. (5.39).

Proof. Let := 0 kf 0 ku , kf 00 ku , p, kf (y0 )k + KN 1/p where N denotes
the right side of the a priori bound in Eq. (5.46) and 0 is the function appearing
in Theorem 5.31. Further let (s, t) := Vpp (x : [s, t]) and h (t) := (0, T ) . If
Vp (x) we are done by Theorem 5.31. If Vp (x) > , then write h (T ) = np +r
with n Z+ and 0 r < p . Then use the intermediate value theorem to find,
0 = T0 < T1 < T2 < < Tn Tn+1 = T such that h (Ti ) = ip . With this
choice we have (Ti1 , Ti ) h (Ti ) h (Ti1 ) p for 1 i n + 1. So by a
simple induction argument, there exists y C ([0, T ] W ) such that, for each
1 i n + 1, y Cp ([Ti1 , Ti ] W ) and
Z t
y (t) = y (Ti1 ) +
f (y) dx for t [Ti1 , Ti ] .
(5.60)
Ti1

It now follows from Corollary 4.4 that y Cp ([0, T ] W ) . Summing the


identity,
Z Tk1
y (Tk ) y (Tk1 ) =
f (y) dx,
Tk1

which gives,
2aB (M A) (M 2A) ,

on k implies,

i.e.
2

y (Ti1 ) y0 =

M (2 + ) AM + 2 A aB 0
2+
A2 aB
AM + 2
0M


2

2
2
2+
A aB
2+
= M
A +2

A .
2

2
2

55

X Z
1k<i

or equivalently,

2
Thus we must choose (if 2 A aB
2+
0),

2 A
s
2
aB A2
1
2+
2+
=M
A+
A +2
.

2
2

Tk1

Ti1

f (y) dx =

Tk1

f (y) dx
0

which combined with Eq. (5.60) implies,


Z t
y (t) = y0 +
f (y) dx.

(5.61)

Proof. Let that u Cp ([0, T ] , V ) and v0 W and suppose that v solves,


Z t
v (t) = v0 +
f (v ( )) du ( ) .

Theorem 5.34 (Uniqueness of Solutions). Keeping the same assumptions


as in Corollary 5.33, then the solution to Eq. (5.39) is unique.
Proof. Suppose that z Cp ([0, T ] V ) also solves Eq. (5.39), i.e.
Z t
z (t) = y0 +
f (z ( )) dx ( ) .
0

Then, with w (t) := z (t) y (t) , we have


Z t
w (t) =
[f (z ( )) f (y ( ))] dx ( )

Let w (t) := y (t) v (t) so that


Z t
Z t
w (t) =
f (y ( )) dx ( )
f (v ( )) du ( )
0
0
Z t
Z t
=
f (y ( )) d (x u) ( ) +
[f (y ( )) f (v ( ))] du ( )

and therefore,

and therefore,

Vp (w : [0, t]) Vp (f z f y : [0, t]) Vp (x : [0, t])


C (f, Vp (w) , Vp (z)) Vp (x : [0, t]) Vp (w : [0, t]) ,
wherein we have used Eq. (5.52) for the second inequality. Thus if T1 is chosen
so that
C (f, Vp (w) , Vp (z)) Vp (x : [0, T1 ]) < 1
it follows that Vp (w : [0, T1 ]) = 0 and hence that w|[0,T1 ] = 0. Similarly we may
now show that w|[T1 ,T2 ] = 0 provided

Z
f (y ( )) d (x u) ( ) +

w (t) = w (s) +
s

[f (y ( )) f (v ( ))] du ( )
s

and hence
Vp (w : [s, t]) A (s, t) + B (s, t)
where
Z

f (y ( )) d (x u) ( ) : [s, t]

A (s, t) = Vp
Zs

C (f, Vp (w) , Vp (z)) Vp (x : [T1 , T2 ]) < 1.


B (s, t) = Vp

Working as in the proof of Corollary 5.33, it is now easy to conclude that w 0.


and

[f (y ( )) f (v ( ))] du ( ) : [s, t] .

We now estimate each of these expressions as;


A (s, t) (kf (y (s))k + Vp (f y : [s, t])) Vp (x u : [s, t])
(kf (y (s))k + KVp (y : [s, t])) Vp (x u : [s, t])
C (p, f, ky0 k , Vp (x)) Vp (x u : [s, t])

5.6 Continuous dependence on the Data


Let us now consider the issue of continuous dependence on the driving path, x.
Recall the estimate,
Vp (f (y1 ) f (y0 )) 2 kf 0 ku Vp (y1 y0 ) + kf 00 ku [Vp (y0 ) + Vp (y1 )] ky1 y0 ku
C (f, Vp (y0 ) + Vp (y1 )) (ky1 y0 ku + Vp (y1 y0 ))
C (f, Vp (y0 ) + Vp (y1 )) (ky1 (0) y0 (0)k + Vp (y1 y0 ))

and
Z
B (s, t) = Vp

Y : W Cp ([0, T ] , V ) Cp ([0, T ] , W )
is uniformly continuous on bounded subsets of W Cp ([0, T ] , V ) .

[f (y ( )) f (v ( ))] du ( ) : [s, t]

kf (y (s)) f (v (s))k + Vp (f y f v : [s, t]) Vp (u : [s, t])


C (f, Vp (y) + Vp (v)) Vp (u : [s, t]) (ky (s) v (s)k + Vp (y v : [s, t]))
= C (p, f, ky0 k , Vp (x) , Vp (u)) Vp (u : [s, t]) (kw (s)k + Vp (w : [s, t])) ,

where C (f, ) is a constant depending on kf 0 ku , kf 00 ku , and 0.


Theorem 5.35. Let Yt (y0 : x) := y (t) where y solves Eq. (5.39). Then assuming the f 0 and f 00 are bounded, then Y

wherein we have used the a priori bounds that we know of y and v. Thus we
have,
Vp (w : [s, t]) C1 Vp (x u : [s, t]) + C2 Vp (u : [s, t]) (kw (s)k + Vp (w : [s, t])) .
The result follows from this estimate via the usual iteration procedure based
on keeping C2 Vp (u : [s, t]) 1/2 so that

Then


d
z(xt x0 ) = a(z(xt x0 ))x t with z(xt x0 )|t=0 = .
dt

Vp (w : [s, t]) 2C1 Vp (x u : [s, t]) + kw (s)k .


The same type of argument works more generally. We state without proof
the following theorem.

say. For example choose T1 so that C2 Vp (u : [0, T1 ]) = 1/2, then

d
Theorem P
5.38. If {Ai }m
i=1 are commuting Lipschitz vector fields on R ,
m
m
d
1
m
f (y) x :=
and y R , and x C ([0, T ] , R )
i=1 xi Ai (y) for x R
then Eq. (5.62) becomes,

Vp (w : [0, T1 ]) 2C1 Vp (x u : [0, T1 ]) + ky0 v0 k .


Then choose T2 such that C2 Vp (u : [T1 , T2 ]) = 1/2 to learn,
Vp (w : [T1 , T2 ]) 2C1 Vp (x u : [T1 , T2 ]) + kw (T1 )k
2C1 Vp (x u : [T1 , T2 ]) + ky0 v0 k + Vp (w : [0, T1 ])
2C1 (Vp (x u : [0, T1 ]) + Vp (x u : [T1 , T2 ])) + 2 ky0 v0 k .

y t =

m
X

Ai (yt )x it with y0 = .

i=1

This equation has a unique solution given by


Continuing on in this vein and then making use of Corollary 4.4 gives the
claimed results.

yt = e

P i
(xt xi0 )Ai

() = e(xt x0 )A1 e(xt

xm
t )Am

().

Remark 5.36. This result could be used to give another proof of existence of
solutions, namely, we just extend

The next example shows however that life is more complicated when the
vector fields do not commute.

Y : W C 1 ([0, T ] , V ) Cp ([0, T ] , W )

Example 5.39. Let A1 and A2 be the vector fields on R defined by A1 (r) = r


and A2 (r) = 1 or in differential operator form,

by continuity to W Cp ([0, T ] , V ) .

and A2 :=
.
r
r

[0, T ] , R2 so that the corresponding ODE be-

A1 := r

5.7 Towards Rougher Paths

Further let x = (x1 , x2 ) C 1


comes,

Let us now begin to address the question of solving the O.D.E.,


dyt = f (yt )dxt with y0 = ,

y t = A1 (yt ) x 1t + A2 (yt ) x 2t

(5.62)

when Vp (x) = for p < 2.


Lemma 5.37. If a : R R is a Lipschitz vector field on R, f (y) = a (y) , and
x C 1 ([0, T ] , R) , then Eq. (5.62) has solution given by

= yt x 1t + x 2t with y0 = .
The solution to this equation is given by Duhamels principle as
Z t
1
1
x1t x10
yt = e
+
ext xs dx2s .
0

yt = e(xt x0 )a x ().
In particular, yt (x, ) depends continuously on x in the sup-norm topology and
hence easily extends to rough paths.

To simplify life even further let us now suppose that = 0 so that


Z t
1
x1t
yt (x) := e
exs dx2s .
0

Proof. Let z solve,


z(
) = a(z( )) with z(0) = .
This last expression is not continuous in x in the Vp norm for any p > 2
which follows from Lemma 5.40 with x = (u (n) , v (n)) .


Lemma 5.40. For simplicity suppose that T = 2. Suppose u_t(n) = (1/n) cos(n²t) and v_t(n) = (1/n) sin(n²t). Then

  ∫₀ᵗ e^{u_s(n)} dv_s(n) = o(1) + ½ t ↛ 0 as n → ∞,    (5.63)

while

  V_p(u(n), v(n)) = O(n^{2/p − 1}),    (5.64)

which tends to 0 for p > 2 as n → ∞.


Proof. For notational simplicity we will suppress the n from our notation. By an integration by parts we have,

  ∫₀ᵗ e^{u_s} dv_s = e^{u_s} v_s |₀ᵗ − ∫₀ᵗ e^{u_s} u̇_s v_s ds
    = o(1) + ∫₀ᵗ e^{(1/n) cos(n²s)} sin²(n²s) ds
    = o(1) + ∫₀ᵗ [ e^{(1/n) cos(n²s)} − 1 ] sin²(n²s) ds + ∫₀ᵗ sin²(n²s) ds
    = o(1) + O(1/n) + ½ ∫₀ᵗ [1 − cos(2n²s)] ds = o(1) + ½ t,

which proves Eq. (5.63).

By Theorem 2.10,

  V_p(v(n)) ≍ V_p(u(n)) ≍ ( n² (2/n)^p )^{1/p} = O( n^{2/p − 1} ).

Equation (5.64) now follows from this estimate and the fact that

  V_p(u, 0) ≤ V_p(u, v) ≤ V_p(u, 0) + V_p(0, v).   ∎
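It is instructive to see Lemma 5.40 numerically. The Python sketch below (the step count is a hypothetical discretization choice) approximates ∫₀¹ e^{u_s(n)} dv_s(n) by left-point sums for a few values of n: the sup-norms of u(n) and v(n) are of order 1/n and tend to zero, yet the integrals stay near t/2 = 1/2, which is exactly the failure of continuity exploited here.

    import math

    def I(n, t=1.0, N=200000):
        """Left-point approximation of int_0^t exp(u_s(n)) dv_s(n) with
        u_s = cos(n^2 s)/n and v_s = sin(n^2 s)/n."""
        h = t / N
        total = 0.0
        for k in range(N):
            s = k * h
            u = math.cos(n * n * s) / n
            dv = math.sin(n * n * (s + h)) / n - math.sin(n * n * s) / n
            total += math.exp(u) * dv
        return total

    for n in (5, 20, 80):
        print(n, 1.0 / n, I(n))   # sup-norm -> 0, but the integral stays near 1/2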

Example 5.41. Let x(t) := (u(t), v(t)) be a smooth path such that x(0) = 0 and consider the area process,

  A_t := ½ ∫₀ᵗ (u dv − v du).

By Green's theorem, A_t is the signed area swept out by x|_{[0,t]} followed by the straight line path from x(t) back to the origin. Taking

  x_n(t) := (u_t(n), v_t(n)) = ( (1/n)(cos(n²t) − 1), (1/n) sin(n²t) ),

we find,

  A_t(n) = ½ ∫₀ᵗ [ (cos(n²s) − 1) cos(n²s) + sin(n²s) sin(n²s) ] ds
        = ½ ∫₀ᵗ [1 − cos(n²s)] ds = ½ t − (1/(2n²)) sin(n²t) → ½ t as n → ∞.

Whereas we have seen above that V_p(x_n) = O(n^{2/p−1}). This shows that the area process is not continuous in the V_p-norm when p > 2, since V_p(x_n) → 0 but A_t(n) → ½ t ≠ 0.

6
Rough Paths with 2 ≤ p < 3
We are now going to consider paths with finite p variation for some p ∈ [2, 3). As we have seen above, we have to add some additional information to get a reasonable theory going. We first need to introduce the appropriate algebra. As usual V will be a Banach space which we will sometimes assume is finite dimensional in order to avoid technical details.

6.1 Tensor Norms


Throughout the rest of this class, we will assume that V V has been equipped
with a tensor norm satisfying,
kw vk = kv wk kvk kwk for all v, w V.
For example, if V is an inner product space, then there is a unique inner product
on V V determined by
(v w, v 0 w0 ) = (v, v 0 ) (w, w0 ) for all v, w, v 0 , w0 V.
The norm associated to this inner product will satisfy the desired assumptions.
Here is another example of such a norm.
Example 6.1 (Projective Norm). Suppose again that V and W are Banach
spaces. The projective norm of V W is defined by
(
)
X
X
kk := inf
kvi k kwi k : =
vi wi .
i

Proof. Then for =




X
X

X



Q (vi , wi )
|Q (vi , wi )| kQk
kvi k kwi k ,
Q () =


E
i

where kQk is the best constant for which the last inequality holds. Taking the
infimum over all such decomposition of shows



Q () kQk kk
E



and therefore Q

kQk . Let (0, kQk) and choose v V and w W

op

such that |Q (v, w)| |v| |w| . Then






Q (v w) = |Q (v, w)| |v| |w| |v w|
E



from which it follows that Q
. Since (0, kQk) was arbitrary, it
op


follows that Q

kQk
.

op

Corollary 6.3. Assume that kk is defined as in Example 6.1. If kk = 0 then


= 0.
Proof. If 6= 0 we may write

Before checking kk is a norm (see Corollary 6.3), let us observe the following
property.
Lemma 6.2. Suppose that E is another Banach space and Q : V W E is
: V W E be corresponding linear map on V W
a bilinear form
and let Q


to E. Then Q = kQk . Because of this property, in the future we will no
op

vi wi , we have

It is easy to check that this satisfies the properties of norm modulo showing
kk = 0 implies = 0.

longer distinguish between Q and Q.

:=

n
X

vi wi

i=1
n

with {vi }i=1 being a linearly independent set and each wi 6= 0. Choose V
such that (vi ) = i1 and W such that (w1 ) = 1. Then Q (v, w) :=
(v) (w) is a continuous bilinear form on V W with kQk = kk kk > 0.
Thus we have
1 = |Q ()| kQk kk
from which it follows that kk > 0.

6.2 Algebraic Preliminaries

Definition 6.7. A function, X : G (V ) is an algebraic multiplicative


functional if

Notation 6.4 let : V V V V be the map determined by, (v w) =


w v and define,
2 (V ) := { V V : = } and

for all 0 s u t T.

Xsu = Xst Xtu

(6.1)

Equation 6.1 is referred to as Chens identity.

S 2 (V ) := { V V : = } .

We will write the components of X as X 1 V and X 2 V V, so that

Furthermore, for a, b V, let

1
2
Xst = 1 + Xst
+ Xst
.

[a, b] := ab ba (V ) and
Let us observe by the multiplicative property that for all t [0, T ] , Xt,t =
Xt,t Xt,t , i.e. Xt,t = 1 for all t [0, T ] .

a b := ab + ba S 2 (V ) .
Observe that if V V, then

Lemma 6.8. Chens identity (6.1) is equivalent to

1
1
= ( + ) + ( ) S 2 (V ) 2 (V )
2
2
and in particular,
1
a b = (a b + [a, b]) ,
2

(a + v + ) (b + w + ) = ab + (aw + vb) + (b + a + v w) .
We are now going to drop the tensor symbol from the notation.
Lemma 6.6. The subset,
G (V ) := {g A (V ) : g = 1 + v + with v V and V V } ,

1
1
Xst
Xtu

which suffices to complete the proof.
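Chen's identity is easy to verify numerically for the canonical lift of a smooth path, where X¹_{st} = x(t) − x(s) and X²_{st} = ∫ₛᵗ (x(v) − x(s)) ⊗ dx(v) as in Example 6.9 below. In the Python sketch the particular path and the left-point discretization of the iterated integral are hypothetical choices; the printed defects are zero up to the discretization error of the sums.

    import numpy as np

    def path(t):                       # a smooth R^2-valued path (hypothetical example)
        return np.array([np.cos(t), np.sin(2 * t)])

    def lift(s, t, N=4000):
        """Level-1 and level-2 parts of the canonical lift over [s, t]."""
        ts = np.linspace(s, t, N + 1)
        xs = np.array([path(u) for u in ts])
        X1 = xs[-1] - xs[0]
        X2 = np.zeros((2, 2))
        for k in range(N):             # X2 = int (x(v) - x(s)) (x) dx(v), left-point sums
            X2 += np.outer(xs[k] - xs[0], xs[k + 1] - xs[k])
        return X1, X2

    s, t, u = 0.0, 0.4, 1.0
    X1_su, X2_su = lift(s, u)
    X1_st, X2_st = lift(s, t)
    X1_tu, X2_tu = lift(t, u)
    print(np.max(np.abs(X1_su - (X1_st + X1_tu))))                           # level 1
    print(np.max(np.abs(X2_su - (X2_st + X2_tu + np.outer(X1_st, X1_tu)))))  # Chen, level 2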


Example 6.9. If x Cp ([0, T ] , V ) with p < 2, then we let
Z
Xst = 1 + x (t) x (s) +
dx (u) dx (v)
suvt
t

is a group under the multiplication coming from A (V ) . The identity element is


1 and the inverse to g is given by
g 1 = 1 v + v 2 .
Proof. If h := 1 + w + then
gh = (1 + v + ) (1 + w + ) = 1 + (v + w) + ( + + vw) G (V )
and we see that gh = 1 iff
2

w = v and = vw = + v .


2
Xtu



1
2
1
2
1
2
1 + Xsu
+ Xsu
= 1 + Xst
+ Xst
1 + Xtu
+ Xtu
 1
  2

1
2
1
1
= 1 + Xst
+ Xtu
+ Xst + Xtu
+ Xst
Xtu

Definition 6.5. Let A (V ) := T (2) (V ) := R V V V which we make into


an algebra via the multiplication rule,


(6.3)

2
Xst

Proof. Chens identity states,

V V = S 2 (V ) 2 (V ) .


(6.2)

2
Xsu

for all 0 s t u T.

so that


1
1
1
Xsu
= Xst
+ Xtu
and

:= 1 + x (t) x (s) +

(x (v) x (s)) dx (v) G (V )

(6.4)

where the latter integral is the Young integral. An alternative way to look at
this function is to observe that
Z t
Xst = 1 +
Xs dx ( )
(6.5)
s

and this differential equation uniquely determines Xst . Let us also check directly
that Eqs. (6.3) holds. Equation (6.2) holds trivially.
Rt
1
2
We have in this case Xst
= x (t) x (s) and Xst
= s (x (v) x (s)) dx (v) ,
therefore,

2
2
2
Xsu
Xst
Xtu
=

Proof. We have,

(x (v) x (s)) dx (v)


s
Z t

Z
(x (v) x (s)) dx (v)

(x (v) x (t)) dx (v)

2
Xst

(x (v) x (s)) dx (v)

=
s

(x (v) x (s)) dx (v)


t

2
Xst


[(x (v) x (s)) dx (v) + dx (v) (x (v) x (s))]

(x (v) x (t)) dx (v)

1
1
(x (t) x (s)) dx (v) = Xst
Xtu

1
dv [x (v) x (s)] = [x (t) x (s)] = Xst

2

as desired.

and

Example 6.10. Suppose again that p < 2 and W := x+A with x Cp ([0, T ] , V )
and A Cp ([0, T ] , V V ) . Then
Z
Xst = 1 + W (t) W (s) +
dW (u) dW (v)

2
2
Xst
Xst
=

Z
(x (v) x (s)) dx (v)

dx (v) (x (v) x (s))


s

[x (v) x (s) , dx (v)] .

suvt

Z
:= 1 + x (t) x (s) + A (t) A (s) +

(x (v) x (s)) dx (v) G (V )


s

(6.6)

6.3 The Geometric Subgroup

is the unique solution to


t

Z
Xst = 1 +

Xs dW ( )

(6.7)

Definition 6.12. Let



Ggeo (V ) := g = 1 + a + A G (V ) : A + A = a2 ,

1
Xst

and therefore still satisfies Chens identity and


= x (t) x (s) . Thus it is
not reasonable to try to define
Z t
2
(x (v) x (s)) dx (v) := Xst

i.e. the symmetric part of A is


Ggeo (V ) is of the form,

2
2
where Xst
is chosen so that Xst := 1+x (t)x (s)+Xst
satisfies Chens identity.

Xst


1 2

Xst

Proof. Let g, h Ggeo (V ) with g as in Eq. (6.9) and


1
h = 1 + b + b2 + B with B 2 (V ) .
2
Then

Z
+


[x (v) x (s) , dx (v)] .


1
1
g 1 = 1 a a2 A + a2 = 1 a + a2 A
2
2
1
2
= 1 a + (a) A Ggeo (V )
2

(6.8)

2
This shows that it is only the anti-symmetric part of Xs,t
which does not depend
continuously on x in the sup-norm topology.


(6.9)

Lemma 6.13. Ggeo (V ) is a subgroup of G (V ) .

so that
1
1
= 1 + Xst
+
2

Alternatively put, the general element of

1
g = 1 + a + a2 + A with A 2 (V ) .
2

Proposition 6.11. Let x Cp ([0, T ] , V ) with p < 2 and then we let X :


G (V ) be as in Eq. (6.4). Then

2
2
1 2
Xst
+ Xst
= Xst
and
Z t
2
2
Xst
Xst
=
[x (v) x (s) , dx (v)]

1 2
2a .

and


1 2
1 2
gh = 1 + a + a + A
1+b+ b +B
2
2
1
1
= 1 + a + b + a2 + b2 + ab + A + B
2
2
1
1
2
= 1 + a + b + (a + b) (ab + ba) + ab + A + B
2
2
1
1
2
= 1 + a + b + (a + b) + (ab ba) + A + B
2
2
1
1
2
= 1 + a + b + (a + b) + [a, b] + A + B Ggeo (V ) .
2
2

h i
]
which we now
The Lie bracket is then determined by the formula, [,
] = ,
work out. Let f : G R be a smooth function, then


f (g) = (g f 0 (g) (g)) = f 00 (g) (g, g) + f 0 (g) .
Since f 00 (g) is symmetric (mixed partial derivative commute) we find,
h i 
^
f (g) = f 0 (g) ( ) = (
,
)f (g) .
We summarize these results in the following proposition.

As with any Lie group, H, we may associate a Lie algebra, Lie (H) = Te H.
Lemma 6.14. For G = G (V ) and Ggeo = Ggeo (V ) we have
Lie G = V V V and Lie Ggeo = V 2 (V ) .

(6.10)

Proof. Let g (t) = 1+x (t)+A (t) be a smooth path in G such that g (0) = 1,
then
g (0) = x (0) + A (0) V V V.
Conversely if a + A V V V then g (t) = 1 + t (a + A) G is a smooth
path such that g (0) = 0 and g (0) = a + A. Therefore Lie (G) = V V V.
A smooth path g (t) Ggeo may be written as
1
g (t) = 1 + x (t) + x2 (t) + A (t) with A (t) 2 (V ) .
2

Proposition 6.15. The Lie bracket on Lie (G) is given by [, ] = .


Moreover, Lie (Ggeo ) is the Lie sub-algebra (at least when dim V < ) of Lie (G)
generated by V Lie (G) = V V V. We will denote Lie (Ggeo ) by L (V ) .
Proof. It only remains to prove the second assertion. So suppose that =
a + A and = b + B with a, b V and A, B 2 (V ) , then
[, ] = [a + A, b + B] = [a, b] = ab ba 2 (V ) Lie (Ggeo ) .
Although these are non-trivial Lie algebras they are only slightly non-trivial
in the sense that [[, ] , ] = 0 for all , , Lie (G) , i.e. Lie (G) is nilpotent.
The next item to compute for these Lie algebras and Lie groups is the
group exponential map defined by

e = e (1) for all Lie (G) .

Assuming that g (0) = 0 so that x (0) = 0, it follows that

g (0) = x (0) + A (0) V 2 (V ) .


Conversely if a + A V 2 (V ) then g (t) = 1 + ta + 12 t2 a2 + tA Ggeo is
a smooth path such that g (0) = 0 and g (0) = a + A and therefore Lie Ggeo =
V 2 (V ) .
We now continue on with the Lie group mantra. To this end we associate
to element of Lie G, a left invariant vector field, (g) via,
d
(g) := |0 g h (t)
dt

Let g (t) = et (1) , so that


g (t) = (g (t)) = g (t) with g (0) = 1.
Writing = a + A and g (t) = 1 + x (t) + B (t) , we learn that
x + B = (1 + x + B) (a + A) = a + xa + A
so that x = a and B = xa + A with x (0) = 0 and B (0) = 0. The solution
to the first equation is x (t) = at and then B (t) = ta2 + A and therefore
B (t) = 21 a2 t2 + At. Therefore,

where h (t) is any smooth curve in G such that h (0) = 0 and h (0) = for
example take h (t) = 1 + t. Writing g = 1 + , we find,
d
(g) = |0 (1 + ) (1 + t) = g .
dt
et (1) = g (t) = 1 + at +

t2 2
a + At.
2

Thus we have proved

1
1
e = 1 + a + a2 + A = 1 + + 2 = exp () .
2
2

Proof. We have


1 2
1 2
e e = 1++
1++
2
2
1
1
= 1 + + + 2 + 2 +
2
2
1
1
2
= 1 + + + ( + ) ( + ) +
2
2
1
1
2
= 1 + + + ( + ) + [, ]
2
2
++ 21 [,]
=e
.

It is not so surprising that e is given by the Taylors theorem expansion. Indeed


if we had defined e by its Taylors expansion, then
d
d
d t
e =
|0 e(t+s) =
|0 et es = et with e0 = 1.
dt
ds
ds
The other point to notice is that if = a + A Lie (Ggeo ) , then e = 1 + a +
1 2
2 a + A Ggeo as it should be. Let us summarize what we have done in the
following theorem.
Theorem 6.16. We have, exp () = e , exp : Lie (G) G and exp :
Lie (Ggeo ) Ggeo are diffeomorphism with inverse given by

Alternatively; compute



1 2 1 2
log (exp () exp ()) = log 1 + + + + +
2
2
1
1
1
2
= + + 2 + 2 + ( + )
2
2
2
1
= + + [, ] .
2

1
log (1 + ) = 2 .
2
Moreover, g G is in Ggeo iff log (g) L (V ) = Lie (Ggeo ) .
Proof. To prove the last assertion, observe that if
1
1 + = e = 1 + + 2
2

Corollary 6.18. Suppose that U (t) Lie (G) is a finite variation curve with
U (0) = 0 and g (t) solves,

then = + 12 2 and therefore,

g (t) = g (t) U (t) with g (0) = 1,

1
1
= 2 =
2
2

1

2

2

1
= 2 .
2

If g = 1 + = 1 + a + A, then g Ggeo iff A 21 a2 2 (V ) while


1
1
log (g) = a + A a2 L (V ) A a2 2 (V ) .
2
2

then



Z
1 t
g (t) = exp U (t) +
[U ( ) , dU ( )] .
2 0
If U (t) = u (t) + A (t) with u (t) V and A (t) V V, we may write g (t) as,


Z
1 t
[u ( ) , du ( )] + A (t) .
g (t) = exp u (t) +
2 0
Proof. We know that the solution to the ODE is given by
Z t
Z
g (t) = 1 +
dU ( ) +
dU () dU ( )

Proposition 6.17. If , Lie (G) , then

0 t

e e =e

++ 12 [,]

U ( ) dU ( )
Z
1 2
1 t
= 1 + U (t) + U (t) +
[U ( ) , dU ( )]
2
2 0


Z t
1
= exp U (t) +
[U ( ) , dU ( )] .
2 0
0

1
[, ] for all , Lie (G) ,
2

then Lie (G) becomes a group such that exp : Lie (G) G and exp : L (V )
Ggeo are Lie group isomorphisms.
= 1 + U (t) +

In particular if we define
=++

Corollary 6.19. If u, v V, then


[u,v]

=e

u v u v

e e .

[ a, a] d = 0.

[xa ( ) , dxa ( )] =
Now let

Proof. By repeated use of Proposition 6.17,


e

u v u v

e e =e
=e

xi (t) =

u v u+v+ 21 [u,v]

u u+v+ 21 [u,v]v 12 [v,u+v+ 21 [u,v]]

Then
Z 1
Z
[xi (t) , dxi (t)] =

= eu eu+[u,v] = e

u+[u,v]u+ 21 [u,u+[u,v]]

1
(ui (cos 2t 1) + vi sin 2t) for 0 t 1.
2

= e[u,v] .

[ui (cos 2t 1) + vi sin 2t, vi cos 2t ui sin 2t] dt

Z
= [ui , vi ]

Theorem 6.20 (Chows Theorem). To each y Ggeo there exists a (smooth)


finite variation path, x (t) V such that x (0) = 0 and gx (1) = y, where
gx (t) = g (t) is the solution to,
g (t) = g (t) x (t) with g (0) = 1.
Proof. First Proof. If A 2 (V ) is written as A =
ui , vi V, then by Corollary 6.19,
A

e =e

Pm

i=1 [ui ,vi ]

m
Y

[ui ,vi ]

i=1

m
Y

(6.11)
Pm

i=1

[ui , vi ] for some


eui evi eui evi .

i=1



(cos 2t 1) cos 2t + sin2 2t dt

= [ui , vi ]
Let x := x1 xm xa by which we mean follows x1 , then x2 , . . . , then xm ,
and then xa . Then
Z
1 1
x (1) +
[x ( ) , dx ( )]
2 0
Z
m Z
1X 1
1 1
[xa ( ) , dxa ( )] +
[xi ( ) , dxi ( )]
=a+
2 0
2 i=1 0
m

If in addition, a V, then

=a+

ea+A = ea eA = ea

m
Y


eui evi eui evi .

1X
[ui , vi ]
2 i=1

which again represents an arbitrary element in L (V ) .

i=1

It is now easy to see how to construct the desired path x (t) . We determine
x (t) so that it is continuous and satisfies, x (t) = a for 0 t 1, x (t) = ui
for t 4 (i 1) + (1, 2) , x (t) = vi for t 4 (i 1) + (2, 3) , x (t) = ui for
t 4 (i 1) + (3, 4) , and x (t) = vi for t 4 (i 1) + (4, 5) for i = 1, 2, . . . , m.
With this definition it follows that
g (4 (m 1) + 5) = ea+A

6.4 Characterizations of Algebraic Multiplicative


Functionals
Definition 6.21. A multiplicative functional, X : G such that X ()
Ggeo is said to be an algebraic geometric multiplicative functional.
Example 6.22. To every x Cp ([0, T ] , V ) with 1 p < 2, the function,

as desired.
Second Proof. We know the solution to Eq. (6.11) is given by


Z
1 t
[x ( ) , dx ( )] .
gx (t) = exp x (t) +
2 0

Z t
Xst = 1 + (x (t) x (s)) +
(x (v) x (s)) dx (v)
s


Z
1 t
[x (v) x (s) , dx (v)] .
= exp (x (t) x (s)) +
2 s

Let xa (t) = ta for 0 t 1, then


Lemma 6.23 (Characterization of MFs). The space of (algebraic geometric) multiplicative functionals are in one to one correspondence with functions
(Y : [0, T ] Ggeo (V )) Y : [0, T ] G (V ) such that Y (0) = 1. (The latter
conditions is an arbitrary normalization.) The correspondence is given by
Y Xst := Ys1 Yt and Xst Yt := X0t .
Proof. Given a multiplicative function, X, let Yt := X0,t . Then for (s, t)
we have
Yt = X0t = X0s Xst = Ys Xst = Xst = Ys1 Yt .
Conversely if Y : [0, T ] G (V ) and Xst := Ys1 Yt , then

Xst Xtu = Ys1 Yt Yt1 Yu = Ys1 Yu = Xsu

: G (V ) are multiplicative functionals


Corollary 6.25. Suppose that X, X
1
1
2
2

st
such that X = X , then st := Xst
X
is an additive functional.
may be written as in Eq. (6.12) with
Proof. By Proposition 6.24, X and X
Therefore,
A replaced by A for X.


2
2
st
st = Xst
X
= At As At As


= At At As As
which is an additive functional.
Alternatively, we make use of Chens identity to find,




2
2
2
2
1
1
2
2
1 1
2
su
st
tu
st
X2 X
= Xsu
X
= Xst
+ Xtu
+ Xst
Xtu
X
+X
+X
Xtu
su

2
2
2
2
st
tu
= Xst
X
+ Xtu
X




2
2
= X2 X
+ X2 X
.

as desired.
Proposition 6.24. If Yt = 1 + xt + At G (V ) then

Xst = 1 + (xt xs ) + At As + x2s xs xt



 

1
1
1
= exp (xt xs ) + At x2t As x2s [xs , xt ]
2
2
2

st

(6.13)

(6.14)

Definition 6.26. Given a path x : [0, T ] V we say that X : G is


a (geometric) lift of x if X is a (geometric) multiplicative functional and
1
Xst
= x (t) x (s) for all (s, t) .
Corollary 6.27. If X is a (geometric) lift of x : [0, T ] V then every (geo of x is of the form,
metric) lift, X,
st = Xst + st
X

(6.15)

Proof.
1

Xst = Ys1 Yt = (1 + xs + As ) (1 + xt + At )

= 1 xs As + x2s (1 + xt + At )

= 1 + (xt xs ) + At As + x2s xs xt
1
2
= 1 + (xt xs ) + (xt xs )
2


1 2

xt + x2s xs xt xt xs + At As + x2s xs xt
2

 

1
1
1
1
2
= 1 + (xt xs ) + (xt xs ) + At x2t As x2s [xs , xt ]
2
2
2
2

where st 2 (V ) st V V is an arbitrary additive functional.


Example 6.28. Suppose dim(V ) = 1, i.e. V = R, and x : [0, T ] R is a continuous path. Then


x2
x3
t
t
xt
Yt = e = 1, xt ,
,
,...,
2!
3!


x2t 2 x3t 3
= 1, xt , 1 , 1 , . . . , T (R),
2!
3!
Yt1 = ext because
(ea eb )k =

For the second assertion, apply Eq. (6.13) with At = 21 x2t + t .

tu

(6.12)

and if Yt = 1 + xt + 12 x2t + t Ggeo (V ) , then


1
1
2
Xst = 1 + (xt xs ) + (xt xs ) + t s [xs , xt ]
2

2
1
= exp 1 + (xt xs ) + t s [xs , xt ] .
2

k
k
X
X
ai bki
(ea )i (eb )ki =
i! (k i)!
i=0
i=0

(a + b)k
= (ea+b )k ,
k!
where we have used

st
k

(a + b)

= (a + b) (a + b)
k

= (a + b) 1

k
X
ai bki k
=
1
i! (k i)!
i=0

k
X
ai b(ki)
.
i! (k i)!
i=0

Definition 6.30. Let p [1, ). A p (geometric) rough path is a multiplicative functional, (X : Ggeo ) X : G such that;

Therefore
Xst = Ys1 Yt = exs ext = e(xt xs )
is a multiplicative functional. Indeed if s < u < t then
k
X

ki
i
Xsu
Xut
=

i=0



xs xt
(xt xs )2
:=
xs (xs xt ) = (xs xt ) xs +
2
2


xs + xt
1 2
= (xs xt )
= (xs x2t )
2
2
1
= (x2t x2s ).
2

k
X
(xs xu )j (xt xu )ki
i!
(k i)!
i=0

1
k
= (xt xu + xs xu )k = Xs,t
k!
as desired.
Example 6.29. Suppose dim(V ) = 1, i.e. V = R, and x : [0, T ] R is a contin
P
uous path and now let Yt = 1 xt . Then Yt1 = 1 +
xkt so that

1. X is continuous,


2. Vp X 1 + Vp/2 X 2 < .
Definition 6.31. If x Cp ([0, T ] V ) is given. We say that X is a (geomet1
= x (t) x (s)
ric) p - lift if X is a p (geometric) rough path such that Xst
for all (s, t) .
Theorem 6.32. Let x Cp ([0, T ] , V ) with p < 2. Then x has precisely one p
lift which is given by
Z t
2
Xst
=
(x ( ) x (s)) dx ( )
(6.16)
s
Z
1
1 t
2
[x ( ) x (s) , dx ( )] ,
(6.17)
= (x (t) x (s)) +
2
2 s

k=1

Xst := Ys1 Yt = (1 + xs + x2s + x3s + . . . )(1 xt )


= 1 + (xs xt ) + (x2s xt xs ) + (x3s x2s xt ) + . . .
= 1 + (x2 xt ) + xs (xs xt ) + x2s (x2 xt ) + . . .
is a multiplicative functional. Let me check this explicitly at level 2, namely
2
X

2i
i
0
2
1
1
2
0
Xsu
Xut
= Xsu
Xut
+ Xsu
Xut
+ Xsu
Xut

i=0

where all integrals are Youngs integrals. Alternatively we may write, X, as




Z
1 t
[x ( ) x (s) , dx ( )] .
(6.18)
Xst = exp x (t) x (s) +
2 s
Moreover, this p lift is a geometric p rough path.
2
Proof. To prove the existence assertion, define Xst
by Eq. (6.16) and recall
that Eq. (6.17) follows as in Proposition 6.11. Moreover we have seen in Example
6.9 that X is a lift of x which takes values in Ggeo . Moreover if we let (s, t) :=
Vpp (x : [s, t]) , then

= xu (xu xt ) + (xs xu )(xu xt ) + xs (xs xu )


= xs (xu xt ) + xs (xs xu ) = xs (xs xt )
2
= Xst

as desired.
Let us now consider (at level 2) the difference, , between the two multiplicative functionals in Examples 6.28 and 6.29,

2
Xst (2/p) Vp (x () x (s) : [s, t]) Vp (x : [s, t]) = (2/p) (s, t)2/p .

2/p
Hence it follows that Vp/2 X 2 (2/p) (0, T )
< and the existence
assertion is proved.
For uniqueness, suppose that Y is another lift. Then we know st := Yst2
2
Xst V V is an additive functional with Vp/2 () < . As p/2 < 1, this
implies that st = 0 for all (s, t) which gives the uniqueness assertion of
the theorem.
Proposition 6.33. Let p 1. A multiplicative functional, X : G, is a p


rough path iff there exists a control, , such that
i
Xst (s, t)i/p for all (s, t) , 1 i 2.
(6.19)
Moreover if X is a p rough path, we may always take,


p/2
(s, t) := Vpp X 1 : [s, t] + Vp/2 X 2 : [s, t] .

(6.20)

Proof. () This is the easy direction because


p/i X
X

(t`1 , t` ) (0, T )
Xti`1 t`
`
i

and hence v (X) (0, T ) < for all i. This implies that X p (V ) .
For the converse, an application of Theorem 2.32 shows that both


p/2
Vpp X 1 : [s, t] and Vp/2 X 2 : [s, t] are controls. It is now easy to verify that
Eq. (6.19) holds for this control.
The following theorem due to Lyons and Victoir [14] shows that there are
plenty of p rough paths.
Theorem 6.34 (Extension Theorem). Let 1 p < and x
Cp ([0, T ] , V ) , then there always exists a geometric p lift, X : Ggeo ,
of x.
Proposition 6.35 (Non-Uniqueness of Lifts). If 2 p < and x
Cp ([0, T ] , V ) , there exists an infinite number of (geometric) p lifts of x.
2

Proof. If X and Y are any two p lifts of x, then st := (Y X)st is an


additive functional with finite p/2 variation. Conversely if : V V
is any continuous additive functional with finite p variation and X is a fixed
p-lift of x, then adding this additive functional to X gives another p-lift of x. If we assume X and Y are geometric, then we must additionally require that it take values in Λ²(V); otherwise nothing else changes.
The following theorem makes use of result from Section 7.3 below.
Theorem 6.36 (A Geometric Rough Path Approximation Theorem).
Suppose dim V < and X : Ggeo is a geometric p rough path for
some 1 p < . Then there exists smooth (or finite variation) paths, xn
C ([0, T ] V ) such that for all q > p, q (Xn , X) 0 as n , where
Z
1 t
(xn ( ) xn (s)) dxn ( ) .
Xn (s, t) := 1 + (xn (t) xn (s)) +
2 s
Proof. The proof is similar to the proof of Corollary 4.11. One needs to now
replace the piece x by the horizontal projections of the piecewise geodesics constructed in Theorem 7.15 below. One should also use the Carnot-Caratheodory
metric on Ggeo in the proof. For the details the reader is referred to [6] and [5].

7
Homogeneous Metrics
In this chapter we are going to introduce some metrics on G and Ggeo which
will allow us to relate p rough paths to the more familiar paths of finite p
variation relative to one of these metrics. We begin with some generalities
about left invariant metrics and the associated p variation spaces.

!1/p
(x, y : ) =

d (t x, t y)

!1/p

[d (t x, e) + d (e, t y)]

Vp (x : ) + Vp (y : ) Vp (x) + Vp (y)
and hence,
(x, y) Vp (x) + Vp (y) < .

7.1 Lie group p variation results


We begin with some generalities about p variations for group valued functions.
In this section, suppose (G, d) is a group equipped with a left invariant metric
in which (G, d) is complete. The left invariance assumption on d states that for
all a, b, c G, d (ca, cb) = d (a, b) . Equivalently we are assuming that



d (a, b) = d e, a1 b = d b1 a, e = d e, b1 a ,

and in particular it follows that d (e, a) = d e, a1 . We will write kak for
d (e, a) .
Suppose that x C ([0, T ] G) . Given a partition, P (s, t) and ,
let, x := x1
x . We continue the notation used in Chapter 2. In particular
we have,
Vp (x : ) :=

dp xt , xt

!1/p


Vp (x) :=

sup

kt xk

Proposition 7.2. The function, , defined in Eq. (7.1) is a complete metric


on C0,p ([0, T ] , G) .
Proof. Let x, y, z C0 ([0, T ] , G) .
1. If t [0, T ] we may take := {0, t, T } to learn that
d (x (t) , y (t)) = d (t x, t y)
1/p

(dp (t x, t y) + dp (T x, T y))
As t [0, T ] was arbitrary it follows that

(7.2)

0tT

and

2. Let P (0, T ) and observe that

!1/p

Vp (x : ) .

P(0,T )

(x, z : ) =

d (t x, t z)

We also define

!1/p
!1/p

(x, y : ) :=

(x, y) .

du (x, y) := max d (x (t) , y (t)) (x, y) .

!1/p
X

Definition 7.1. Let C0,p ([0, T ] , G) := {x C ([0, T ] G) : x (0) = e and Vp (x) < }

dp (t x, t y)

p
X


1
(t x) t y

!1/p

[d (t x, t y) + d (t y, t z)]

!1/p

and set,
(x, y) :=

sup
P(0,T )

Observe that (x, e) = Vp (x) < and

(x, y : ) .

(7.1)

d (t x, t y)

!1/p
+

= (x, y : ) + (y, z : )
(x, y) + (y, z) .

X
t

d (t y, t z)

Hence it follows that

Therefore it would follows that


(x, z) (x, y) + (y, z)

!1/p

which shows that satisfies the triangle inequality. It is clear from the definition
of that (x, y) = (y, x) and from Eq. (7.2) that (x, y) = 0 implies x = y.
Moreover we now see that if x, y C0,p ([0, T ] V ) , then (x, y) (x, e) +
(e, z) < so that is finite on C0,p ([0, T ] V ) .
3. To finish the proof we must now show is complete. So suppose that

{xn }n=1 C0,p ([0, T ] , G) is a Cauchy sequence. Then by Eq. (7.2) we know
that xn converges uniformly to some x C ([0, T ] G) . Moreover, for any
partition, P (0, T ) we have
(x, xn : ) (x, xm : ) + (xm , xn : )
(x, xm : ) + (xm , xn ) .

We may now take the supremum over P (0, T ) to learn,


(x, xn ) lim inf (xm , xn ) 0 as n .
m

So by the triangle inequality, (e, x) (e, xn ) + (xn , x) < for sufficiently


large n so that x C0,p ([0, T ] , G) and (x, xn ) 0 as n .
Remark 7.3 (Group Structure Comment). In order to get C0,p ([0, T ] , G) to be
a group under pointwise multiplication I think we will need to assume that d
satisfies something like,

d (y, xy) = d e, y 1 xy C (y) d (e, x) .
Assuming this to be the case, we then would have,
1

t (xy) = (xy)t (xy)t = yt1


xt xt yt = yt1
t xyt

and therefore,
d (e, t (xy)) = d e, t y yt1 t xyt = d t y 1 , yt1 t xyt


d t y 1 , e + d e, yt1 t xyt

= d (e, t y) + d e, yt1 t xyt
d (e, t y) + C (yt ) d (e, t x) .

max C (yt ) [ (e, x : ) + (e, y : )]


t

and hence that


(e, xy) max C (yt ) [ (e, x) + (e, y)] < .
t

Definition 7.4. For R let : A (V ) A (V ) be defined by


( + a + A) := + a + 2 A where R, a V, and A V V. We
call the dilation isomorphism.
Proposition 7.5. For each R , : A (V ) A (V ) is an isomorphism of
algebras. Moreover restricts to a group isomorphism of G (V ) and Ggeo (V )
and to a Lie algebra isomorphism of Lie G and Lie Ggeo .
Proof. Notice that is an algebra homomorphism. Indeed, if +b+B A,
then
( + a + A) ( + b + B) = + b + a + B + A + ab
and therefore,
(( + a + A) ( + b + B)) = + (b + a) + 2 [B + A + ab]


= + ( (b) + (a)) + 2 B + 2 A + (a) (b)
= ( + a + A) ( + b + B)
as desired. The remaining assertions are easy to prove and are left to the reader.
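The homomorphism property of the dilation can also be confirmed by a direct computation in coordinates. The Python sketch below represents an element of A(V) for V = R³ as a triple (scalar, vector, matrix) and checks δ_λ(gh) = δ_λ(g) δ_λ(h) on random elements; the dimension and the random test data are hypothetical choices made only for illustration.

    import numpy as np

    def mult(g, h):
        """Multiplication in A(V) = R + V + V(x)V, truncated beyond level 2."""
        (gs, gv, gM), (hs, hv, hM) = g, h
        return (gs * hs,
                gs * hv + hs * gv,
                gs * hM + hs * gM + np.outer(gv, hv))

    def dilate(lam, g):
        s, v, M = g
        return (s, lam * v, lam ** 2 * M)

    rng = np.random.default_rng(0)
    g = (1.0, rng.standard_normal(3), rng.standard_normal((3, 3)))
    h = (1.0, rng.standard_normal(3), rng.standard_normal((3, 3)))
    lam = 0.37
    lhs = dilate(lam, mult(g, h))
    rhs = mult(dilate(lam, g), dilate(lam, h))
    print(all(np.allclose(a, b) for a, b in zip(lhs, rhs)))   # True: a homomorphism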

= yt1
yt yt1 t xyt = t y yt1 t xyt

d (e, t (xy))

We now go back to the specific case at hand. In our case the groups G and Ggeo
are equipped with a dilation structure.

(x, xn : ) lim inf ( (x, xm : ) + (xm , xn )) = lim inf (xm , xn ) .

(e, xy : ) =

7.2 Homogeneous Metrics on G (V ) and Ggeo (V )

Taking the limit of this equation as m the shows,


m

Definition 7.6 (Homogeneous Norm). A homogeneous norm on G (or


Ggeo ) is a continuous function, kk : G [0, ) such that:
1. kgkG = 0 iff g = 1,

2. k
1(g)k
= || kgk (homogeneous) for all R ,
3. g = kgk (symmetric), and
4. kghk kgk + khk (subadditive).

We will give one example of such a norm in Corollary 7.8 below. We will
give another example on Ggeo in Section 7.3 below.


Proposition 7.9. If dim V < then any two homogeneous norms on G =


G (V ) are equivalent.

Lemma 7.7. For g = 1 + g1 + g2 G = G (V ) , let




p
(g) := max kg1 k , 2 kg2 k .

Proof. Suppose that || is another homogeneous norm on G and then define


c := min |g| and C := max |g| .
kgk=1

kgk=1

Then is subadditive and homogeneous, i.e.


(gh) (g) + (h) for all g, h G

By compactness, 0 < c < C < . For general g G \ {1} , choose > 0 such
that k (g)kG = 1, i.e. take := 1/ kgkG . Then we know that

and
( (g)) = || (g) for all g G (V ) and R .
Proof. Only the subadditivity requires any proof here. Let := (g) and
:= (h) where h = 1+h1 +h2 . Observe that kg1 k , kh1 k , 2 kg2 k 2 ,
and 2 kh2 k . With this notation we have,

c | (g)| =

|g|
C
kgkG

and therefore,
c kgkG |g| C kgkG for all g G.

gh = 1 + g1 + h1 + (g2 + h2 + g1 h1 ) ,
and



p
(gh) = max kg1 + h1 k , 2 kg2 + h2 + g1 h1 k


p
max kg1 k + kh1 k , 2 kg2 k + 2 kh2 k + 2 kg1 k kh1 k


p
max + , 2 + 2 + 2 = max ( + , + )

Proposition 7.10. If kkG is a homogeneous norm on G then




d (g, h) := g 1 h G for g, h G

defines a left invariant homogeneous (i.e. d ( (g) , (h)) = || d (g, h)) metric
on G.
Proof.
The proof of this proposition is easy. For example, d (g, h) = 0 iff

1
g h = 0 iff g 1 h = 1 iff g = h;
G

= + = (g) + (h) .





d (h, g) = h1 g G = g 1 h G = d (g, h) , and




d (g, k) = g 1 k G = g 1 h h1 k G




g 1 h + h1 k = d (g, h) + d (h, k) .

Corollary 7.8. If we define



kgkG := (g) + g 1 ,
then kkG is a homogeneous norm on G and by restriction on Ggeo . Furthermore
we have,

(g) kgkG 3 (g) for all g G.


(7.3)
Proof. It only remains to prove the upper bound in Eq. (7.3). Since g 1 =
1 g1 g2 + g12 , we find,


q

g 1 = max kg1 k , 2 kg12 g2 k


q
2
max kg1 k , 2 kg1 k + 2 kg2 k


p
max (g) , 2 2 (g) + 2 (g) = 3 (g) .
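The subadditivity of ρ from Lemma 7.7 only uses the compatibility ‖a ⊗ b‖ ≤ ‖a‖ ‖b‖ of the tensor norm, so it can be tested numerically with V = R² and the Frobenius norm on V ⊗ V; these concrete choices, like the random test data, are hypothetical. The Python sketch below checks ρ(gh) ≤ ρ(g) + ρ(h) on random group elements.

    import numpy as np
    rng = np.random.default_rng(2)

    def rho(g):                       # rho(g) = max(|g1|, sqrt(2 |g2|)) as in Lemma 7.7
        g1, g2 = g
        return max(np.linalg.norm(g1), np.sqrt(2 * np.linalg.norm(g2)))

    def mult(g, h):                   # group law in G(V): (1 + g1 + g2)(1 + h1 + h2)
        (g1, g2), (h1, h2) = g, h
        return (g1 + h1, g2 + h2 + np.outer(g1, h1))

    ok = True
    for _ in range(1000):
        g = (rng.standard_normal(2), rng.standard_normal((2, 2)))
        h = (rng.standard_normal(2), rng.standard_normal((2, 2)))
        ok &= rho(mult(g, h)) <= rho(g) + rho(h) + 1e-12
    print(ok)                         # subadditivity of rho holds in every trial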

(7.4)

For the rest of this section we will assume that () and kkG are as defined
in Lemma 7.7 and 7.8 above.
Theorem 7.11. Let X : G (or Ggeo ) be a continuous multiplicative functional and Y C ([0, T ] G) be associated to X via, Y (t) := X0,t or equiva1
lently by, Xst = Y (s) Y (t) for all (s, t) . Then X is a p rough path iff
Y has finite p variation.
Proof. The point is that

macro:

svmonob.cls

date/time:

11-Mar-2009/12:03

72

7 Homogeneous Metrics

Vpp (Y : ) =

dp (Y (t ) , Y (t)) =

Xst =

B dB +

1 Bs

X : +

p/2
Vp/2

X : .

0
t

B dB Bs (Bt Bs )
s

Z
p/2

B dB + Bs2 Bs Bt

B dB

= 1 + Bt Bs +

= 1 + Bt Bs +

1

= 1 + Bt Bs +

Therefore it follows that






Z t
B dB
1 + Bt +
0

Z
Z

Bs2

! !
r
p
2
Xt ,t



1 p
Xt ,t +

X
X

Xt ,t p 
G
Vpp

p
X


1
Y (t ) Y (t)

(B Bs ) dB a.s.
s


2

Vpp (Y ) C Vpp X + Vp/2 X


,


p/2
Vpp X 1 CVpp (Y ) , and Vp/2 X 2 CVpp (Y )

Therefore,


E [dp (Ys , Yt )] = E dp 1, Ys1 Yt = E [dp (1, Xst )]
"
Z t
1/2 !p #


CE
|Bt Bs | +
(B Bs ) dB
.

which certainly implies,




p/2
Vpp (Y )  Vpp X 1 + Vp/2 X 2 .

Using this theorem we can see fairly easily that finite dimensional Brownian
motions have geometric p - lifts for all p > 2. This is the content of the next
theorem.
Theorem 7.12 (Enhanced Brownian Motion). Let {B_t}_{t≥0} be an R^d-valued
Brownian motion.
 Then for all (0, 1/2) , there exists a Xst = 1 + Bt Bs +
2
Ggeo Rd such that
Xst

Let b := Bs+ Bs a new Brownian motion and T := t s, then the above


equation may be written as,

Z
1/2 p
T



E [dp (Ys , Yt )] CE |bT | +
b db
0

"
Z 1
1/2 !p #




b db
= CE
T |b1 | + T
0

2 1/2


|Bt Bs | + Xst
C |t s| a.s.,
where C is a random finite (a.s.) constant.
Proof. Let
Z

Z
B dB = Yt := 1 + Bt +

Yt := 1 + Bt +
0

B dB + tC
0

= C (p, n) T

p/2

= C (p, n) |t s|

d
For the second line we have used the Brownian scaling, b = T bT 1 () , to
conclude that

 d


b + b+ b+ b = T bT 1 + bT 1 + bT 1 + bT 1

and therefore,
Z

Pd

where C := i=1 ei ei which we assume to be chosen to be continuous. We


then define Xst := Ys1 Yt . Since
Z s
1
Ys = 1 Bs
B dB + Bs2

p/2

b db = T
0

b db .
0

As p is arbitrary, it now follows by an application of Kolmogorov's continuity criterion (Theorem 1.7), as in the proof of Corollary 1.8, that almost surely,

d (Ys , Yt ) C |t s|

we have,

where can be chosen to be any point in (0, 1/2) .
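A crude simulation may help to visualize the enhanced Brownian motion of Theorem 7.12. The Python sketch below builds a discrete level-two lift of a sampled two-dimensional Brownian path by left-point (Ito-type) sums; this is not the same stochastic integral used in the theorem, only a hypothetical illustration. It checks that the discrete lift satisfies Chen's identity exactly at grid points, and that the size of the second-level increment over [0, t] typically shrinks with t.

    import numpy as np
    rng = np.random.default_rng(3)

    N, T = 2 ** 14, 1.0
    dt = T / N
    dB = rng.standard_normal((N, 2)) * np.sqrt(dt)     # 2-d Brownian increments
    B = np.vstack([np.zeros(2), np.cumsum(dB, axis=0)])

    def X2(i, j):
        """Discrete level-2 increment sum_{i<=k<j} (B_k - B_i) (x) dB_k."""
        out = np.zeros((2, 2))
        for k in range(i, j):
            out += np.outer(B[k] - B[i], dB[k])
        return out

    i, j, k = 0, N // 3, N                              # three grid indices s < t < u
    chen = X2(i, k) - (X2(i, j) + X2(j, k) + np.outer(B[j] - B[i], B[k] - B[j]))
    print(np.max(np.abs(chen)))                         # ~ machine zero: Chen's identity

    for m in (N, N // 4, N // 16):                      # second-level size over [0, t]
        print(T * m / N, np.linalg.norm(X2(0, m)))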

7.3 Carnot Caratheodory Distance

g (t1 ) g (t0 ) =

Definition 7.13. We say a smooth path, g : [0, T ] Ggeo is horizontal if


g 1 (t) g (t) V for all t. We define the length of a horizontal path to be
given by,
Z T
1

g (t) g (t) dt.
` (g) :=
0

Moreover for x, y Ggeo , let


d (x, y) := inf {` (g) : g (0) = x, g (T ) = y & g is horizontal} .

Since 0 t0 t1 T are arbitrary, it follows that (after choosing a particular


version of 0 )
g ( ) = 0 (S ( )) S ( ) for a.e. .
Hence, using the change of variables theorem one more time, we find,
Z t1



1
S (t1 ) S (t0 ) =
g ( ) g ( ) d
t0
t1

By Chows theorem, we know that the set of horizontal paths joining x to y is


not empty.

Proof.
Notice that
S is a continuously differentiable function such that
S (t) = g 1 (t) g (t) . Moreover if S (t0 ) = S (t1 ) for some 0 t0 < t1 T,
then g 1 ( ) g ( ) = 0 for [t0 , t1 ] and therefore g ( ) is constants on [t0 , t1 ] .
Thus it makes sense to define, : [0, ` (g)] Ggeo by the equation,
(S (t)) := g (t) for all 0 t T.
Now suppose that 0 s0 < s1 ` (g) . Using the intermediate value theorem, there exists 0 t0 < t1 T such that S (t0 ) = t0 and S (t1 ) = s1 .
Therefore,
Z t1
1


g ( ) g ( ) d = S (t1 ) S (t0 ) .
d ( (S (t1 )) , (S (t0 ))) ` g|[t0 ,t1 ] =
t0

g ( ) d.
t0

Lemma 7.14. Suppose that g : [0, T ] Ggeo is a smooth horizontal path, then
there exists a unique path, : [0, ` (g)] Ggeo , such that g (t) = (S (t)) for

Rt
g|[0,t] . Moreover,
all 0 t T where S (t) := 0 g 1 ( ) g ( ) d arc-length

is absolutely continuous, horizontal, and 1 (s) 0 (s) = 1 for a.e. s.

73

t1

t0

Z




1
(S ( )) 0 (S ( )) S ( ) d =

S(t1 )

S(t0 )





1
(s) 0 (s) ds

from which we may conclude,


Z s1



1
(s) 0 (s) ds = s1 s0 for all 0 s0 < s1 ` (g) .
s0





1
This then implies that (s) 0 (s) = 1 for a.e. s showing that is
parametrized by arc-length as desired.
Theorem 7.15. Assuming dim (V ) < , we have:
1. d is a metric on Ggeo compatible with the natural induced topology.
2. d is left invariant, i.e. d (uw, u y) = d (w, y) for all u, w, y Ggeo .
3. d is homogeneous, i.e. for all R and w, y Ggeo ,
d ( (w) , (y)) = || d (w, y) .

(7.5)

4. Let (w) := d (e, w) . Then there are constants, 0 < c < C < such that,




p
p

c kak + kAk ea+A C kak + kAk .
(7.6)

From this it follows that is d Lipschitz. As d dominated the metric associated


to a certain Riemannian metric (see the proof of Theorem 7.15 below) we may
conclude that is absolutely continuous. So on one hand we have, by making
use of the change of variables theorem,

5. (Ggeo , d) is a complete metric space.


6. For all w, y Ggeo there
is an absolutely continuous path, g : [0, 1] Ggeo
such that g 1 (t) g (t) = d (w, y) a.e. t, g (0) = w, and g (1) = y. Since
` (g) = d (w, y) , this path is a length minimizing geodesic joining w to y.

g (t1 ) g (t0 ) = (S (t1 )) (S (t0 ))


Z S(t1 )
Z t1
=
0 (s) ds =
0 (S ( )) S ( ) d

Proof. Since ug (t) is a horizontal path joining uw to uy and ` (ug) = ` (g) ,


1
it follows fairly easily that d is left invariant. Let w (t) := g (t) g (t) and
g (t) := (g (t)) . Then

S(t0 )

t0

while on the other,

g (t)
g (t) =



d
d
1
1
|0 g (t) g (t + s) =
|0 g (t) g (t + s) = w (t)
ds
ds
which shows that g is a horizontal path joining (w) to (y) and moreover
` (g ) = || ` (g) . Equation (7.5) follows easily from this observation.
We now check that d is a metric. Since g (T t) is a path taking y to w with
` (g (T )) = ` (g) , it follows that d (w, y) = d (y, w) . If g is a horizontal path
from w to y and k is a horizontal path from y to z, then g k is a horizontal
path from w to z such that
d (w, z) ` (g k) = ` (g) + ` (k) .
Taking the infimum over g and k joining w to y and y to z respectively shows
that d satisfies the triangle inequality.
We now must still show that d (w, y) = 0 implies w = y. To prove this let
us consider another metric,
d0 (w, y) := inf {`0 (g) : g (0) = w, and g (T ) = y}
where now; g is not assumed to be horizontal and
Z
`0 (g) :=
0

and therefore,
1
g g dt

` (g) =
0

t1



2
kxk
+ A dt

d (w, z) d0 (w, z) C (w) min (1, kx (T ) x (0)k + kA (T ) A (0)k) .


From this it follows that d (w, z) = 0 iff w = z. Hence we have shown that d is
a metric.
Let {u, v}h be an orthonormal subset of V and A
i R. Letting
p
p
1
x (t) := 2 (cos 2t 1) |A|u + sgn(A) sin 2t |A|v , we then have

gx (1) = exp 21 A [u, v] and therefore,


exp

1
A [u, v]
2



` (gx ) =

kx (t)k dt
0

1p
|A|.

Similarly if a V, then let xa (t) := ta so that


(ea ) ` (gxa ) = kak .
P

i<j

Aij [ei , ej ] , then




ea+A = ea eA (ea ) + eA

Y
kak + eAij [ei ,ej ]


2
1 2


2
g g = kxk
+ A x x
2



2
2
x x
= kxk
+ A + kx xk
2 A,
2
2 1


2
2
2

kxk
+ A + kx xk
A kx xk

2


2
2
= kxk
1 1 kx xk
+ (1 ) A
2
n
o


2
2
= kxk

1 1 1 kxk + (1 ) A .

2
1 2

2
g g kxk
+ (1 ) A

1
g g dt C (, )

Therefore if A =

So if g (0) = w = 1 + x0 + A0 let t1 be the first time that


kx (t) x0 k h 1 Then for t ti1 , we can choose so close to one such

2
that inf tt1 1 1 1 kx (t)k = > 0, then

t1

where the second bound comes when t1 = T. Thus it follows that


1
g (t) g (t)
dt.
V V V

so that, for any (0, 1) ,

C (, ) (kx (t1 ) x (0)k + kA (t1 ) A (0)k)


C (, ) min (1, kx (T ) x (0)k + kA (T ) A (0)k)

Let g (t) = 1 + x (t) + A (t) , then





1
1
g (t) g (t) = 1 x A + x2
x + A = x + A x x
2

i<j


X 
kak +
eAij [ei ,ej ]
i<j

q

1X
2 |Aij | C ea+A
i<j


p

C kak + kAk = C ea+A

kak +

where
p

ea+A = kak + kAk.
This gives the upper bound in Eq. (7.6). This bound also shows is continuous.
Indeed,

| (w) (z)| = |d (1, w) d (1, z)| d (w, z) = w1 z .




So if we write w = ea+A and z = eb+B , then




1
w1 z = exp b a + B A [a, b]
2


1
= exp b a + B A [a, b a]
2

= lim

c ( (w)) = (w) =

(w)
(w)

which gives the lower bound in Eq. (7.6). The bounds in Eq. (7.6) shows that
the metric topology associated to d is the same as the vector space topology.

Now suppose that {wn }n=1 is a d Cauchy sequence, i.e.



wn1 wm = d (wn , wm ) 0 as m, n .

In particular, from Eq. (7.6), we know that {wn }n=1 is a bounded sequence
and therefore has a convergent subsequence in the usual topology and therefore
in the d topology. It is now easy to conclude that {wn } is d convergent.
Therefore (Ggeo , d) is a complete metric space.
We now prove the last assertion about geodesics we will follow Montgomery [16]. Using Lemma 7.14, we may choose n : [0, `n ] Ggeo which are
absolutely continuous horizontal paths with unit speed a.e. such that (0) = x
and (`n ) = y and `n d (x, y) as n . By letting gn (t) := n (t`n /d (x, y)) ,
1
we then have gn (0) = x and gn (`) = y with gn (t) g n (t) = `n /d (x, y) for a.e.
t. It now follows by the Ascoli-Arzela theorem and the Banach-Alaoglu theorem that, after
passing to a subsequence if necessary, we may assume that gn g uniformly
on [0, `n ] and g n (t) u (t) weakly in L2 ([0, `]) . For any A , we have for
any bounded measurable , that


1
(t) g (t) u (t) dt = 0



1
allowing us to conclude that g (t) u (t) = 0 a.e. t and therefore
g (t)

u (t) V for a.e. t. Now taking V and (t) 0, it follows that


Z

Z `


1
(t) g (t) u (t) dt
(t) kk dt

inf { (w) : w s.t. (w) = 1} = c > 0.


Thus we know that (w) c whenever (w) = 1. For general w Ggeo , let
> 0 be chosen so that ( (w)) = 1. Since ( (w)) = (w) , this means
that = 1/ (w) . Then



1
(t) gn (t) g n (t) dt

We are now ready to use the dilation invariance to prove the lower bound.
By compactness, we have,

where in the we have used the fact that gn g uniformly and supn kg n k < .
Now if (V ) = 0, we find that
Z

| (w) (z)| C

and hence,
s
!


1

kb ak +
B A 2 [a, b a] 0 as z w.

75

Z `



1
1
(t) g (t) g n (t) dt
(t) g (t) u (t) dt = lim


and hence that



1
g (t) u (t) kk a.e. t.




1
Since V was arbitrary we may conclude that g (t) u (t) 1 for a.e.
V
t. Moreover we have,
g (t1 ) g (t0 ) = lim [gn (t1 ) gn (t0 )]
n
Z `
Z `
= lim
1(t0 ,t1 ] ( ) g n ( ) d =
1(t0 ,t1 ] ( ) u ( ) d
n

from which it follows that g is absolutely continuous and g (t) = u (t) a.e.
t.


1
Thus g is a horizontal absolutely continuous path such that g (t) g (t) 1
for a.e. t. Therefore we may conclude that
d (x, y) ` (g) =

Z `



1
g (t) g (t) dt ` = d (x, y) .
0





1
Thus we must in fact d (x, y) = ` (g) and g (t) g (t) = 1 for a.e. t as desired.

macro:

svmonob.cls

date/time:

11-Mar-2009/12:03

8
Rough Path Integrals

Throughout this chapter we will be assuming that 2 ≤ p < 3. Our first goal is to show how to make p-rough paths out of almost p-rough paths.

8.1 Almost Multiplicative Functionals

The results of this section are a fairly straightforward generalization of the results in Section 5.1.

Definition 8.1. Let θ > 1. A θ-almost multiplicative functional (A.M.F.) is a function X : Δ → G of finite p-variation such that there exist a control ω and C < ∞ such that

  | X^i_{st} − [X_{su} X_{ut}]^i | ≤ C ω(s,t)^θ  for all 0 ≤ s ≤ u ≤ t ≤ T and 1 ≤ i ≤ 2.   (8.1)

If Eq. (8.1) holds for some θ > 1 and control ω, we say X is a (θ, p)-almost multiplicative functional or sometimes a (θ, p)-almost rough path.

We will see plenty of examples of almost rough paths later.

Notation 8.2 Given a function X : Δ → G and a partition Π = {s = t_0 < t_1 < ⋯ < t_r = t} of [s, t], let

  X(Π) := X_{t_0, t_1} X_{t_1, t_2} ⋯ X_{t_{r-1}, t_r}.

Furthermore, given a partition Π of [0, T] and (s, t) ∈ Δ, let

  X(Π)_{st} := X(Π ∩ [s, t]) = ∏_{[τ_-, τ] ⊂ Π ∩ [s,t]} X_{τ_-, τ}.

Theorem 8.3. If X : Δ → G is a (θ, p)-almost rough path, then there exists a unique p-rough path X̄ : Δ → G such that

  | X^i_{st} − X̄^i_{st} | ≤ C K_i(ω(0,T)) ω(s,t)^θ  for all (s, t) ∈ Δ and i = 1, 2,   (8.2)

where K_1 is independent of ω.

Proof. (Uniqueness.) Suppose Z_{st} is another such rough path, so that Eq. (8.2) holds with X̄ replaced by Z. Then by the triangle inequality we have

  | [Z_{st} − X̄_{st}]^i | ≤ 2 C_i ω(s,t)^θ.   (8.3)

As Z^1 − X̄^1 is an additive functional, it follows from Lemma 5.21 and Eq. (8.3) that Z^1 = X̄^1. Now that X̄^1 = Z^1 we further know that X̄^2 − Z^2 is also an additive functional. So another application of Lemma 5.21 along with Eq. (8.3) implies that X̄^2 = Z^2. Thus we have shown X̄ = Z as desired.

(Existence.) 1. Notice that the condition in Eq. (8.1) for i = 1 is the statement that X^1 is an almost (θ, p)-additive functional. Therefore we may apply Theorem 5.24 to find a finite p-variation additive functional X̄^1_{st} such that Eq. (8.2) holds for i = 1, with K_1 independent of ω in this case.

2. Let Z_{st} := 1 + X̄^1_{st} + X^2_{st}. I now claim that Z is still a (θ, p)-almost rough path. Indeed,

  [Z_{su} Z_{ut}]^2 − Z^2_{st} = X^2_{su} + X^2_{ut} + X̄^1_{su} ⊗ X̄^1_{ut} − X^2_{st},

and adding and subtracting X^1_{su} ⊗ X^1_{ut}, then using Eq. (8.1) together with Eq. (8.2) for i = 1 and the bounds |X^1_{st}|, |X̄^1_{st}| ≤ C ω(s,t)^{1/p}, gives

  | [Z_{su} Z_{ut}]^2 − Z^2_{st} | ≤ C ω(s,t)^θ + |X̄^1_{su} − X^1_{su}| |X̄^1_{ut}| + |X^1_{su}| |X̄^1_{ut} − X^1_{ut}|
                                 ≤ K(θ, ω(0,T)) ω(s,t)^θ.

3. Now suppose that Π ∈ P(s, t) and τ ∈ Π. Then

  Z(Π \ {τ}) − Z(Π) = Z(Π ∩ [s, τ_-]) [ Z_{τ_-, τ_+} − Z_{τ_-, τ} Z_{τ, τ_+} ] Z(Π ∩ [τ_+, t]),

and, since [ Z_{τ_-, τ_+} − Z_{τ_-, τ} Z_{τ, τ_+} ]^i = 0 for i = 0, 1, the surrounding factors act as the identity on this purely second-level term, so that

  Z(Π \ {τ}) − Z(Π) = [ Z_{τ_-, τ_+} − Z_{τ_-, τ} Z_{τ, τ_+} ]^2.

Therefore it follows that

  ‖ Z(Π \ {τ}) − Z(Π) ‖ = | [ Z_{τ_-, τ_+} − Z_{τ_-, τ} Z_{τ, τ_+} ]^2 | ≤ C ω(τ_-, τ_+)^θ.

Comparing this identity with that of Eq. (5.34), we now see that we may follow the proofs of Proposition 5.23 and Theorem 5.24 verbatim, with X replaced by Z^2 = X^2, in order to learn that lim_{|Π|→0} X(Π)^2_{st} =: X̄^2_{st} exists and satisfies

  | X^2_{st} − X̄^2_{st} | ≤ C K(θ, ω(0,T)) ω(s,t)^θ  for all (s, t) ∈ Δ.

It is straightforward to now show that X̄ : Δ → G is the desired multiplicative functional, which completes the proof of existence.

Lemma 8.4. Suppose a_1, …, a_r and b_1, …, b_r are elements of an associative algebra A. Then

  a_1 ⋯ a_r − b_1 ⋯ b_r = Σ_{i=1}^{r} b_1 ⋯ b_{i-1} (a_i − b_i) a_{i+1} ⋯ a_r,   (8.4)

where b_1 ⋯ b_{i-1} := 1 when i = 1 and a_{i+1} ⋯ a_r := 1 when i = r. If A is a normed algebra with the property that |ab| ≤ |a||b| for all a, b ∈ A, then

  | a_1 ⋯ a_r − b_1 ⋯ b_r | ≤ Σ_{i=1}^{r} |b_1| ⋯ |b_{i-1}| |a_{i+1}| ⋯ |a_r| |a_i − b_i|.   (8.5)

In particular, if |a_i|, |b_i| ≤ ρ and |a_i − b_i| ≤ ε for all i, then |a_1 ⋯ a_r − b_1 ⋯ b_r| ≤ r ρ^{r-1} ε.

Proof. This is easily proved by induction. Indeed, for r = 1 the right side of Eq. (8.4) is a_1 − b_1, and by induction,

  a_1 ⋯ a_{r+1} − b_1 ⋯ b_{r+1} = (a_1 ⋯ a_r − b_1 ⋯ b_r) a_{r+1} + b_1 ⋯ b_r (a_{r+1} − b_{r+1})
    = Σ_{i=1}^{r} b_1 ⋯ b_{i-1} (a_i − b_i) a_{i+1} ⋯ a_r a_{r+1} + b_1 ⋯ b_r (a_{r+1} − b_{r+1})
    = Σ_{i=1}^{r+1} b_1 ⋯ b_{i-1} (a_i − b_i) a_{i+1} ⋯ a_{r+1}.

Lemma 8.5. Let a, b be elements of a Banach algebra, i.e. |ab| ≤ |a||b|. Then

  | a^2 − b^2 | ≤ |a + b| |a − b|.

Proof. This is a consequence of the simple algebraic relation

  a^2 − b^2 = (1/2)[ (a + b)(a − b) + (a − b)(a + b) ],

from which it follows that

  | a^2 − b^2 | ≤ (1/2)[ |a + b||a − b| + |a − b||a + b| ] = |a + b||a − b|.

Remark 8.6. Suppose that g_i = 1 + a_i + A_i ∈ G. Then

  g_1 ⋯ g_n = (1 + a_1 + A_1) ⋯ (1 + a_n + A_n) = 1 + Σ_{i=1}^n a_i + Σ_{i=1}^n A_i + Σ_{i<j} a_i ⊗ a_j.   (8.6)

Therefore, if Π ∈ P(s, t), it follows that

  X(Π)^1 = Σ_{[τ_-, τ] ⊂ Π} X^1_{τ_-, τ}  and
  X(Π)^2 = Σ_{[τ_-, τ] ⊂ Π} X^2_{τ_-, τ} + Σ_{τ < σ} X^1_{τ_-, τ} ⊗ X^1_{σ_-, σ},   (8.7)

where the second sum is over pairs of distinct blocks of Π with the block ending at τ preceding the block ending at σ.

Theorem 8.7. Suppose X : Δ → G is a θ-A.M.F. with finite p-variation and X̄ : Δ → G is the unique M.F. such that

  | X_{st} − X̄_{st} | ≤ C ω(s,t)^θ  for all (s, t) ∈ Δ.

Then X̄_{st} = lim_{|Π|→0} X(Π)_{st}, which in components reads

  X̄^1_{st} = lim_{|Π|→0} Σ_{[τ_-, τ] ⊂ Π} X^1_{τ_-, τ}   (8.8)

and

  X̄^2_{st} = lim_{|Π|→0} [ Σ_{[τ_-, τ] ⊂ Π} X^2_{τ_-, τ} + Σ_{τ < σ} X^1_{τ_-, τ} ⊗ X^1_{σ_-, σ} ].   (8.9)

Proof. Let Z_{st} := 1 + X̄^1_{st} + X^2_{st}; then, as was shown in the proof of Theorem 8.3,

  X̄^1_{st} = lim_{|Π|→0} X(Π)^1_{st}  and  X̄^2_{st} = lim_{|Π|→0} Z(Π)^2_{st}.

So to finish the proof of this theorem it suffices to prove

  lim_{|Π|→0} [ X(Π) − Z(Π) ]^2_{st} = 0.

If Π := {s = t_0 < t_1 < ⋯ < t_r = t}, then

  X(Π) − Z(Π) = X_{t_0 t_1} ⋯ X_{t_{r-1} t_r} − Z_{t_0 t_1} ⋯ Z_{t_{r-1} t_r},

and an application of Eq. (8.4) of Lemma 8.4 gives

  X(Π) − Z(Π) = Σ_{τ ∈ Π} Z(Π ∩ [s, τ_-]) [ X_{τ_-, τ} − Z_{τ_-, τ} ] X(Π ∩ [τ, t]).

Since X_{τ_-, τ} − Z_{τ_-, τ} = X^1_{τ_-, τ} − X̄^1_{τ_-, τ} has vanishing zeroth and second components, taking the V ⊗ V component of this identity shows

  X^2(Π) − Z^2(Π) = Σ_{τ ∈ Π} { Z(Π ∩ [s, τ_-])^1 ⊗ [X^1_{τ_-, τ} − X̄^1_{τ_-, τ}] + [X^1_{τ_-, τ} − X̄^1_{τ_-, τ}] ⊗ X(Π ∩ [τ, t])^1 }.

Crude estimates then imply

  | X^2(Π) − Z^2(Π) | ≤ Σ_{τ ∈ Π} ( |Z(Π ∩ [s, τ_-])^1| + |X(Π ∩ [τ, t])^1| ) |X^1_{τ_-, τ} − X̄^1_{τ_-, τ}|
                      ≤ K(θ, ω, X) Σ_{τ ∈ Π} ω(τ_-, τ)^θ → 0 as |Π| → 0.
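The partition products X(Π) of Notation 8.2 are easy to compute concretely in the truncated tensor algebra 1 ⊕ V ⊕ (V ⊗ V). The following Python sketch is not part of the original notes: the names chen_mult, partition_product and the smooth example driver are our own choices, with V = R^d realized by numpy arrays. It simply multiplies group elements 1 + a + A and forms X(Π) for a given partition, which is the object whose |Π| → 0 limit appears in Theorem 8.7.

    import numpy as np

    def chen_mult(g, h):
        """Multiply g = (a, A) and h = (b, B) in 1 + V + V⊗V:
        (1 + a + A)(1 + b + B) = 1 + (a + b) + (A + B + a⊗b)."""
        a, A = g
        b, B = h
        return (a + b, A + B + np.outer(a, b))

    def partition_product(X, times):
        """Given increments X(s, t) = (X1_st, X2_st) and a partition
        times = [t0, t1, ..., tr], return X(Pi) = X_{t0 t1} ... X_{t_{r-1} t_r}."""
        d = len(X(times[0], times[1])[0])
        out = (np.zeros(d), np.zeros((d, d)))          # the identity element 1
        for s, t in zip(times[:-1], times[1:]):
            out = chen_mult(out, X(s, t))
        return out

    # Example: the canonical lift of the smooth path x(t) = (t, t^2), with
    # X1_st = x(t) - x(s) and X2_st approximated by a left Riemann sum.
    def X_smooth(s, t, n=200):
        u = np.linspace(s, t, n)
        x = np.stack([u, u ** 2], axis=1)
        dx = np.diff(x, axis=0)
        X1 = x[-1] - x[0]
        X2 = sum(np.outer(x[i] - x[0], dx[i]) for i in range(n - 1))
        return (X1, X2)

    coarse = partition_product(X_smooth, [0.0, 0.5, 1.0])
    fine = partition_product(X_smooth, list(np.linspace(0.0, 1.0, 11)))
    # For a genuinely multiplicative functional X(Pi) does not depend on Pi;
    # for an almost rough path the products differ by O(omega^theta) per deleted
    # point, which is exactly what the proof of Theorem 8.3 exploits.
    print(np.allclose(coarse[0], fine[0], atol=1e-2), np.allclose(coarse[1], fine[1], atol=1e-2))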

8.2 Path Integration along Rough Paths

Our next goal is to define integrals whose integrator is a p-rough path. To get an idea of what sort of definition we should be using, let us go back to the smooth case briefly. So suppose that x : [0, T] → V is a smooth function, f : V → End(V, U) is also a smooth function, and let

  X^1_{st} := x(t) − x(s),   X^2_{st} := ∫_s^t (x(τ) − x(s)) ⊗ dx(τ),
  z(t) := ∫_0^t f(x(τ)) dx(τ),

  Z^1_{st} := z(t) − z(s) = ∫_s^t f(x(τ)) dx(τ),  and
  Z^2_{st} := ∫_s^t (z(τ) − z(s)) ⊗ dz(τ) = ∫_s^t (z(τ) − z(s)) ⊗ f(x(τ)) dx(τ).

If (s, t) ∈ Δ with |t − s| small, then using Taylor's theorem,

  Z^1_{st} = ∫_s^t f(x(τ)) dx(τ) ≅ ∫_s^t [ f(x(s)) + f'(x(s)) (x(τ) − x(s)) ] dx(τ)
           = f(x(s)) X^1_{st} + f'(x(s)) X^2_{st}   (8.10)

and

  Z^2_{st} = ∫∫_{s ≤ σ ≤ τ ≤ t} dz(σ) ⊗ dz(τ) = ∫∫_{s ≤ σ ≤ τ ≤ t} f(x(σ)) ⊗ f(x(τ)) dx(σ) dx(τ)
           ≅ ∫∫_{s ≤ σ ≤ τ ≤ t} f(x(s)) ⊗ f(x(s)) dx(σ) dx(τ) = [ f(x(s)) ⊗ f(x(s)) ] X^2_{st}.   (8.11)

Proposition 8.8. Let x ∈ C_p([0, T], V) and suppose X_{st} = 1 + (x(t) − x(s)) + X^2_{st} is a p-lift of x to a p-rough path. Further suppose that ω is a control such that

  | X^i_{st} | ≤ ω(s,t)^{i/p}  for all (s, t) ∈ Δ.   (8.12)

Then the functions f_s := f(x_s) ∈ L(V, U) and φ_s := f'(x_s) ∈ L(V ⊗ V, U) have finite p-variation and satisfy estimates of the form

  | f_t − f_s − φ_s X^1_{st} | ≤ C ω(s,t)^{2/p}  and  | φ_t − φ_s | ≤ C ω(s,t)^{1/p}.

Proof. By Taylor's Theorem 8.21,

  f_t − f_s − φ_s X^1_{st} = f(x_s + X^1_{st}) − f(x_s) − f'(x_s) X^1_{st}
    = ∫_0^1 (1 − τ) f''(x_s + τ X^1_{st})(X^1_{st}, X^1_{st}) dτ,

so that

  | f_t − f_s − φ_s X^1_{st} | ≤ C(f'') ω(s,t)^{2/p},

and similarly

  | φ_t − φ_s | = | f'(x_s + X^1_{st}) − f'(x_s) | = | ∫_0^1 f''(x_s + τ X^1_{st}) X^1_{st} dτ | ≤ C(f'') ω(s,t)^{1/p}.
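The one-step approximations (8.10) and (8.11) are easy to test numerically for a smooth driver. The sketch below is our own illustration (f, x and the interval are arbitrary scalar choices, so End(V, U) is just R): it compares ∫_s^t f(x(τ)) dx(τ) with f(x(s)) X^1_{st} + f'(x(s)) X^2_{st} and observes that the error decays like |t − s|^3, i.e. faster than ω(s,t)^{2/p} for every p < 3, which is exactly the room the sewing argument of Theorem 8.3 needs.

    import numpy as np

    x = lambda t: np.sin(t)          # scalar driver
    f = lambda v: np.exp(v)          # integrand
    fp = lambda v: np.exp(v)         # its derivative f'

    def levels(s, t, n=4000):
        u = np.linspace(s, t, n)
        xu = x(u); dx = np.diff(xu)
        X1 = xu[-1] - xu[0]
        X2 = np.sum((xu[:-1] - xu[0]) * dx)      # \int_s^t (x(u) - x(s)) dx(u)
        Z1 = np.sum(f(xu[:-1]) * dx)             # \int_s^t f(x(u)) dx(u)
        return X1, X2, Z1

    s = 0.3
    for h in [0.2, 0.1, 0.05]:
        X1, X2, Z1 = levels(s, s + h)
        approx = f(x(s)) * X1 + fp(x(s)) * X2    # the right side of Eq. (8.10)
        print(h, abs(Z1 - approx))               # error shrinks roughly like h**3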

Definition 8.9 (Differentiable Pairs). Let U be another Banach space, x ∈ C_p([0, T], V), and let ω be a control such that x_{st} := x_t − x_s = O(ω(s,t)^{1/p}). We say that (y, y') ∈ C_p([0, T] → U ⊕ End(V, U)) is an (x, ω)-differentiable pair if:

  1. y'_{st} := y'_t − y'_s = O(ω(s,t)^{1/p}), and
  2. Δ^y_{st} := y_{st} − y'_s x_{st} = O(ω(s,t)^{2/p}).

(The reader may wish to view y'_s as a derivative of y relative to x.) Let D(U) = D(U, x, ω) denote the space of U-valued (x, ω)-differentiable pairs.

We now wish to introduce a number of norms and semi-norms.

Notation 8.10 For (y, y') ∈ D(U), let N_1(y') and N_2(y, y') denote the best constants such that

  | y'_{st} | ≤ N_1(y') ω(s,t)^{1/p}  and  | y_{st} − y'_s x_{st} | ≤ N_2(y, y') ω(s,t)^{2/p}.

We further define

  ‖y'‖_1 := |y'_0| + N_1(y'),   ‖y‖_1 := |y_0| + N_1(y),  and
  ‖(y, y')‖_2 := |y'_0| + |y_0| + N_1(y') + N_2(y, y') = ‖y'‖_1 + |y_0| + N_2(y, y').

Similarly, if X is a p-rough path controlled by ω, we define N_1(X^1) and N_2(X^2) to be the best constants such that

  | X^1_{st} | ≤ N_1(X^1) ω(s,t)^{1/p}  and  | X^2_{st} | ≤ N_2(X^2) ω(s,t)^{2/p},

and we also let ‖X‖ := N_1(X^1) + N_2(X^2).

For later purposes let us observe that

  | y'_s | ≤ |y'_0| + N_1(y') ω(0,s)^{1/p}

and

  | y_{st} | ≤ |y'_s| |x_{st}| + N_2(y, y') ω(s,t)^{2/p}
            ≤ [ |y'_0| + N_1(y') ω(0,s)^{1/p} ] N_1(x) ω(s,t)^{1/p} + N_2(y, y') ω(s,t)^{2/p}
            ≤ { |y'_0| N_1(x) + [ N_1(y') N_1(x) + N_2(y, y') ] ω(0,T)^{1/p} } ω(s,t)^{1/p}.

Proposition 8.11. If (f, f') ∈ D(End(V, U)) and

  Z^1_{st} := f_s X^1_{st} + f'_s X^2_{st}  and  Z^2_{st} := [ f_s ⊗ f_s ] X^2_{st},

then Z_{st} := 1 + Z^1_{st} + Z^2_{st} is a (3/p, ω)-A.M.F. satisfying

  | Z^1_{su} + Z^1_{ut} − Z^1_{st} | ≤ [ N_2(f, f') N_1(X) + N_2(X^2) N_1(f') ] ω(s,t)^{3/p}   (8.13)

and

  | [Z_{su} Z_{ut}]^2 − Z^2_{st} | ≤ C(ω, f, X) ω(s,t)^{3/p},   (8.14)

as well as the size estimates

  | Z^1_{st} | ≤ |f_s| N_1(X^1) ω(s,t)^{1/p} + |f'_s| N_2(X^2) ω(s,t)^{2/p}  and
  | Z^2_{st} | ≤ |f_s|^2 N_2(X^2) ω(s,t)^{2/p}.

Proof. Let 0 ≤ s ≤ u ≤ t ≤ T and write f_u = f_s + f_{su}, f'_u = f'_s + f'_{su}. Then

  Z^1_{su} + Z^1_{ut} = f_s X^1_{su} + f'_s X^2_{su} + f_u X^1_{ut} + f'_u X^2_{ut}
    = f_s (X^1_{su} + X^1_{ut}) + f'_s (X^2_{su} + X^2_{ut} + X^1_{su} ⊗ X^1_{ut})
      + [ f_{su} − f'_s X^1_{su} ] X^1_{ut} + f'_{su} X^2_{ut}
    = Z^1_{st} + [ f_{su} − f'_s X^1_{su} ] X^1_{ut} + f'_{su} X^2_{ut},

and therefore

  | Z^1_{su} + Z^1_{ut} − Z^1_{st} | ≤ | f_{su} − f'_s X^1_{su} | |X^1_{ut}| + | f'_{su} | |X^2_{ut}|
    ≤ N_2(f, f') ω(s,u)^{2/p} N_1(X) ω(u,t)^{1/p} + N_1(f') ω(s,u)^{1/p} N_2(X^2) ω(u,t)^{2/p}
    ≤ [ N_2(f, f') N_1(X) + N_2(X^2) N_1(f') ] ω(s,t)^{3/p},

which proves Eq. (8.13).

For Eq. (8.14), note that [Z_{su} Z_{ut}]^2 = Z^2_{su} + Z^2_{ut} + Z^1_{su} ⊗ Z^1_{ut}, so that

  [Z_{su} Z_{ut}]^2 − Z^2_{st} = [ f_s ⊗ f_s ] X^2_{su} + [ f_u ⊗ f_u ] X^2_{ut} + Z^1_{su} ⊗ Z^1_{ut} − [ f_s ⊗ f_s ] X^2_{st}
    = [ f_s ⊗ f_{su} + f_{su} ⊗ f_s + f_{su} ⊗ f_{su} ] X^2_{ut}
      + Z^1_{su} ⊗ Z^1_{ut} − [ f_s ⊗ f_s ] ( X^1_{su} ⊗ X^1_{ut} ),

where we have used f_u = f_s + f_{su} and X^2_{st} = X^2_{su} + X^2_{ut} + X^1_{su} ⊗ X^1_{ut}. Each term on the right is O(ω(s,t)^{3/p}): in the first, |f_{su}| = O(ω(s,u)^{1/p}) while |X^2_{ut}| = O(ω(u,t)^{2/p}); in the second, Z^1_{su} ⊗ Z^1_{ut} − f_s X^1_{su} ⊗ f_s X^1_{ut} consists only of terms containing either three factors of size O(ω^{1/p}) or one factor of size O(ω^{2/p}) together with one of size O(ω^{1/p}). This proves Eq. (8.14), with a constant C(ω, f, X) depending on ‖f‖_∞, ‖f'‖_∞, N_1(f), N_1(f'), N_2(f, f'), ‖X‖ and ω(0,T).

Definition 8.12 (Integration). For X : Δ → G and (f, f') as in Proposition 8.11, we let

  ( ∫_s^t ( f dX^1 + f' dX^2 ) )^1 := lim_{|Π|→0} Σ_{[τ_-, τ] ⊂ Π} [ f_{τ_-} X^1_{τ_-, τ} + f'_{τ_-} X^2_{τ_-, τ} ]   (8.15)

and

  ( ∫_s^t ( f dX^1 + f' dX^2 ) )^2 := lim_{|Π|→0} [ Σ_{[τ_-, τ] ⊂ Π} ( f_{τ_-} ⊗ f_{τ_-} ) X^2_{τ_-, τ}
      + Σ_{τ < σ} ( f_{τ_-} X^1_{τ_-, τ} + f'_{τ_-} X^2_{τ_-, τ} ) ⊗ ( f_{σ_-} X^1_{σ_-, σ} + f'_{σ_-} X^2_{σ_-, σ} ) ],   (8.16)

so that

  Z̄_{st} := 1 + ( ∫_s^t ( f dX^1 + f' dX^2 ) )^1 + ( ∫_s^t ( f dX^1 + f' dX^2 ) )^2

is the unique p-rough path close to the (3/p, ω)-A.M.F. Z defined in Proposition 8.11. We will use the notation ∫ (f, f') dX := ∫ ( f dX^1 + f' dX^2 ) for either the pair or, when no confusion is likely, for its first component. (See Remark 8.6 for the formulas stated in Eqs. (8.15) and (8.16).)

Proposition 8.13. Continuing the notation used above, we have

  | f'_t | ≤ |f'_0| + N_1(f') ω(0,t)^{1/p} ≤ |f'_0| + N_1(f') ω(0,T)^{1/p},   (8.17)

  | f_{st} | ≤ [ |f'_0| + N_1(f') ω(0,T)^{1/p} ] N_1(X) ω(s,t)^{1/p} + N_2(f, f') ω(s,t)^{2/p}   (8.18)
            ≤ max(1, ω(0,T)^{1/p}) [ N_1(X) ‖f'‖_1 + N_2(f, f') ] ω(s,t)^{1/p},   (8.19)

  N_1(f) ≤ max(1, ω(0,T)^{1/p}) [ N_1(X) ‖f'‖_1 + N_2(f, f') ],   (8.20)

  N_1(f) ≤ |f'_0| N_1(X) + [ N_1(f') N_1(X) + N_2(f, f') ] ω(0,T)^{1/p},   (8.21)

  ‖f‖_1 ≤ max(1, N_1(X)) max(1, ω(0,T)^{1/p}) ‖(f, f')‖_2,   (8.22)

  ‖f‖_∞ ≤ |f_0| + |f'_0| N_1(X) ω(0,T)^{1/p} + [ N_1(f') N_1(X) + N_2(f, f') ] ω(0,T)^{2/p},   (8.23)

and

  ‖f‖_∞ ≤ |f_0| + max(1, N_1(X)) max(1, ω(0,T)^{1/p}) ω(0,T)^{1/p} ‖(f, f')‖_2.   (8.24)

The exact form of the constants is a bit of a mess.

Proof. The proofs of these estimates are all fairly straightforward. The first estimate is a trivial consequence of the definition of N_1 and the fact that f'_t = f'_0 + f'_{0,t}. For the estimate in Eq. (8.18) we have

  | f_{st} | ≤ |f'_s| |X^1_{st}| + | f_{st} − f'_s X^1_{st} |
            ≤ [ |f'_0| + N_1(f') ω(0,T)^{1/p} ] |X^1_{st}| + N_2(f, f') ω(s,t)^{2/p}
            ≤ [ |f'_0| + N_1(f') ω(0,T)^{1/p} ] N_1(X) ω(s,t)^{1/p} + N_2(f, f') ω(s,t)^{2/p}.

Eqs. (8.19)–(8.21) are simple consequences of Eq. (8.18), Eq. (8.23) is a consequence of Eqs. (8.17) and (8.21), and Eq. (8.24) follows directly from Eq. (8.23). For Eq. (8.22) we have

  ‖f‖_1 = |f_0| + N_1(f) ≤ |f_0| + ω(0,T)^{1/p} N_2(f, f') + N_1(X) [ |f'_0| + ω(0,T)^{1/p} N_1(f') ]
        ≤ max(1, N_1(X)) max(1, ω(0,T)^{1/p}) ‖(f, f')‖_2.
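The limits in Definition 8.12 are limits of compensated Riemann sums, and the level-one sum (8.15) is simple to realize on a grid. The sketch below is our own illustration (the function rough_integral_level1 and the smooth scalar example are ours): the driver is stored as arrays of increments X1, X2 over grid blocks, the controlled pair (f, f') is sampled at grid points, and coarser partitions are obtained by merging blocks via Chen's relation.

    import numpy as np

    def rough_integral_level1(f, fprime, X1, X2, keep_every=1):
        """Level-one compensated Riemann sum of Eq. (8.15):
        sum over partition blocks of f_{tau-} X1 + f'_{tau-} X2.
        f, fprime: length N+1 arrays (values at grid points t_0..t_N);
        X1, X2: length N arrays (increments over [t_i, t_{i+1}]).
        keep_every=k coarsens the partition by merging k consecutive blocks,
        using Chen's relation for the merged second-level increment."""
        N = len(X1)
        total, i = 0.0, 0
        while i < N:
            j = min(i + keep_every, N)
            X1_block = np.sum(X1[i:j])
            X2_block = np.sum(X2[i:j]) + sum(X1[i:m].sum() * X1[m] for m in range(i + 1, j))
            total += f[i] * X1_block + fprime[i] * X2_block
            i = j
        return total

    # Sanity check with the smooth scalar path x(t) = sin t on [0, 1]:
    t = np.linspace(0.0, 1.0, 513)
    xval = np.sin(t)
    X1 = np.diff(xval)
    X2 = 0.5 * X1 ** 2            # exact second level of a smooth scalar path on each block
    f = np.exp(xval)              # the controlled pair (f, f') with f(x) = e^x, f'(x) = e^x
    coarse = rough_integral_level1(f, f, X1, X2, keep_every=64)
    fine = rough_integral_level1(f, f, X1, X2, keep_every=1)
    print(abs(coarse - fine))     # small: the compensated sums converge as the mesh refines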

Theorem 8.14. Let F := (f, f') ∈ D(End(V, U)). Then

  ‖ ( ∫ F dX, f ) ‖_2 ≤ |f_0| + ‖X‖ |f'_0| + [ 1 + ( K_{3/p} ‖X‖ + 1 ) N_1(X) ] ω(0,T)^{1/p} ‖F‖_2   (8.25)
                      ≤ |f_0| + ‖X‖ |f'_0| + C_p(‖X‖) ω(0,T)^{1/p} ‖F‖_2,   (8.26)

where

  C_p(‖X‖) := 1 + ( 2 + K_{3/p} ) ‖X‖.   (8.27)

Furthermore,

  | ( ∫_s^t F dX )^1 | ≤ K(X, f) ω(s,t)^{1/p},   (8.28)

where

  K(X, f) := ‖f‖_∞ N_1(X^1) + ‖f'‖_∞ N_2(X) ω(0,T)^{1/p}
             + K_{3/p} N_2(F) [ N_1(X^1) + N_1(f') N_2(X) ] ω(0,T)^{2/p}.   (8.29)

Proof. Combining Eqs. (8.13) and (8.14) with Theorem 8.3 shows

  | ( ∫_s^t F dX )^1 − W^1_{st} | ≤ K_{3/p} [ N_2(F) N_1(X^1) + N_1(f') N_2(X) ] ω(s,t)^{3/p}
                                  ≤ K_{3/p} ‖X‖ ‖F‖_2 ω(s,t)^{3/p},

where W_{st} := 1 + f_s X^1_{st} + f'_s X^2_{st} + [ f_s ⊗ f_s ] X^2_{st} is the A.M.F. of Proposition 8.11. Therefore

  | ( ∫_s^t F dX )^1 − f_s X^1_{st} | ≤ | ( ∫_s^t F dX )^1 − W^1_{st} | + | f'_s X^2_{st} |
    ≤ K_{3/p} ‖X‖ ‖F‖_2 ω(s,t)^{3/p} + [ |f'_0| + N_1(f') ω(0,t)^{1/p} ] N_2(X) ω(s,t)^{2/p},

from which it follows that ( ∫ F dX, f ) ∈ D(U, ω) with

  N_2( ∫ F dX, f ) ≤ K_{3/p} ‖X‖ ‖F‖_2 ω(0,T)^{1/p} + [ |f'_0| + N_1(f') ω(0,T)^{1/p} ] N_2(X)
                   ≤ ( 1 + K_{3/p} ‖X‖ ) ‖F‖_2 ω(0,T)^{1/p} + |f'_0| N_2(X).

Moreover, from Eq. (8.21) we find

  N_1(f) ≤ |f'_0| N_1(X) + [ N_1(f') N_1(X) + N_2(F) ] ω(0,T)^{1/p}.

Combining these two estimates shows

  N_1(f) + N_2( ∫ F dX, f ) ≤ ‖X‖ |f'_0| + [ 1 + ( K_{3/p} ‖X‖ + 1 ) N_1(X) ] ω(0,T)^{1/p} ‖F‖_2,

and therefore

  ‖ ( ∫ F dX, f ) ‖_2 = |f_0| + N_1(f) + N_2( ∫ F dX, f )
    ≤ |f_0| + ‖X‖ |f'_0| + [ 1 + ( K_{3/p} ‖X‖ + 1 ) N_1(X) ] ω(0,T)^{1/p} ‖F‖_2
    ≤ |f_0| + ‖X‖ |f'_0| + [ 1 + ( 2 + K_{3/p} ) ‖X‖ ] ω(0,T)^{1/p} ‖F‖_2,

which proves Eqs. (8.25) and (8.26). Finally, the same decomposition gives

  | ( ∫_s^t F dX )^1 | ≤ |f_s| |X^1_{st}| + |f'_s| |X^2_{st}| + K_{3/p} N_2(F) [ N_1(X^1) + N_1(f') N_2(X) ] ω(s,t)^{3/p}
                       ≤ K(X, f) ω(s,t)^{1/p},

with K(X, f) as in Eq. (8.29), which proves Eq. (8.28).

8.3 Spaces of Integrands

Warning: !! Rough Notes Ahead !!

Remark 8.15 (Co-Cycle Condition). Let Δ^y_{st} := y_{st} − y'_s X^1_{st}. If s < u < t, then

  Δ^y_{st} = Δ^y_{su} + Δ^y_{ut} + y'_{su} X^1_{ut},

because

  Δ^y_{st} = y_{st} − y'_s X^1_{s,t} = y_{su} + y_{ut} − y'_s X^1_{s,t}
           = y'_s X^1_{su} + Δ^y_{su} + y'_u X^1_{ut} + Δ^y_{ut} − y'_s X^1_{s,t}
           = y'_s X^1_{su} + Δ^y_{su} + [ y'_s + y'_{su} ] X^1_{ut} + Δ^y_{ut} − y'_s X^1_{s,t}
           = Δ^y_{su} + Δ^y_{ut} + y'_s [ X^1_{su} + X^1_{ut} − X^1_{s,t} ] + y'_{su} X^1_{ut}
           = Δ^y_{su} + Δ^y_{ut} + y'_{su} X^1_{ut}.

Theorem 8.16 (Completeness I). Let Ω^1(U, ω) = Ω^1(U, ω, p) denote those φ : [0, T] → U such that N_1(φ) < ∞, and let

  ‖φ‖_1 := |φ_0| + N_1(φ).

Then (Ω^1(U, ω), ‖·‖_1) is a Banach space.

Proof. From Eq. (8.17) it follows that

  ‖φ‖_∞ ≤ max(1, ω(0,T)^{1/p}) ‖φ‖_1,

and hence if {φ(n)}_{n=1}^∞ is a Cauchy sequence in Ω^1(U, ω) then it is also uniformly Cauchy. Therefore there exists a continuous function φ : [0, T] → U such that φ(n) → φ uniformly. Since

  | φ(n)_{st} − φ(m)_{st} | ≤ N_1( φ(n) − φ(m) ) ω(s,t)^{1/p},

we may let m → ∞ in this inequality to learn that

  | φ(n)_{st} − φ_{st} | ≤ limsup_{m→∞} N_1( φ(n) − φ(m) ) ω(s,t)^{1/p}

and hence that

  N_1( φ(n) − φ ) ≤ limsup_{m→∞} N_1( φ(n) − φ(m) ) → 0 as n → ∞.

Hence it follows that φ(n) → φ in (Ω^1(U, ω), ‖·‖_1) and the proof is complete.

Theorem 8.17 (Completeness II). (D(U, ω), ‖(·,·)‖_2) is a Banach space. Recall that

  ‖(y, y')‖_2 := ‖y'‖_1 + |y_0| + N_2(y, y').   (8.37)

Proof. From Eqs. (8.17) and (8.23), it follows that there is a constant C < ∞ such that

  ‖y‖_∞ + ‖y'‖_∞ ≤ C ‖(y, y')‖_2

for all (y, y') ∈ D(U, ω). Hence if {(y(n), y'(n))}_{n=1}^∞ is a Cauchy sequence in D(U, ω) it is also uniformly Cauchy. Moreover {y'(n)}_{n=1}^∞ is Cauchy in Ω^1(End(V, U), ω) and hence convergent there by Theorem 8.16. Let (y, y') denote the uniform limit of the sequence {(y(n), y'(n))}_{n=1}^∞. Then we have

  | [ y(n)_{st} − y'(n)_s X^1_{st} ] − [ y(m)_{st} − y'(m)_s X^1_{st} ] | ≤ N_2( y(n) − y(m), y'(n) − y'(m) ) ω(s,t)^{2/p},

and by letting m → ∞ this implies

  | [ y(n)_{st} − y'(n)_s X^1_{st} ] − [ y_{st} − y'_s X^1_{st} ] | ≤ limsup_{m→∞} N_2( y(n) − y(m), y'(n) − y'(m) ) ω(s,t)^{2/p}.

Hence it follows that

  N_2( y(n) − y, y'(n) − y' ) ≤ limsup_{m→∞} N_2( y(n) − y(m), y'(n) − y'(m) ).

Since the right side goes to zero as n → ∞, it follows that lim_{n→∞} N_2( y(n) − y, y'(n) − y' ) = 0, and we have shown (y(n), y'(n)) → (y, y') as n → ∞ in (D(U, ω), ‖(·,·)‖_2).

We now wish to consider the mapping properties of the spaces Ω^1 and D.
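On a finite grid the semi-norms of Notation 8.10 can be evaluated by brute force, which is a convenient way to experiment with the spaces Ω^1 and D and the completeness statements above. The following sketch is our own (the function names and the choice ω(s, t) = t − s are assumptions for the illustration; any superadditive control evaluated on grid pairs would do):

    import numpy as np

    def N1(y, t, omega, p):
        """Best constant with |y_t - y_s| <= N1 * omega(s,t)^(1/p) on the grid."""
        return max(abs(y[j] - y[i]) / omega(t[i], t[j]) ** (1.0 / p)
                   for i in range(len(t)) for j in range(i + 1, len(t)))

    def N2(y, yprime, x, t, omega, p):
        """Best constant with |y_st - y'_s x_st| <= N2 * omega(s,t)^(2/p)."""
        return max(abs((y[j] - y[i]) - yprime[i] * (x[j] - x[i])) / omega(t[i], t[j]) ** (2.0 / p)
                   for i in range(len(t)) for j in range(i + 1, len(t)))

    def norm2(y, yprime, x, t, omega, p):
        """The controlled-pair norm ||(y, y')||_2 = |y'_0| + |y_0| + N1(y') + N2(y, y')."""
        return abs(yprime[0]) + abs(y[0]) + N1(yprime, t, omega, p) + N2(y, yprime, x, t, omega, p)

    omega = lambda s, t: t - s
    p = 2.5
    t = np.linspace(0.0, 1.0, 101)
    x = np.sin(3 * t)                  # first level of the driver
    y = np.exp(x)                      # y = f(x) with f = exp
    yprime = np.exp(x)                 # its derivative relative to x
    print(norm2(y, yprime, x, t, omega, p))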

Proposition 8.18. Let a and b be two finite (ω, p)-variation paths with values in appropriate spaces so that the product ab is well defined. Then

  N_1(ab) ≤ ‖b‖_∞ N_1(a) + ‖a‖_∞ N_1(b),   (8.38)
  ‖ab‖_1 ≤ |a_0| |b_0| + ‖b‖_∞ N_1(a) + ‖a‖_∞ N_1(b),   (8.39)
  ‖ab‖_1 ≤ ‖b‖_∞ N_1(a) + ‖a‖_∞ ‖b‖_1,   (8.40)
  ‖ab‖_1 ≤ max( 2 ω(0,T)^{1/p}, ω(0,T)^{1/p} + 1 ) ‖a‖_1 ‖b‖_1   (8.41)
         ≤ ( 1 + 2 ω(0,T)^{1/p} ) ‖a‖_1 ‖b‖_1,   (8.42)

and if (u, v) is another pair of such paths, then

  N_1(ab − uv) ≤ ‖a − u‖_∞ N_1(b) + N_1(a − u) ‖b‖_∞ + ‖u‖_∞ N_1(b − v) + N_1(u) ‖b − v‖_∞,   (8.43)

and there exists C = C(p, ω(0,T)^{1/p}) such that

  ‖ab − uv‖_1 ≤ C [ ‖a − u‖_1 ‖b‖_1 + ‖u‖_1 ‖b − v‖_1 ].   (8.44)

Proof. The simple estimate

  | (ab)_{st} | = | a_t b_t − a_s b_s | = | (a_t − a_s) b_t + a_s (b_t − b_s) |
             ≤ |a_t − a_s| |b_t| + |a_s| |b_t − b_s|
             ≤ [ ‖b‖_∞ N_1(a) + ‖a‖_∞ N_1(b) ] ω(s,t)^{1/p}

implies Eq. (8.38). Moreover,

  ‖ab‖_1 = |a_0| |b_0| + N_1(ab) ≤ |a_0| |b_0| + ‖b‖_∞ N_1(a) + ‖a‖_∞ N_1(b)
         ≤ ‖b‖_∞ N_1(a) + ‖a‖_∞ [ |b_0| + N_1(b) ] = ‖b‖_∞ N_1(a) + ‖a‖_∞ ‖b‖_1,

and, using ‖a‖_∞ ≤ |a_0| + N_1(a) ω(0,T)^{1/p} and ‖b‖_∞ ≤ |b_0| + N_1(b) ω(0,T)^{1/p},

  ‖ab‖_1 ≤ [ |b_0| + N_1(b) ω(0,T)^{1/p} ] ‖a‖_1 + [ |a_0| + N_1(a) ω(0,T)^{1/p} ] N_1(b)
         ≤ ‖a‖_1 { |b_0| + N_1(b) ω(0,T)^{1/p} + ( 1 + ω(0,T)^{1/p} ) N_1(b) }
         ≤ max( 2 ω(0,T)^{1/p}, ω(0,T)^{1/p} + 1 ) ‖a‖_1 ‖b‖_1.

Using Eq. (8.38), it follows that

  N_1(ab − uv) = N_1( (a − u) b + u (b − v) ) ≤ N_1( (a − u) b ) + N_1( u (b − v) )
    ≤ ‖a − u‖_∞ N_1(b) + N_1(a − u) ‖b‖_∞ + ‖u‖_∞ N_1(b − v) + N_1(u) ‖b − v‖_∞,

which is Eq. (8.43). Using

  | (ab − uv)_0 | = | (a − u)_0 b_0 + u_0 (b − v)_0 | ≤ ‖a − u‖_∞ ‖b‖_∞ + ‖u‖_∞ ‖b − v‖_∞

and working as above one easily proves Eq. (8.44) as well. Alternatively,

  ‖ab − uv‖_1 = ‖ (a − u) b + u (b − v) ‖_1 ≤ ‖ (a − u) b ‖_1 + ‖ u (b − v) ‖_1
    ≤ max( 2 ω(0,T)^{1/p}, ω(0,T)^{1/p} + 1 ) [ ‖a − u‖_1 ‖b‖_1 + ‖u‖_1 ‖b − v‖_1 ].

Theorem 8.19. Suppose that f : U → S is a smooth map of Banach spaces, and for a ∈ Ω^1(U, ω) let f(a) := f ∘ a. Then f(a) ∈ Ω^1(S, ω), and the map f : Ω^1(U, ω) → Ω^1(S, ω) satisfies the following estimates:

  N_1(f(a)) ≤ ‖f'‖_{∞,a} N_1(a),   (8.45)
  ‖f(a)‖_1 ≤ ‖f'‖_{∞,a} ‖a‖_1,   (8.46)

where

  ‖f'‖_{∞,a} := sup { |f'(a_s + τ a_{st})| : (s, t) ∈ Δ and τ ∈ [0, 1] } ≤ sup { |f'(u)| : |u| ≤ ‖a‖_∞ }.   (8.47)

Now suppose that a and b are two paths of finite (ω, p)-variation and that f is a smooth function. Then

  N_1( f(a) − f(b) ) ≤ ‖f'‖_{∞,b} N_1(a − b) + ‖f''‖_{∞,a,b} [ |a_0 − b_0| + N_1(a − b) ω(0,T)^{1/p} ] N_1(a)   (8.48)

and

  ‖ f(a) − f(b) ‖_1 ≤ [ ‖f'‖_{∞,a,b} + ‖f''‖_{∞,a,b} N_1(a) max(1, ω(0,T)^{1/p}) ] ‖a − b‖_1,   (8.49)

where

  ‖f''‖_{∞,a,b} := sup { | f''( b_s + τ b_{st} + r [ a_s + τ a_{st} − ( b_s + τ b_{st} ) ] ) | : s, t ∈ [0, T], r, τ ∈ [0, 1] }
                 ≤ sup { |f''(ξ)| : |ξ| ≤ ‖a‖_∞ ∨ ‖b‖_∞ }.

Proof. By Taylor's Theorem,

  f(a)_{st} = f(a_t) − f(a_s) = Δf(a_s, a_t) a_{st},   (8.50)

where

  Δf(a_s, a_t) := ∫_0^1 f'(a_s + τ a_{st}) dτ,   (8.51)

and thus

  | f(a)_{st} | ≤ ‖f'‖_{∞,a} |a_{st}| ≤ ‖f'‖_{∞,a} N_1(a) ω(s,t)^{1/p},

from which Eqs. (8.45) and (8.46) easily follow. The second inequality in Eq. (8.47) follows from the fact that

  | a_s + τ a_{st} | = | (1 − τ) a_s + τ a_t | ≤ (1 − τ) |a_s| + τ |a_t| ≤ |a_s| ∨ |a_t| ≤ ‖a‖_∞.

From Eq. (8.50) we have

  f(a)_{st} − f(b)_{st} = Δf(a_s, a_t) a_{st} − Δf(b_s, b_t) b_{st}
    = [ Δf(a_s, a_t) − Δf(b_s, b_t) ] a_{st} + Δf(b_s, b_t) [ a_{st} − b_{st} ].   (8.52)

Now

  | Δf(a_s, a_t) − Δf(b_s, b_t) | ≤ ∫_0^1 | f'(a_s + τ a_{st}) − f'(b_s + τ b_{st}) | dτ
    ≤ ∫_0^1 ∫_0^1 | f''( b_s + τ b_{st} + r [ a_s + τ a_{st} − ( b_s + τ b_{st} ) ] ) | | a_s − b_s + τ [ a_{st} − b_{st} ] | dr dτ
    ≤ ‖f''‖_{∞,a,b} sup_{τ ∈ [0,1]} | a_s − b_s + τ [ a_{st} − b_{st} ] |
    ≤ ‖f''‖_{∞,a,b} ( |a_s − b_s| ∨ |a_t − b_t| ) ≤ ‖f''‖_{∞,a,b} ‖a − b‖_∞.   (8.53)

Using these estimates in Eq. (8.52),

  | f(a)_{st} − f(b)_{st} | ≤ ‖f''‖_{∞,a,b} ‖a − b‖_∞ |a_{st}| + ‖f'‖_{∞,b} | a_{st} − b_{st} |
    ≤ ‖f''‖_{∞,a,b} [ |a_0 − b_0| + N_1(a − b) ω(0,T)^{1/p} ] N_1(a) ω(s,t)^{1/p} + ‖f'‖_{∞,b} N_1(a − b) ω(s,t)^{1/p},

and therefore

  N_1( f(a) − f(b) ) ≤ ‖f'‖_{∞,b} N_1(a − b) + ‖f''‖_{∞,a,b} [ |a_0 − b_0| + N_1(a − b) ω(0,T)^{1/p} ] N_1(a).

Moreover,

  ‖ f(a) − f(b) ‖_1 = | f(a_0) − f(b_0) | + N_1( f(a) − f(b) )
    ≤ ‖f'‖_{∞,a,b} [ |a_0 − b_0| + N_1(a − b) ] + ‖f''‖_{∞,a,b} [ |a_0 − b_0| + N_1(a − b) ω(0,T)^{1/p} ] N_1(a)
    ≤ [ ‖f'‖_{∞,a,b} + ‖f''‖_{∞,a,b} N_1(a) max(1, ω(0,T)^{1/p}) ] ‖a − b‖_1.

We will now see how f acts on the space D.

Theorem 8.20. Suppose that f : U → S is a smooth map of Banach spaces, and for (y, y') ∈ D(U, ω) let f(y, y') := ( f(y), f'(y) y' ). Then f(y, y') ∈ D(S, ω), and the map f : D(U, ω) → D(S, ω) satisfies the following estimates:

  ‖ f(y, y') ‖_2 ≤ | f(y_0) | + ‖f'‖_{∞,y} ‖(y, y')‖_2
                   + 2 C( ω(0,T)^{1/p} ) max( 1, N_1(X)^2 ) ‖f''‖_{∞,y} ‖(y, y')‖_2^2,   (8.54)

where

  C( ω(0,T)^{1/p} ) := max( 1, ω(0,T)^{1/p} ) max( 2 ω(0,T)^{1/p}, ω(0,T)^{1/p} + 1 )   (8.55)
                     ≤ [ 1 + 2 ω(0,T)^{1/p} ]^2,   (8.56)

and

  ‖ f(y, y') − f(z, z') ‖_2 ≤ | f(y_0) − f(z_0) | + C ‖ (y, y') − (z, z') ‖_2,   (8.57)

where

  C = C( ω(0,T)^{1/p}, N_1(X), ‖f'‖_{∞,y,z}, ‖f''‖_{∞,y,z}, ‖f'''‖_{∞,y,z}, ‖(y, y')‖_2, ‖(z, z')‖_2 )   (8.58)

depends on ( ‖(y, y')‖_2, ‖(z, z')‖_2 ) quadratically and on ‖f'‖_{∞,y,z}, ‖f''‖_{∞,y,z}, ‖f'''‖_{∞,y,z} linearly.

Proof. Let f : U → S be a smooth map of Banach spaces, Δ^y_{st} := y_{st} − y'_s X^1_{st}, and

  Δ^{f(y)}_{st} := f(y_t) − f(y_s) − f'(y_s) y'_s X^1_{st} = f(y)_{st} − f'(y_s) y'_s X^1_{st}.   (8.59)

By Taylor's theorem,

  f(y_t) − f(y_s) = f'(y_s) y_{st} + R(y_s, y_t) y_{st}^{⊗2}
                  = f'(y_s) [ y'_s X^1_{st} + Δ^y_{st} ] + R(y_s, y_t) y_{st}^{⊗2},   (8.60)

where

  R(y_s, y_t) := ∫_0^1 f''(y_s + τ y_{st}) (1 − τ) dτ.   (8.61)

Thus, symbolically, df(y) = f'(y) y' dX^1 + dΔ^{f(y)}, where

  Δ^{f(y)}_{st} = f'(y_s) Δ^y_{st} + R(y_s, y_t) y_{st}^{⊗2}
    = f'(y_s) Δ^y_{st} + R(y_s, y_t) [ ( y'_s X^1_{st} )^{⊗2} + y'_s X^1_{st} ⊗ Δ^y_{st} + Δ^y_{st} ⊗ y'_s X^1_{st} + ( Δ^y_{st} )^{⊗2} ].   (8.62)

We now estimate Δ^{f(y)} as

  | Δ^{f(y)}_{st} | ≤ | f'(y_s) Δ^y_{st} | + | R(y_s, y_t) | |y_{st}|^2
    ≤ ‖f'‖_{∞,y} N_2(y, y') ω(s,t)^{2/p} + (1/2) ‖f''‖_{∞,y} N_1(y)^2 ω(s,t)^{2/p},   (8.63)

and so

  N_2( f(y), f'(y) y' ) ≤ ‖f'‖_{∞,y} N_2(y, y') + (1/2) ‖f''‖_{∞,y} N_1(y)^2
    ≤ ‖f'‖_{∞,y} N_2(y, y') + (1/2) ‖f''‖_{∞,y} [ |y'_0| N_1(X) + ( N_1(y') N_1(X) + N_2(y, y') ) ω(0,T)^{1/p} ]^2
    ≤ ‖f'‖_{∞,y} N_2(y, y') + ‖f''‖_{∞,y} max( 1, N_1(X)^2 ) max( 1, ω(0,T)^{2/p} ) ‖(y, y')‖_2^2,   (8.65)

wherein the last two inequalities we have made use of Eqs. (8.21) and (8.22). Moreover, by Eqs. (8.41) and (8.46),

  ‖ f'(y) y' ‖_1 ≤ max( 2 ω(0,T)^{1/p}, ω(0,T)^{1/p} + 1 ) ‖ f'(y) ‖_1 ‖y'‖_1
               ≤ max( 2 ω(0,T)^{1/p}, ω(0,T)^{1/p} + 1 ) ‖f''‖_{∞,y} ‖y‖_1 ‖y'‖_1
               ≤ C( ω(0,T)^{1/p} ) max( 1, N_1(X)^2 ) ‖f''‖_{∞,y} ‖(y, y')‖_2 ‖y'‖_1,   (8.66)

where C( ω(0,T)^{1/p} ) is as in Eq. (8.55). Combining Eqs. (8.65) and (8.66) then shows

  ‖ f(y, y') ‖_2 ≤ | f(y_0) | + ‖f'‖_{∞,y} N_2(y, y')
                   + ‖f''‖_{∞,y} max( 1, ω(0,T)^{2/p} ) max( 1, N_1(X)^2 ) ‖(y, y')‖_2^2
                   + C( ω(0,T)^{1/p} ) max( 1, N_1(X)^2 ) ‖f''‖_{∞,y} ‖(y, y')‖_2 ‖y'‖_1
                 ≤ | f(y_0) | + ‖f'‖_{∞,y} ‖(y, y')‖_2 + 2 C( ω(0,T)^{1/p} ) max( 1, N_1(X)^2 ) ‖f''‖_{∞,y} ‖(y, y')‖_2^2.

This proves Eq. (8.54).

Now suppose that (y, y') and (z, z') are two X-differentiable pairs, so that, symbolically,

  dy = y' dX^1 + dΔ^y  and  dz = z' dX^1 + dΔ^z,

with | dΔ^y | ≤ N_2(y, y') ω(s, s + ds)^{2/p} and | dΔ^z | ≤ N_2(z, z') ω(s, s + ds)^{2/p}.

From Eq. (8.62) applied to both pairs it follows that

  Δ^{f(y)}_{st} − Δ^{f(z)}_{st} = f'(y_s) Δ^y_{st} − f'(z_s) Δ^z_{st} + R(y_s, y_t) y_{st}^{⊗2} − R(z_s, z_t) z_{st}^{⊗2}
    = [ f'(y_s) − f'(z_s) ] Δ^y_{st} + f'(z_s) [ Δ^y_{st} − Δ^z_{st} ]
      + [ R(y_s, y_t) − R(z_s, z_t) ] y_{st}^{⊗2} + R(z_s, z_t) [ y_{st}^{⊗2} − z_{st}^{⊗2} ].

Let us observe that

  R(x, y) = ∫_0^1 f''( x + τ (y − x) ) (1 − τ) dτ,

and from this it also follows that the directional derivative of R in the pair of directions (v, w) is

  ∂_{(v,w)} R(x, y) = ∫_0^1 f'''( x + τ (y − x) ) ( v + τ (w − v), ·, · ) (1 − τ) dτ,

from which it follows that

  | ∂_{(v,w)} R(x, y) | ≤ ‖f'''‖_{∞,x,y} ∫_0^1 [ |v| + τ |w − v| ] (1 − τ) dτ
    = ‖f'''‖_{∞,x,y} ( (1/2) |v| + (1/6) |w − v| ) ≤ ‖f'''‖_{∞,x,y} ( (2/3) |v| + (1/6) |w| )
    ≤ (2/3) ‖f'''‖_{∞,x,y} ( |v| + |w| ).

It now follows that

  | Δ^{f(y)}_{st} − Δ^{f(z)}_{st} | ≤ | f'(y_s) − f'(z_s) | | Δ^y_{st} | + | f'(z_s) | | Δ^y_{st} − Δ^z_{st} |
      + | R(y_s, y_t) − R(z_s, z_t) | |y_{st}|^2 + | R(z_s, z_t) | | y_{st}^{⊗2} − z_{st}^{⊗2} |
    ≤ ‖f''‖_{∞,y,z} |y_s − z_s| | Δ^y_{st} | + ‖f'‖_{∞,z} | Δ^y_{st} − Δ^z_{st} |
      + (2/3) ‖f'''‖_{∞,y,z} | ( y_s − z_s, y_t − z_t ) | |y_{st}|^2 + (1/2) ‖f''‖_{∞,z} | y_{st}^{⊗2} − z_{st}^{⊗2} |.   (8.67)

From Eq. (8.21),

  N_1(y − z) ≤ | y'_0 − z'_0 | N_1(X) + [ N_1(y' − z') N_1(X) + N_2( y − z, y' − z' ) ] ω(0,T)^{1/p},   (8.68)

and from this it also follows that

  | y_t − z_t | ≤ | y_0 − z_0 | + N_1(y − z) ω(0,t)^{1/p}
    ≤ | y_0 − z_0 | + | y'_0 − z'_0 | N_1(X) ω(0,t)^{1/p}
      + [ N_1(y' − z') N_1(X) + N_2( y − z, y' − z' ) ] ω(0,T)^{1/p} ω(0,t)^{1/p}.   (8.69)

Moreover, by Lemma 8.5,

  | y_{st}^{⊗2} − z_{st}^{⊗2} | ≤ [ |y_{st}| + |z_{st}| ] | y_{st} − z_{st} | ≤ [ N_1(y) + N_1(z) ] N_1(y − z) ω(s,t)^{2/p}.

Combining Eqs. (8.67)–(8.69) with this last display shows

  | Δ^{f(y)}_{st} − Δ^{f(z)}_{st} | ≤ { ‖f''‖_{∞,y,z} N_2(y, y') |y_s − z_s| + ‖f'‖_{∞,z} N_2( y − z, y' − z' )
      + (2/3) ‖f'''‖_{∞,y,z} ( |y_s − z_s| + |y_t − z_t| ) N_1(y)^2
      + (1/2) ‖f''‖_{∞,z} [ N_1(y) + N_1(z) ] N_1(y − z) } ω(s,t)^{2/p}.

Using Proposition 8.13 to control |y_s − z_s| and |y_t − z_t| by the data at time zero, it follows that

  N_2( f(y) − f(z), f'(y) y' − f'(z) z' )
    ≤ ‖f'‖_{∞,z} N_2( y − z, y' − z' )
      + ‖f''‖_{∞,y,z} [ |y_0 − z_0| N_2(y, y') + ( (1/2) [ N_1(y) + N_1(z) ] + N_2(y, y') ω(0,T)^{1/p} ) N_1(y − z) ]
      + (4/3) ‖f'''‖_{∞,y,z} [ |y_0 − z_0| + N_1(y − z) ω(0,T)^{1/p} ] N_1(y)^2.

Moreover, by Eq. (8.44) with C = C(p, ω(0,T)^{1/p}), we have

  ‖ f'(y) y' − f'(z) z' ‖_1 ≤ C(p, ω) [ ‖ f'(y) − f'(z) ‖_1 ‖y'‖_1 + ‖ f'(z) ‖_1 ‖ y' − z' ‖_1 ],

and by Theorem 8.19 we have

  ‖ f'(y) − f'(z) ‖_1 ≤ [ ‖f''‖_{∞,y,z} + ‖f'''‖_{∞,y,z} N_1(y) max( 1, ω(0,T)^{1/p} ) ] ‖ y − z ‖_1

and

  ‖ f'(z) ‖_1 ≤ ‖f''‖_{∞,z} ‖z‖_1.

It follows that

  ‖ f'(y) y' − f'(z) z' ‖_1
    ≤ C(p, ω) [ ‖f''‖_{∞,y,z} + ‖f'''‖_{∞,y,z} N_1(y) max( 1, ω(0,T)^{1/p} ) ] ‖ y − z ‖_1 ‖y'‖_1
      + C(p, ω) ‖f''‖_{∞,z} ‖z‖_1 ‖ y' − z' ‖_1
    ≤ C(p, ω, N_1(X)) [ ‖f''‖_{∞,y,z} ( ‖(y, y')‖_2 + ‖(z, z')‖_2 ) + ‖f'''‖_{∞,y,z} ‖(y, y')‖_2^2 ] ‖ (y − z, y' − z') ‖_2.

Assembling all of these estimates proves Eq. (8.57), with a constant as described in Eq. (8.58).

8.4 Appendix on Taylor's Theorem

Theorem 8.21 (Taylor's Theorem). If f : V → E is a C^{n+1}-smooth map between Banach spaces, then

  f(v + h) = Σ_{k=0}^{n} (1/k!) D^k f(v)(h, …, h) + R_{n+1}(v, h),

where

  R_{n+1}(v, h) = (1/(n+1)!) ∫_0^1 D^{n+1} f(v + r h)(h, …, h) dν_n(r)   (with h appearing n + 1 times)

and

  dν_n(r) := (n + 1)(1 − r)^n dr.

Notice that ν_n is a probability measure on [0, 1] for each n = 0, 1, 2, …. In particular we have

  f(v + h) = f(v) + ∫_0^1 f'(v + r h) h dr,

  f(v + h) = f(v) + f'(v) h + ∫_0^1 f''(v + r h)(h, h)(1 − r) dr,

and

  f(v + h) = f(v) + f'(v) h + (1/2) f''(v)(h, h) + (1/2!) ∫_0^1 D^3 f(v + r h)(h, h, h)(1 − r)^2 dr.
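The integral form of the remainder in Theorem 8.21 is easy to check numerically in one variable, and the bound it yields is the one recorded in Corollary 8.22 below. The following sketch is ours (f, v, h and n are arbitrary choices; only standard numpy quadrature is used):

    import numpy as np
    from math import factorial

    f = np.cos
    derivs = [np.cos, lambda r: -np.sin(r), lambda r: -np.cos(r), np.sin]   # f, f', f'', f'''

    def taylor_remainder(v, h, n):
        """R_{n+1}(v, h) computed from the integral formula of Theorem 8.21."""
        r = np.linspace(0.0, 1.0, 20001)
        integrand = derivs[n + 1](v + r * h) * h ** (n + 1)
        weight = (n + 1) * (1 - r) ** n                 # the probability density d\nu_n
        return np.trapz(integrand * weight, r) / factorial(n + 1)

    v, h, n = 0.4, 0.3, 2
    poly = sum(derivs[k](v) * h ** k / factorial(k) for k in range(n + 1))
    print(abs(f(v + h) - poly - taylor_remainder(v, h, n)))    # ~ 0 (quadrature error only)
    M = max(abs(derivs[n + 1](v + r * h)) for r in np.linspace(0, 1, 101))
    print(abs(f(v + h) - poly) <= M * abs(h) ** (n + 1) / factorial(n + 1) + 1e-12)   # Corollary 8.22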

Corollary 8.22. Keeping the same notation as in Taylor's Theorem 8.21, we have

  | f(v + h) − Σ_{k=0}^{n} (1/k!) D^k f(v)(h, …, h) | ≤ ( M_{n+1} / (n+1)! ) |h|^{n+1},

where M_{n+1} := sup_{0 ≤ r ≤ 1} | D^{n+1} f(v + r h) |.

Notation 8.23 Let B = B(0, R) be a ball in V which contains X_s := X^1_{0,s} for all s ∈ [0, T], and let M be a bound on φ, φ' and φ'' on this ball.

Example 8.24. Suppose φ ∈ C^2( V → L(V, W) ). Then we have the following factorization results:

1. For ξ, η ∈ V we have, with h = η − ξ,

  φ(η) = φ(ξ) + φ'(ξ)(η − ξ) + R_1(ξ, η),

where

  R_1(ξ, η) := ∫_0^1 (1 − r) D^2 φ( ξ + r(η − ξ) )( η − ξ, η − ξ ) dr,

and in particular it follows that

  φ(X_t) = φ(X_s) + φ'(X_s) X^1_{st} + R_1(X_s, X_t),

where

  R_1(X_s, X_t) = ∫_0^1 (1 − r) D^2 φ( X_s + r X^1_{st} )( X^1_{st}, X^1_{st} ) dr.

2. For ξ, η ∈ V we have, with h = η − ξ,

  φ'(η) = φ'(ξ) + R_2(ξ, η),

where

  R_2(ξ, η) := ∫_0^1 D^2 φ( ξ + r(η − ξ) )( η − ξ, ·, · ) dr,

and in particular

  φ'(X_t) = φ'(X_s) + R_2(X_s, X_t),

where

  R_2(X_s, X_t) = ∫_0^1 D^2 φ( X_s + r X^1_{st} )( X^1_{st}, ·, · ) dr.

3. For ξ, η ∈ B,

  | R_1(ξ, η) | ≤ (M/2) |η − ξ|^2  and  | R_2(ξ, η) | ≤ M |η − ξ|,

and in particular we have

  | R_1(X_s, X_t) | ≤ (M/2) |X^1_{st}|^2 ≤ (M/2) C^2 ω(s,t)^{2/p}  and  | R_2(X_s, X_t) | ≤ M C ω(s,t)^{1/p}.

9
Rough ODE

Let us now go on to understanding the meaning of the differential equation

  y(t) = y_0 + ∫_0^t f(y(τ)) dx(τ),   (9.1)

where x ∈ C_p([0, T] → V) and y ∈ C_p([0, T], W). As we do not yet know how to make sense of this integral, we work heuristically for the moment. As before, let

  Y_{st} := y(t) − y(s) = ∫_s^t f(y(τ)) dx(τ)
          ≅ ∫_s^t [ f(y(s)) + f'(y(s)) ( y(τ) − y(s) ) ] dx(τ)
          ≅ ∫_s^t [ f(y(s)) + f'(y(s)) f(y(s)) ( x(τ) − x(s) ) ] dx(τ)
          = f(y(s)) X^1_{st} + f'(y(s)) f(y(s)) X^2_{st}.

This suggests that if X is a p-lift of x, we should reinterpret Eq. (9.1) as

  y(t) = y_0 + ∫_0^t [ f(y) dX^1 + f'(y) f(y) dX^2 ].   (9.2)

Notice that if y is such a solution we would have

  y_{st} = ∫_s^t [ f(y) dX^1 + f'(y) f(y) dX^2 ] = f(y(s)) X^1_{st} + f'(y(s)) f(y(s)) X^2_{st} + …,

and in particular

  | y_{st} − f(y(s)) X^1_{st} | ≤ C ω(s,t)^{2/p}.   (9.3)

The following lemma shows that the right side of Eq. (9.2) makes sense provided y satisfies the constraint in Eq. (9.3).

Lemma 9.1. Suppose that p ∈ [1, 3), y ∈ C_p([0, T] → W), and X : Δ → G is a p-rough path. Further suppose that

  y(t) − y(s) − f(y(s)) X^1_{st} = O( ω(s,t)^{2/p} ).

Then ( φ(t), ψ(t) ) := ( f(y(t)), f'(y(t)) f(y(t)) ) is an X-integrable pair.

Proof. We have

  φ_{st} = f(y(t)) − f(y(s)) = ∫_0^1 f'( τ y(t) + (1 − τ) y(s) ) y_{st} dτ
         = f'(y(s)) y_{st} + ε_{st} = f'(y(s)) f(y(s)) X^1_{st} + O( ω(s,t)^{2/p} ) + ε_{st}
         = ψ(s) X^1_{st} + O( ω(s,t)^{2/p} ) + ε_{st},

where

  ε_{st} := ∫_0^1 [ f'( τ y(t) + (1 − τ) y(s) ) − f'(y(s)) ] y_{st} dτ.

Letting M_2 be a bound on |f''| over { τ y(t) + (1 − τ) y(s) : s ≤ t, τ ∈ [0, 1] }, we find

  | ε_{st} | ≤ ∫_0^1 | f'( τ y(t) + (1 − τ) y(s) ) − f'(y(s)) | |y_{st}| dτ ≤ (M_2/2) |y_{st}|^2 = O( ω(s,t)^{2/p} ),

so that φ_{st} = ψ(s) X^1_{st} + O( ω(s,t)^{2/p} ). Since ψ(t) = g(y(t)) where g(y) := f'(y) f(y) is smooth, it follows that ψ_{st} = O( |y_{st}| ) = O( ω(s,t)^{1/p} ), as required.

9.1 Local Existence and Uniqueness

Let f : U → L(V, U) be a smooth function. We wish to consider the rough path ODE

  dy = f(y) dX  with  y(0) = y_0.   (9.4)

As usual we should interpret this as an integral equation,

  y_t = y_0 + ∫_0^t f(y) dX,   (9.5)

by which we really mean

  y_t = y_0 + ∫_0^t f(y, y') dX = y_0 + ∫_0^t [ f(y) dX^1 + f'(y) y' dX^2 ],   (9.6)

where

  y' = f(y).   (9.7)
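The heuristic derivation above already suggests a numerical scheme: step the solution forward by the local expansion y_t ≈ y_s + f(y_s) X^1_{st} + f'(y_s) f(y_s) X^2_{st}. The sketch below is our own illustration (everything is scalar, the driver is smooth, and the closed-form solution used for comparison assumes this particular vector field); it is only meant to show how the two levels of X enter Eq. (9.6), not to serve as a production solver.

    import numpy as np

    def rough_euler(y0, f, fp, X1, X2):
        """One-dimensional second-order step scheme for dy = f(y) dX:
        y_{k+1} = y_k + f(y_k) X1_k + f'(y_k) f(y_k) X2_k."""
        y = [y0]
        for a, A in zip(X1, X2):
            yk = y[-1]
            y.append(yk + f(yk) * a + fp(yk) * f(yk) * A)
        return np.array(y)

    # Smooth driver x(t) = sin t with its canonical lift on each grid interval.
    t = np.linspace(0.0, 1.0, 201)
    x = np.sin(t)
    X1 = np.diff(x)
    X2 = 0.5 * X1 ** 2            # \int_s^t (x(u) - x(s)) dx(u) for a smooth scalar path

    f = lambda y: 1.0 + y ** 2    # the vector field
    fp = lambda y: 2.0 * y

    y = rough_euler(0.0, f, fp, X1, X2)
    # For this driver dy = (1 + y^2) dx means y(t) = tan(x(t) - x(0)) when y(0) = 0.
    print(abs(y[-1] - np.tan(x[-1] - x[0])))    # small discretization error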

Notation 9.2 In what follows, Y := (y, y') and Ỹ := (ỹ, ỹ') will denote elements of D(U), and we write f(Y) for the pair ( f(y), f'(y) y' ).

Theorem 9.3 (Global Uniqueness). Assuming that f is C^3, there is at most one solution to Eq. (9.6).

Proof. Suppose Y = (y, y') and Ỹ = (ỹ, ỹ') are two solutions to Eq. (9.6). By Eq. (8.57) of Theorem 8.20,

  ‖ f(Y) − f(Ỹ) ‖_2 ≤ C ‖ Y − Ỹ ‖_2,

where

  C = C( ω(0,T)^{1/p}, ‖f‖_{C^3( B(0, ‖Y‖_∞ ∨ ‖Ỹ‖_∞) )}, ‖Y‖_2, ‖Ỹ‖_2, ‖X‖ ).

Using this estimate along with Theorem 8.14 we find

  ‖ Y − Ỹ ‖_2 = ‖ ∫ [ f(Y) − f(Ỹ) ] dX ‖_2 ≤ C_p(‖X‖) ω(0,T)^{1/p} ‖ f(Y) − f(Ỹ) ‖_2
             ≤ C C_p(‖X‖) ω(0,T)^{1/p} ‖ Y − Ỹ ‖_2.

Hence on any interval [0, t] such that C C_p(‖X‖) ω(0,t)^{1/p} < 1 it follows that Y = Ỹ. Let τ := inf { t > 0 : Y_t ≠ Ỹ_t }. By continuity of Y and Ỹ it follows that Y_τ = Ỹ_τ. If τ < T, the above argument applied on the shifted interval [τ, τ + t] shows that Y_s = Ỹ_s for all s ≤ τ + t, which contradicts the definition of τ unless τ = T.

When f ≡ α ∈ L(V, U) is constant, the solution to Eq. (9.4), or more precisely Eq. (9.6), is

  y_t = y_0 + α X_t := y_0 + α X^1_{0,t}.

We will solve the general case as a perturbation of the solution of the constant case with α := f(y_0). So let

  y_t = y_0 + α X_t + z_t,

where Z = (z, z') ∈ D(U, ω) with z_0 = 0 and z'_0 = 0, and let

  g(y) := f(y_0 + y) − f(y_0).

More precisely, using Eq. (9.6), Z must satisfy

  y_0 + α X_t + z_t = y_0 + ∫_0^t [ f(y) dX^1 + f'(y) y' dX^2 ],

or equivalently

  z_t = ∫_0^t [ f(y) dX^1 + f'(y) y' dX^2 ] − α X_t
      = ∫_0^t ( [ f(y_0 + α X + z) − f(y_0) ] dX^1 + f'(y_0 + α X + z)( α + z' ) dX^2 )
      = ∫_0^t ( g(α X + z) dX^1 + g'(α X + z)( α + z' ) dX^2 )
      = ∫_0^t g( α X + z, α + z' ) dX.

Theorem 9.4 (Local Existence and Uniqueness). Assuming that f is C^3 and T is sufficiently small, Eq. (9.6) has a unique solution. If we further assume that f and its derivatives to order three are bounded, then Eq. (9.6) has a solution defined for 0 ≤ t ≤ T. (BRUCE: Ideally, the assumption that f is bounded should be dropped from the hypothesis.)
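The proof below constructs the solution as the fixed point of the map Z ↦ ∫ g(α X + z, α + z') dX on a short time interval. At the level of a time grid the same contraction can be watched directly by iterating the integral map of Eq. (9.6) itself (Picard iteration). The sketch below is our own caricature of that Banach-space argument: it reuses the scalar smooth driver from the earlier illustrations and realizes the integral as a cumulative level-one compensated sum, so it only illustrates the mechanism, not the norms in which the genuine contraction takes place.

    import numpy as np

    def picard_step(y, f, fp, X1, X2, y0):
        """One application of y -> y0 + \int_0^t [f(y) dX^1 + f'(y) f(y) dX^2],
        with the integral realized as a cumulative compensated sum on the grid."""
        incr = f(y[:-1]) * X1 + fp(y[:-1]) * f(y[:-1]) * X2
        return y0 + np.concatenate([[0.0], np.cumsum(incr)])

    t = np.linspace(0.0, 0.5, 201)     # a short interval, as in the contraction argument
    x = np.sin(t)
    X1 = np.diff(x)
    X2 = 0.5 * X1 ** 2

    f = lambda y: 1.0 + y ** 2
    fp = lambda y: 2.0 * y
    y0 = 0.0

    y = np.full_like(t, y0)            # start the iteration from the constant path
    for k in range(8):
        y_new = picard_step(y, f, fp, X1, X2, y0)
        print(k, np.max(np.abs(y_new - y)))   # successive differences decay geometrically
        y = y_new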

Proof. Let Z := (z, z') and define

  Φ(Z) := ∫ g( α X + z, α + z' ) dX, viewed as the controlled pair ( ∫ g( α X + z, α + z' ) dX, g(α X + z) ),

where, as usual,

  g( α X + z, α + z' ) := ( g(α X + z), g'(α X + z)( α + z' ) ),

which at t = 0 is given by ( g(0), g'(0) α ) = ( 0, g'(0) α ).

From Eq. (8.26) we have, with C_p(‖X‖) = 1 + K_p ‖X‖,

  ‖ Φ(Z) ‖_2 ≤ | g(0) | + ‖X‖ | g'(0) α | + C_p(‖X‖) ω(0,T)^{1/p} ‖ g(α X + Z) ‖_2
            = ‖X‖ | f'(y_0) f(y_0) | + C_p(‖X‖) ω(0,T)^{1/p} ‖ g(α X + Z) ‖_2.   (9.8)

Moreover, from Theorem 8.20 (using g( (α X + Z)_0 ) = g(0) = 0) we have

  ‖ g(α X + Z) ‖_2 ≤ ‖g'‖_{∞, αX+Z} ‖ α X + Z ‖_2 + C( ω(0,T)^{1/p} ) ( 1 + ‖X‖^2 ) ‖g''‖_{∞, αX+Z} ‖ α X + Z ‖_2^2.   (9.9)

Let us now assume that

  ‖f'‖_∞ + ‖f''‖_∞ + ‖f'''‖_∞ ≤ M < ∞;

then, upon noting that

  ‖ α X + Z ‖_2 ≤ ‖ α X ‖_2 + ‖Z‖_2 ≤ | f(y_0) | + ‖Z‖_2,

it follows from Eq. (9.9) that

  ‖ g(α X + Z) ‖_2 ≤ M [ | f(y_0) | + ‖Z‖_2 ] + 2 C( ω(0,T)^{1/p} ) ( 1 + ‖X‖^2 ) M^2 [ | f(y_0) | + ‖Z‖_2 ]^2.   (9.10)

Combining this with Eq. (9.8) shows

  ‖ Φ(Z) ‖_2 ≤ ‖X‖ | f'(y_0) f(y_0) |
               + C_p(‖X‖) ω(0,T)^{1/p} { M [ | f(y_0) | + ‖Z‖_2 ] + 2 C( ω(0,T)^{1/p} ) ( 1 + ‖X‖^2 ) M^2 [ | f(y_0) | + ‖Z‖_2 ]^2 }.

Letting N := 2 ‖X‖ | f'(y_0) f(y_0) | and making the assumption that ‖Z‖_2 ≤ N, we find

  ‖ Φ(Z) ‖_2 ≤ (1/2) N + C_p(‖X‖) ω(0,T)^{1/p} { M [ | f(y_0) | + N ] + 2 C( ω(0,T)^{1/p} ) ( 1 + ‖X‖^2 ) M^2 [ | f(y_0) | + N ]^2 },   (9.11)

from which it follows that ‖ Φ(Z) ‖_2 ≤ N provided we choose T sufficiently small so that

  C_p(‖X‖) ω(0,T)^{1/p} { M [ | f(y_0) | + N ] + 2 C( ω(0,T)^{1/p} ) ( 1 + ‖X‖^2 ) M^2 [ | f(y_0) | + N ]^2 } ≤ (1/2) N.

(One should keep in mind that, by scaling α, if it is desirable we may assume that N = 2 ‖X‖ | f'(y_0) f(y_0) | = 1.) In summary, if

  F := { Z ∈ D(U, ω) : Z_0 = 0, z'_0 = 0, and ‖Z‖_2 ≤ N }

and T is sufficiently small, then Φ(F) ⊂ F.

Suppose Z, Z̃ ∈ F ⊂ D(U, ω). Then

  Φ(Z) − Φ(Z̃) = ∫ [ g(α X + Z) − g(α X + Z̃) ] dX,

and hence, by Theorem 8.14,

  ‖ Φ(Z) − Φ(Z̃) ‖_2 ≤ | g(0) − g(0) | + ‖X‖ | g'(0)α − g'(0)α | + C_p(‖X‖) ω(0,T)^{1/p} ‖ g(α X + Z) − g(α X + Z̃) ‖_2
                     = C_p(‖X‖) ω(0,T)^{1/p} ‖ g(α X + Z) − g(α X + Z̃) ‖_2.

It then follows by an application of Eq. (8.57) of Theorem 8.20 that

  ‖ g(α X + Z) − g(α X + Z̃) ‖_2 ≤ | g(0) − g(0) | + C( M, N, ω(0,T)^{1/p}, ‖X‖ ) ‖ (α X + Z) − (α X + Z̃) ‖_2
                                 = C( M, N, ω(0,T)^{1/p}, ‖X‖ ) ‖ Z − Z̃ ‖_2.

Combining the last two displayed equations then shows

  ‖ Φ(Z) − Φ(Z̃) ‖_2 ≤ C_p(‖X‖) C( M, N, ω(0,T)^{1/p}, ‖X‖ ) ω(0,T)^{1/p} ‖ Z − Z̃ ‖_2.

By shrinking T some more if necessary, we may assume

  C_p(‖X‖) C( M, N, ω(0,T)^{1/p}, ‖X‖ ) ω(0,T)^{1/p} < 1.   (9.12)

With this choice of T it then follows that Φ|_F : F → F is a contraction. Since F is a closed subset of a Banach space, an application of the contraction mapping principle implies there exists a unique Z ∈ F such that Φ(Z) = Z.

The desired solution to Eq. (9.4) is then

  y_t := y_0 + f(y_0) X_t + z_t.

For the last assertion, notice that when f and its derivatives are assumed to be bounded, then for Eqs. (9.11) and (9.12) to be valid we need only choose T such that ω(0, T) ≤ δ, with δ > 0 a constant independent of y_0. Thus, if we want to solve the equation on a given interval, we may choose a partition Π = {0 = t_0 < t_1 < ⋯ < t_r = T} such that ω(t_{l-1}, t_l) ≤ δ for all l, and then inductively construct the solution on [0, t_l] for l = 1, 2, …, r.

9.2 A priori Bounds

Theorem 9.5 (A priori Bounds). Assume that f is C^2 and that f is bounded with bounded derivatives to order two, and suppose y(t) solves Eq. (9.6). Then there exist constants C and δ > 0, depending only on M_i := ‖f^{(i)}‖_∞ for i = 0, 1, 2, such that

  | y_t | ≤ | y_0 | + C ( ω(0,T)/δ^p + 1 )  for all 0 ≤ t ≤ T.

Proof. Throughout the proof we assume, as we may after enlarging ω, that |X^i_{st}| ≤ ω(s,t)^{i/p} for all (s, t) ∈ Δ. Let φ_s := f(y_s),

  W_{st} := f(y_s) X^1_{st} + f'(y_s) φ_s X^2_{st},

and write Y_{st} := y_t − y_s = φ_s X^1_{st} + Δ_{st}. Then for s < u < t we have

  W_{st} − W_{su} − W_{ut}
    = [ f(y_s) − f(y_u) ] X^1_{ut} + f'(y_s) φ_s X^1_{su} ⊗ X^1_{ut} + [ f'(y_s) φ_s − f'(y_u) φ_u ] X^2_{ut}
    =: A + B,

where A := [ f(y_s) − f(y_u) ] X^1_{ut} + f'(y_s) φ_s X^1_{su} ⊗ X^1_{ut} and B := [ f'(y_s) φ_s − f'(y_u) φ_u ] X^2_{ut}. Since

  f(y_u) − f(y_s) = ∫_0^1 f'( y_s + τ Y_{su} ) dτ · Y_{su} = ∫_0^1 f'( y_s + τ Y_{su} ) dτ · ( φ_s X^1_{su} + Δ_{su} ),

we see that

  A = ∫_0^1 [ f'(y_s) − f'( y_s + τ Y_{su} ) ] dτ · φ_s X^1_{su} ⊗ X^1_{ut} − ∫_0^1 f'( y_s + τ Y_{su} ) dτ · Δ_{su} ⊗ X^1_{ut},

and hence that

  | A | ≤ M_2 | Y_{su} | | φ_s | | X^1_{su} | | X^1_{ut} | + M_1 | Δ_{su} | | X^1_{ut} |
       ≤ [ M_2 ‖φ‖_∞ N_1(Y) + M_1 N_2(Δ) ] ω(s,t)^{3/p}.

Furthermore,

  B = [ f'(y_s) − f'(y_u) ] φ_s X^2_{ut} + f'(y_u) [ φ_s − φ_u ] X^2_{ut},

and thus

  | B | ≤ ( M_2 | Y_{su} | | φ_s | + M_1 | φ_{su} | ) | X^2_{ut} | ≤ ( M_2 ‖φ‖_∞ N_1(Y) + M_1 N_1(φ) ) ω(s,t)^{3/p}.

Assembling these estimates gives

  | W_{st} − W_{su} − W_{ut} | ≤ | A | + | B | ≤ [ 2 M_2 ‖φ‖_∞ N_1(Y) + M_1 ( N_2(Δ) + N_1(φ) ) ] ω(s,t)^{3/p},

which allows us to conclude, as in Theorem 8.3, that

  | Y_{st} − W_{st} | ≤ K(3/p) [ 2 M_2 ‖φ‖_∞ N_1(Y) + M_1 ( N_2(Δ) + N_1(φ) ) ] ω(s,t)^{3/p}.

Since φ_s = f(y_s) we have ‖φ‖_∞ ≤ M_0,

  | φ_{st} | = | f(y_t) − f(y_s) | ≤ M_1 | Y_{st} | ≤ M_1 N_1(Y) ω(s,t)^{1/p},

and hence N_1(φ) ≤ M_1 N_1(Y).

Therefore,

  | Y_{st} − W_{st} | ≤ K(3/p) [ 2 M_2 M_0 N_1(Y) + M_1 ( N_2(Δ) + M_1 N_1(Y) ) ] ω(s,t)^{3/p},

and this then implies that

  | Y_{st} | ≤ | W_{st} | + | Y_{st} − W_{st} | ≤ | f(y_s) X^1_{st} | + | f'(y_s) φ_s X^2_{st} | + | Y_{st} − W_{st} |
            ≤ M_0 ω(s,t)^{1/p} + M_1 M_0 ω(s,t)^{2/p} + K(3/p) [ ( 2 M_2 M_0 + M_1^2 ) N_1(Y) + M_1 N_2(Δ) ] ω(s,t)^{3/p},

and therefore that

  N_1(Y) ≤ M_0 + M_1 M_0 ω(0,T)^{1/p} + K(3/p) [ ( 2 M_2 M_0 + M_1^2 ) N_1(Y) + M_1 N_2(Δ) ] ω(0,T)^{2/p}.

Moreover we also have

  | Δ_{st} | = | Y_{st} − f(y_s) X^1_{st} | ≤ | Y_{st} − W_{st} | + | W_{st} − f(y_s) X^1_{st} |
             = | Y_{st} − W_{st} | + | f'(y_s) φ_s X^2_{st} | ≤ | Y_{st} − W_{st} | + M_1 M_0 ω(s,t)^{2/p},

so that

  N_2(Δ) ≤ M_0 M_1 + K(3/p) [ ( 2 M_2 M_0 + M_1^2 ) N_1(Y) + M_1 N_2(Δ) ] ω(0,T)^{1/p}.

Adding these two estimates gives

  N_1(Y) + N_2(Δ) ≤ M_0 + M_1 M_0 ω(0,T)^{1/p} + M_0 M_1 + γ [ N_1(Y) + N_2(Δ) ],

where

  γ := K(3/p) max( 2 M_2 M_0 + M_1^2, M_1 ) [ ω(0,T)^{1/p} + ω(0,T)^{2/p} ].

Hence, setting

  δ := min( 1 / ( 4 K(3/p) ( 2 M_2 M_0 + M_1^2 ) ), 1/2 )

and assuming max( ω(0,T)^{1/p}, ω(0,T)^{2/p} ) ≤ δ (since δ < 1 the condition is really ω(0,T) ≤ δ^p), and assuming also, as we may, that M_0 is large enough that 2 M_2 M_0 + M_1^2 ≥ M_1, we get γ ≤ 1/2 and hence

  N_1(Y) + N_2(Δ) ≤ 2 M_0 ( 1 + M_1 δ + M_1 ) =: β.

In particular, this gives a bound of the form

  | y_t | ≤ | y_0 | + N_1(Y) ω(0,t)^{1/p} ≤ | y_0 | + β δ  whenever ω(0,t) ≤ δ^p.

To handle a general horizon, choose 0 = t_0 < t_1 < ⋯ < t_n < t_{n+1} = T such that ω(t_{l-1}, t_l) = δ^p for l = 1, 2, …, n and ω(t_n, T) ≤ δ^p. Restarting the above estimate on each block, it then follows that

  | y_t | ≤ | y_0 | + β δ            for t ∈ [t_0, t_1],
  | y_t | ≤ | y_0 | + 2 β δ          for t ∈ [t_1, t_2],
  …
  | y_t | ≤ | y_0 | + n β δ          for t ∈ [t_{n-1}, t_n], and
  | y_t | ≤ | y_0 | + n β δ + β ω(t_n, T)^{1/p}   for t ∈ [t_n, T].

Since, by superadditivity of the control,

  n δ^p = Σ_{l=1}^{n} ω(t_{l-1}, t_l) ≤ ω(0, T) − ω(t_n, T),
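Both the patching step in the proof of Theorem 9.4 and the block decomposition just used rest on the same bookkeeping device: chop [0, T] into blocks on which the control is at most δ^p and then add up the local estimates. A sketch of that greedy chopping (ours; omega may be any continuous superadditive control, here simply ω(s, t) = t − s):

    import numpy as np

    def greedy_partition(omega, T, eps_p, tol=1e-10):
        """Return 0 = t_0 < t_1 < ... < t_r = T with omega(t_{l-1}, t_l) <= eps_p,
        choosing each t_l as large as possible by bisection (using that
        s -> omega(t_{l-1}, s) is nondecreasing and continuous)."""
        pts = [0.0]
        while pts[-1] < T:
            if omega(pts[-1], T) <= eps_p:
                pts.append(T)
                break
            lo, hi = pts[-1], T
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if omega(pts[-1], mid) <= eps_p:
                    lo = mid
                else:
                    hi = mid
            pts.append(lo)
        return pts

    omega = lambda s, t: t - s        # the control of a Lipschitz / bounded-variation driver
    pts = greedy_partition(omega, T=1.0, eps_p=0.13)
    print(len(pts) - 1, pts)          # roughly omega(0, T)/eps_p blocks, as in the proofs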

we have n ≤ [ ω(0,T) − ω(t_n, T) ] δ^{-p}, and therefore we have proven

  | y_t | ≤ | y_0 | + β [ ω(0,T) − ω(t_n, T) ] δ^{1-p} + β ω(t_n, T)^{1/p}  for all 0 ≤ t ≤ T.   (9.14)

Since

  ω(t_n, T)^{1/p} = ω(t_n, T) ω(t_n, T)^{(1-p)/p} ≤ ω(t_n, T) ( δ^p )^{(1-p)/p} = ω(t_n, T) δ^{1-p},

we may conclude that

  | y_t | ≤ | y_0 | + C ω(0, T)  for all 0 ≤ t ≤ T,   (9.16)

where

  C := β δ^{1-p} = 2 M_0 ( 1 + M_1 δ + M_1 ) [ 4 K(3/p) ( 2 M_2 M_0 + M_1^2 ) ]^{p-1}.   (9.15)

For M_0 large we have the approximate estimate

  | y_t | ≲ | y_0 | + 2 ( 1 + M_1 ) [ 8 K(3/p) M_2 ]^{p-1} M_0^p ω(0, T).

Remark 9.6. From Eqs. (9.16) and (9.15), in the case that f is linear (so that f'' ≡ 0) we have M_2 = 0, and in this case we learn that

  C = 2 M_0 ( 1 + M_1 / ( 4 K(3/p) M_1^2 ) + M_1 ) [ 4 K(3/p) M_1^2 ]^{p-1} =: K̄(M_1, p) M_0.

To make use of this sort of nonsensical statement (nonsensical, since M_0 = ∞ if f is linear) we must restrict our attention to solutions in a big open ball. STOP

In more detail, let T be the first exit time of y(t) from B(y_0, η) for some cutoff η > 0. In this case we will have, for y ∈ B(y_0, η),

  | f(y) | ≤ M_1 ( | y_0 | + η ),

so that M_0 ≤ M_1 ( | y_0 | + η ). Using this estimate back in Eq. (9.16) implies that

  | y_t | ≤ | y_0 | + 2 M_1 ( | y_0 | + η ) ( 1 + M_1 / ( 4 K(3/p) M_1^2 ) + M_1 ) [ 4 K(3/p) M_1^2 ]^{p-1} ω(0, T)   (9.17)
          ≤ | y_0 | + C_1(p, M_1) ( | y_0 | + η ) ω(0, T),   (9.18)

and hence, up to the exit time of B(y_0, η), we get a bound of the form

  | y_t | ≤ | y_0 | + K'(M_1, p) ( | y_0 | + η ) ω(0, T)  for all 0 ≤ t ≤ T.

10
Some Open Problems

Problem 10.1 (Measure theoretic approach). Is there a more measure-theoretic approach to rough path theory? I would guess this would require a reasonable notion of simple functions. Also, can one get rid of the continuity assumptions? (I have not done a literature search on this point, so there probably is some work in this direction already.)

Problem 10.2 (Non explosion criteria). It is shown in Friz and Victoir [6, Exercise 10.61 on p. 259] that if dy = f(y) dx, x is a geometric p-variation rough path on R^d and f has bounded derivatives to sufficiently high order, then the equation has solutions for all time. It is reasonable to ask the following questions:

1. What happens for non-geometric rough paths?
2. What happens in infinite dimensions, i.e. when d → ∞?
3. What are other sufficient conditions for non-explosion?
4. Can one find necessary conditions as well?
Theorem 10.3. Let p ∈ [1, ∞) and n ∈ Z_+ be such that n − 1 ≤ p < n. Suppose that X : Δ → G^{(n)}(V) is a p-rough path. Then for any m ≥ n there is a unique extension of X to a p-rough path X̄ : Δ → G^{(m)}(V). In particular, we may extend X to a p-rough path

  X̄ = Σ_{k=0}^{∞} X̄^{(k)} : Δ → G^{(∞)}(V) := lim_{m→∞} G^{(m)}(V).

Definition 10.4. Let p ∈ [1, ∞) and n ∈ Z_+ be such that n − 1 ≤ p < n. Suppose that X : Δ → G^{(n)}(V) is a p-rough path. The signature of X is defined by

  sgn(X) := X̄_{0,T} ∈ G^{(∞)}(V).

Problem 10.5. How much of X can be recovered from sgn(X)? When X is of finite variation, Hambly and Lyons [10] give a detailed answer to this question.

References

1. A. M. Davie, Differential equations driven by rough paths: an approach via discrete approximation, Appl. Math. Res. Express. AMRX (2007), no. 2, Art. ID abm009, 40. MR MR2387018
2. R. M. Dudley and R. Norvaiša, An introduction to p-variation and Young integrals with emphasis on sample functions of stochastic processes, Lecture given at the Centre for Mathematical Physics and Stochastics, Department of Mathematical Sciences, University of Aarhus, 1998.
3. Denis Feyel and Arnaud de La Pradelle, Curvilinear integrals along enriched paths, Electron. J. Probab. 11 (2006), no. 34, 860–892 (electronic). MR MR2261056 (2007k:60112)
4. Franco Flandoli, Massimiliano Gubinelli, Mariano Giaquinta, and Vincenzo M. Tortorelli, Stochastic currents, Stochastic Process. Appl. 115 (2005), no. 9, 1583–1601. MR MR2158261 (2006e:60073)
5. Peter Friz and Nicolas Victoir, A note on the notion of geometric rough paths, Probab. Theory Related Fields 136 (2006), no. 3, 395–416. MR MR2257130 (2007k:60114)
6. ______, Multidimensional stochastic processes as rough paths: Theory and applications, Springer ??, 2008.
7. M. Gubinelli, Controlling rough paths, J. Funct. Anal. 216 (2004), no. 1, 86–140. MR MR2091358 (2005k:60169)
8. M. Gubinelli, Ramification of rough paths, 2006.
9. Massimiliano Gubinelli, Antoine Lejay, and Samy Tindel, Young integrals and SPDEs, Potential Anal. 25 (2006), no. 4, 307–326. MR MR2255351 (2007k:60182)
10. B. M. Hambly and T. J. Lyons, Uniqueness for the signature of a path of bounded variation and the reduced path group, Ann. Math. ?? (2009), no. 1, 50.
11. Antoine Lejay, On rough differential equations, Electron. J. Probab. 14 (2009), no. 12, 341–364 (electronic).
12. Terry Lyons and Zhongmin Qian, System Control and Rough Paths, Oxford University Press, 2002, Oxford Mathematical Monographs.
13. ______, System control and rough paths, Oxford Mathematical Monographs, Oxford University Press, Oxford, 2002, Oxford Science Publications. MR MR2036784 (2005f:93001)
14. Terry Lyons and Nicolas Victoir, An extension theorem to rough paths, Ann. Inst. H. Poincaré Anal. Non Linéaire 24 (2007), no. 5, 835–847. MR MR2348055 (2008h:60229)
15. Terry J. Lyons, Michael Caruana, and Thierry Lévy, Differential equations driven by rough paths, Lecture Notes in Mathematics, vol. 1908, Springer, Berlin, 2007, Lectures from the 34th Summer School on Probability Theory held in Saint-Flour, July 6–24, 2004, With an introduction concerning the Summer School by Jean Picard. MR MR2314753
16. Richard Montgomery, A tour of subriemannian geometries, their geodesics and applications, Mathematical Surveys and Monographs, vol. 91, American Mathematical Society, Providence, RI, 2002. MR MR1867362 (2002m:53045)
17. Peter Mörters and Yuval Peres, with an appendix by Oded Schramm and Wendelin Werner, Brownian motion, http://people.bath.ac.uk/maspm/book.pdf, 2006.
18. L. C. Young, An inequality of Hölder type connected with Stieltjes integration, Acta Math. (1936), no. 67, 251–282.
