
Chapter 7

Local Gradients on the Poisson Space


We study a class of local gradient operators on the Poisson space that have
the derivation property. This allows us to give another example of a gradient
operator that satisfies the hypotheses of Chapter 3, this time for a
discontinuous process. In particular we obtain an anticipative extension of
the compensated Poisson stochastic integral and other expressions for the
Clark predictable representation formula. The fact that the gradient operator
satisfies the chain rule of derivation has important consequences for
deviation inequalities, computation of chaos expansions, characterizations of
Poisson measures, and sensitivity analysis. It also leads to the definition of
an infinite dimensional geometry under Poisson measures.
7.1 Intrinsic Gradient on Configuration Spaces
Let $X$ be a Riemannian manifold with volume element $\sigma$, cf. e.g. [14]. We
denote by $T_xX$ the tangent space at $x \in X$, and let
$$TX = \bigcup_{x \in X} T_x X$$
denote the tangent bundle of $X$. Assume we are given a differential operator
$L$ defined on $\mathcal{C}^1_c(X)$ with adjoint $L^*$, satisfying the duality relation
$$\langle Lu, V \rangle_{L^2(X,\sigma;TX)} = \langle u, L^* V \rangle_{L^2(X,\sigma)},
\qquad u \in \mathcal{C}^1_c(X), \quad V \in \mathcal{C}^1_c(X,TX).$$
In the sequel, $L$ will be mainly chosen equal to the gradient $\nabla^X$ on $X$.
We work on the Poisson probability space $(\Omega^X, \mathcal{F}^X, \pi_\sigma)$ introduced in
Definition 6.1.2.
Definition 7.1.1. Given $\Lambda$ a compact subset of $X$, we let $\mathcal{S}$ denote the set
of functionals $F$ of the form
$$F(\omega) = f_0 \mathbf{1}_{\{\omega(\Lambda)=0\}}
+ \sum_{n=1}^\infty \mathbf{1}_{\{\omega(\Lambda)=n\}} f_n(x_1, \ldots, x_n), \qquad (7.1.1)$$
N. Privault, Stochastic Analysis in Discrete and Continuous Settings,
Lecture Notes in Mathematics 1982, DOI 10.1007/978-3-642-02380-4_7,
© Springer-Verlag Berlin Heidelberg 2009
where $f_n \in \mathcal{C}^1_c(\Lambda^n)$ is symmetric in $n$ variables, $n \ge 1$, with the notation
$$\omega \cap \Lambda = \{x_1, \ldots, x_n\} \quad \text{when } \omega(\Lambda) = n,
\qquad \omega \in \Omega^X.$$
In the next definition the differential operator $L$ on $X$ is lifted to a
differential operator $\widehat{D}^L$ on $\Omega^X$.

Definition 7.1.2. The intrinsic gradient $\widehat{D}^L$ is defined on $F \in \mathcal{S}$ of the form
(7.1.1) as
$$\widehat{D}^L_x F(\omega) = \sum_{n=1}^\infty \mathbf{1}_{\{\omega(\Lambda)=n\}}
\sum_{i=1}^n L_{x_i} f_n(x_1,\ldots,x_n)\, \mathbf{1}_{\{x_i\}}(x),
\qquad \omega(dx)\text{-a.e.}, \quad \omega \in \Omega^X.$$
In other words, if $\omega(\Lambda) = n$ and $\omega \cap \Lambda = \{x_1,\ldots,x_n\}$ we have
$$\widehat{D}^L_x F = \begin{cases}
L_{x_i} f_n(x_1,\ldots,x_n), & \text{if } x = x_i \text{ for some } i \in \{1,\ldots,n\},\\
0, & \text{if } x \notin \{x_1,\ldots,x_n\}.
\end{cases}$$
Let $\mathcal{S}$ also denote the space of cylindrical functionals of the form
$$\mathcal{S} = \left\{ f\left( \int_X \varphi_1(x)\,\omega(dx), \ldots, \int_X \varphi_n(x)\,\omega(dx) \right)
: \varphi_1,\ldots,\varphi_n \in \mathcal{C}^\infty_c(X), \ f \in \mathcal{C}^\infty_b(\mathbb{R}^n), \ n \in \mathbb{N} \right\},$$
and let
$$\mathcal{U} = \left\{ \sum_{i=1}^n F_i u_i : u_1,\ldots,u_n \in \mathcal{C}^\infty_c(X),
\ F_1,\ldots,F_n \in \mathcal{S}, \ n \ge 1 \right\}.$$
Note that for $F \in \mathcal{S}$ of the form
$$F = f\left( \int_X \varphi_1\,d\omega, \ldots, \int_X \varphi_n\,d\omega \right),
\qquad \varphi_1,\ldots,\varphi_n \in \mathcal{C}^\infty_c(X),$$
we have
$$\widehat{D}^L_x F(\omega) = \sum_{i=1}^n \partial_i f\left( \int_X \varphi_1\,d\omega,
\ldots, \int_X \varphi_n\,d\omega \right) L_x \varphi_i(x), \qquad x \in \omega.$$
The following result is the integration by parts formula satisfied by $\widehat{D}^L$.

Proposition 7.1.3. For $F \in \mathcal{S}$ and $V \in \mathcal{C}^1_c(X, TX)$ we have
$$\mathrm{IE}\left[ \langle \widehat{D}^L F, V \rangle_{L^2(X,d\omega;TX)} \right]
= \mathrm{IE}\left[ F \int_X L^* V(x)\,\omega(dx) \right].$$

Proof. We have
$$\begin{aligned}
\mathrm{IE}\left[ \langle \widehat{D}^L F, V \rangle_{L^2(X,d\omega;TX)} \right]
&= \sum_{n=1}^\infty \mathrm{IE}\left[ \mathbf{1}_{\{\omega(\Lambda)=n\}}
\sum_{i=1}^n \langle \widehat{D}^L_{x_i} F, V(x_i) \rangle_{TX} \right] \\
&= e^{-\sigma(\Lambda)} \sum_{n=1}^\infty \frac{\sigma(\Lambda)^n}{n!} \sum_{i=1}^n
\int_{\Lambda^n} \langle L_{x_i} f_n(x_1,\ldots,x_n), V(x_i) \rangle_{TX}
\,\frac{\sigma(dx_1)}{\sigma(\Lambda)} \cdots \frac{\sigma(dx_n)}{\sigma(\Lambda)} \\
&= e^{-\sigma(\Lambda)} \sum_{n=1}^\infty \frac{1}{n!} \sum_{i=1}^n
\int_{\Lambda^n} f_n(x_1,\ldots,x_n)\, L^*_{x_i} V(x_i)\,\sigma(dx_1)\cdots\sigma(dx_n) \\
&= e^{-\sigma(\Lambda)} \sum_{n=1}^\infty \frac{1}{n!}
\int_{\Lambda^n} f_n(x_1,\ldots,x_n) \sum_{i=1}^n L^*_{x_i} V(x_i)\,\sigma(dx_1)\cdots\sigma(dx_n) \\
&= \mathrm{IE}\left[ F \int_X L^* V(x)\,\omega(dx) \right]. \qquad \square
\end{aligned}$$
In particular, when $L = \nabla^X$ is the gradient on $X$ we write $\widehat{D}$ instead of
$\widehat{D}^{\nabla^X}$ and obtain the following integration by parts formula:
$$\mathrm{IE}\left[ \langle \widehat{D} F, V \rangle_{L^2(X,d\omega;TX)} \right]
= \mathrm{IE}\left[ F \int_X \mathrm{div}^X V(x)\,\omega(dx) \right], \qquad (7.1.2)$$
provided $\nabla^X$ and $\mathrm{div}^X$ satisfy the duality relation
$$\langle \nabla^X u, V \rangle_{L^2(X,\sigma;TX)} = \langle u, \mathrm{div}^X V \rangle_{L^2(X,\sigma)},
\qquad u \in \mathcal{C}^1_c(X), \quad V \in \mathcal{C}^1_c(X, TX).$$
The next result provides a relation between the gradient $\nabla^X$ on $X$ and its
lifting $\widehat{D}$ on $\Omega^X$, using the operators of Definition 6.4.5.
Lemma 7.1.4. For $F \in \mathcal{S}$ we have
$$\widehat{D}_x F(\omega) = \nabla^X_x \varepsilon^+_x F(\omega \setminus \{x\})
\quad \text{on } \{(\omega, x) \in \Omega^X \times X : x \in \omega\}. \qquad (7.1.3)$$

Proof. Let
$$F = f\left( \int_X \varphi_1\,d\omega, \ldots, \int_X \varphi_n\,d\omega \right),
\qquad x \in X, \quad \omega \in \Omega^X,$$
and assume that $x \in \omega$. We have
$$\begin{aligned}
\widehat{D}_x F(\omega)
&= \sum_{i=1}^n \partial_i f\left( \int_X \varphi_1\,d\omega, \ldots, \int_X \varphi_n\,d\omega \right)
\nabla^X \varphi_i(x) \\
&= \sum_{i=1}^n \partial_i f\left( \varphi_1(x) + \int_X \varphi_1\,d(\omega\setminus\{x\}),
\ldots, \varphi_n(x) + \int_X \varphi_n\,d(\omega\setminus\{x\}) \right) \nabla^X \varphi_i(x) \\
&= \nabla^X_x f\left( \varphi_1(x) + \int_X \varphi_1\,d(\omega\setminus\{x\}),
\ldots, \varphi_n(x) + \int_X \varphi_n\,d(\omega\setminus\{x\}) \right) \\
&= \nabla^X_x \varepsilon^+_x F(\omega\setminus\{x\}). \qquad \square
\end{aligned}$$
The next proposition uses the operator $\delta^X$ defined in Definition 6.4.1.

Proposition 7.1.5. For $V \in \mathcal{C}^\infty_c(X; TX)$ and $F \in \mathcal{S}$ we have
$$\langle \widehat{D}F(\omega), V \rangle_{L^2(X,d\omega;TX)}
= \langle \nabla^X DF(\omega), V \rangle_{L^2(X,\sigma;TX)}
+ \delta^X\left( \langle \nabla^X DF, V \rangle_{TX} \right)(\omega). \qquad (7.1.4)$$

Proof. This identity follows from the relation
$$\widehat{D}_x F(\omega) = (\nabla^X_x D_x F)(\omega \setminus \{x\}), \qquad x \in \omega,$$
and the application to $u = \langle \nabla^X DF, V \rangle_{TX}$ of the relation
$$\delta^X(u)(\omega) = \int_X u(x, \omega\setminus\{x\})\,\omega(dx)
- \int_X u(x, \omega)\,\sigma(dx),$$
cf. Relation (6.5.2) in Proposition 6.5.2. $\square$

In addition, for $F, G \in \mathcal{S}$ we have the isometry
$$\langle \widehat{D}F, \widehat{D}G \rangle_{L^2_\omega(TX)}
= \langle \nabla^X \varepsilon^+ F, \nabla^X \varepsilon^+ G \rangle_{L^2_\omega(TX)}, \qquad (7.1.5)$$
$\omega \in \Omega^X$, as an application of Relation (7.1.3), which holds $\omega(dx)$-a.e. for fixed
$\omega \in \Omega^X$. Similarly, from (7.1.5) and Proposition 6.5.2 we have the relation
$$\langle \widehat{D}F, \widehat{D}G \rangle_{L^2_\omega(TX)}
= \delta^X\left( \langle \nabla^X DF, \nabla^X DG \rangle_{TX} \right)
+ \langle \nabla^X DF, \nabla^X DG \rangle_{L^2_\sigma(TX)}, \qquad (7.1.6)$$
$\omega \in \Omega^X$, $F, G \in \mathcal{S}$. Taking expectations on both sides in (7.1.4) and using
Relation (6.4.5), we recover Relation (7.1.2) in a different way:
$$\begin{aligned}
\mathrm{IE}\left[ \langle \widehat{D}F(\omega), V \rangle_{L^2(X,d\omega;TX)} \right]
&= \mathrm{IE}\left[ \langle \nabla^X DF, V \rangle_{L^2(X,\sigma;TX)} \right] \\
&= \mathrm{IE}\left[ F \int_X \mathrm{div}^X V(x)\,\omega(dx) \right],
\end{aligned}$$
$V \in \mathcal{C}^\infty_c(X; TX)$, $F \in \mathcal{S}$.
Definition 7.1.6. Let $\widehat{\delta}$ denote the adjoint of $\widehat{D}$ under $\pi_\sigma$, defined by
$$\mathrm{IE}_{\pi_\sigma}\left[ F\,\widehat{\delta}(G) \right]
= \mathrm{IE}_{\pi_\sigma}\left[ \langle \widehat{D}F, \widehat{D}G \rangle_{L^2_\omega(TX)} \right],$$
on $G \in \mathcal{S}$ such that
$$\mathcal{S} \ni F \longmapsto
\mathrm{IE}_{\pi_\sigma}\left[ \langle \widehat{D}F, \widehat{D}G \rangle_{L^2_\omega(TX)} \right]$$
extends to a bounded operator on $L^2(\Omega^X, \pi_\sigma)$.
We close this section with a remark on the integration by parts characterization
of Poisson measures, cf. Section 6.6, using the local gradient operator instead
of the finite difference operator. We now assume that $\mathrm{div}^X_\sigma$ is defined on
$\nabla^X f$ for all $f \in \mathcal{C}^\infty_c(X)$, with
$$\int_X g(x)\,\mathrm{div}^X_\sigma \nabla^X f(x)\,\sigma(dx)
= \int_X \langle \nabla^X g(x), \nabla^X f(x) \rangle_{T_x X}\,\sigma(dx),
\qquad f, g \in \mathcal{C}^1_c(X).$$
As a corollary of our pointwise lifting of gradients we obtain in particular a
characterization of the Poisson measure. Let
$$\Delta^X_\sigma = \mathrm{div}^X_\sigma \nabla^X$$
denote the Laplace-Beltrami operator on $X$.
Corollary 7.1.7. The isometry relation
$$\mathrm{IE}_\pi\left[ \langle \widehat{D}F, \widehat{D}G \rangle_{L^2_\omega(TX)} \right]
= \mathrm{IE}_\pi\left[ \langle \nabla^X DF, \nabla^X DG \rangle_{L^2_\sigma(TX)} \right],
\qquad (7.1.7)$$
$F, G \in \mathcal{S}$, holds under the Poisson measure $\pi_\sigma$ with intensity $\sigma$. Moreover,
under the condition
$$\mathcal{C}^\infty_c(X) = \left\{ \Delta^X_\sigma f : f \in \mathcal{C}^\infty_c(X) \right\},$$
Relation (7.1.7) entails $\pi = \pi_\sigma$.

Proof.
i) Relations (6.4.5) and (7.1.6) show that (7.1.7) holds when $\pi = \pi_\sigma$.
ii) If (7.1.7) is satisfied, then taking $F = I_n(u^{\otimes n})$ and $G = I_1(h)$, $h, u \in
\mathcal{C}^\infty_c(X)$, Relation (7.1.6) implies
$$\mathrm{IE}_\pi\left[ (\Delta^X_\sigma h)\,u\, I_{n-1}(u^{\otimes(n-1)}) \right]
= \mathrm{IE}_\pi\left[ \langle \nabla^X DF, \nabla^X h \rangle_{TX} \right] = 0,
\qquad n \ge 1,$$
hence $\pi = \pi_\sigma$ from Corollary 6.6.3. $\square$
We close this section with a study of the intrinsic gradient $\widehat{D}$ when $X = \mathbb{R}_+$.
Recall that the jump times of the standard Poisson process $(N_t)_{t \in \mathbb{R}_+}$ are
denoted by $(T_k)_{k \ge 1}$, with $T_0 = 0$, cf. Section 2.3. In the next definition, all
$\mathcal{C}^\infty$ functions on
$$\Delta_d = \left\{ (t_1, \ldots, t_d) \in \mathbb{R}^d_+ : 0 \le t_1 < \cdots < t_d \right\}$$
are extended by continuity to the closure of $\Delta_d$.

Definition 7.1.8. Let $\mathcal{S}$ denote the set of smooth random functionals $F$ of
the form
$$F = f(T_1, \ldots, T_d), \qquad f \in \mathcal{C}^1_b(\mathbb{R}^d_+), \quad d \ge 1. \qquad (7.1.8)$$
We have
$$\widehat{D}_t F = \sum_{k=1}^d \mathbf{1}_{\{T_k\}}(t)\,\partial_k f(T_1, \ldots, T_d),
\qquad dN_t\text{-a.e.},$$
for $F = f(T_1, \ldots, T_d)$, $f \in \mathcal{C}^\infty_b(\bar{\Delta}_d)$, where $\partial_k f$ is the partial derivative of
$f$ with respect to its $k$-th variable, $1 \le k \le d$.
Lemma 7.1.9. Let $F \in \mathcal{S}$ and $h \in \mathcal{C}^1_b(\mathbb{R}_+)$ with $h(0) = 0$. We have the
integration by parts formula
$$\mathrm{IE}\left[ \langle \widehat{D}F, h \rangle_{L^2(\mathbb{R}_+, dN_t)} \right]
= -\mathrm{IE}\left[ F \left( \sum_{k=1}^d h'(T_k) - \int_0^{T_d} h'(t)\,dt \right) \right].$$
Proof. By Relation (2.3.4), the jump times $(T_1, \ldots, T_d)$ have the joint density
$e^{-t_d}$ on the simplex $\{0 < t_1 < \cdots < t_d\}$, hence for $F \in \mathcal{S}$ of the form (7.1.8),
$$\mathrm{IE}\left[ \langle \widehat{D}F, h \rangle_{L^2(\mathbb{R}_+, dN_t)} \right]
= \sum_{k=1}^d \int_0^\infty \int_0^{t_d} \cdots \int_0^{t_2}
e^{-t_d}\, h(t_k)\,\partial_k f(t_1, \ldots, t_d)\,dt_1 \cdots dt_d.$$
Integrating by parts in each variable $t_k$ produces a term in $h'(t_k)$ together with
boundary terms in which two consecutive arguments of $f$ coincide; these boundary
terms cancel pairwise between consecutive integrations by parts, while the
integration in the last variable $t_d$ also yields a term in $h(t_d)$. This gives
$$\begin{aligned}
\mathrm{IE}\left[ \langle \widehat{D}F, h \rangle_{L^2(\mathbb{R}_+, dN_t)} \right]
&= -\sum_{k=1}^d \int_0^\infty \int_0^{t_d} \cdots \int_0^{t_2}
e^{-t_d}\, h'(t_k)\, f(t_1, \ldots, t_d)\,dt_1 \cdots dt_d \\
&\quad + \int_0^\infty \int_0^{t_d} \cdots \int_0^{t_2}
e^{-t_d}\, h(t_d)\, f(t_1, \ldots, t_d)\,dt_1 \cdots dt_d,
\end{aligned}$$
and since $h(t_d) = \int_0^{t_d} h'(t)\,dt$, this equals
$$-\mathrm{IE}\left[ F \left( \sum_{k=1}^d h'(T_k) - \int_0^{T_d} h'(t)\,dt \right) \right].
\qquad \square$$
As a consequence we have the following corollary, which directly involves the
compensated Poisson stochastic integral.

Corollary 7.1.10. Let $F \in \mathcal{S}$ and $h \in \mathcal{C}^1_b(\mathbb{R}_+)$ with $h(0) = 0$. We have the
integration by parts formula
$$\mathrm{IE}\left[ \langle \widehat{D}F, h \rangle_{L^2(\mathbb{R}_+, dN_t)} \right]
= -\mathrm{IE}\left[ F \int_0^\infty h'(t)\,d(N_t - t) \right]. \qquad (7.1.9)$$
Proof. From Lemma 7.1.9 it suffices to notice that for $k > d$, by integration by
parts in $t_k$,
$$\begin{aligned}
\mathrm{IE}[F\,h'(T_k)]
&= \int_0^\infty e^{-t_k}\, h'(t_k) \int_0^{t_k} \cdots \int_0^{t_2}
f(t_1, \ldots, t_d)\,dt_1 \cdots dt_k \\
&= \mathrm{IE}\left[ F \left( h(T_k) - h(T_{k-1}) \right) \right]
= \mathrm{IE}\left[ F \int_{T_{k-1}}^{T_k} h'(t)\,dt \right];
\end{aligned}$$
in other terms, the discrete-time process
$$\left( \sum_{k=1}^n h'(T_k) - \int_0^{T_n} h'(t)\,dt \right)_{n \ge 1}
= \left( \int_0^{T_n} h'(t)\,d(N_t - t) \right)_{n \ge 1}$$
is a martingale. Alternatively, we may also use the strong Markov property to
show directly that
$$\mathrm{IE}\left[ F \left( \sum_{k=d+1}^\infty h'(T_k)
- \int_{T_d}^\infty h'(s)\,ds \right) \right] = 0. \qquad \square$$
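Formula (7.1.9) can be illustrated by simulation. The following sketch is my own addition, not part of the text; the choices $F = e^{-T_1 - T_2}$ and $h(t) = 1 - e^{-t}$ are illustrative, and the sum over jump times is truncated where $h'$ is negligible. It compares Monte Carlo estimates of the two sides of $\mathrm{IE}[\langle \widehat{D}F, h \rangle_{L^2(dN_t)}] = -\mathrm{IE}[F \int_0^\infty h'(t)\,d(N_t - t)]$:

```python
import numpy as np

rng = np.random.default_rng(0)

h = lambda t: 1.0 - np.exp(-t)   # h(0) = 0
hp = lambda t: np.exp(-t)        # h'(t); its total integral over R_+ is 1

n_samples = 200_000
# jump times of a rate-1 Poisson process; the tail beyond T_60 contributes
# a negligible amount to sum_k h'(T_k)
taus = rng.exponential(1.0, size=(n_samples, 60))
T = np.cumsum(taus, axis=1)

F = np.exp(-T[:, 0] - T[:, 1])            # F = f(T_1, T_2), both partials equal -F
lhs = -F * (h(T[:, 0]) + h(T[:, 1]))      # <D^F, h> summed over the jumps T_1, T_2
rhs = -F * (hp(T).sum(axis=1) - 1.0)      # -F * int_0^inf h'(t) d(N_t - t)

lhs_mean, rhs_mean = lhs.mean(), rhs.mean()
```

Both estimates agree within Monte Carlo error, consistently with (7.1.9).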
By linearity the adjoint $\widehat{\delta}$ of $\widehat{D}$ is defined on simple processes $u \in \mathcal{U}$ of the
form $u = hG$, $G \in \mathcal{S}$, $h \in \mathcal{C}^1_b(\mathbb{R}_+)$ with $h(0) = 0$, from the relation
$$\widehat{\delta}(hG) = -G \int_0^\infty h'(t)\,d(N_t - t)
- \langle h, \widehat{D}G \rangle_{L^2(\mathbb{R}_+, dN_t)}.$$
Relation (7.1.9) immediately implies the following duality relation.

Proposition 7.1.11. For $F, G \in \mathcal{S}$ and $h \in \mathcal{C}^1_c(\mathbb{R}_+)$ we have
$$\mathrm{IE}\left[ \langle \widehat{D}F, hG \rangle_{L^2(\mathbb{R}_+, dN_t)} \right]
= \mathrm{IE}\left[ F\,\widehat{\delta}(hG) \right].$$
Proof. Using the derivation property of $\widehat{D}$ and Relation (7.1.9) applied to $FG$,
we have
$$\begin{aligned}
\mathrm{IE}\left[ \langle \widehat{D}F, hG \rangle_{L^2(\mathbb{R}_+, dN_t)} \right]
&= \mathrm{IE}\left[ \langle \widehat{D}(FG), h \rangle_{L^2(\mathbb{R}_+, dN_t)}
- F \langle \widehat{D}G, h \rangle_{L^2(\mathbb{R}_+, dN_t)} \right] \\
&= \mathrm{IE}\left[ F \left( -G \int_0^\infty h'(t)\,d(N_t - t)
- \langle h, \widehat{D}G \rangle_{L^2(\mathbb{R}_+, dN_t)} \right) \right] \\
&= \mathrm{IE}\left[ F\,\widehat{\delta}(hG) \right]. \qquad \square
\end{aligned}$$
7.2 Damped Gradient on the Half Line

In this section we construct an example of a gradient which has the derivation
property and which, unlike $\widehat{D}$, satisfies the duality Assumption 3.1.1 and the
Clark formula Assumption 3.2.1 of Chapter 3. Recall that the jump times of the
standard Poisson process $(N_t)_{t \in \mathbb{R}_+}$ are denoted by $(T_k)_{k \ge 1}$, with $T_0 = 0$, cf.
Section 2.3.
Let
$$r(t, s) = -(s \wedge t), \qquad s, t \in \mathbb{R}_+,$$
denote the Green function associated to the operator
$$Lf := -f'', \qquad f \in \mathcal{C}^\infty([0,\infty)),$$
with the Neumann boundary conditions $f'(0) = f'(\infty) = 0$. Let also
$$r^{(1)}(t, s) = \frac{\partial r}{\partial s}(t, s) = -\mathbf{1}_{[0,t]}(s),
\qquad s, t \in \mathbb{R}_+,$$
i.e., given $g \in \mathcal{C}^\infty([0,\infty))$,
$$f(t) = \int_0^\infty r^{(1)}(t, s)\,g(s)\,ds = -\int_0^t g(s)\,ds,
\qquad t \in \mathbb{R}_+, \qquad (7.2.1)$$
is the solution of $f' = -g$, $f(0) = 0$.
Let $\mathcal{S}$ denote the space of functionals of the form
$$\mathcal{S} = \left\{ F = f(T_1, \ldots, T_d) : f \in \mathcal{C}^1_b(\mathbb{R}^d), \ d \ge 1 \right\},$$
and let
$$\mathcal{U} = \left\{ \sum_{i=1}^n F_i u_i : u_1, \ldots, u_n \in \mathcal{C}_c(\mathbb{R}_+),
\ F_1, \ldots, F_n \in \mathcal{S}, \ n \ge 1 \right\}.$$
Definition 7.2.1. Given $F \in \mathcal{S}$ of the form $F = f(T_1, \ldots, T_d)$, we let
$$\widetilde{D}_s F = -\sum_{k=1}^d \mathbf{1}_{[0,T_k]}(s)\,\partial_k f(T_1, \ldots, T_d).$$
Note that we have
$$\widetilde{D}_s F = \sum_{k=1}^d r^{(1)}(T_k, s)\,\partial_k f(T_1, \ldots, T_d)
= \int_0^\infty r^{(1)}(t, s)\,\widehat{D}_t F\,dN_t.$$
From Proposition 2.3.6 we have the following lemma.

Lemma 7.2.2. For $F$ of the form $F = f(T_1, \ldots, T_n)$ we have
$$\begin{aligned}
\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t]
&= -\sum_{N_t < k \le n} \mathrm{IE}[\partial_k f(T_1, \ldots, T_n) \mid \mathcal{F}_t] \\
&= -\sum_{N_t < k \le n} \int_t^\infty e^{-(s_n - t)} \int_t^{s_n} \cdots \int_t^{s_{N_t+2}}
\partial_k f(T_1, \ldots, T_{N_t}, s_{N_t+1}, \ldots, s_n)\,ds_{N_t+1} \cdots ds_n.
\end{aligned}$$
7.2 Damped Gradient on the Half Line 257
According to Denition 3.2.2, ID([a, )), a > 0, denotes the completion of o
under the norm
|F|
ID([a,))
= |F|
L
2
()
+
_
IE
__

a
[

D
t
F[
2
dt
__
1/2
,
i.e. (

D
t
F)
t[a,)
is dened in L
2
([a, )) for F ID([a, )). Clearly, the
stability Assumption 3.2.10 is satised by

D since
1
[0,T
k
]
(t) = 1
{N
t
<k}
is T
t
-measurable, t R
+
, k N. Hence the following lemma holds as a con-
sequence of Proposition 3.2.11. For completeness we provide an independent
direct proof.
Lemma 7.2.3. Let T > 0. For any T
T
-measurable random variable F
L
2
() we have F ID
[T,)
and

D
t
F = 0, t T.
Proof. In case $F = f(T_1, \ldots, T_n)$ with $f \in \mathcal{C}^\infty_c(\mathbb{R}^n)$, the functional $F$ does not
depend on the future of the Poisson process after $T$; in particular it does not
depend on the $k$-th jump time $T_k$ on the event $\{T_k > T\}$, i.e.
$$\partial_i f(T_1, \ldots, T_n) = 0 \quad \text{on } \{T_i > T\}, \qquad 1 \le i \le n.$$
This implies
$$\partial_i f(T_1, \ldots, T_n)\,\mathbf{1}_{[0,T_i]}(t) = 0, \qquad t \ge T,
\quad i = 1, \ldots, n,$$
hence
$$\widetilde{D}_t F = -\sum_{i=1}^n \partial_i f(T_1, \ldots, T_n)\,\mathbf{1}_{[0,T_i]}(t) = 0,
\qquad t \ge T. \qquad \square$$
Proposition 7.2.4. For $F \in \mathcal{S}$ and $u \in \mathcal{C}_c(\mathbb{R}_+)$ we have
$$\mathrm{IE}\left[ \langle \widetilde{D}F, u \rangle_{L^2(\mathbb{R}_+, dt)} \right]
= \mathrm{IE}\left[ F \int_0^\infty u(t)\,(dN_t - dt) \right]. \qquad (7.2.2)$$
Proof. Using (7.2.1), we have
$$\begin{aligned}
\mathrm{IE}\left[ \langle \widetilde{D}F, u \rangle_{L^2(\mathbb{R}_+, dt)} \right]
&= \mathrm{IE}\left[ \int_0^\infty \int_0^\infty r^{(1)}(s, t)\,\widehat{D}_s F\,u(t)\,dN_s\,dt \right] \\
&= \mathrm{IE}\left[ \left\langle \widehat{D} F, -\int_0^{\,\cdot} u(t)\,dt
\right\rangle_{L^2(\mathbb{R}_+, dN_t)} \right] \\
&= \mathrm{IE}\left[ F \int_0^\infty u(t)\,d(N_t - t) \right],
\end{aligned}$$
where the last step follows from Corollary 7.1.10 applied to $h(s) = -\int_0^s u(t)\,dt$. $\square$

The above proposition can also be proved by finite dimensional integration
by parts on the jump times conditionally to the value of $N_T$, see Proposition 7.3.3
below.
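The duality (7.2.2) lends itself to a direct numerical check. The sketch below is my own illustration, not part of the text; the choices $F = e^{-T_1 - T_2}$ and $u(t) = e^{-t}$ (taken as a limit of compactly supported functions) are hypothetical. Since both partial derivatives of $f(t_1, t_2) = e^{-t_1 - t_2}$ equal $-f$, we have $\langle \widetilde{D}F, u \rangle_{L^2(dt)} = F (U(T_1) + U(T_2))$ with $U(t) = \int_0^t u(s)\,ds$:

```python
import numpy as np

rng = np.random.default_rng(0)
u = lambda t: np.exp(-t)
U = lambda t: 1.0 - np.exp(-t)   # U(t) = int_0^t u(s) ds; total mass 1

n_samples = 200_000
taus = rng.exponential(1.0, size=(n_samples, 60))   # truncation: u(T_k) ~ 0 past T_60
T = np.cumsum(taus, axis=1)

F = np.exp(-T[:, 0] - T[:, 1])
lhs = F * (U(T[:, 0]) + U(T[:, 1]))     # <D~F, u>_{L^2(dt)}
rhs = F * (u(T).sum(axis=1) - 1.0)      # F * int_0^inf u(t) d(N_t - t)

lhs_mean, rhs_mean = lhs.mean(), rhs.mean()
```

The two Monte Carlo estimates agree within sampling error, as predicted by (7.2.2).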
The divergence operator defined next is the adjoint of $\widetilde{D}$.

Definition 7.2.5. We define $\widetilde{\delta}$ on $\mathcal{U}$ by
$$\widetilde{\delta}(hG) = G \int_0^\infty h(t)\,(dN_t - dt)
- \langle h, \widetilde{D}G \rangle_{L^2(\mathbb{R}_+)},
\qquad G \in \mathcal{S}, \quad h \in L^2(\mathbb{R}_+).$$
The closable adjoint
$$\widetilde{\delta} : L^2(\Omega \times \mathbb{R}_+) \longrightarrow L^2(\Omega)$$
of $\widetilde{D}$ is another example of a Skorokhod type integral on the Poisson space.
Using this definition we obtain the following integration by parts formula,
which shows that the duality Assumption 3.1.1 is satisfied by $\widetilde{D}$ and $\widetilde{\delta}$.
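Taking $F = 1$ in the duality relation gives $\mathrm{IE}[\widetilde{\delta}(hG)] = \mathrm{IE}[\langle \widetilde{D}1, hG \rangle] = 0$, the usual mean-zero property of a Skorokhod type integral. The following sketch is my own illustration with the hypothetical choices $G = e^{-T_1}$ and $h(t) = e^{-t}$, for which $\widetilde{D}_t G = \mathbf{1}_{[0,T_1]}(t)\,e^{-T_1}$ and $\langle h, \widetilde{D}G \rangle = e^{-T_1}(1 - e^{-T_1})$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
taus = rng.exponential(1.0, size=(n_samples, 60))
T = np.cumsum(taus, axis=1)

G = np.exp(-T[:, 0])
# G * int_0^inf h(t) (dN_t - dt) with h(t) = e^{-t}, int h = 1
poisson_part = G * (np.exp(-T).sum(axis=1) - 1.0)
# <h, D~G>_{L^2(R_+)}, since D~_t G = 1_{[0,T_1]}(t) e^{-T_1}
gradient_part = G * (1.0 - np.exp(-T[:, 0]))

delta_tilde = poisson_part - gradient_part
mean_delta = delta_tilde.mean()
```

The sample mean of $\widetilde{\delta}(hG)$ is statistically indistinguishable from zero.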
Proposition 7.2.6. The divergence operator
$$\widetilde{\delta} : L^2(\Omega \times \mathbb{R}_+) \longrightarrow L^2(\Omega)$$
is the adjoint of the gradient operator
$$\widetilde{D} : L^2(\Omega) \longrightarrow L^2(\Omega \times \mathbb{R}_+),$$
i.e. we have
$$\mathrm{IE}\left[ F\,\widetilde{\delta}(u) \right]
= \mathrm{IE}\left[ \langle \widetilde{D}F, u \rangle_{L^2(\mathbb{R}_+)} \right],
\qquad F \in \mathcal{S}, \quad u \in \mathcal{U}. \qquad (7.2.3)$$

Proof. It suffices to note that Proposition 7.2.4 implies
$$\begin{aligned}
\mathrm{IE}\left[ \langle \widetilde{D}F, hG \rangle_{L^2(\mathbb{R}_+, dt)} \right]
&= \mathrm{IE}\left[ \langle \widetilde{D}(FG), h \rangle_{L^2(\mathbb{R}_+, dt)}
- F \langle \widetilde{D}G, h \rangle_{L^2(\mathbb{R}_+, dt)} \right] \\
&= \mathrm{IE}\left[ F \left( G \int_0^\infty h(t)\,d(N_t - t)
- \langle h, \widetilde{D}G \rangle_{L^2(\mathbb{R}_+, dt)} \right) \right], \qquad (7.2.4)
\end{aligned}$$
for $F, G \in \mathcal{S}$. $\square$
As a consequence, the duality Assumption 3.1.1 of Chapter 3 is satisfied by
$\widetilde{D}$ and $\widetilde{\delta}$, and from Proposition 3.1.2 we deduce that $\widetilde{D}$ and $\widetilde{\delta}$ are closable.
Recall that from Proposition 6.4.9, the finite difference operator
$$D_t F = \mathbf{1}_{\{N_t < n\}} \left( f(T_1, \ldots, T_{N_t}, t, T_{N_t+1}, \ldots, T_{n-1})
- f(T_1, \ldots, T_n) \right),$$
$t \in \mathbb{R}_+$, $F = f(T_1, \ldots, T_n)$, defined in Chapter 6, satisfies the Clark formula
Assumption 3.2.1, i.e. by Proposition 4.2.3 applied to $\phi_t = 1$, $t \in \mathbb{R}_+$, we
have
$$F = \mathrm{IE}[F] + \int_0^\infty \mathrm{IE}[D_t F \mid \mathcal{F}_t]\,d(N_t - t),
\qquad F \in L^2(\Omega). \qquad (7.2.5)$$
On the other hand, the gradient $\widetilde{D}$ has the derivation property, and for this
reason it can be easier to manipulate than the finite difference operator $D$ in
recursive computations. Its drawback is that its domain is smaller than that
of $D$, due to the differentiability conditions it imposes on random functionals.
In the next proposition we show that the adapted projections of $(D_t F)_{t \in \mathbb{R}_+}$
and $(\widetilde{D}_t F)_{t \in \mathbb{R}_+}$ coincide, cf. e.g. Proposition 20 of [102], by a direct
computation of conditional expectations.

Proposition 7.2.7. The adapted projections of $\widetilde{D}$ and $D$ coincide, i.e.
$$\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t] = \mathrm{IE}[D_t F \mid \mathcal{F}_t],
\qquad t \in \mathbb{R}_+.$$
Proof. For $F = f(T_1, \ldots, T_n)$ we have, by Lemma 7.2.2,
$$\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t]
= -\sum_{N_t < k \le n} \int_t^\infty e^{-(s_n - t)} \int_t^{s_n} \cdots \int_t^{s_{N_t+2}}
\partial_k f(T_1, \ldots, T_{N_t}, s_{N_t+1}, \ldots, s_n)\,ds_{N_t+1} \cdots ds_n.$$
Integrating by parts in $s_k$ for each $N_t < k \le n$, the boundary terms obtained at
$s_k = s_{k-1}$ and $s_k = s_{k+1}$, in which two consecutive arguments of $f$ coincide,
cancel pairwise between consecutive values of $k$, and only the two extreme
contributions survive. This yields
$$\begin{aligned}
\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t]
&= \mathbf{1}_{\{N_t < n\}} \int_t^\infty e^{-(s_n - t)} \int_t^{s_n} \cdots \int_t^{s_{N_t+3}}
f(T_1, \ldots, T_{N_t}, t, s_{N_t+2}, \ldots, s_n)\,ds_{N_t+2} \cdots ds_n \\
&\quad - \mathbf{1}_{\{N_t < n\}} \int_t^\infty e^{-(s_n - t)} \int_t^{s_n} \cdots \int_t^{s_{N_t+2}}
f(T_1, \ldots, T_{N_t}, s_{N_t+1}, \ldots, s_n)\,ds_{N_t+1} \cdots ds_n \\
&= \mathrm{IE}[D_t F \mid \mathcal{F}_t],
\end{aligned}$$
where the last equality follows from Lemma 6.4.10. $\square$

As a consequence of Proposition 7.2.7 we also have
$$\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_a] = \mathrm{IE}[D_t F \mid \mathcal{F}_a],
\qquad 0 \le a \le t. \qquad (7.2.6)$$
For functions of a single jump time, by Relation (2.3.6) we simply have
$$\begin{aligned}
\mathrm{IE}[\widetilde{D}_t f(T_n) \mid \mathcal{F}_t]
&= -\mathbf{1}_{\{N_t < n\}}\,\mathrm{IE}[f'(T_n) \mid \mathcal{F}_t] \\
&= -\mathbf{1}_{\{N_t < n\}} \left( \mathbf{1}_{\{N_t \ge n\}} f'(T_n)
+ \int_t^\infty f'(x)\,p_{n-1-N_t}(x-t)\,dx \right) \\
&= -\int_t^\infty f'(x)\,p_{n-1-N_t}(x-t)\,dx \\
&= f(t)\,p_{n-1-N_t}(0) + \int_t^\infty f(x)\,p'_{n-1-N_t}(x-t)\,dx \\
&= f(t)\,\mathbf{1}_{\{T_{n-1} < t < T_n\}} + \int_t^\infty f(x)\,p'_{n-1-N_t}(x-t)\,dx,
\end{aligned}$$
where $p_k(s) = e^{-s} s^k / k!$, $k \ge 0$, denotes the density of $T_{k+1}$, and $p_k(0) = \mathbf{1}_{\{k=0\}}$.
This coincides with
$$\begin{aligned}
\mathrm{IE}[D_t f(T_n) \mid \mathcal{F}_t]
&= \mathrm{IE}\left[ \mathbf{1}_{\{N_t < n-1\}} (f(T_{n-1}) - f(T_n))
+ \mathbf{1}_{\{N_t = n-1\}} (f(t) - f(T_n)) \mid \mathcal{F}_t \right] \\
&= \mathrm{IE}\left[ \mathbf{1}_{\{T_{n-1} > t\}} f(T_{n-1})
+ \mathbf{1}_{\{T_{n-1} < t < T_n\}} f(t)
- \mathbf{1}_{\{T_n > t\}} f(T_n) \mid \mathcal{F}_t \right] \\
&= \mathbf{1}_{\{T_{n-1} < t < T_n\}} f(t)
+ \mathrm{IE}\left[ \mathbf{1}_{\{T_{n-1} > t\}} f(T_{n-1})
- \mathbf{1}_{\{T_n > t\}} f(T_n) \mid \mathcal{F}_t \right] \\
&= \mathbf{1}_{\{T_{n-1} < t < T_n\}} f(t)
+ \int_t^\infty \left( p_{n-2-N_t}(x-t) - p_{n-1-N_t}(x-t) \right) f(x)\,dx \\
&= \mathbf{1}_{\{T_{n-1} < t < T_n\}} f(t)
+ \int_t^\infty f(x)\,p'_{n-1-N_t}(x-t)\,dx,
\end{aligned}$$
using $p'_k = p_{k-1} - p_k$.
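Taking expectations, the coincidence of the adapted projections implies in particular $\mathrm{IE}[\widetilde{D}_t f(T_n)] = \mathrm{IE}[D_t f(T_n)]$ for every fixed $t$. The following sketch is my own illustration with the hypothetical choices $n = 2$, $t = 0.7$, $f(x) = e^{-x}$, for which both expectations equal $0.6\,e^{-1.4}$ by a direct computation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = 0.7
f = lambda x: np.exp(-x)

n_samples = 400_000
tau = rng.exponential(1.0, size=(n_samples, 2))
T1 = tau[:, 0]
T2 = tau[:, 0] + tau[:, 1]

# damped gradient: D~_t f(T_2) = -1_{[0,T_2]}(t) f'(T_2) = 1_{T_2 >= t} e^{-T_2}
damped = np.where(T2 >= t, np.exp(-T2), 0.0)

# finite difference: effect of inserting an extra jump at time t
finite = (np.where(T1 > t, f(T1) - f(T2), 0.0)
          + np.where((T1 <= t) & (t < T2), f(t) - f(T2), 0.0))

m_damped, m_finite = damped.mean(), finite.mean()
exact = 0.6 * np.exp(-1.4)   # closed-form value for this particular example
```

Both estimates match the closed-form value, illustrating Proposition 7.2.7 in expectation.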
As a consequence of Proposition 7.2.7 and (7.2.5) we find that $\widetilde{D}$ satisfies the
Clark formula, hence the Clark formula Assumption 3.2.1 is satisfied by $\widetilde{D}$.

Proposition 7.2.8. For any $F \in L^2(\Omega)$ we have
$$F = \mathrm{IE}[F] + \int_0^\infty \mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t]\,d(N_t - t).$$
In other words we have
$$F = \mathrm{IE}[F] + \int_0^\infty \mathrm{IE}[D_t F \mid \mathcal{F}_t]\,d(N_t - t)
= \mathrm{IE}[F] + \int_0^\infty \mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t]\,d(N_t - t),
\qquad F \in L^2(\Omega).$$
Since the duality Assumption 3.1.1 and the Clark formula Assumption 3.2.1
are satisfied by $\widetilde{D}$, it follows from Proposition 3.3.1 that the operator $\widetilde{\delta}$
coincides with the compensated Poisson stochastic integral with respect to
$(N_t - t)_{t \in \mathbb{R}_+}$ on the adapted square-integrable processes. This fact is stated
in the next proposition with an independent proof.

Proposition 7.2.9. The adjoint of $\widetilde{D}$ extends the compensated Poisson
stochastic integral, i.e. for every adapted square-integrable process $u \in L^2(\Omega \times \mathbb{R}_+)$
we have
$$\widetilde{\delta}(u) = \int_0^\infty u_t\,d(N_t - t).$$
Proof. We first consider a cylindrical elementary predictable process
$v = F \mathbf{1}_{(t,T]}$, where $F = f(T_1, \ldots, T_n)$, $f \in \mathcal{C}^\infty_c(\mathbb{R}^n)$, is $\mathcal{F}_t$-measurable.
Since $v$ is predictable, Lemma 7.2.3 yields $\widetilde{D}_s F = 0$, $s \ge t$, hence
$\widetilde{D}_s v_u = 0$, $s \le u$, and from Definition 7.2.5 we get
$$\widetilde{\delta}(v) = F (\widetilde{N}_T - \widetilde{N}_t)
= \int_0^\infty F \mathbf{1}_{(t,T]}(s)\,d\widetilde{N}_s
= \int_0^\infty v_s\,d\widetilde{N}_s,$$
where $\widetilde{N}_s = N_s - s$. We then use the linearity of $\widetilde{\delta}$ to extend the property to
linear combinations of elementary predictable processes. The compensated
Poisson stochastic integral coincides with $\widetilde{\delta}$ on the predictable
square-integrable processes from a density argument using the Itô isometry. $\square$
Since the adjoint $\widetilde{\delta}$ of $\widetilde{D}$ extends the compensated Poisson stochastic
integral, we may also use Proposition 3.3.2 to show that the Clark formula
Assumption 3.2.1 is satisfied by $\widetilde{D}$, and in this way we recover the fact that the
adapted projections of $\widetilde{D}$ and $D$ coincide:
$$\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t] = \mathrm{IE}[D_t F \mid \mathcal{F}_t],
\qquad t \in \mathbb{R}_+,$$
for $F \in L^2(\Omega)$.
7.3 Damped Gradient on a Compact Interval

In this section we work under the Poisson measure on the compact interval
$[0, T]$, $T > 0$, with intensity $\lambda > 0$.

Definition 7.3.1. We denote by $\mathcal{S}_c$ the space of Poisson functionals of the
form
$$F = h_n(T_1, \ldots, T_n), \qquad h_n \in \mathcal{C}^\infty_c((0,\infty)^n), \quad n \ge 1, \qquad (7.3.1)$$
and by $\mathcal{S}_f$ the space of Poisson functionals of the form
$$F = f_0 \mathbf{1}_{\{N_T = 0\}}
+ \sum_{n=1}^m \mathbf{1}_{\{N_T = n\}} f_n(T_1, \ldots, T_n), \qquad (7.3.2)$$
where $f_0 \in \mathbb{R}$ and $f_n \in \mathcal{C}^1([0,T]^n)$, $1 \le n \le m$, is symmetric in $n$ variables,
$m \ge 1$.
The elements of $\mathcal{S}_c$ can be written as
$$F = f_0 \mathbf{1}_{\{N_T = 0\}}
+ \sum_{n=1}^\infty \mathbf{1}_{\{N_T = n\}} f_n(T_1, \ldots, T_n),$$
where $f_0 \in \mathbb{R}$ and $f_n \in \mathcal{C}^1([0,T]^n)$, $n \ge 1$, is symmetric in $n$ variables, with
the continuity condition
$$f_n(T_1, \ldots, T_n) = f_{n+1}(T_1, \ldots, T_n, T).$$
We also let
$$\mathcal{U}_c = \left\{ \sum_{i=1}^n F_i u_i : u_1, \ldots, u_n \in \mathcal{C}([0,T]),
\ F_1, \ldots, F_n \in \mathcal{S}_c, \ n \ge 1 \right\},$$
and
$$\mathcal{U}_f = \left\{ \sum_{i=1}^n F_i u_i : u_1, \ldots, u_n \in \mathcal{C}([0,T]),
\ F_1, \ldots, F_n \in \mathcal{S}_f, \ n \ge 1 \right\}.$$
Recall that under $P$ we have, for all $F \in \mathcal{S}_f$ of the form (7.3.2):
$$\mathrm{IE}[F] = e^{-\lambda T} f_0
+ e^{-\lambda T} \sum_{n=1}^m \lambda^n \int_0^T \int_0^{t_n} \cdots \int_0^{t_2}
f_n(t_1, \ldots, t_n)\,dt_1 \cdots dt_n.$$
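This expectation formula can be checked by simulation. The sketch below is my own illustration, not part of the text; it takes the hypothetical example $T = 1$, $\lambda = 1$, $F = \mathbf{1}_{\{N_T = 1\}} T_1$, for which the formula gives $\mathrm{IE}[F] = e^{-1} \int_0^1 t\,dt = e^{-1}/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, lam = 1.0, 1.0
n_samples = 400_000

N = rng.poisson(lam * T, size=n_samples)
vals = np.zeros(n_samples)
one_jump = N == 1
# conditionally on {N_T = 1}, the single jump time is uniform on [0, T]
vals[one_jump] = rng.uniform(0.0, T, size=one_jump.sum())

mc_mean = vals.mean()
exact = np.exp(-lam * T) * lam * T**2 / 2.0   # e^{-T} int_0^T t dt for lambda = 1
```

The Monte Carlo mean reproduces the closed-form value within sampling error.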
Definition 7.3.2. Let $\widetilde{D}$ be defined on $F \in \mathcal{S}_f$ of the form (7.3.2) by
$$\widetilde{D}_t F = -\sum_{n=1}^m \mathbf{1}_{\{N_T = n\}}
\sum_{k=1}^n \mathbf{1}_{[0,T_k]}(t)\,\partial_k f_n(T_1, \ldots, T_n).$$
If $F = h_n(T_1, \ldots, T_n)$ has the form (7.3.1) we have
$$\widetilde{D}_t F = -\sum_{k=1}^n \mathbf{1}_{[0,T_k]}(t)\,\partial_k h_n(T_1, \ldots, T_n),$$
where $\partial_k h_n$ denotes the partial derivative of $h_n$ with respect to its $k$-th
variable, as in Definition 7.2.1. We define $\widetilde{\delta}$ on $u \in \mathcal{U}_f$ by
$$\widetilde{\delta}(F u) = F \int_0^T u_t\,(dN_t - \lambda\,dt)
- \int_0^T u_t\,\widetilde{D}_t F\,dt, \qquad (7.3.3)$$
$F \in \mathcal{S}_f$, $u \in \mathcal{C}([0,T])$.
The following result shows that $\widetilde{D}$ and $\widetilde{\delta}$ also satisfy the duality Assumption
3.1.1.

Proposition 7.3.3. The operators $\widetilde{D}$ and $\widetilde{\delta}$ satisfy the duality relation
$$\mathrm{IE}[\langle \widetilde{D}F, u \rangle] = \mathrm{IE}[F\,\widetilde{\delta}(u)], \qquad (7.3.4)$$
$F \in \mathcal{S}_c$, $u \in \mathcal{U}_c$.
Proof. By standard integration by parts we first prove (7.3.4) when $u \in \mathcal{C}([0,T])$
is deterministic and $F \in \mathcal{S}_c$. We have
$$\begin{aligned}
\mathrm{IE}[\langle \widetilde{D}F, u \rangle]
&= -e^{-\lambda T} \sum_{n=1}^\infty \frac{\lambda^n}{n!} \sum_{k=1}^n
\int_0^T \cdots \int_0^T \left( \int_0^{t_k} u(s)\,ds \right)
\partial_k f_n(t_1, \ldots, t_n)\,dt_1 \cdots dt_n \\
&= e^{-\lambda T} \sum_{n=1}^\infty \frac{\lambda^n}{n!} \sum_{k=1}^n
\int_0^T \cdots \int_0^T f_n(t_1, \ldots, t_n)\,u(t_k)\,dt_1 \cdots dt_n \\
&\quad - e^{-\lambda T} \sum_{n=1}^\infty \frac{\lambda^n}{(n-1)!}
\int_0^T u(s)\,ds \int_0^T \cdots \int_0^T
f_n(t_1, \ldots, t_{n-1}, T)\,dt_1 \cdots dt_{n-1}.
\end{aligned}$$
The continuity condition
$$f_n(t_1, \ldots, t_{n-1}, T) = f_{n-1}(t_1, \ldots, t_{n-1}) \qquad (7.3.5)$$
yields
$$\begin{aligned}
\mathrm{IE}\left[ \langle \widetilde{D}F, u \rangle \right]
&= e^{-\lambda T} \sum_{n=1}^\infty \frac{\lambda^n}{n!}
\int_0^T \cdots \int_0^T f_n(t_1, \ldots, t_n) \sum_{k=1}^n u(t_k)\,dt_1 \cdots dt_n \\
&\quad - \lambda e^{-\lambda T} \int_0^T u(s)\,ds \sum_{n=0}^\infty \frac{\lambda^n}{n!}
\int_0^T \cdots \int_0^T f_n(t_1, \ldots, t_n)\,dt_1 \cdots dt_n \\
&= \mathrm{IE}\left[ F \left( \sum_{k=1}^{N_T} u(T_k)
- \lambda \int_0^T u(s)\,ds \right) \right]
= \mathrm{IE}\left[ F \int_0^T u(t)\,d\widetilde{N}_t \right],
\end{aligned}$$
where $\widetilde{N}_t = N_t - \lambda t$. Next we define $\widetilde{\delta}(uG)$, $G \in \mathcal{S}_f$, by (7.3.3), with for all $F \in \mathcal{S}_c$:
$$\begin{aligned}
\mathrm{IE}\left[ G \langle \widetilde{D}F, u \rangle \right]
&= \mathrm{IE}\left[ \langle \widetilde{D}(FG), u \rangle - F \langle \widetilde{D}G, u \rangle \right] \\
&= \mathrm{IE}\left[ F \left( G \int_0^T u(t)\,d\widetilde{N}_t
- \langle \widetilde{D}G, u \rangle \right) \right]
= \mathrm{IE}\left[ F\,\widetilde{\delta}(uG) \right],
\end{aligned}$$
which proves (7.3.4). $\square$
Hence the duality Assumption 3.1.1 of Chapter 3 is also satisfied by $\widetilde{D}$ and
$\widetilde{\delta}$, which are closable from Proposition 3.1.2, with domains $\mathrm{Dom}(\widetilde{D})$ and
$\mathrm{Dom}(\widetilde{\delta})$. The stability Assumption 3.2.10 is also satisfied by $\widetilde{D}$, and Lemma
7.2.3 holds as well as a consequence of Proposition 3.2.11, i.e. for any
$\mathcal{F}_T$-measurable random variable $F \in L^2(\Omega)$ we have
$$\widetilde{D}_t F = 0, \qquad t \ge T.$$
Similarly, $\widetilde{\delta}$ coincides with the stochastic integral with respect to the
compensated Poisson process, i.e.
$$\widetilde{\delta}(u) = \int_0^\infty u_t\,d\widetilde{N}_t,$$
for all adapted square-integrable processes $u \in L^2(\Omega \times \mathbb{R}_+)$, with the same
proof as in Proposition 7.2.9. Consequently, from Proposition 3.3.2 it follows
that the Clark formula Assumption 3.2.1 is satisfied by $\widetilde{D}$, and the adapted
projections of the gradients of Definitions 7.2.1 and 7.3.2 and of $D$ coincide:
$$\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t] = \mathrm{IE}[D_t F \mid \mathcal{F}_t],
\qquad t \in \mathbb{R}_+,$$
for $F \in L^2(\Omega)$.
Note that the gradients of Definition 7.2.1 and Definition 7.3.2 coincide on a
common domain under the continuity condition (7.3.5). In case (7.3.5) is not
satisfied by $F$, the gradient $\widetilde{D}F$ of Definition 7.3.2 can still be defined in
$L^2(\Omega \times [0,T])$ for $F \in \mathcal{S}_f$, while the gradient of Definition 7.2.1 exists only in
the distribution sense, due to the presence of the indicator function
$\mathbf{1}_{\{N_T = k\}} = \mathbf{1}_{[T_k, T_{k+1})}(T)$ in (7.3.2). Yet, when (7.3.5) does not hold we
still get the integration by parts formula
$$\mathrm{IE}\left[ \langle \widetilde{D}F, u \rangle \right]
= \mathrm{IE}\left[ F \sum_{k=1}^{N_T} u(T_k) \right]
= \mathrm{IE}\left[ F \int_0^T u(t)\,dN_t \right],
\qquad F \in \mathcal{S}_f, \ u \in \mathcal{U}_f, \qquad (7.3.6)$$
under the additional condition
$$\int_0^T u(s)\,ds = 0. \qquad (7.3.7)$$
However, in this case Proposition 3.1.2 does not apply to extend $\widetilde{D}$ by
closability from its definition on $\mathcal{S}_f$, since the condition (7.3.7) is required in
the integration by parts formula (7.3.6).
7.4 Chaos Expansions

In this section we review the application of $\widetilde{D}$ to the computation of chaos
expansions when $X = \mathbb{R}_+$. As noted above, the gradient $\widetilde{D}$ has some properties
in common with $D$, namely its adapted projection coincides with that of $D$,
and in particular from Proposition 7.2.7 we have
$$\mathrm{IE}[D_t F] = \mathrm{IE}[\widetilde{D}_t F], \qquad t \in \mathbb{R}_+.$$
In addition, since the operator $\widetilde{D}$ has the derivation property, it is easier to
manipulate than the finite difference operator $D$ in recursive computations.
We aim at applying Proposition 4.2.5 in order to compute the chaos expansion
$$F = \mathrm{IE}[F] + \sum_{n=1}^\infty I_n(f_n),$$
with
$$f_n(t_1, \ldots, t_n) = \frac{1}{n!}\,\mathrm{IE}[\widetilde{D}_{t_1} \cdots \widetilde{D}_{t_n} F],
\qquad dt_1 \cdots dt_n\,dP\text{-a.e.}, \quad n \ge 1.$$
However, Proposition 4.2.5 cannot be applied, since the gradient $\widetilde{D}$ cannot be
iterated in $L^2$ due to the non-differentiability of $t \mapsto \mathbf{1}_{[0,T_k]}(t)$ in $T_k$. In particular,
an expression such as
$$\mathrm{IE}[\widetilde{D}_{t_1} \cdots \widetilde{D}_{t_n} F] \qquad (7.4.1)$$
makes a priori no sense, and may differ from $\mathrm{IE}[D_{t_1} \cdots D_{t_n} F]$ for $n \ge 2$.
Note that we have
$$\widetilde{D}_{t_n} \cdots \widetilde{D}_{t_1} f(T_k)
= (-1)^n \mathbf{1}_{[0,T_k]}(t_n)\, f^{(n)}(T_k),
\qquad 0 < t_1 < \cdots < t_n,$$
and
$$\mathrm{IE}[\widetilde{D}_{t_n} \cdots \widetilde{D}_{t_1} f(T_k)]
= (-1)^n \mathrm{IE}[\mathbf{1}_{[0,T_k]}(t_n)\, f^{(n)}(T_k)]
= (-1)^n \int_{t_n}^\infty f^{(n)}(t)\,p_{k-1}(t)\,dt,$$
$0 < t_1 < \cdots < t_n$, which differs from
$$\mathrm{IE}[D_{t_n} \cdots D_{t_1} f(T_k)]
= \int_{t_n}^\infty f(t)\,P^{(n)}_k(t)\,dt,$$
computed in Theorem 1 of [110], where
$$P_k(t) = \int_0^t p_{k-1}(s)\,ds, \qquad t \in \mathbb{R}_+,$$
is the distribution function of $T_k$, cf. (6.3.5). Hence on the Poisson space the
iterated gradient $\widetilde{D}_{t_n} \cdots \widetilde{D}_{t_1}$, $0 < t_1 < \cdots < t_n$, cannot be used in the $L^2$
sense as $D_{t_n} \cdots D_{t_1}$ to give the chaos decomposition of a random variable.
Nevertheless we have the following proposition; see [112] for an approach to
this problem using the gradient $\widetilde{D}$ in the distribution sense.
Proposition 7.4.1. For any $F \in \bigcap_{n=0}^\infty \mathrm{Dom}(D^n \widetilde{D})$ we have the chaos
expansion
$$F = \mathrm{IE}[F] + \sum_{n \ge 1} I_n(\mathbf{1}_{\Delta_n} f_n),$$
where
$$f_n(t_1, \ldots, t_n) = \mathrm{IE}[D_{t_1} \cdots D_{t_{n-1}} \widetilde{D}_{t_n} F],
\qquad 0 < t_1 < \cdots < t_n, \quad n \ge 1.$$

Proof. We apply Proposition 4.2.5 to $\widetilde{D}_t F$, $t \in \mathbb{R}_+$:
$$\widetilde{D}_t F = \mathrm{IE}[\widetilde{D}_t F]
+ \sum_{n=1}^\infty I_n\left( \mathbf{1}_{\Delta_n} \mathrm{IE}[D^n \widetilde{D}_t F] \right),$$
which yields
$$\mathrm{IE}[\widetilde{D}_t F \mid \mathcal{F}_t] = \mathrm{IE}[\widetilde{D}_t F]
+ \sum_{n=1}^\infty I_n\left( \mathbf{1}_{\Delta_{n+1}}(\cdot, t)\,\mathrm{IE}[D^n \widetilde{D}_t F] \right).$$
Finally, integrating both sides with respect to $d(N_t - t)$ and using the Clark
formula of Proposition 7.2.8 together with the inductive definition (2.7.1), we get
$$F - \mathrm{IE}[F] = \sum_{n=0}^\infty I_{n+1}\left( \mathbf{1}_{\Delta_{n+1}}
\mathrm{IE}[D^n \widetilde{D} F] \right). \qquad \square$$
The next lemma provides a way to compute the functions appearing in
Proposition 7.4.1.

Lemma 7.4.2. For $f \in \mathcal{C}^1_c(\mathbb{R})$ and $n \ge 1$ we have
$$D_t \widetilde{D}_s f(T_n)
= \widetilde{D}_{s \vee t} f(T_{n-1}) - \widetilde{D}_{s \vee t} f(T_n)
- \mathbf{1}_{\{s < t\}} \mathbf{1}_{[T_{n-1}, T_n]}(s \vee t)\, f'(s \vee t),
\qquad s, t \in \mathbb{R}_+.$$

Proof. From Relation (6.4.15) we have
$$\begin{aligned}
D_t \widetilde{D}_s f(T_n)
&= -\mathbf{1}_{[0,T_{n-1}]}(t) \left( \mathbf{1}_{[0,T_{n-1}]}(s)\, f'(T_{n-1})
- \mathbf{1}_{[0,T_n]}(s)\, f'(T_n) \right) \\
&\quad - \mathbf{1}_{[T_{n-1},T_n]}(t) \left( \mathbf{1}_{[0,t]}(s)\, f'(t)
- \mathbf{1}_{[0,T_n]}(s)\, f'(T_n) \right) \\
&= \mathbf{1}_{\{t < s\}} \left( \mathbf{1}_{[0,T_n]}(s)\, f'(T_n)
- \mathbf{1}_{[0,T_{n-1}]}(s)\, f'(T_{n-1}) \right) \\
&\quad + \mathbf{1}_{\{s < t\}} \left( \mathbf{1}_{[0,T_n]}(t)\, f'(T_n)
- \mathbf{1}_{[0,T_{n-1}]}(t)\, f'(T_{n-1})
- \mathbf{1}_{[T_{n-1},T_n]}(t)\, f'(t) \right),
\end{aligned}$$
$P$-a.s. $\square$
In the next proposition we apply Lemma 7.4.2 to the computation of the
chaos expansion of $f(T_k)$.

Proposition 7.4.3. For $k \ge 1$, the chaos expansion of $f(T_k)$ is given by
$$f(T_k) = \mathrm{IE}[f(T_k)] + \sum_{n \ge 1} \frac{1}{n!}\, I_n(f^k_n),$$
where $f^k_n(t_1, \ldots, t_n) = \alpha^k_n(f)(t_1 \vee \cdots \vee t_n)$, $t_1, \ldots, t_n \in \mathbb{R}_+$, and
$$\begin{aligned}
\alpha^k_n(f)(t)
&= -\int_t^\infty f'(s)\,(\Delta^{n-1} p_{k-1})(s)\,ds \qquad (7.4.2) \\
&= f(t)\,(\Delta^{n-1} p_{k-1})(t)
+ \langle f, \mathbf{1}_{[t,\infty[}\,\Delta^n p_{k-1} \rangle_{L^2(\mathbb{R}_+)},
\qquad t \in \mathbb{R}_+, \quad n \ge 1,
\end{aligned}$$
where $\Delta$ denotes the difference operator $\Delta p_j = p_{j-1} - p_j$ (with $p_{-1} = 0$), so
that $\Delta^m p_j = (\Delta^{m-1} p_j)'$, and the derivative $f'$ in (7.4.2) is taken in the
distribution sense. We note the relation
$$\frac{d}{dt}\,\alpha^k_n(f)(t) = \alpha^k_n(f')(t) + \alpha^k_{n+1}(f)(t),
\qquad t \in \mathbb{R}_+.$$
From this proposition it is clearly seen that $f(T_n)\mathbf{1}_{[0,t]}(T_n)$ is $\mathcal{F}_{[0,t]}$-measurable,
and that $f(T_n)\mathbf{1}_{[t,\infty[}(T_n)$ is not $\mathcal{F}_{[t,\infty[}$-measurable.
Proof of Proposition 7.4.3. Let us first assume that $f \in \mathcal{C}^1_c(\mathbb{R}_+)$. We have
$$f^k_1(t) = \mathrm{IE}[\widetilde{D}_t f(T_k)]
= -\mathrm{IE}[\mathbf{1}_{[0,T_k]}(t)\, f'(T_k)]
= -\int_t^\infty p_{k-1}(s)\, f'(s)\,ds.$$
Now, from Lemma 7.4.2, for $n \ge 2$ and $0 \le t_1 < \cdots < t_n$,
$$D_{t_1} \cdots D_{t_{n-1}} \widetilde{D}_{t_n} f(T_k)
= D_{t_1} \cdots D_{t_{n-2}} \left( \widetilde{D}_{t_n} f(T_{k-1})
- \widetilde{D}_{t_n} f(T_k) \right),$$
hence, taking expectations on both sides and using Proposition 7.4.1, we obtain
$$f^k_n(t_1, \ldots, t_n)
= f^{k-1}_{n-1}(t_1, \ldots, t_{n-2}, t_n) - f^k_{n-1}(t_1, \ldots, t_{n-2}, t_n),$$
and (7.4.2) follows by induction, for $n \ge 2$:
$$\begin{aligned}
f^k_n(t_1, \ldots, t_n)
&= f^{k-1}_{n-1}(t_1, \ldots, t_{n-2}, t_n) - f^k_{n-1}(t_1, \ldots, t_{n-2}, t_n) \\
&= -\int_{t_n}^\infty f'(s)\,(\Delta^{n-2} p_{k-2})(s)\,ds
+ \int_{t_n}^\infty f'(s)\,(\Delta^{n-2} p_{k-1})(s)\,ds \\
&= -\int_{t_n}^\infty f'(s)\,(\Delta^{n-1} p_{k-1})(s)\,ds.
\end{aligned}$$
The conclusion is obtained by density of the $\mathcal{C}^1_c$ functions in $L^2(\mathbb{R}_+, p_{k-1}(t)\,dt)$,
$k \ge 1$. $\square$
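The first chaos coefficient $f^k_1(t) = \mathrm{IE}[\widetilde{D}_t f(T_k)] = -\int_t^\infty p_{k-1}(s) f'(s)\,ds$ can be checked numerically. The sketch below is my own illustration, not part of the text; the choices $k = 3$, $f(x) = e^{-x}$, $t = 1$ are hypothetical, and a simple trapezoid rule replaces the exact integral:

```python
import numpy as np

rng = np.random.default_rng(0)
k, t = 3, 1.0

# Monte Carlo: IE[D~_t f(T_3)] = IE[1_{T_3 >= t} e^{-T_3}] for f(x) = e^{-x}
T3 = rng.exponential(1.0, size=(400_000, 3)).sum(axis=1)
mc = np.where(T3 >= t, np.exp(-T3), 0.0).mean()

# quadrature: -int_t^inf p_{k-1}(s) f'(s) ds; p_2(s) = e^{-s} s^2 / 2, f'(s) = -e^{-s}
s = np.linspace(t, 40.0, 200_001)
integrand = (np.exp(-s) * s**2 / 2.0) * np.exp(-s)   # minus signs cancel
ds = s[1] - s[0]
quad = 0.5 * (integrand[:-1] + integrand[1:]).sum() * ds
```

Both values agree, in accordance with the formula for $f^k_1$.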
7.5 Covariance Identities and Deviation Inequalities
Next we present a covariance identity for the gradient $\widetilde{D}$, as an application
of Theorem 3.4.4.

Corollary 7.5.1. Let $n \in \mathbb{N}$ and $F, G \in \bigcap_{k=1}^{n+1} \mathrm{Dom}(\widetilde{D}^k)$. We have
$$\begin{aligned}
\mathrm{Cov}(F, G)
&= \sum_{k=1}^n (-1)^{k+1}\,\mathrm{IE}\left[ \int_{\Delta_k}
(\widetilde{D}_{t_k} \cdots \widetilde{D}_{t_1} F)
(\widetilde{D}_{t_k} \cdots \widetilde{D}_{t_1} G)\,dt_1 \cdots dt_k \right] \\
&\quad + (-1)^n\,\mathrm{IE}\left[ \int_{\Delta_{n+1}}
\mathrm{IE}\left[ \widetilde{D}_{t_{n+1}} \cdots \widetilde{D}_{t_1} F \mid \mathcal{F}_{t_{n+1}} \right]
\mathrm{IE}\left[ \widetilde{D}_{t_{n+1}} \cdots \widetilde{D}_{t_1} G \mid \mathcal{F}_{t_{n+1}} \right]
dt_1 \cdots dt_{n+1} \right]. \qquad (7.5.1)
\end{aligned}$$
In particular,
$$\mathrm{Cov}(T_m, f(T_1, \ldots, T_m))
= \sum_{i=1}^m \mathrm{IE}[T_i\,\partial_i f(T_1, \ldots, T_m)].$$
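The particular case above is easy to test by simulation. The sketch below is my own illustration with the hypothetical choice $m = 2$, $f(t_1, t_2) = t_1 t_2$, for which both sides equal $2\,\mathrm{IE}[T_1 T_2] = 6$ by a direct moment computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 400_000
tau = rng.exponential(1.0, size=(n_samples, 2))
T1 = tau[:, 0]
T2 = tau[:, 0] + tau[:, 1]

F = T1 * T2                                    # f(t1, t2) = t1 t2, so d1 f = t2, d2 f = t1
cov = (T2 * F).mean() - T2.mean() * F.mean()   # Cov(T_2, F)
rhs = (T1 * T2).mean() + (T2 * T1).mean()      # IE[T_1 d1 f] + IE[T_2 d2 f]
```

Both Monte Carlo estimates cluster around the exact value 6.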
From the well-known fact that the exponential random variables
$$(\tau_k)_{k \ge 1} := (T_k - T_{k-1})_{k \ge 1}$$
can be constructed as half sums of squared independent Gaussian random
variables, we define a mapping $\Theta$ which sends Poisson functionals to Wiener
functionals, cf. [103]. Given a Poisson functional $F = f(\tau_1, \ldots, \tau_n)$, let $\Theta F$
denote the Gaussian functional defined by
$$\Theta F = f\left( \frac{X_1^2 + Y_1^2}{2}, \ldots, \frac{X_n^2 + Y_n^2}{2} \right),$$
where $X_1, \ldots, X_n, Y_1, \ldots, Y_n$ denote two independent collections of standard
Gaussian random variables. The random variables $X_1, \ldots, X_n, Y_1, \ldots, Y_n$
may be constructed as single Brownian stochastic integrals on the Wiener
space $W$. In the next proposition we let $D$ denote the gradient operator of
Chapter 5 on the Wiener space.
Proposition 7.5.2. The mapping $\Theta : L^p(\Omega) \to L^p(W)$ is an isometry. Further,
it satisfies the intertwining relation
$$2\,\Theta\left( \|\widetilde{D}F\|^2_{L^2(\mathbb{R}_+)} \right)
= \|D\Theta F\|^2_{L^2(\mathbb{R}_+)}. \qquad (7.5.2)$$

Proof. The first assertion follows from the fact that $F$ and $\Theta F$ have the same
distribution, since the half sum of two independent squared standard Gaussian
random variables has an exponential distribution. Relation (7.5.2) follows by a
direct calculation. $\square$
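Both facts can be illustrated numerically. The sketch below is my own addition; it checks the distributional identity through the first two moments, and verifies the intertwining relation (7.5.2) pathwise for the hypothetical one-variable functional $F = f(\tau_1)$ with $f(x) = e^{-x}$, for which $\|\widetilde{D}F\|^2_{L^2} = \tau_1 f'(\tau_1)^2$ and $\|D\Theta F\|^2 = f'(e)^2 (X^2 + Y^2)$ with $e = (X^2 + Y^2)/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
e = 0.5 * (X**2 + Y**2)           # image of tau_1 under Theta

m1, m2 = e.mean(), (e**2).mean()  # an Exp(1) variable has moments 1 and 2

fp2 = np.exp(-2.0 * e)            # f'(x)^2 = e^{-2x} for f(x) = e^{-x}
lhs = 2.0 * e * fp2               # 2 * Theta(|D~F|^2_{L^2})
rhs = fp2 * (X**2 + Y**2)         # |D Theta F|^2_{L^2}
max_gap = np.max(np.abs(lhs - rhs))
```

The pathwise gap vanishes up to floating point, and the moments match those of the exponential distribution.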
Proposition 3.6.2 applies in particular to the damped gradient operator $\widetilde{D}$:
Corollary 7.5.3. Let $F \in \mathrm{Dom}(\widetilde{D})$. We have
\[
P(F - \mathrm{IE}[F] \ge x) \le \exp\left(-\frac{x^2}{2\,\|\widetilde{D}F\|^2_{L^2(\mathbb{R}_+, L^\infty(\Omega))}}\right), \quad x > 0.
\]
In particular if $F$ is $\mathcal{F}_T$-measurable and $\|\widetilde{D}F\|_\infty \le K$ then
\[
P(F - \mathrm{IE}[F] \ge x) \le \exp\left(-\frac{x^2}{2K^2T}\right), \quad x \ge 0.
\]
As an example we may consider $F = f(\tau_1,\dots,\tau_n)$ with
\[
\sum_{k=1}^n \tau_k\,|\partial_k f(\tau_1,\dots,\tau_n)|^2 \le K^2, \quad \text{a.s.}
\]
Applying Corollary 4.7.4 to $\Theta F$, where $\Theta$ is the mapping defined in Proposition 7.5.2, and using Relation (7.5.2) yields the following deviation result for the damped gradient $\widetilde{D}$ on Poisson space.
Corollary 7.5.4. Let $F \in \mathrm{Dom}(\widetilde{D})$. Then
\[
P(F - \mathrm{IE}[F] \ge x) \le \exp\left(-\frac{x^2}{4\,\|\widetilde{D}F\|^2_{L^\infty(\Omega, L^2(\mathbb{R}_+))}}\right).
\]
The above result can also be obtained via logarithmic Sobolev inequalities, i.e. by application of Corollary 2.5 of [76] to Theorem 0.7 in [4] (or Relation (4.4) in [76] for a formulation in terms of exponential random variables). A sufficient condition for the exponential integrability of $F$ is $\big\| |\widetilde{D}F|_{L^2(\mathbb{R}_+)} \big\|_\infty < \infty$, cf. Theorem 4 of [103].
7.6 Some Geometric Aspects of Poisson Analysis
In this section we use the operator $\widetilde{D}$ to endow the configuration space on $\mathbb{R}_+$ with a (flat) differential structure.
We start by recalling some elements of differential geometry. Let $M$ be a Riemannian manifold with volume measure $dx$, covariant derivative $\nabla$, and exterior derivative $d$. Let $\nabla^*$ and $d^*$ denote the adjoints of $\nabla$ and $d$ under a measure $\nu$ on $M$ of the form $\nu(dx) = e^{-\phi(x)}dx$. The Weitzenböck formula under the measure $\nu$ states that
\[
d^*d + dd^* = \nabla^*\nabla + R + \mathrm{Hess}\,\phi,
\]
where $R$ denotes the Ricci tensor on $M$. In terms of the de Rham Laplacian $H_R = d^*d + dd^*$ and of the Bochner Laplacian $H_B = \nabla^*\nabla$ we have
\[
H_R = H_B + R + \mathrm{Hess}\,\phi. \tag{7.6.1}
\]
In particular the term $\mathrm{Hess}\,\phi$ plays the role of a curvature under the measure $\nu$. The differential structure on $\mathbb{R}$ can be lifted to the space of configurations on $\mathbb{R}_+$. Here, $\mathcal{S}$ is defined as in Definition 7.1.8, and $\mathcal{U}$ denotes the space of smooth processes of the form
\[
u(\omega, x) = \sum_{i=1}^n F_i(\omega)\,h_i(x), \quad (\omega, x) \in \Omega \times \mathbb{R}_+, \tag{7.6.2}
\]
$h_i \in \mathcal{C}^\infty_c(\mathbb{R}_+)$, $F_i \in \mathcal{S}$, $i = 1,\dots,n$. The differential geometric objects to be introduced below have finite dimensional counterparts, and each of them has a stochastic interpretation. The following table describes the correspondence between geometry and probability.
Notation | Geometry | Probability
$\Omega$ | manifold | probability space
$\omega$ | element of $\Omega$ | point measure on $\mathbb{R}_+$
$\mathcal{C}^\infty_c(\mathbb{R}_+)$ | tangent vectors to $\Omega$ | test functions on $\mathbb{R}_+$
$\langle\cdot,\cdot\rangle$ | Riemannian metric on $\Omega$ | Lebesgue measure
$d$ | gradient on $\Omega$ | stochastic gradient $\widetilde{D}$
$\mathcal{U}$ | vector field on $\Omega$ | stochastic process
$du$ | exterior derivative of $u \in \mathcal{U}$ | two-parameter process
$\{\cdot,\cdot\}$ | bracket of vector fields on $\Omega$ | bracket on $\mathcal{U} \times \mathcal{U}$
$R$ | curvature tensor on $\Omega$ | trilinear mapping on $\mathcal{U}$
$d^*$ | divergence on $\Omega$ | stochastic integral operator $\widetilde{\delta}$
We turn to the definition of a covariant derivative $\nabla_u$ in the direction $u \in L^2(\mathbb{R}_+)$, first for a vector field $v \in \mathcal{C}^\infty_c(\mathbb{R}_+)$ as
\[
\nabla_u v(t) = -\dot{v}(t)\int_0^t u_s\,ds, \quad t \in \mathbb{R}_+,
\]
where $\dot{v}(t)$ denotes the derivative of $v(t)$, and then for a vector field
\[
v = \sum_{i=1}^n F_i h_i \in \mathcal{U}
\]
in the next definition.
Definition 7.6.1. Given $u \in \mathcal{U}$ and $v = \sum_{i=1}^n F_i h_i \in \mathcal{U}$, let $\nabla_u v$ be defined as
\[
\nabla_u v(t) = \sum_{i=1}^n \Big( h_i(t)\,\widetilde{D}_u F_i - F_i\,\dot{h}_i(t)\int_0^t u_s\,ds \Big), \quad t \in \mathbb{R}_+, \tag{7.6.3}
\]
where $\widetilde{D}_u F = \langle \widetilde{D}F, u\rangle_{L^2(\mathbb{R}_+)}$, $F \in \mathcal{S}$.
We have
\[
\nabla_{uF}(vG) = Fv\,\widetilde{D}_u G + FG\,\nabla_u v, \quad u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+),\ F, G \in \mathcal{S}. \tag{7.6.4}
\]
We also let, by abuse of notation,
\[
(\nabla_s v)(t) := \sum_{i=1}^n \Big( h_i(t)\,\widetilde{D}_s F_i - F_i\,\dot{h}_i(t)\,\mathbf{1}_{[0,t]}(s) \Big),
\]
for $s, t \in \mathbb{R}_+$, in order to write
\[
\nabla_u v(t) = \int_0^\infty u_s\,\nabla_s v_t\,ds, \quad t \in \mathbb{R}_+,\ u, v \in \mathcal{U}.
\]
The following is the definition of the Lie-Poisson bracket.
Definition 7.6.2. The Lie bracket $\{u, v\}$ of $u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+)$ is defined as the unique element of $\mathcal{C}^\infty_c(\mathbb{R}_+)$ satisfying
\[
(\widetilde{D}_u\widetilde{D}_v - \widetilde{D}_v\widetilde{D}_u)F = \widetilde{D}_{\{u,v\}}F, \quad F \in \mathcal{S}.
\]
The bracket $\{\cdot,\cdot\}$ is extended to $u, v \in \mathcal{U}$ via
\[
\{Ff, Gg\}(t) = FG\{f, g\}(t) + g(t)F\,\widetilde{D}_f G - f(t)G\,\widetilde{D}_g F, \quad t \in \mathbb{R}_+, \tag{7.6.5}
\]
$f, g \in \mathcal{C}^\infty_c(\mathbb{R}_+)$, $F, G \in \mathcal{S}$. Given this definition we are able to prove the vanishing of the associated torsion term.
Proposition 7.6.3. The Lie bracket $\{u, v\}$ of $u, v \in \mathcal{U}$ satisfies
\[
\{u, v\} = \nabla_u v - \nabla_v u, \tag{7.6.6}
\]
i.e. the connection defined by $\nabla$ has a vanishing torsion
\[
T(u, v) = \nabla_u v - \nabla_v u - \{u, v\} = 0, \quad u, v \in \mathcal{U}.
\]
Proof. For all $u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+)$ we have
\begin{align*}
(\widetilde{D}_u\widetilde{D}_v - \widetilde{D}_v\widetilde{D}_u)T_n
&= -\widetilde{D}_u\int_0^{T_n} v_s\,ds + \widetilde{D}_v\int_0^{T_n} u_s\,ds\\
&= v_{T_n}\int_0^{T_n} u_s\,ds - u_{T_n}\int_0^{T_n} v_s\,ds\\
&= \int_0^{T_n}\Big(\dot{v}(t)\int_0^t u_s\,ds - \dot{u}(t)\int_0^t v_s\,ds\Big)\,dt\\
&= \widetilde{D}_{\nabla_u v - \nabla_v u}\,T_n.
\end{align*}
Since $\widetilde{D}$ is a derivation, this shows that
\[
(\widetilde{D}_u\widetilde{D}_v - \widetilde{D}_v\widetilde{D}_u)F = \widetilde{D}_{\nabla_u v - \nabla_v u}F
\]
for all $F \in \mathcal{S}$, hence
\[
\widetilde{D}_{\{u,v\}} = \widetilde{D}_u\widetilde{D}_v - \widetilde{D}_v\widetilde{D}_u = \widetilde{D}_{\nabla_u v - \nabla_v u}, \quad u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+),
\]
which shows that (7.6.6) holds for $u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+)$. The extension to $u, v \in \mathcal{U}$ follows from (7.6.4) and (7.6.5).
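The key deterministic identity in this proof, namely that $v(T)\int_0^T u - u(T)\int_0^T v$ equals the integral of $\dot v(t)\int_0^t u - \dot u(t)\int_0^t v$, can be verified symbolically; the functions $u = t^2$ and $v = \sin t$ below are arbitrary sample choices added for illustration.

```python
import sympy as sp

# Symbolic check of the step
#   v(T) int_0^T u - u(T) int_0^T v
#     = int_0^T ( v'(t) int_0^t u - u'(t) int_0^t v ) dt,
# for sample smooth functions u = t^2 and v = sin(t).
t, s, T = sp.symbols('t s T', positive=True)
u = t**2
v = sp.sin(t)

util = sp.integrate(u.subs(t, s), (s, 0, t))   # running integral of u
vtil = sp.integrate(v.subs(t, s), (s, 0, t))   # running integral of v

lhs = v.subs(t, T) * util.subs(t, T) - u.subs(t, T) * vtil.subs(t, T)
rhs = sp.integrate(sp.diff(v, t) * util - sp.diff(u, t) * vtil, (t, 0, T))
print(sp.simplify(lhs - rhs))   # 0
```

The identity holds because both sides have the same $t$-derivative and vanish at $T = 0$.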
Similarly we show the vanishing of the associated curvature.

Proposition 7.6.4. The Riemannian curvature tensor $R$ of $\nabla$ vanishes on $\mathcal{U}$, i.e.
\[
R(u, v)h := [\nabla_u, \nabla_v]h - \nabla_{\{u,v\}}h = 0, \quad u, v, h \in \mathcal{U}.
\]
Proof. We have, letting $\tilde{u}(t) = \int_0^t u_s\,ds$, $t \in \mathbb{R}_+$:
\[
[\nabla_u, \nabla_v]h = \nabla_u\nabla_v h - \nabla_v\nabla_u h = \frac{d}{dt}\big(\dot{h}\tilde{v}\big)\,\tilde{u} - \frac{d}{dt}\big(\dot{h}\tilde{u}\big)\,\tilde{v} = (v\tilde{u} - u\tilde{v})\,\dot{h},
\]
and
\[
\nabla_{\{u,v\}}h = \nabla_{\nabla_u v - \nabla_v u}\,h = -\Big(\int_0^\cdot(\nabla_u v - \nabla_v u)_s\,ds\Big)\,\dot{h} = (v\tilde{u} - u\tilde{v})\,\dot{h},
\]
hence $R(u, v)h = 0$, $h, u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+)$. The extension of the result to $\mathcal{U}$ follows again from (7.6.4) and (7.6.5).
Clearly, the bracket $\{\cdot,\cdot\}$ is antisymmetric, i.e.:
\[
\{u, v\} = -\{v, u\}, \quad u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+).
\]
Proposition 7.6.5. The bracket $\{\cdot,\cdot\}$ satisfies the Jacobi identity
\[
\{u, \{v, w\}\} + \{w, \{u, v\}\} + \{v, \{w, u\}\} = 0, \quad u, v, w \in \mathcal{C}^\infty_c(\mathbb{R}_+),
\]
hence $\mathcal{U}$ is a Lie algebra under $\{\cdot,\cdot\}$.

Proof. The vanishing of $R(u, v)$ in Proposition 7.6.4 shows that
\[
[\nabla_u, \nabla_v] = \nabla_{\{u,v\}}, \quad u, v \in \mathcal{U},
\]
hence
\begin{align*}
\nabla_{\{u,\{v,w\}\}} + \nabla_{\{w,\{u,v\}\}} + \nabla_{\{v,\{w,u\}\}}
&= [\nabla_u, \nabla_{\{v,w\}}] + [\nabla_w, \nabla_{\{u,v\}}] + [\nabla_v, \nabla_{\{w,u\}}]\\
&= [\nabla_u, [\nabla_v, \nabla_w]] + [\nabla_w, [\nabla_u, \nabla_v]] + [\nabla_v, [\nabla_w, \nabla_u]]\\
&= 0, \quad u, v, w \in \mathcal{U}.
\end{align*}
However, $\{\cdot,\cdot\}$ does not satisfy the Leibniz identity, thus it cannot be considered as a Poisson bracket.
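Writing out the bracket explicitly, $\{u,v\}(t) = \dot u(t)\int_0^t v - \dot v(t)\int_0^t u$ (the form of $\nabla_u v - \nabla_v u$), the Jacobi identity can also be checked symbolically; the polynomials below are arbitrary sample choices added for illustration.

```python
import sympy as sp

# Symbolic check of the Jacobi identity for the explicit bracket
#   {u, v}(t) = u'(t) * int_0^t v - v'(t) * int_0^t u,
# on arbitrarily chosen polynomials u, v, w.
t, s = sp.symbols('t s')

def running(f):                     # int_0^t f(s) ds
    return sp.integrate(f.subs(t, s), (s, 0, t))

def bracket(f, g):                  # {f, g}
    return sp.expand(sp.diff(f, t) * running(g) - sp.diff(g, t) * running(f))

u, v, w = t, t**2, t**3
jac = (bracket(u, bracket(v, w)) + bracket(w, bracket(u, v))
       + bracket(v, bracket(w, u)))
print(sp.simplify(jac))   # 0
```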
The exterior derivative $\widetilde{D}u$ of a smooth vector field $u \in \mathcal{U}$ is defined from
\[
\langle \widetilde{D}u, h_1 \wedge h_2\rangle_{L^2(\mathbb{R}_+)\wedge L^2(\mathbb{R}_+)} = \langle \nabla_{h_1}u, h_2\rangle_{L^2(\mathbb{R}_+)} - \langle \nabla_{h_2}u, h_1\rangle_{L^2(\mathbb{R}_+)},
\]
$h_1, h_2 \in \mathcal{U}$, with the norm
\[
\|\widetilde{D}u\|^2_{L^2(\mathbb{R}_+)\wedge L^2(\mathbb{R}_+)} := 2\int_0^\infty\!\!\int_0^\infty \big(\widetilde{D}u(s,t)\big)^2\,ds\,dt, \tag{7.6.7}
\]
where
\[
\widetilde{D}u(s,t) = \frac{1}{2}\big(\nabla_s u_t - \nabla_t u_s\big), \quad s, t \in \mathbb{R}_+,\ u \in \mathcal{U}.
\]
The next result is analogous to Proposition 4.1.4.
Lemma 7.6.6. We have the commutation relation
\[
\widetilde{D}_u\widetilde{\delta}(v) = \widetilde{\delta}(\nabla_u v) + \langle u, v\rangle_{L^2(\mathbb{R}_+)}, \tag{7.6.8}
\]
$u, v \in \mathcal{C}^\infty_c(\mathbb{R}_+)$, between $\widetilde{D}$ and $\widetilde{\delta}$.
Proof. We have
\begin{align*}
\widetilde{D}_u\widetilde{\delta}(v) &= -\sum_{k=1}^\infty \dot{v}(T_k)\int_0^{T_k} u_s\,ds\\
&= -\widetilde{\delta}\Big(\dot{v}\int_0^\cdot u_s\,ds\Big) - \int_0^\infty \dot{v}(t)\int_0^t u_s\,ds\,dt\\
&= \widetilde{\delta}(\nabla_u v) + \langle u, v\rangle_{L^2(\mathbb{R}_+)},
\end{align*}
by (7.6.3).
As an application we obtain a Skorohod type isometry for the operator $\widetilde{\delta}$.
Proposition 7.6.7. We have for $u \in \mathcal{U}$:
\[
\mathrm{IE}\big[|\widetilde{\delta}(u)|^2\big] = \mathrm{IE}\big[\|u\|^2_{L^2(\mathbb{R}_+)}\big] + \mathrm{IE}\bigg[\int_0^\infty\!\!\int_0^\infty \nabla_s u_t\,\nabla_t u_s\,ds\,dt\bigg]. \tag{7.6.9}
\]
Proof. Given $u = \sum_{i=1}^n h_i F_i \in \mathcal{U}$ we have
\begin{align*}
\mathrm{IE}&\big[\widetilde{\delta}(h_iF_i)\,\widetilde{\delta}(h_jF_j)\big]
= \mathrm{IE}\big[F_i\,\widetilde{D}_{h_i}\widetilde{\delta}(h_jF_j)\big]\\
&= \mathrm{IE}\big[F_i\,\widetilde{D}_{h_i}\big(F_j\,\widetilde{\delta}(h_j) - \widetilde{D}_{h_j}F_j\big)\big]\\
&= \mathrm{IE}\big[F_iF_j\,\widetilde{D}_{h_i}\widetilde{\delta}(h_j) + F_i\,\widetilde{\delta}(h_j)\,\widetilde{D}_{h_i}F_j - F_i\,\widetilde{D}_{h_i}\widetilde{D}_{h_j}F_j\big]\\
&= \mathrm{IE}\big[F_iF_j\langle h_i, h_j\rangle_{L^2(\mathbb{R}_+)} + F_iF_j\,\widetilde{\delta}(\nabla_{h_i}h_j) + F_i\,\widetilde{\delta}(h_j)\,\widetilde{D}_{h_i}F_j - F_i\,\widetilde{D}_{h_i}\widetilde{D}_{h_j}F_j\big]\\
&= \mathrm{IE}\big[F_iF_j\langle h_i, h_j\rangle_{L^2(\mathbb{R}_+)} + \widetilde{D}_{\nabla_{h_i}h_j}(F_iF_j) + \widetilde{D}_{h_j}\big(F_i\,\widetilde{D}_{h_i}F_j\big) - F_i\,\widetilde{D}_{h_i}\widetilde{D}_{h_j}F_j\big]\\
&= \mathrm{IE}\big[F_iF_j\langle h_i, h_j\rangle_{L^2(\mathbb{R}_+)} + \widetilde{D}_{\nabla_{h_i}h_j}(F_iF_j) + \widetilde{D}_{h_j}F_i\,\widetilde{D}_{h_i}F_j + F_i\big(\widetilde{D}_{h_j}\widetilde{D}_{h_i}F_j - \widetilde{D}_{h_i}\widetilde{D}_{h_j}F_j\big)\big]\\
&= \mathrm{IE}\big[F_iF_j\langle h_i, h_j\rangle_{L^2(\mathbb{R}_+)} + \widetilde{D}_{\nabla_{h_i}h_j}(F_iF_j) + \widetilde{D}_{h_j}F_i\,\widetilde{D}_{h_i}F_j + F_i\,\widetilde{D}_{\nabla_{h_j}h_i - \nabla_{h_i}h_j}F_j\big]\\
&= \mathrm{IE}\big[F_iF_j\langle h_i, h_j\rangle_{L^2(\mathbb{R}_+)} + F_j\,\widetilde{D}_{\nabla_{h_i}h_j}F_i + F_i\,\widetilde{D}_{\nabla_{h_j}h_i}F_j + \widetilde{D}_{h_j}F_i\,\widetilde{D}_{h_i}F_j\big]\\
&= \mathrm{IE}\bigg[F_iF_j\langle h_i, h_j\rangle_{L^2(\mathbb{R}_+)} + F_j\int_0^\infty \widetilde{D}_sF_i\int_0^\infty \nabla_t h_j(s)\,h_i(t)\,dt\,ds\\
&\qquad + F_i\int_0^\infty \widetilde{D}_tF_j\int_0^\infty \nabla_s h_i(t)\,h_j(s)\,ds\,dt + \int_0^\infty h_i(t)\,\widetilde{D}_tF_j\,dt\int_0^\infty h_j(s)\,\widetilde{D}_sF_i\,ds\bigg],
\end{align*}
where we used the commutation relation (7.6.8).
Proposition 7.6.7 is a version of the Skorohod isometry for the operator $\widetilde{\delta}$, and it differs from Propositions 4.3.1 and 6.5.4, which apply to finite difference operators on the Poisson space.
Finally we state a Weitzenböck type identity on configuration space under the form of the commutation relation
\[
\widetilde{D}\,\widetilde{\delta} + \widetilde{\delta}\,\widetilde{D} = \nabla^*\nabla + \mathrm{Id}_{L^2(\mathbb{R}_+)},
\]
i.e. the Ricci tensor under the Poisson measure is the identity $\mathrm{Id}_{L^2(\mathbb{R}_+)}$ on $L^2(\mathbb{R}_+)$, by comparison with (7.6.1).
Theorem 7.6.8. We have for $u \in \mathcal{U}$:
\[
\mathrm{IE}\big[|\widetilde{\delta}(u)|^2\big] + \mathrm{IE}\big[\|\widetilde{D}u\|^2_{L^2(\mathbb{R}_+)\wedge L^2(\mathbb{R}_+)}\big]
= \mathrm{IE}\big[\|u\|^2_{L^2(\mathbb{R}_+)}\big] + \mathrm{IE}\big[\|\nabla u\|^2_{L^2(\mathbb{R}_+)\otimes L^2(\mathbb{R}_+)}\big]. \tag{7.6.10}
\]
Proof. Relation (7.6.10) for $u = \sum_{i=1}^n h_i F_i \in \mathcal{U}$ follows from Relation (7.6.7) and Proposition 7.6.7.
7.7 Chaos Interpretation of Time Changes
In this section we study the Poisson probabilistic interpretation of the operators introduced in Section 4.8. We refer to Section 5.8 for their interpretation on the Wiener space. We now prove that $\nabla + D$ is identified with the operator $\widetilde{D}$ under the Poisson identification of the Fock space and $L^2(B)$.
Lemma 7.7.1. On the Poisson space, $\nabla$ satisfies the relation
\[
\nabla_t(FG) = F\,\nabla_t G + G\,\nabla_t F - D_tF\,D_tG, \quad t \in \mathbb{R}_+,\ F, G \in \mathcal{S}. \tag{7.7.1}
\]
Proof. We will use the multiplication formula for multiple Poisson stochastic integrals of Proposition 6.2.5:
\[
I_n(f^{\circ n})I_1(g) = I_{n+1}(f^{\circ n}\circ g) + n\langle f, g\rangle I_{n-1}(f^{\circ(n-1)}) + nI_n\big((fg)\circ f^{\circ(n-1)}\big),
\]
$f, g \in L^4(\mathbb{R}_+)$. We first show that
\[
\nabla_t\big(I_n(f^{\circ n})I_1(g)\big) = I_n(f^{\circ n})\,\nabla_tI_1(g) + I_1(g)\,\nabla_tI_n(f^{\circ n}) - D_tI_1(g)\,D_tI_n(f^{\circ n}),
\]
$t \in \mathbb{R}_+$, when $f, g \in \mathcal{C}^1_c(\mathbb{R}_+)$ and $\langle f, f\rangle_{L^2(\mathbb{R}_+)} = 1$. Indeed, we have
\begin{align*}
I_n&(f^{\circ n})\,\nabla_tI_1(g) + I_1(g)\,\nabla_tI_n(f^{\circ n})\\
&= -I_n(f^{\circ n})\,I_1\big(g'\mathbf{1}_{[t,\infty)}\big) - nI_1(g)\,I_n\big((f'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-1)}\big)\\
&= -n\Big(I_{n+1}\big((f'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-1)}\circ g\big) + (n-1)I_n\big((fg)\circ(f'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-2)}\big)\\
&\qquad + I_n\big((gf'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-1)}\big) + \langle f'\mathbf{1}_{[t,\infty)}, g\rangle_{L^2(\mathbb{R}_+)}I_{n-1}(f^{\circ(n-1)})\\
&\qquad + (n-1)\langle f, g\rangle_{L^2(\mathbb{R}_+)}I_{n-1}\big((f'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-2)}\big)\Big)\\
&\quad - I_{n+1}\big((g'\mathbf{1}_{[t,\infty)})\circ f^{\circ n}\big) - nI_n\big((g'\mathbf{1}_{[t,\infty)}f)\circ f^{\circ(n-1)}\big) - n\langle g'\mathbf{1}_{[t,\infty)}, f\rangle_{L^2(\mathbb{R}_+)}I_{n-1}(f^{\circ(n-1)})\\
&= -nI_{n+1}\big((f'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-1)}\circ g\big) - I_{n+1}\big((g'\mathbf{1}_{[t,\infty)})\circ f^{\circ n}\big)\\
&\quad - n(n-1)I_n\big((f'\mathbf{1}_{[t,\infty)})\circ(fg)\circ f^{\circ(n-2)}\big)\\
&\quad - nI_n\big((gf'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-1)}\big) - nI_n\big((fg'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-1)}\big)\\
&\quad + nf(t)g(t)I_{n-1}(f^{\circ(n-1)}) - n(n-1)\langle f, g\rangle_{L^2(\mathbb{R}_+)}I_{n-1}\big((f'\mathbf{1}_{[t,\infty)})\circ f^{\circ(n-2)}\big)\\
&= \nabla_t\Big(I_{n+1}(f^{\circ n}\circ g) + nI_n\big(f^{\circ(n-1)}\circ(fg)\big) + n\langle f, g\rangle_{L^2(\mathbb{R}_+)}I_{n-1}(f^{\circ(n-1)})\Big) + nf(t)g(t)I_{n-1}(f^{\circ(n-1)})\\
&= \nabla_t\big(I_n(f^{\circ n})I_1(g)\big) + D_tI_1(g)\,D_tI_n(f^{\circ n}), \quad f, g \in \mathcal{C}^1_c(\mathbb{R}_+).
\end{align*}
We now make use of the multiplication formula for Poisson stochastic integrals to prove the result on $\mathcal{S}$ by induction. Assume that (7.7.1) holds for $F = I_n(f^{\circ n})$ and $G = I_1(g)^k$ for some $k \ge 1$. Then, using the product rule Proposition 4.5.2 or Proposition 6.4.8 for the operator $D_t$, we have
\begin{align*}
\nabla_t&\big(I_n(f^{\circ n})I_1(g)^{k+1}\big)\\
&= I_1(g)\,\nabla_t\big(I_n(f^{\circ n})I_1(g)^k\big) + I_n(f^{\circ n})I_1(g)^k\,\nabla_tI_1(g) - D_tI_1(g)\,D_t\big(I_1(g)^kI_n(f^{\circ n})\big)\\
&= I_1(g)\Big(I_1(g)^k\,\nabla_tI_n(f^{\circ n}) + I_n(f^{\circ n})\,\nabla_t\big(I_1(g)^k\big) - D_t\big(I_1(g)^k\big)\,D_tI_n(f^{\circ n})\Big)\\
&\quad + I_n(f^{\circ n})I_1(g)^k\,\nabla_tI_1(g) - D_tI_1(g)\Big(I_1(g)^k\,D_tI_n(f^{\circ n}) + I_n(f^{\circ n})\,D_t\big(I_1(g)^k\big)\Big)\\
&\quad - D_tI_1(g)\,D_tI_1(g)^k\,D_tI_n(f^{\circ n})\\
&= I_1(g)^{k+1}\,\nabla_tI_n(f^{\circ n}) + I_n(f^{\circ n})\,\nabla_t\big(I_1(g)^{k+1}\big) - D_t\big(I_1(g)^{k+1}\big)\,D_tI_n(f^{\circ n}),
\end{align*}
$t \in \mathbb{R}_+$.
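The $n = 1$ case of the multiplication formula used above holds pathwise, and can be checked on a simulated Poisson path. The sketch below (an added illustration) uses the convention $I_2(h) = \sum_{j\neq k} h(T_j,T_k) - 2\sum_j \int h(T_j,t)\,dt + \int\!\!\int h$ for symmetric $h$, and the arbitrary choices $f = \mathbf{1}_{[0,2)}$, $g = \mathbf{1}_{[1,3)}$.

```python
import numpy as np

# Pathwise check of the n = 1 multiplication formula
#   I_1(f) I_1(g) = I_2(f o g) + <f, g> + I_1(fg),
# with f = 1_[0,2), g = 1_[1,3) and one simulated Poisson path on [0, 10).
rng = np.random.default_rng(2)
T = np.cumsum(rng.exponential(1.0, 50))
T = T[T < 10.0]                                   # jump times in [0, 10)

fT = (T >= 0.0) & (T < 2.0)                       # f evaluated at the jumps
gT = (T >= 1.0) & (T < 3.0)
int_f, int_g, int_fg, inner_fg = 2.0, 2.0, 1.0, 1.0

I1_f = fT.sum() - int_f                           # I_1(h) = sum_k h(T_k) - int h
I1_g = gT.sum() - int_g
I1_fg = (fT & gT).sum() - int_fg
# I_2(f o g) = sum_{j != k} f(T_j) g(T_k)
#              - sum_j ( f(T_j) int g + g(T_j) int f ) + int f * int g
I2 = (fT.sum() * gT.sum() - (fT & gT).sum()
      - (fT.sum() * int_g + gT.sum() * int_f) + int_f * int_g)

lhs = I1_f * I1_g
rhs = I2 + inner_fg + I1_fg
print(lhs, rhs)   # equal pathwise
```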
Proposition 7.7.2. We have the identity
\[
\widetilde{D} = D + \nabla
\]
on the space $\mathcal{S}$.
Proof. Lemma 7.7.1 shows that $\nabla + D$ is a derivation operator, since
\begin{align*}
(\nabla_t + D_t)(FG) &= \nabla_t(FG) + D_t(FG)\\
&= F\,\nabla_tG + G\,\nabla_tF - D_tF\,D_tG + D_t(FG)\\
&= F(\nabla_t + D_t)G + G(\nabla_t + D_t)F, \quad F, G \in \mathcal{S}.
\end{align*}
Thus it is sufficient to show that
\[
(D_t + \nabla_t)f(T_k) = \widetilde{D}_tf(T_k), \quad k \ge 1,\ f \in \mathcal{C}^1_b(\mathbb{R}). \tag{7.7.2}
\]
Letting $\pi_{[t}$ denote the projection
\[
\pi_{[t}f = f\mathbf{1}_{[t,\infty)}, \quad f \in L^2(\mathbb{R}_+),
\]
we have
\begin{align*}
(D_t + \nabla_t)f(T_k) &= (D_t + \nabla_t)\sum_{n\in\mathbb{N}} \frac{1}{n!}I_n(f^k_n)\\
&= \sum_{n\ge 1}\frac{1}{(n-1)!}I_{n-1}\big(f^k_n(*, t)\big) - \sum_{n\ge 1}\frac{1}{(n-1)!}I_n\big((\pi_{[t}\otimes\mathrm{Id}^{\otimes(n-1)})\,\partial_1 f^k_n\big)\\
&= \sum_{n\in\mathbb{N}}\frac{1}{n!}I_n\Big(f^k_{n+1}(*, t) - n(\pi_{[t}\otimes\mathrm{Id}^{\otimes(n-1)})\,\partial_1 f^k_n\Big),
\end{align*}
where $\mathrm{Id} : L^2(\mathbb{R}_+) \to L^2(\mathbb{R}_+)$ is the identity operator. Now,
\begin{align*}
f^k_{n+1}&(t, t_1, \dots, t_n) - n\big(\pi_{[t}\otimes\mathrm{Id}^{\otimes(n-1)}\big)\,\partial_1 f^k_n(t_1, \dots, t_n)\\
&= \alpha^k_{n+1}(f)(t_1\vee\cdots\vee t_n\vee t) - \mathbf{1}_{\{t < t_1\vee\cdots\vee t_n\}}\big(\alpha^k_n(f') + \alpha^k_{n+1}(f)\big)(t_1\vee\cdots\vee t_n)\\
&= \alpha^k_{n+1}(f)(t)\,\mathbf{1}_{\{t_1\vee\cdots\vee t_n < t\}} - \alpha^k_n(f')(t_1\vee\cdots\vee t_n)\,\mathbf{1}_{\{t_1\vee\cdots\vee t_n > t\}}\\
&= -\alpha^k_n\big(f'\mathbf{1}_{[t,\infty)}\big)(t_1\vee\cdots\vee t_n),
\end{align*}
which coincides with the $n$-th term in the chaos expansion of $-\mathbf{1}_{[0,T_k]}(t)f'(T_k)$ by Proposition 7.4.3, $k \in \mathbb{N}$, $n \ge 1$. Hence Relation (7.7.2) holds and we have $D + \nabla = \widetilde{D}$.
Since both $\delta$ and $\widetilde{\delta} = \delta + \nabla^*$ coincide with the Itô integral on adapted processes, it follows that $\nabla^*$ vanishes on adapted processes. By duality this implies that the adapted projection of $\nabla$ is zero, hence by Proposition 7.7.2, $\widetilde{D}$ is written as a perturbation of $D$ by a gradient process with vanishing adapted projection.
7.8 Notes and References
The notion of lifting of the differential geometry on a Riemannian manifold $X$ to a differential geometry on $\Omega^X$ has been introduced in [3], and the integration by parts formula (7.1.2) has been obtained therein, cf. also [16]. In Corollary 7.1.7, our pointwise lifting of gradients allows us to recover Theorem 5-2 of [3], page 489, as a particular case by taking expectations in Relation (7.1.5). See [20], [93], [106], for the locality of $\widetilde{D}$ and $\widetilde{\delta}$. See [2] and [30] for other approaches to the Weitzenböck formula on configuration spaces under Poisson measures. The proof of Proposition 7.6.7 is based on an argument of [43] for path spaces over Lie groups. The gradient $\widetilde{D}$ is called damped in reference to [44], cf. Section 5.7. The gradient $\widetilde{D}$ of Definition 7.2.1 is a modification of the gradient introduced in [23], see also [36]. However, the integration by parts formula of [23] deals with processes of zero integral only, as in (7.3.6). A different version of the gradient $\widetilde{D}$, which solves the closability issue mentioned at the end of Section 7.3, has been used for sensitivity analysis in [71], [117], [118]. The combined use of $D^n$ and $\widetilde{D}$ for the computation of the chaos expansion of the jump time $T_d$, $d \ge 1$, and the Clark representation formula for $\widetilde{D}$ can be found in [102]. The construction of $\widetilde{D}$ and $D$ can also be extended to arbitrary Poisson processes with adapted intensities, cf. [32], [104], [105].