
BSDEs with polynomial growth generators

Philippe Briand
IRMAR, Université Rennes 1, 35 042 Rennes Cedex, FRANCE
pbriand@maths.univ-rennes1.fr

René Carmona
Statistics & Operations Research, Princeton University, Princeton NJ 08544, USA
rcarmona@princeton.edu

July 22, 1998; revised January 9, 1999

Abstract

In this paper, we give existence and uniqueness results for backward stochastic differential equations when the generator has polynomial growth in the state variable. We deal with the case of a fixed terminal time as well as the case of a random terminal time. The need for this type of extension of the classical existence and uniqueness results comes from the desire to provide a probabilistic representation of the solutions of semilinear partial differential equations in the spirit of a nonlinear Feynman-Kac formula. Indeed, in many applications of interest, the nonlinearity is polynomial, see e.g. the Allen-Cahn equation or the standard nonlinear heat and Schrödinger equations.

1 Introduction

It is by now well known that there exists a unique, adapted and square integrable solution to a backward stochastic differential equation (BSDE for short) of the type
\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T,
\]
provided that the generator is Lipschitz in both the variables $y$ and $z$. We refer to the original work of E. Pardoux and S. Peng [14, 15] for the general theory and to N. El Karoui, S. Peng and M.-C. Quenez [6] for a survey of the applications of this theory in finance. Since the first existence and uniqueness result established by E. Pardoux and S. Peng in 1990, many works, including R. W. R. Darling, E. Pardoux [5], S. Hamadene [8], M. Kobylanski [9], J.-P. Lepeltier, J. San Martin [10, 11] (see also the references therein), have tried to weaken the Lipschitz assumption on the generator. Most of these works deal only with real-valued BSDEs [8, 9, 10, 11] because of their dependence on the use of the comparison theorem for BSDEs (see e.g. N. El Karoui, S. Peng, M.-C. Quenez [6, Theorem 2.2]). Furthermore, except in [11], the generator is always assumed to be at most linear in the state variable. Let us nevertheless mention one exception: in [11], J.-P. Lepeltier and J. San Martin accommodate a growth of the generator of the following type: $C(1 + |x|\log|x|)$, $C(1 + |x|\log\log|x|)$, and so on.

On the other hand, one of the most promising fields of application for the theory of BSDEs is the analysis of elliptic and parabolic partial differential equations (PDEs for short), and we refer to E. Pardoux [12] for a survey of their relationships. Indeed, as was revealed by S. Peng [18] and by E. Pardoux, S. Peng [15] (see also the contributions of G. Barles, R. Buckdahn, E. Pardoux [1], Ph. Briand [3], E. Pardoux, F. Pradeilles, Z. Rao [16], E. Pardoux, S. Zhang [17], among others), BSDEs provide a probabilistic representation of solutions (viscosity solutions in the most general case) of semilinear PDEs. This provides a generalization to the nonlinear case of the well-known Feynman-Kac formula. In many examples of semilinear PDEs, the nonlinearity is not of linear growth (as implied by a global Lipschitz condition) but instead is of polynomial growth, see e.g. the nonlinear heat equation analyzed by M. Escobedo, O. Kavian and H. Matano in [7], or the Allen-Cahn equation (G. Barles, H. M. Soner, P. E. Souganidis [2]). If one attempts to study these semilinear PDEs by means of the nonlinear version of the Feynman-Kac formula alluded to above, one has to deal with BSDEs whose generators have nonlinear (though polynomial) growth. Unfortunately, existence and uniqueness results for the solutions of BSDEs of this type were not available when we first started this investigation, and filling this gap in the literature was at the origin of this paper.

In order to overcome the difficulties introduced by the polynomial growth of the generator, we assume that the generator satisfies a kind of monotonicity condition in the state variable. This condition is very useful in the study of BSDEs with random terminal time. See the works of S. Peng [18], R. W. R. Darling, E. Pardoux [5], Ph. Briand, Y. Hu [4] for attempts in the spirit of our investigation. Even though it looks rather technical at first, it is especially natural in our context: indeed, it is plain to check that it is satisfied in all the examples of semilinear PDEs quoted above.

The rest of the paper is organized as follows. In the next section, we fix some notation, we state our main assumptions and we prove a technical proposition which will be needed in the sequel. In section 3, we deal with the case of BSDEs with fixed terminal time: we prove an existence and uniqueness result and we establish some a priori estimates for the solutions of BSDEs in this context. In section 4, we consider the case of BSDEs with random terminal times. BSDEs with random terminal times play a crucial role in the analysis of the solutions of elliptic semilinear PDEs. They were first introduced by S. Peng [18] and then studied in a more general framework by R. W. R. Darling, E. Pardoux [5]. These equations are also considered in [12].

Acknowledgments. The first named author would like to thank the Statistics & Operations Research Program of Princeton University for its warm hospitality.

Note added in proof: During the completion of this manuscript we learned that E. Pardoux solved a similar problem in a somewhat more general set-up. We thank him for providing us with a copy of [13].
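As a worked illustration of the monotonicity condition invoked above (the cubic example is spelled out here for concreteness; it is the nonlinearity of the Allen-Cahn equation mentioned in the abstract), consider $f(y) = y - y^3$ for real $y$. It is not globally Lipschitz, but it is monotone with polynomial growth:

```latex
% For f(y) = y - y^3 and all y, y' in R:
%   (y - y') ( f(y) - f(y') ) = (y - y')^2 ( 1 - (y^2 + y y' + y'^2) )
% and y^2 + y y' + y'^2 = (y + y'/2)^2 + 3 y'^2 / 4 >= 0, hence
\[
(y - y')\big(f(y) - f(y')\big)
  = (y - y')^2\Big(1 - \big(y^2 + y y' + y'^2\big)\Big)
  \le |y - y'|^2,
\]
% while |f(y)| <= |f(0)| + C ( 1 + |y|^3 ) with C = 1,
% i.e. a monotonicity condition together with growth of degree p = 3.
```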

2 Preliminaries

2.1 Notation and Assumptions

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space carrying a $d$-dimensional Brownian motion $(W_t)_{t\ge 0}$, and let $(\mathcal{F}_t)_{t\ge 0}$ be the filtration generated by $(W_t)_{t\ge 0}$. As usual, we assume that each $\sigma$-field $\mathcal{F}_t$ has been augmented with the $\mathbb{P}$-null sets to make sure that $(\mathcal{F}_t)_{t\ge 0}$ is right continuous and complete. For $y \in \mathbb{R}^k$, we denote by $|y|$ its Euclidean norm, and if $z$ belongs to $\mathbb{R}^{k\times d}$, $\|z\|$ denotes $(\mathrm{tr}(zz^*))^{1/2}$. For $q > 1$, we define the following spaces of processes:

\[
\mathcal{S}^q = \Big\{ \psi \ \text{progressively measurable};\ \psi_t \in \mathbb{R}^k;\ \|\psi\|_{\mathcal{S}^q}^q := \mathbb{E}\Big[\sup_{0\le t\le T} |\psi_t|^q\Big] < \infty \Big\},
\]
\[
\mathcal{H}^q = \Big\{ \psi \ \text{progressively measurable};\ \psi_t \in \mathbb{R}^{k\times d};\ \|\psi\|_{\mathcal{H}^q}^q := \mathbb{E}\Big[\Big(\int_0^T \|\psi_t\|^2\,dt\Big)^{q/2}\Big] < \infty \Big\},
\]
and we consider the Banach space $\mathcal{B}^q = \mathcal{S}^q \times \mathcal{H}^q$ endowed with the norm
\[
\|(Y,Z)\|_{\mathcal{B}^q}^q = \mathbb{E}\Big[\sup_{0\le t\le T} |Y_t|^q\Big] + \mathbb{E}\Big[\Big(\int_0^T \|Z_t\|^2\,dt\Big)^{q/2}\Big].
\]
We now introduce the generator of our BSDEs. We assume that $f$ is a function defined on $\Omega\times[0,T]\times\mathbb{R}^k\times\mathbb{R}^{k\times d}$, with values in $\mathbb{R}^k$, in such a way that the process $(f(t,y,z))_{t\in[0,T]}$ is progressively measurable for each $(y,z)$ in $\mathbb{R}^k\times\mathbb{R}^{k\times d}$. Furthermore, we make the following assumption.

(A 1). There exist constants $\gamma \ge 0$, $\mu \in \mathbb{R}$, $C \ge 0$ and $p > 1$ such that $\mathbb{P}$-a.s., we have:

1. $\forall t,\ \forall y,\ \forall (z,z')$, $\big| f(t,y,z) - f(t,y,z') \big| \le \gamma\, \|z - z'\|$;
2. $\forall t,\ \forall z,\ \forall (y,y')$, $(y - y')\cdot\big( f(t,y,z) - f(t,y',z) \big) \le -\mu\, |y - y'|^2$;
3. $\forall t,\ \forall y,\ \forall z$, $\big| f(t,y,z) \big| \le \big| f(t,0,z) \big| + C\big(1 + |y|^p\big)$;
4. $\forall t,\ \forall z$, $y \longmapsto f(t,y,z)$ is continuous.

We refer to the condition (A 1).2 as a monotonicity condition. Our goal is to study the BSDE
\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T, \tag{1}
\]
when the generator $f$ satisfies the above assumption. In the classical case $p = 1$, the terminal condition $\xi$ and the process $(f(t,0,0))_{t\in[0,T]}$ are assumed to be square integrable. In the nonlinear case $p > 1$, we need stronger integrability conditions on both $\xi$ and $(f(t,0,0))_{t\in[0,T]}$. We suppose that:

(A 2). $\xi$ is an $\mathcal{F}_T$-measurable random variable with values in $\mathbb{R}^k$ such that
\[
\mathbb{E}\big[ |\xi|^{2p} \big] + \mathbb{E}\Big[ \Big( \int_0^T \big| f(s,0,0) \big|^2\,ds \Big)^{p} \Big] < \infty.
\]

Remark. We consider here only the case $p > 1$ since the case $p = 1$ is treated in the works of R. W. R. Darling, E. Pardoux [5] and E. Pardoux [12].
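For later use, we record an elementary consequence of the Lipschitz and monotonicity conditions, obtained by combining them with $2ab \le a^2/\varepsilon + \varepsilon b^2$ (a worked step spelled out here; the resulting inequality is the one used repeatedly in the proofs below and again in section 4):

```latex
% For all t, y, y', z, z' and every eps > 0, splitting the increment
% into a y-increment and a z-increment:
%   2 (y-y') . ( f(t,y,z) - f(t,y',z') )
%     = 2 (y-y') . ( f(t,y,z) - f(t,y',z) )        <= -2 mu |y-y'|^2
%     + 2 (y-y') . ( f(t,y',z) - f(t,y',z') )      <=  2 gamma |y-y'| ||z-z'||
% and then 2 gamma |y-y'| ||z-z'|| <= (gamma^2/eps) |y-y'|^2 + eps ||z-z'||^2,
% so that
\[
2\,(y-y')\cdot\big(f(t,y,z)-f(t,y',z')\big)
  \le \Big(-2\mu + \frac{\gamma^2}{\varepsilon}\Big)\,|y-y'|^2
      + \varepsilon\,\|z-z'\|^2 .
\]
```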

2.2 A First a priori Estimate

We end these preliminaries by establishing an a priori estimate for BSDEs in the case where $\xi$ and $f(t,0,0)$ are bounded. The following proposition is a mere generalization of a result of S. Peng [19, Theorem 2.2], who proved the same result under a stronger assumption on $f$, namely:
\[
\forall t, y, z, \qquad \big| f(t,y,z) \big| \le \alpha + \mu\,|y| + \gamma\,\|z\|.
\]
Our contribution is merely to remark that his proof requires only an estimate of $y\cdot f(t,y,z)$, and thus that the result should still hold true in our context. We include a proof for the sake of completeness.

Proposition 2.1 Let $(Y_t, Z_t)_{t\in[0,T]} \in \mathcal{B}^2$ be a solution of the BSDE (1). Let us assume moreover that, for each $t$, $y$, $z$,
\[
y\cdot f(t,y,z) \le \alpha\,|y| + \mu\,|y|^2 + \gamma\,|y|\,\|z\|, \qquad \text{and} \qquad |\xi| \le \beta.
\]
Then, for each $\varepsilon > 0$, we have, setting $\nu = \varepsilon + 2\mu + \gamma^2$ if $\varepsilon + 2\mu + \gamma^2 > 0$, and $\nu = 1$ otherwise,
\[
\sup_{0\le t\le T} |Y_t|^2 \le \beta^2\, e^{\nu T} + \frac{\alpha^2}{\varepsilon\,\nu}\,\big( e^{\nu T} - 1 \big).
\]

Proof. Let us fix $t \in [0,T]$; $\nu$ will be chosen later in the proof. Applying Itô's formula to $e^{\nu(s-t)} |Y_s|^2$ between $t$ and $T$, we obtain:
\[
|Y_t|^2 + \int_t^T e^{\nu(s-t)}\big( \nu\,|Y_s|^2 + \|Z_s\|^2 \big)\,ds = |\xi|^2\, e^{\nu(T-t)} + 2 \int_t^T e^{\nu(s-t)}\, Y_s\cdot f(s, Y_s, Z_s)\,ds - M_t,
\]
provided we write $M_t$ for $2\int_t^T e^{\nu(s-t)}\, Y_s\cdot Z_s\,dW_s$. Using the assumption on $(\xi, f)$, it follows that:
\[
|Y_t|^2 + \int_t^T e^{\nu(s-t)}\big( \nu\,|Y_s|^2 + \|Z_s\|^2 \big)\,ds \le \beta^2\, e^{\nu T} + 2 \int_t^T e^{\nu(s-t)}\big( \alpha\,|Y_s| + \mu\,|Y_s|^2 + \gamma\,|Y_s|\,\|Z_s\| \big)\,ds - M_t.
\]
Using the inequality $2ab \le \frac{a^2}{\varepsilon} + \varepsilon\, b^2$, we obtain, for any $\varepsilon > 0$,
\[
|Y_t|^2 + \int_t^T e^{\nu(s-t)}\big( \nu\,|Y_s|^2 + \|Z_s\|^2 \big)\,ds \le \beta^2\, e^{\nu T} + \int_t^T e^{\nu(s-t)}\Big( \frac{\alpha^2}{\varepsilon} + \big( \varepsilon + 2\mu + \gamma^2 \big)|Y_s|^2 \Big)\,ds + \int_t^T e^{\nu(s-t)}\, \|Z_s\|^2\,ds - M_t,
\]
and choosing $\nu = \varepsilon + 2\mu + \gamma^2$ yields the inequality
\[
|Y_t|^2 \le \beta^2\, e^{\nu T} + \frac{\alpha^2}{\varepsilon\,\nu}\big( e^{\nu T} - 1 \big) - M_t.
\]
Taking the conditional expectation with respect to $\mathcal{F}_t$ of both sides, we get immediately that:
\[
\forall t \in [0,T], \qquad |Y_t|^2 \le \beta^2\, e^{\nu T} + \frac{\alpha^2}{\varepsilon\,\nu}\big( e^{\nu T} - 1 \big),
\]
which completes the proof. $\Box$
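To see how Proposition 2.1 is typically used, here is a worked special case (the particular choice of constants is ours, made for illustration; a bound of exactly this shape is the one invoked in the existence proof of section 3): suppose $y\cdot f(t,y,z) \le \alpha|y|$, that is $\mu = \gamma = 0$, and $|\xi| \le \beta$.

```latex
% With mu = gamma = 0, choose eps = 1/T, so that nu = eps = 1/T and
% nu T = 1. Proposition 2.1 then gives
%   sup_t |Y_t|^2 <= beta^2 e + (alpha^2 / (eps nu)) (e - 1)
%                  = beta^2 e + alpha^2 T^2 (e - 1),
% hence the clean bound
\[
\sup_{0\le t\le T} |Y_t|^2
  \le \beta^2 e + \alpha^2 T^2 (e - 1)
  \le e\,\big( \beta^2 + \alpha^2 T^2 \big),
\qquad\text{i.e.}\qquad
\sup_{0\le t\le T} |Y_t| \le e^{1/2}\,\big( \beta^2 + \alpha^2 T^2 \big)^{1/2}.
\]
```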

3 BSDEs with Fixed Terminal Times


The goal of this section is to study the BSDE (1) for a fixed (deterministic) terminal time $T$ under the assumptions (A 1) and (A 2). We first prove uniqueness, then we prove an a priori estimate, and finally we turn to existence.

3.1 Uniqueness and a priori Estimates

This subsection is devoted to the proof of uniqueness and to the study of the integrability properties of the solutions of the BSDE (1).

Theorem 3.1 If (A 1).1-2 hold, the BSDE (1) has at most one solution in the space $\mathcal{B}^2$.

Proof. Suppose that we have two solutions in the space $\mathcal{B}^2$, say $(Y^1, Z^1)$ and $(Y^2, Z^2)$. Setting $\delta Y \equiv Y^1 - Y^2$ and $\delta Z \equiv Z^1 - Z^2$ for notational convenience, for each real number $\theta$ and for each $t \in [0,T]$, taking expectations in Itô's formula gives:
\[
\mathbb{E}\Big[ e^{\theta t}\,|\delta Y_t|^2 + \int_t^T e^{\theta s}\,\|\delta Z_s\|^2\,ds \Big] = \mathbb{E}\Big[ \int_t^T e^{\theta s}\Big( 2\,\delta Y_s\cdot\big( f(s, Y_s^1, Z_s^1) - f(s, Y_s^2, Z_s^2) \big) - \theta\,|\delta Y_s|^2 \Big)\,ds \Big].
\]
The vanishing of the expectation of the stochastic integral is easily justified in view of Burkholder's inequality. Using the monotonicity of $f$ and the Lipschitz assumption, we get:
\[
\mathbb{E}\Big[ e^{\theta t}\,|\delta Y_t|^2 + \int_t^T e^{\theta s}\,\|\delta Z_s\|^2\,ds \Big] \le \mathbb{E}\Big[ \int_t^T e^{\theta s}\Big( 2\gamma\,|\delta Y_s|\,\|\delta Z_s\| - (\theta + 2\mu)\,|\delta Y_s|^2 \Big)\,ds \Big].
\]
Hence, using $2\gamma ab \le 2\gamma^2 a^2 + b^2/2$, we see that
\[
\mathbb{E}\Big[ e^{\theta t}\,|\delta Y_t|^2 + \frac{1}{2}\int_t^T e^{\theta s}\,\|\delta Z_s\|^2\,ds \Big] \le \big( 2\gamma^2 - 2\mu - \theta \big)\,\mathbb{E}\Big[ \int_t^T e^{\theta s}\,|\delta Y_s|^2\,ds \Big].
\]
We conclude the proof of uniqueness by choosing $\theta = 2\gamma^2 - 2\mu + 1$. $\Box$

We close this section with the derivation of some a priori estimates in the space $\mathcal{B}^{2p}$. These estimates give short proofs of existence and uniqueness in the Lipschitz context. They were introduced in an "$L^p$ framework" by N. El Karoui, S. Peng, M.-C. Quenez [6] to treat the case of Lipschitz generators.

Proposition 3.2 For $i = 1, 2$, we let $(Y^i, Z^i) \in \mathcal{B}^{2p}$ be a solution of the BSDE
\[
Y_t^i = \xi^i + \int_t^T f^i(s, Y_s^i, Z_s^i)\,ds - \int_t^T Z_s^i\,dW_s, \qquad 0 \le t \le T,
\]
where $(\xi^i, f^i)$ satisfies the assumptions (A 1) and (A 2) with constants $\gamma_i$, $\mu_i$ and $C_i$. Let $\varepsilon$ be such that $0 < \varepsilon < 1$, and let $\lambda \ge (\gamma_1)^2/\varepsilon - 2\mu_1$. Then there exists a constant $K_p^\varepsilon$, which depends only on $p$ and on $\varepsilon$, such that:
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} + \Big( \int_0^T e^{\lambda t}\,\|\delta Z_t\|^2\,dt \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big],
\]
where $\delta\xi = \xi^1 - \xi^2$, $\delta Y \equiv Y^1 - Y^2$, $\delta Z \equiv Z^1 - Z^2$ and $\delta f \equiv f^1(\cdot, Y^2, Z^2) - f^2(\cdot, Y^2, Z^2)$. Moreover, if $\lambda > (\gamma_1)^2/\varepsilon - 2\mu_1$, we have also, setting $\rho = \lambda - (\gamma_1)^2/\varepsilon + 2\mu_1$,
\[
\rho^p\,\mathbb{E}\Big[ \Big( \int_0^T e^{\lambda t}\,|\delta Y_t|^2\,dt \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big].
\]

Proof. As usual, we start with Itô's formula to see that:
\[
e^{\lambda t}\,|\delta Y_t|^2 + \int_t^T e^{\lambda s}\big( \lambda\,|\delta Y_s|^2 + \|\delta Z_s\|^2 \big)\,ds = e^{\lambda T}\,|\delta\xi|^2 + 2\int_t^T e^{\lambda s}\,\delta Y_s\cdot\big( f^1(s, Y_s^1, Z_s^1) - f^2(s, Y_s^2, Z_s^2) \big)\,ds - M_t,
\]
where we set $M_t = 2\int_t^T e^{\lambda s}\,\delta Y_s\cdot\delta Z_s\,dW_s$ for each $t \in [0,T]$. In order to use the monotonicity of $f^1$ and the Lipschitz assumption on $f^1$, we split one term into three parts; precisely, we write:
\[
\delta Y_s\cdot\big( f^1(s, Y_s^1, Z_s^1) - f^2(s, Y_s^2, Z_s^2) \big) = \delta Y_s\cdot\big( f^1(s, Y_s^1, Z_s^1) - f^1(s, Y_s^2, Z_s^1) \big) + \delta Y_s\cdot\big( f^1(s, Y_s^2, Z_s^1) - f^1(s, Y_s^2, Z_s^2) \big) + \delta Y_s\cdot\big( f^1(s, Y_s^2, Z_s^2) - f^2(s, Y_s^2, Z_s^2) \big).
\]
The first part is at most $-\mu_1\,|\delta Y_s|^2$ by monotonicity, the second at most $\gamma_1\,|\delta Y_s|\,\|\delta Z_s\|$ by the Lipschitz assumption, and the third at most $|\delta Y_s|\,|\delta f_s|$; the inequality $2\gamma_1\,|\delta Y_s|\,\|\delta Z_s\| \le (\gamma_1)^2/\varepsilon\,|\delta Y_s|^2 + \varepsilon\,\|\delta Z_s\|^2$ then implies that:
\[
e^{\lambda t}\,|\delta Y_t|^2 + (1 - \varepsilon)\int_t^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \le e^{\lambda T}\,|\delta\xi|^2 + \int_t^T e^{\lambda s}\Big( -\lambda - 2\mu_1 + \frac{(\gamma_1)^2}{\varepsilon} \Big)|\delta Y_s|^2\,ds + 2\int_t^T e^{\lambda s}\,|\delta Y_s|\,|\delta f_s|\,ds - M_t.
\]
Setting $\rho = \lambda + 2\mu_1 - (\gamma_1)^2/\varepsilon$, the previous inequality can be rewritten in the following way:
\[
e^{\lambda t}\,|\delta Y_t|^2 + (1 - \varepsilon)\int_t^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds + \rho\int_t^T e^{\lambda s}\,|\delta Y_s|^2\,ds \le e^{\lambda T}\,|\delta\xi|^2 + 2\int_t^T e^{\lambda s}\,|\delta Y_s|\,|\delta f_s|\,ds - M_t. \tag{2}
\]
Taking the conditional expectation with respect to $\mathcal{F}_t$ of the previous inequality, we deduce, since the conditional expectation of $M_t$ vanishes and $\rho \ge 0$,
\[
e^{\lambda t}\,|\delta Y_t|^2 \le \mathbb{E}\Big[ e^{\lambda T}\,|\delta\xi|^2 + 2\int_0^T e^{\lambda s}\,|\delta Y_s|\,|\delta f_s|\,ds \,\Big|\, \mathcal{F}_t \Big],
\]
and, since $p > 1$, Doob's maximal inequality implies:
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} \Big] \le K_p\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \sup_{0\le t\le T} e^{(p\lambda/2)t}\,|\delta Y_t|^{p}\,\Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{p} \Big],
\]
where we use the notation $K_p$ for a constant depending only on $p$ and whose value may change from line to line. Thanks to the inequality $ab \le a^2/2 + b^2/2$, we get
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} \Big] \le K_p\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big] + \frac{1}{2}\,\mathbb{E}\Big[ \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} \Big],
\]
which gives
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} \Big] \le K_p\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big]. \tag{3}
\]
Now, coming back to the inequality (2), we have, since $\varepsilon < 1$,
\[
\int_0^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \le \frac{1}{1-\varepsilon}\Big( e^{\lambda T}\,|\delta\xi|^2 + 2\int_0^T e^{\lambda s}\,|\delta Y_s|\,|\delta f_s|\,ds - 2\int_0^T e^{\lambda s}\,\delta Y_s\cdot\delta Z_s\,dW_s \Big),
\]
and by the Burkholder-Davis-Gundy inequality we obtain
\[
\mathbb{E}\Big[ \Big( \int_0^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \Big( \int_0^T e^{\lambda s}\,|\delta Y_s|\,|\delta f_s|\,ds \Big)^{p} \Big] + K_p^\varepsilon\,\mathbb{E}\Big[ \Big( \int_0^T e^{2\lambda s}\,|\delta Y_s|^2\,\|\delta Z_s\|^2\,ds \Big)^{p/2} \Big],
\]
and thus it follows easily that:
\[
\mathbb{E}\Big[ \Big( \int_0^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \sup_{0\le t\le T} e^{(p\lambda/2)t}\,|\delta Y_t|^{p}\,\Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{p} \Big] + K_p^\varepsilon\,\mathbb{E}\Big[ \sup_{0\le t\le T} e^{(p\lambda/2)t}\,|\delta Y_t|^{p}\,\Big( \int_0^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \Big)^{p/2} \Big],
\]
which yields the inequality, using one more time the inequality $ab \le a^2/2 + b^2/2$,
\[
\mathbb{E}\Big[ \Big( \int_0^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big] + \frac{1}{2}\,\mathbb{E}\Big[ \Big( \int_0^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \Big)^{p} \Big].
\]
Taking into account the upper bound for $\mathbb{E}\big[ \sup_{0\le t\le T} e^{p\lambda t}|\delta Y_t|^{2p} \big]$ given in (3), we derive from the above inequality
\[
\mathbb{E}\Big[ \Big( \int_0^T e^{\lambda s}\,\|\delta Z_s\|^2\,ds \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big],
\]
which concludes the first part of this proposition. For the second assertion, we simply remark that (2) gives
\[
\rho\int_0^T e^{\lambda s}\,|\delta Y_s|^2\,ds \le e^{\lambda T}\,|\delta\xi|^2 + 2\int_0^T e^{\lambda s}\,|\delta Y_s|\,|\delta f_s|\,ds - 2\int_0^T e^{\lambda s}\,\delta Y_s\cdot\delta Z_s\,dW_s.
\]
A similar computation gives:
\[
\rho^p\,\mathbb{E}\Big[ \Big( \int_0^T e^{\lambda s}\,|\delta Y_s|^2\,ds \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big],
\]
which completes the proof, using the first part of the proposition already shown and keeping in mind that if $\lambda > (\gamma_1)^2/\varepsilon - 2\mu_1$ then $\rho > 0$. $\Box$

Corollary 3.3 Under the assumptions and with the notation of the previous proposition, there exists a constant $K$, depending only on $p$, $T$, $\mu_1$ and $\gamma_1$, such that:
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^{2p} + \Big( \int_0^T \|\delta Z_t\|^2\,dt \Big)^{p} \Big] \le K\,\mathbb{E}\Big[ |\delta\xi|^{2p} + \Big( \int_0^T |\delta f_s|\,ds \Big)^{2p} \Big].
\]

Proof. From the previous proposition, we have (taking $\varepsilon = 1/2$ and $\lambda = 2(\gamma_1)^2 - 2\mu_1$):
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} e^{p\lambda t}\,|\delta Y_t|^{2p} + \Big( \int_0^T e^{\lambda t}\,\|\delta Z_t\|^2\,dt \Big)^{p} \Big] \le K_p\,\mathbb{E}\Big[ e^{p\lambda T}\,|\delta\xi|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|\delta f_s|\,ds \Big)^{2p} \Big],
\]
and thus
\[
e^{-p|\lambda| T}\,\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^{2p} + \Big( \int_0^T \|\delta Z_t\|^2\,dt \Big)^{p} \Big] \le K_p\, e^{p|\lambda| T}\,\mathbb{E}\Big[ |\delta\xi|^{2p} + \Big( \int_0^T |\delta f_s|\,ds \Big)^{2p} \Big].
\]
It is enough to set $K = e^{2p|\lambda| T}\, K_p$ to conclude the proof. $\Box$

Remark. It is plain to check that the assumptions (A 1).3-4 are not needed in the above proofs of the results of Proposition 3.2 and its corollary.
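One convenient special case is worth spelling out (a worked consequence added here for illustration): take $(\xi^2, f^2) = (0, 0)$ in Proposition 3.2 and Corollary 3.3, whose unique solution is $(Y^2, Z^2) = (0, 0)$, so that $\delta f = f^1(\cdot, 0, 0)$. Any solution $(Y, Z) \in \mathcal{B}^{2p}$ of the BSDE (1) then satisfies a standard a priori bound:

```latex
% With (xi^2, f^2) = (0,0) and hence (Y^2, Z^2) = (0,0),
% Corollary 3.3 reads: for a constant K depending only on p, T, mu, gamma,
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} |Y_t|^{2p}
              + \Big( \int_0^T \|Z_t\|^2\,dt \Big)^{p} \Big]
\le K\,\mathbb{E}\Big[ |\xi|^{2p}
              + \Big( \int_0^T |f(s,0,0)|\,ds \Big)^{2p} \Big],
\]
% which is finite under (A 2).
```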

Corollary 3.4 Let $(Y_t, Z_t)_{0\le t\le T} \in \mathcal{B}^{2p}$ be a solution of the BSDE (1), let us assume that $\xi \in L^{2p}$, and assume also that there exists a process $(f_t)_{0\le t\le T} \in \mathcal{H}^{2p}(\mathbb{R}^k)$ such that
\[
\forall (s, y, z) \in [0,T]\times\mathbb{R}^k\times\mathbb{R}^{k\times d}, \qquad y\cdot f(s,y,z) \le |y|\,|f_s| - \mu\,|y|^2 + \gamma\,|y|\,\|z\|.
\]
Then, if $0 < \varepsilon < 1$ and $\lambda \ge \gamma^2/\varepsilon - 2\mu$, there exists a constant $K_p^\varepsilon$, which depends only on $p$ and on $\varepsilon$, such that:
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} e^{p\lambda t}\,|Y_t|^{2p} + \Big( \int_0^T e^{\lambda t}\,\|Z_t\|^2\,dt \Big)^{p} \Big] \le K_p^\varepsilon\,\mathbb{E}\Big[ e^{p\lambda T}\,|\xi|^{2p} + \Big( \int_0^T e^{(\lambda/2)s}\,|f_s|\,ds \Big)^{2p} \Big].
\]

Proof. As usual, we start with Itô's formula to see that:
\[
e^{\lambda t}\,|Y_t|^2 + \int_t^T e^{\lambda s}\big( \lambda\,|Y_s|^2 + \|Z_s\|^2 \big)\,ds = e^{\lambda T}\,|\xi|^2 + 2\int_t^T e^{\lambda s}\, Y_s\cdot f(s, Y_s, Z_s)\,ds - M_t,
\]
provided we set $M_t = 2\int_t^T e^{\lambda s}\, Y_s\cdot Z_s\,dW_s$ for each $t \in [0,T]$. Using the assumption on $y\cdot f(s,y,z)$ and then the inequality $2\gamma\,|Y_s|\,\|Z_s\| \le \gamma^2/\varepsilon\,|Y_s|^2 + \varepsilon\,\|Z_s\|^2$, we deduce that
\[
e^{\lambda t}\,|Y_t|^2 + (1-\varepsilon)\int_t^T e^{\lambda s}\,\|Z_s\|^2\,ds \le e^{\lambda T}\,|\xi|^2 + \int_t^T e^{\lambda s}\Big( -\lambda - 2\mu + \frac{\gamma^2}{\varepsilon} \Big)|Y_s|^2\,ds + 2\int_t^T e^{\lambda s}\,|Y_s|\,|f_s|\,ds - M_t.
\]
Since $\lambda \ge \gamma^2/\varepsilon - 2\mu$, the previous inequality implies
\[
e^{\lambda t}\,|Y_t|^2 + (1-\varepsilon)\int_t^T e^{\lambda s}\,\|Z_s\|^2\,ds \le e^{\lambda T}\,|\xi|^2 + 2\int_t^T e^{\lambda s}\,|Y_s|\,|f_s|\,ds - M_t.
\]
This inequality is exactly the same as the inequality (2). As a consequence, we can complete the proof as in the proof of Proposition 3.2. $\Box$

3.2 Existence

In this subsection, we study the existence of solutions for the BSDE (1) under the assumptions (A 1) and (A 2). We shall prove that the BSDE (1) has a solution in the space $\mathcal{B}^{2p}$. We may assume, without loss of generality, that the constant $\mu$ is equal to $0$. Indeed, $(Y_t, Z_t)_{t\in[0,T]}$ solves the BSDE (1) in $\mathcal{B}^{2p}$ if and only if, setting, for each $t \in [0,T]$,
\[
\overline{Y}_t = e^{-\mu t}\, Y_t, \qquad \text{and} \qquad \overline{Z}_t = e^{-\mu t}\, Z_t,
\]
the process $(\overline{Y}, \overline{Z})$ solves in $\mathcal{B}^{2p}$ the following BSDE:
\[
\overline{Y}_t = \overline{\xi} + \int_t^T \overline{f}(s, \overline{Y}_s, \overline{Z}_s)\,ds - \int_t^T \overline{Z}_s\,dW_s, \qquad 0 \le t \le T,
\]
where $\overline{\xi} = e^{-\mu T}\,\xi$ and $\overline{f}(t, y, z) = e^{-\mu t}\, f(t, e^{\mu t} y, e^{\mu t} z) + \mu\, y$. Since $(\overline{\xi}, \overline{f})$ satisfies the assumptions (A 1) and (A 2) with $\overline{\gamma} = \gamma$, $\overline{\mu} = 0$ and $\overline{C} = C \exp\big( T\big( (p-1)\mu^+ + \mu^- \big) \big) + |\mu|$, we shall assume that $\mu = 0$ in the remainder of this section.

Our proof is based on the following strategy: first, we solve the problem when the function $f$ does not depend on the variable $z$, and then we use a fixed point argument based on the a priori estimates given in subsection 3.1, Proposition 3.2 and Corollary 3.3. The following proposition gives the first step.

Proposition 3.5 Let the assumptions (A 1) and (A 2) hold. Given a process $(V_t)_{0\le t\le T}$ in the space $\mathcal{H}^{2p}$, there exists a unique solution $(Y_t, Z_t)_{t\in[0,T]}$ in the space $\mathcal{B}^{2p}$ to the BSDE
\[
Y_t = \xi + \int_t^T f(s, Y_s, V_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T. \tag{4}
\]

Proof. We shall write in the sequel $h(s,y)$ in place of $f(s, y, V_s)$. Of course, $h$ satisfies the assumption (A 1) with the same constants as $f$, and $h(\cdot, 0)$ belongs to $\mathcal{H}^{2p}$ since $f$ is Lipschitz with respect to $z$ and the process $V$ belongs to $\mathcal{H}^{2p}$. What we would like to do is to construct a sequence of Lipschitz (globally in $y$, uniformly with respect to $(\omega, s)$) functions $h_n$ which approximate $h$ and which are monotone. However, we only manage to construct a sequence for which each $h_n$ is monotone in a given ball (the radius depends on $n$). As we will see later in the proof, this "local" monotonicity is sufficient to obtain the result. This is mainly due to Proposition 2.1, whose key idea can be traced back to a work of S. Peng [19, Theorem 2.2].
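The reduction to $\mu = 0$ via the exponential change of variables $\overline{Y}_t = e^{-\mu t} Y_t$, $\overline{Z}_t = e^{-\mu t} Z_t$ can be checked directly (a worked verification, using the notation of the reduction above):

```latex
% Ito's product rule applied to e^{-mu t} Y_t, where
%   dY_t = -f(t, Y_t, Z_t) dt + Z_t dW_t,
% gives
%   d( e^{-mu t} Y_t )
%     = -( e^{-mu t} f(t, Y_t, Z_t) + mu e^{-mu t} Y_t ) dt + e^{-mu t} Z_t dW_t,
% i.e. (Ybar, Zbar) solves the BSDE with generator
%   fbar(t,y,z) = e^{-mu t} f(t, e^{mu t} y, e^{mu t} z) + mu y .
% Its monotonicity constant is 0: by (A 1).2,
\[
(y-y')\cdot\big( \overline{f}(t,y,z) - \overline{f}(t,y',z) \big)
  = e^{-2\mu t}\,\big( e^{\mu t}y - e^{\mu t}y' \big)\cdot
    \big( f(t, e^{\mu t}y, e^{\mu t}z) - f(t, e^{\mu t}y', e^{\mu t}z) \big)
    + \mu\,|y-y'|^2
  \le -\mu\,|y-y'|^2 + \mu\,|y-y'|^2 = 0 .
\]
```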

We shall use an approximate identity. Let $\rho : \mathbb{R}^k \longrightarrow \mathbb{R}_+$ be a nonnegative $C^\infty$ function with the unit ball for support and such that $\int \rho(u)\,du = 1$, and define, for each integer $n \ge 1$, $\rho_n(u) = n^k\,\rho(nu)$. We denote also, for each integer $n$, by $\Theta_n$ a $C^\infty$ function from $\mathbb{R}^k$ to $\mathbb{R}_+$ such that $0 \le \Theta_n \le 1$, $\Theta_n(u) = 1$ for $|u| \le n$, and $\Theta_n(u) = 0$ as soon as $|u| \ge n+1$. We set, moreover,
\[
\xi_n = \begin{cases} \xi & \text{if } |\xi| \le n, \\[2pt] n\,\xi/|\xi| & \text{otherwise}, \end{cases}
\qquad\text{and}\qquad
\widetilde{h}_n(s,y) = \begin{cases} h(s,y) & \text{if } |h(s,0)| \le n, \\[2pt] n\,h(s,y)/|h(s,0)| & \text{otherwise}. \end{cases}
\]
Such an $\widetilde{h}_n$ satisfies the assumption (A 1), and moreover we have $|\xi_n| \le n$ and $|\widetilde{h}_n(s,0)| \le n$. Finally, we set $q(n) = \big[ e^{1/2}(n + 2C)\sqrt{1 + T^2} \big] + 1$, where $[r]$ stands as usual for the integer part of $r$, and we define
\[
h_n(s,\cdot) = \rho_n * \big( \Theta_{q(n)+1}\,\widetilde{h}_n(s,\cdot) \big), \qquad s \in [0,T].
\]
We first remark that $h_n(s,y) = 0$ whenever $|y| \ge q(n)+3$, and that $h_n(s,\cdot)$ is globally Lipschitz with respect to $y$, uniformly in $(\omega, s)$. Indeed, $h_n(s,\cdot)$ is a smooth function with compact support, and thus we have $\sup_{y\in\mathbb{R}^k} |\nabla h_n(s,y)| = \sup_{|y|\le q(n)+3} |\nabla h_n(s,y)|$; from the growth assumption on $f$, (A 1).3, it is not hard to check that $|\widetilde{h}_n(s,y)| \le n \wedge |h(s,0)| + C(1 + |y|^p)$, which implies that
\[
\big| \nabla h_n(s,y) \big| \le n\,\Big( n + C\big( 1 + 2^{p-1}\,|y|^p \big) + C\,2^{p-1} \Big) \int \big| \nabla \rho(u) \big|\,du.
\]
As an immediate consequence, the function $h_n$ is globally Lipschitz with respect to $y$, uniformly in $(\omega, s)$. In addition, $|\xi_n| \le n$ and $|h_n(s,0)| \le n \wedge |h(s,0)| + 2C$, and thus Theorem 5.1 in [6] provides a solution $(Y^n, Z^n)$ to the BSDE
\[
Y_t^n = \xi_n + \int_t^T h_n(s, Y_s^n)\,ds - \int_t^T Z_s^n\,dW_s, \qquad 0 \le t \le T, \tag{5}
\]
which belongs actually to $\mathcal{B}^q$ for each $q > 1$. In order to apply Proposition 2.1, we observe that, for each $y$,
\[
\begin{aligned}
y\cdot h_n(s,y) &= \int \rho_n(u)\,\Theta_{q(n)+1}(y-u)\; y\cdot\widetilde{h}_n(s, y-u)\,du \\
&= \int \rho_n(u)\,\Theta_{q(n)+1}(y-u)\; y\cdot\big( \widetilde{h}_n(s, y-u) - \widetilde{h}_n(s, -u) \big)\,du + \int \rho_n(u)\,\Theta_{q(n)+1}(y-u)\; y\cdot\widetilde{h}_n(s, -u)\,du.
\end{aligned}
\]
Hence we deduce, since the function $\widetilde{h}_n(s,\cdot)$ is monotone (recall that $\mu = 0$ in this section) and in view of the growth assumption on $f$, that:
\[
\forall (s,y) \in [0,T]\times\mathbb{R}^k, \qquad y\cdot h_n(s,y) \le \big( n \wedge |h(s,0)| + 2C \big)\,|y|. \tag{6}
\]
This estimate will turn out to be very useful in the sequel. Indeed, we can apply Proposition 2.1 to the BSDE (5) to show that, for each $n$, choosing $\varepsilon = 1/T$,
\[
\sup_{0\le t\le T} |Y_t^n| \le (n + 2C)\, e^{1/2}\,\sqrt{1 + T^2}. \tag{7}
\]
On the other hand, the inequality (6) allows one to use Corollary 3.4 to obtain, for a constant $K_p$ depending only on $p$:
\[
\sup_{n\in\mathbb{N}}\;\mathbb{E}\Big[ \sup_{0\le t\le T} |Y_t^n|^{2p} + \Big( \int_0^T \|Z_t^n\|^2\,dt \Big)^{p} \Big] \le K_p\,\mathbb{E}\Big[ |\xi|^{2p} + \Big( \int_0^T \big( |h(s,0)| + 2C \big)\,ds \Big)^{2p} \Big]. \tag{8}
\]
It is worth noting that, thanks to $|h(s,0)| \le |f(s,0,0)| + \gamma\,\|V_s\|$, the right hand side of the previous inequality is finite. We want to prove that the sequence $(Y^n, Z^n)_{n\in\mathbb{N}}$ converges towards the solution of the BSDE (4), and in order to do that we first show that $(Y^n, Z^n)_{n\in\mathbb{N}}$ is a Cauchy sequence in the space $\mathcal{B}^2$. This fact relies mainly on the following property: $h_n$ satisfies the monotonicity condition in the ball of radius $q(n)$. Indeed, fix $n \in \mathbb{N}$ and let us pick $y, y'$ such that $|y| \le q(n)$ and $|y'| \le q(n)$. We have:
\[
(y - y')\cdot\big( h_n(s,y) - h_n(s,y') \big) = (y - y')\cdot\int \rho_n(u)\,\Theta_{q(n)+1}(y-u)\,\widetilde{h}_n(s, y-u)\,du - (y - y')\cdot\int \rho_n(u)\,\Theta_{q(n)+1}(y'-u)\,\widetilde{h}_n(s, y'-u)\,du.
\]
But, since $|y|, |y'| \le q(n)$ and since the support of $\rho_n$ is included in the unit ball, we get, from the fact that $\Theta_{q(n)+1}(x) = 1$ as soon as $|x| \le q(n)+1$,
\[
(y - y')\cdot\big( h_n(s,y) - h_n(s,y') \big) = \int \rho_n(u)\,(y - y')\cdot\big( \widetilde{h}_n(s, y-u) - \widetilde{h}_n(s, y'-u) \big)\,du.
\]
Hence, by the monotonicity of $\widetilde{h}_n$, we get
\[
\forall\, y, y' \in \overline{B}(0, q(n)), \qquad (y - y')\cdot\big( h_n(s,y) - h_n(s,y') \big) \le 0. \tag{9}
\]
We now turn to the convergence of $(Y^n, Z^n)_{n\in\mathbb{N}}$. Let us fix two integers $m$ and $n$ such that $m \ge n$. Itô's formula gives, for each $t \in [0,T]$,
\[
|\delta Y_t|^2 + \int_t^T \|\delta Z_s\|^2\,ds = |\delta\xi|^2 + 2\int_t^T \delta Y_s\cdot\big( h_m(s, Y_s^m) - h_n(s, Y_s^n) \big)\,ds - 2\int_t^T \delta Y_s\cdot\delta Z_s\,dW_s,
\]
where we have set $\delta\xi = \xi_m - \xi_n$, $\delta Y \equiv Y^m - Y^n$ and $\delta Z \equiv Z^m - Z^n$. We split one term of the previous equality into two parts; precisely, we write:
\[
\delta Y_s\cdot\big( h_m(s, Y_s^m) - h_n(s, Y_s^n) \big) = \delta Y_s\cdot\big( h_m(s, Y_s^m) - h_m(s, Y_s^n) \big) + \delta Y_s\cdot\big( h_m(s, Y_s^n) - h_n(s, Y_s^n) \big).
\]
But, in view of the estimate (7), we have $|Y_s^m| \le q(m)$ and $|Y_s^n| \le q(n) \le q(m)$. Thus, using the property (9), the first part of the right hand side of the previous equality is non-positive, and it follows that
\[
|\delta Y_t|^2 + \int_t^T \|\delta Z_s\|^2\,ds \le |\delta\xi|^2 + 2\int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds - 2\int_t^T \delta Y_s\cdot\delta Z_s\,dW_s. \tag{10}
\]
In particular, we have
\[
\mathbb{E}\Big[ \int_0^T \|\delta Z_s\|^2\,ds \Big] \le 2\,\mathbb{E}\Big[ |\delta\xi|^2 + 2\int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big],
\]
and, coming back to (10), Burkholder's inequality implies
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^2 \Big] \le K\,\mathbb{E}\Big[ |\delta\xi|^2 + \int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big] + K\,\mathbb{E}\Big[ \Big( \int_0^T |\delta Y_s|^2\,\|\delta Z_s\|^2\,ds \Big)^{1/2} \Big],
\]
and then, using the inequality $ab \le a^2/2 + b^2/2$, we obtain the following inequality:
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^2 \Big] \le K\,\mathbb{E}\Big[ |\delta\xi|^2 + \int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big] + \frac{1}{2}\,\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^2 \Big] + \frac{K^2}{2}\,\mathbb{E}\Big[ \int_0^T \|\delta Z_s\|^2\,ds \Big],
\]
from which we get, for another constant still denoted by $K$,
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^2 + \int_0^T \|\delta Z_s\|^2\,ds \Big] \le K\,\mathbb{E}\Big[ |\delta\xi|^2 + \int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big].
\]

Obviously, since $\xi \in L^{2p}$, $\delta\xi = \xi_m - \xi_n$ tends to $0$ in $L^2$ as $n, m \to \infty$ with $m \ge n$. So, we have only to prove that
\[
\mathbb{E}\Big[ \int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big] \longrightarrow 0, \qquad \text{as } n \to \infty.
\]
For any nonnegative number $k$, we write
\[
S_n^m = \mathbb{E}\Big[ \int_0^T \mathbf{1}_{\{|Y_s^n| \le k,\ |Y_s^m| \le k\}}\; |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big],
\]
\[
R_n^m = \mathbb{E}\Big[ \int_0^T \mathbf{1}_{\{|Y_s^n| > k \ \text{or}\ |Y_s^m| > k\}}\; |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big],
\]
and with these notations we have
\[
\mathbb{E}\Big[ \int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big] = S_n^m + R_n^m,
\]
and hence the following inequality:
\[
\mathbb{E}\Big[ \int_0^T |\delta Y_s|\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|\,ds \Big] \le 2k\,\mathbb{E}\Big[ \int_0^T \sup_{|y|\le k}\big| h_m(s,y) - h_n(s,y) \big|\,ds \Big] + R_n^m. \tag{11}
\]
First we deal with $R_n^m$; using Hölder's inequality, we get the following upper bound:
\[
R_n^m \le \mathbb{E}\Big[ \int_0^T \mathbf{1}_{\{|Y_s^n| + |Y_s^m| \ge k\}}\,ds \Big]^{\frac{p-1}{2p}}\;\mathbb{E}\Big[ \int_0^T |\delta Y_s|^{\frac{2p}{p+1}}\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|^{\frac{2p}{p+1}}\,ds \Big]^{\frac{p+1}{2p}}.
\]
Setting $A_n^m = \mathbb{E}\big[ \int_0^T |\delta Y_s|^{\frac{2p}{p+1}}\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|^{\frac{2p}{p+1}}\,ds \big]$ for notational convenience, we have
\[
R_n^m \le \Big( \int_0^T \mathbb{P}\big( |Y_s^n| + |Y_s^m| \ge k \big)\,ds \Big)^{\frac{p-1}{2p}}\,\big( A_n^m \big)^{\frac{p+1}{2p}},
\]
and Chebyshev's inequality yields:
\[
R_n^m \le k^{1-p}\,\Big( \int_0^T \mathbb{E}\big[ \big( |Y_s^n| + |Y_s^m| \big)^{2p} \big]\,ds \Big)^{\frac{p-1}{2p}}\,\big( A_n^m \big)^{\frac{p+1}{2p}} \le \big( 2^{2p}\,T \big)^{\frac{p-1}{2p}}\,\Big( \sup_{n\in\mathbb{N}} \mathbb{E}\Big[ \sup_{0\le t\le T} |Y_t^n|^{2p} \Big] \Big)^{\frac{p-1}{2p}}\,k^{1-p}\,\big( A_n^m \big)^{\frac{p+1}{2p}}. \tag{12}
\]
We have already seen that $\sup_{n\in\mathbb{N}} \mathbb{E}\big[ \sup_{0\le t\le T} |Y_t^n|^{2p} \big]$ is finite (cf. (8)), and we shall prove that $A_n^m$ remains bounded as $n, m$ vary. To do this, let us recall that
\[
A_n^m = \mathbb{E}\Big[ \int_0^T |\delta Y_s|^{\frac{2p}{p+1}}\,\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|^{\frac{2p}{p+1}}\,ds \Big],
\]
and, using Young's inequality ($ab \le \frac{1}{r}\,a^r + \frac{1}{r'}\,b^{r'}$ whenever $\frac{1}{r} + \frac{1}{r'} = 1$) with $r = p+1$ and $r' = \frac{p+1}{p}$, we deduce that
\[
A_n^m \le \frac{1}{p+1}\,\mathbb{E}\Big[ \int_0^T |\delta Y_s|^{2p}\,ds \Big] + \frac{p}{p+1}\,\mathbb{E}\Big[ \int_0^T \big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|^{2}\,ds \Big].
\]
The first part of the last upper bound remains bounded as $n, m$ vary since, from (8), we know that $\sup_{n\in\mathbb{N}} \mathbb{E}\big[ \sup_{0\le t\le T} |Y_t^n|^{2p} \big]$ is finite. Moreover, we derive easily from the assumption (A 1) that $|h_n(s,y)| \le n \wedge |h(s,0)| + 2^p\,C\,(1 + |y|^p)$, and then
\[
\big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big| \le 2\,|h(s,0)| + 2^{p+1}\,C\,\big( 1 + |Y_s^n|^p \big),
\]
which yields the inequality, taking into account the assumption (A 1).1,
\[
\mathbb{E}\Big[ \int_0^T \big| h_m(s, Y_s^n) - h_n(s, Y_s^n) \big|^{2}\,ds \Big] \le K_p\,\mathbb{E}\Big[ \int_0^T \Big( |f(s,0,0)|^2 + \|V_s\|^2 + 1 + |Y_s^n|^{2p} \Big)\,ds \Big].
\]
Taking into account (8) and the integrability assumptions on both $V$ and $f(\cdot, 0, 0)$, we have proved that $\sup_{n\le m} A_n^m < \infty$.

Coming back to the inequality (12), we get, for a constant $c$ independent of $n$ and $m$, $R_n^m \le c\,k^{1-p}$, and since $p > 1$, $R_n^m$ can be made arbitrarily small by choosing $k$ large enough. Thus, in view of the estimate (11), it remains only to check that, for each fixed $k > 0$,
\[
\mathbb{E}\Big[ \int_0^T \sup_{|y|\le k}\big| h_m(s,y) - h_n(s,y) \big|\,ds \Big]
\]

goes to $0$ as $n$ tends to infinity, uniformly with respect to $m$, to get the convergence of $(Y^n, Z^n)_{n\in\mathbb{N}}$ in the space $\mathcal{B}^2$. But, since $h(s,\cdot)$ is continuous ($\mathbb{P}$-a.s., $\forall s$), $h_n(s,\cdot)$ converges towards $h(s,\cdot)$ uniformly on compact sets. Taking into account that $\sup_{|y|\le k} |h_n(s,y)| \le |h(s,0)| + 2^p\,C\,(1 + k^p)$, Lebesgue's convergence theorem gives the result. Thus, the sequence $(Y^n, Z^n)_{n\in\mathbb{N}}$ converges towards a progressively measurable process $(Y, Z)$ in the space $\mathcal{B}^2$. Moreover, since $(Y^n, Z^n)_{n\in\mathbb{N}}$ is bounded in $\mathcal{B}^{2p}$ (see (8)), Fatou's lemma implies that $(Y, Z)$ belongs also to the space $\mathcal{B}^{2p}$. It remains to check that $(Y, Z)$ solves the BSDE (4), which is nothing but
\[
Y_t = \xi + \int_t^T h(s, Y_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T.
\]
Of course, we want to pass to the limit in the BSDE (5). Let us first remark that $\xi_n \longrightarrow \xi$ in $L^{2p}$ and that, for each $t \in [0,T]$, $\int_t^T Z_s^n\,dW_s \longrightarrow \int_t^T Z_s\,dW_s$, since $Z^n$ converges to $Z$ in the space $\mathcal{H}^2(\mathbb{R}^{k\times d})$. Actually, we only need to prove that, for $t \in [0,T]$,
\[
\int_t^T h_n(s, Y_s^n)\,ds \longrightarrow \int_t^T h(s, Y_s)\,ds, \qquad \text{as } n \to \infty.
\]
For this, we shall see that $h_n(\cdot, Y^n)$ tends to $h(\cdot, Y)$ in the space $L^1(\Omega\times[0,T])$. Indeed,
\[
\mathbb{E}\Big[ \int_0^T \big| h_n(s, Y_s^n) - h(s, Y_s) \big|\,ds \Big] \le \mathbb{E}\Big[ \int_0^T \big| h_n(s, Y_s^n) - h(s, Y_s^n) \big|\,ds \Big] + \mathbb{E}\Big[ \int_0^T \big| h(s, Y_s^n) - h(s, Y_s) \big|\,ds \Big].
\]
The first term of the right hand side of the previous inequality tends to $0$ as $n$ goes to $\infty$ by the same argument we used earlier in the proof to see that $\mathbb{E}\big[ \int_0^T |\delta Y_s|\,|h_m(s, Y_s^n) - h_n(s, Y_s^n)|\,ds \big]$ goes to $0$. For the second term, we shall first prove that there exists a converging subsequence. Indeed, since $Y^n$ converges to $Y$ in the space $\mathcal{S}^2$, there exists a subsequence $(Y^{n_j})$ such that $\mathbb{P}$-a.s.,
\[
\forall t \in [0,T], \qquad Y_t^{n_j} \longrightarrow Y_t.
\]
Since $h(t,\cdot)$ is continuous ($\mathbb{P}$-a.s., $\forall t$), $\mathbb{P}$-a.s., $\forall t$, $h(t, Y_t^{n_j}) \longrightarrow h(t, Y_t)$. Moreover, since $Y \in \mathcal{S}^{2p}$ and $(Y^n)_{n\in\mathbb{N}}$ is bounded in $\mathcal{S}^{2p}$ (see (8)), it is not hard to check from the growth assumption on $f$ that
\[
\sup_{j\in\mathbb{N}}\;\mathbb{E}\Big[ \int_0^T \big| h(s, Y_s^{n_j}) - h(s, Y_s) \big|^2\,ds \Big] < \infty,
\]
and then the result follows by uniform integrability of the sequence. Actually, the convergence holds for the whole sequence, since each subsequence has a converging subsequence. Finally, we can pass to the limit in the BSDE (5), and the proof is complete. $\Box$

With the help of this proposition, we can now construct a solution $(Y, Z)$ to the BSDE (1). We claim the following result:

Theorem 3.6 Under the assumptions (A 1) and (A 2), the BSDE (1) has a unique solution $(Y, Z)$ in the space $\mathcal{B}^{2p}$.

Proof. The uniqueness part of this statement is already proved in Theorem 3.1. The first step in the proof of the existence is to show the result when $T$ is sufficiently small. According to Theorem 3.1 and Proposition 3.5, let us define the following function $\Phi$ from $\mathcal{B}^{2p}$ into itself: for $(U, V) \in \mathcal{B}^{2p}$, $\Phi(U, V) = (Y, Z)$, where $(Y, Z)$ is the unique solution in $\mathcal{B}^{2p}$ of the BSDE:
\[
Y_t = \xi + \int_t^T f(s, Y_s, V_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T.
\]
Next, we prove that $\Phi$ is a strict contraction provided that $T$ is small enough. Indeed, if $(U^1, V^1)$ and $(U^2, V^2)$ are both elements of the space $\mathcal{B}^{2p}$, we have, applying Proposition 3.2 for $(Y^i, Z^i) = \Phi(U^i, V^i)$, $i = 1, 2$,
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^{2p} + \Big( \int_0^T \|\delta Z_t\|^2\,dt \Big)^{p} \Big] \le K_p\,\mathbb{E}\Big[ \Big( \int_0^T \big| f(s, Y_s^2, V_s^1) - f(s, Y_s^2, V_s^2) \big|\,ds \Big)^{2p} \Big],
\]
where $\delta Y \equiv Y^1 - Y^2$, $\delta Z \equiv Z^1 - Z^2$ and $K_p$ is a constant depending only on $p$. Using the Lipschitz assumption on $f$, (A 1).1, and Hölder's inequality, we get the inequality
\[
\mathbb{E}\Big[ \sup_{0\le t\le T} |\delta Y_t|^{2p} + \Big( \int_0^T \|\delta Z_t\|^2\,dt \Big)^{p} \Big] \le K_p\,\gamma^{2p}\,T^{p}\;\mathbb{E}\Big[ \Big( \int_0^T \|V_s^1 - V_s^2\|^2\,ds \Big)^{p} \Big].
\]
Hence, if $T$ is such that $K_p\,\gamma^{2p}\,T^{p} < 1$, $\Phi$ is a strict contraction, and thus $\Phi$ has a unique fixed point in the space $\mathcal{B}^{2p}$, which is the unique solution of the BSDE (1). The general case is treated by subdividing the time interval $[0,T]$ into a finite number of intervals whose lengths are small enough and using the above existence and uniqueness result in each of the subintervals. $\Box$

4 The Case of Random Terminal Times


In this section, we briefly explain how to extend the results of the previous section to the case of a random terminal time.

4.1 Notation and Assumptions

Let us recall that $(W_t)_{t\ge 0}$ is a $d$-dimensional Brownian motion, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and that $(\mathcal{F}_t)_{t\ge 0}$ is the completed filtration generated by $(W_t)_{t\ge 0}$. Let $\tau$ be a stopping time with respect to $(\mathcal{F}_t)_{t\ge 0}$, and let us assume that $\tau$ is finite $\mathbb{P}$-a.s. Let us consider also a random variable $\xi$, $\mathcal{F}_\tau$-measurable, and a function $f$ defined on $\Omega\times\mathbb{R}_+\times\mathbb{R}^k\times\mathbb{R}^{k\times d}$, with values in $\mathbb{R}^k$, and such that the process $(f(t,y,z))_{t\ge 0}$ is progressively measurable for each $(y,z)$. We study the following BSDE with the random terminal time $\tau$:
\[
Y_t = \xi + \int_{t\wedge\tau}^{\tau} f(s, Y_s, Z_s)\,ds - \int_{t\wedge\tau}^{\tau} Z_s\,dW_s, \qquad t \ge 0. \tag{13}
\]

By a solution of this equation, we always mean a progressively measurable process (Yt ; Zt ) t 0 with values in IRk IRk d such that Zt = 0 if t > . Moreover, since is nite IP{a.s., (13) implies that Yt = if t . We need to introduce further notation. Let us consider q > 1 and 2 IR. We say that a progressively measurable process with values in IRn belongs to Hq (IRn ) if IE
Z

q=2 e t j tj 2dt < 1:

Moreover, we say that belongs to the space S q ; (IRn ) if IE sup e(q=2) (t^ ) j tjq < 1:
t
0

We are going to prove an existence and uniqueness result for the BSDE (13) under assumptions which are very similar to those made in section 2 for the study of the case of BSDEs with xed terminal times. Precisely, we will suppose in the framework of random terminal times the following two assumptions: (A 3). There exist constants 0, 2 IR, C 0, p > 1 and 2 f0; 1g such that IP ? a:s:, we have: 1. 8t; 8y; 8(z; z 0); f(t; y; z) ? f(t; y; z 0 ) j z ? z0j ; ? 2. 8t; 8z; 8(y; y0 ); (y ? y0 ) f(t; y; z) ? f(t; y0 ; z) ? jy ? y0 j2; 3. 8t; 8y; 8z; f(t; y; z) f(t; 0; z) + C( + jyjp ); 4. 8t; 8z; y 7?! f(t; y; z) is continuous,

(A 4). is F -measurable and there exists a real number such that > ? 2 and
2

IE e + e

+ ep

jj

p+

e f(s; 0; 0) ds +
2

e( =2)s f(s; 0; 0) ds

< 1:

Remark. In the case $\rho<0$, which may occur if $\tau$ is an unbounded stopping time, the exponential weights $e^{\rho\tau}$ and $e^{p\rho\tau}$ are bounded by $1$, so our integrability conditions are fulfilled as soon as
$$\mathbb E\Big[|\xi|^{2p} + \Big(\int_0^{\tau}e^{(\rho/2)s}\,\big|f(s,0,0)\big|\,ds\Big)^{2p} + \int_0^{\tau}e^{\rho s}\,\big|f(s,0,0)\big|^2\,ds\Big]<\infty.$$

For notational convenience, we will simply write, in the remainder of the paper, $\mathcal S^{q,\lambda}$ and $\mathcal H^{q,\lambda}$ instead of $\mathcal S^{q,\lambda}(\mathbb R^k)$ and $\mathcal H^{q,\lambda}(\mathbb R^{k\times d})$ respectively.

4.2 Existence and Uniqueness

In this section, we deal with the existence and uniqueness of solutions of the BSDE (13). We start with uniqueness.

Proposition 4.1 Under the assumptions (A 3) and (A 4), the BSDE (13) has at most one solution in the space $\mathcal S^{2,\rho}\times\mathcal H^{2,\rho}$.

Proof. Let $(Y^1,Z^1)$ and $(Y^2,Z^2)$ be two solutions of (13) in the space $\mathcal S^{2,\rho}\times\mathcal H^{2,\rho}$. Let us notice first that $Y^1_t=Y^2_t=\xi$ if $t\ge\tau$ and $Z^1_t=Z^2_t=0$ on the set $\{t>\tau\}$. Applying Itô's formula, we get
$$e^{\rho(t\wedge\tau)}|\delta Y_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{\tau}e^{\rho s}|\delta Z_s|^2\,ds = 2\int_{t\wedge\tau}^{\tau}e^{\rho s}\,\delta Y_s\cdot\big(f(s,Y^1_s,Z^1_s)-f(s,Y^2_s,Z^2_s)\big)\,ds - \rho\int_{t\wedge\tau}^{\tau}e^{\rho s}|\delta Y_s|^2\,ds - 2\int_{t\wedge\tau}^{\tau}e^{\rho s}\,\delta Y_s\cdot\delta Z_s\,dW_s,$$
where we have set $\delta Y=Y^1-Y^2$ and $\delta Z=Z^1-Z^2$. It is worth noting that, since $f$ is Lipschitz in $z$ and monotone in $y$, we have, for each $\varepsilon>0$,
$$\forall(t,y,y',z,z'),\qquad 2\,(y-y')\cdot\big(f(t,y,z)-f(t,y',z')\big)\le\big(-2\mu+\gamma^2/\varepsilon\big)\,|y-y'|^2+\varepsilon\,|z-z'|^2, \tag{14}$$
the cross term in $z$ being handled by Young's inequality. Moreover, by Burkholder's inequality, the continuous local martingale
$$\Big\{\int_0^{t\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\delta Z_s\,dW_s,\ t\ge 0\Big\}$$
is a uniformly integrable martingale. Indeed,
$$\mathbb E\Big[\Big\langle\int_0^{\cdot\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\delta Z_s\,dW_s\Big\rangle_\infty^{1/2}\Big] = \mathbb E\Big[\Big(\int_0^{\tau}e^{2\rho s}|\delta Y_s|^2|\delta Z_s|^2\,ds\Big)^{1/2}\Big] \le \frac12\,\mathbb E\Big[\sup_{t\le\tau}e^{\rho t}|\delta Y_t|^2 + \int_0^{\tau}e^{\rho s}|\delta Z_s|^2\,ds\Big],$$
which is finite since $(\delta Y,\delta Z)$ belongs to the space $\mathcal S^{2,\rho}\times\mathcal H^{2,\rho}$. Thanks to the inequality $\rho>\gamma^2-2\mu$, we can choose $\varepsilon$ such that $0<\varepsilon<1$ and $\rho>\gamma^2/\varepsilon-2\mu$. Using the inequality (14), and since the expectation of the stochastic integral vanishes in view of the above computation, we deduce that, for each $t$,
$$\mathbb E\Big[e^{\rho(t\wedge\tau)}|\delta Y_{t\wedge\tau}|^2\Big] + (1-\varepsilon)\,\mathbb E\Big[\int_{t\wedge\tau}^{\tau}e^{\rho s}|\delta Z_s|^2\,ds\Big] \le \big(\gamma^2/\varepsilon-2\mu-\rho\big)\,\mathbb E\Big[\int_{t\wedge\tau}^{\tau}e^{\rho s}|\delta Y_s|^2\,ds\Big]\le 0,$$

which gives the result. $\Box$

Before proving the existence part of the result, let us introduce a sequence of processes whose construction is due to R. W. R. Darling and E. Pardoux [5, pp. 1148–1149]. Let us set $\nu=\gamma^2/2-\mu$ and let $(\widehat Y^n,\widehat Z^n)$ be the unique solution of the classical (the terminal time is deterministic) BSDE on $[0,n]$:
$$\widehat Y^n_t = \mathbb E\big[e^{\nu\tau}\xi\mid\mathcal F_n\big] + \int_{t\wedge\tau}^{n\wedge\tau}\Big(e^{\nu s}f\big(s,e^{-\nu s}\widehat Y^n_s,e^{-\nu s}\widehat Z^n_s\big)-\nu\,\widehat Y^n_s\Big)\,ds - \int_{t\wedge\tau}^{n\wedge\tau}\widehat Z^n_s\,dW_s.$$
Since $\mathbb E\big[e^{2p\nu\tau}|\xi|^{2p}\big]\le\mathbb E\big[e^{p\rho\tau}|\xi|^{2p}\big]$ (recall that $2\nu<\rho$) and since
$$\mathbb E\Big[\Big(\int_0^{n\wedge\tau}e^{\nu s}\,\big|f(s,0,0)\big|\,ds\Big)^{2p}\Big] \le \mathbb E\Big[\Big(\int_0^{\tau}e^{(\rho/2)s}\,\big|f(s,0,0)\big|\,ds\Big)^{2p}\Big],$$
the assumption (A 4) and Theorem 3.6 ensure that $(\widehat Y^n,\widehat Z^n)$ belongs to the space $\mathcal B^{2p}$ (on the interval $[0,n]$). In view of [12, Proposition 3.1], we have
$$\widehat Y^n_{t\wedge\tau}=\widehat Y^n_t,\qquad\text{and}\qquad \widehat Z^n_t=0\ \text{ on }\{t>\tau\}.$$

Since $e^{\nu\tau}\xi$ belongs to $L^{2p}(\mathcal F_\tau)$, there exists a process $(\eta_t)_{t\ge0}$ in $\mathcal H^{2p,0}$ such that $\eta_t=0$ if $t>\tau$ and
$$e^{\nu\tau}\xi = \mathbb E\big[e^{\nu\tau}\xi\big] + \int_0^{\tau}\eta_s\,dW_s.$$
We introduce still more notation. For each $t>n$ we set
$$\widehat Y^n_t = \zeta_t := \mathbb E\big[e^{\nu\tau}\xi\mid\mathcal F_t\big],\qquad\text{and}\qquad \widehat Z^n_t=\eta_t,$$
and, for each nonnegative $t$,
$$Y^n_t = e^{-\nu(t\wedge\tau)}\,\widehat Y^n_t,\qquad\text{and}\qquad Z^n_t = e^{-\nu(t\wedge\tau)}\,\widehat Z^n_t.$$
This process satisfies $Y^n_{t\wedge\tau}=Y^n_t$ and $Z^n_t=0$ on $\{t>\tau\}$; moreover, $(Y^n,Z^n)$ solves the BSDE
$$Y^n_t = \xi + \int_{t\wedge\tau}^{\tau}f_n(s,Y^n_s,Z^n_s)\,ds - \int_{t\wedge\tau}^{\tau}Z^n_s\,dW_s,\qquad t\ge 0, \tag{15}$$
where $f_n(t,y,z)=\mathbf 1_{t\le n}\,f(t,y,z)+\mathbf 1_{t>n}\,\nu\,y$ (cf. [5]). We start with a technical lemma.

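The inequality (14) used in the uniqueness proof is purely algebraic, and it can be sanity-checked numerically. The scalar generator below is a hypothetical example of ours, not an object from the paper: $f(y,z)=y-y^3+\gamma z$, which is monotone with $\mu=-1$ and $\gamma$-Lipschitz in $z$.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, mu = 2.0, -1.0   # Lipschitz constant in z, monotonicity constant in y

def f(y, z):
    # hypothetical scalar generator: y - y^3 is monotone with mu = -1
    return y - y**3 + gamma * z

# check 2(y-y')(f(y,z)-f(y',z')) <= (-2*mu + gamma^2/eps)|y-y'|^2 + eps|z-z'|^2
ok = True
for _ in range(10_000):
    y, yp, z, zp = 3.0 * rng.normal(size=4)
    eps = rng.uniform(0.1, 1.0)
    lhs = 2.0 * (y - yp) * (f(y, z) - f(yp, zp))
    rhs = (-2.0 * mu + gamma**2 / eps) * (y - yp) ** 2 + eps * (z - zp) ** 2
    ok = ok and (lhs <= rhs + 1e-9)
print(ok)  # True
```

The check passes for every sample because (14) holds pointwise: the monotone part contributes at most $-2\mu|y-y'|^2$ and the $z$-increment is absorbed by Young's inequality.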
Lemma 4.2 Let the assumptions (A 3) and (A 4) hold. Then, with the notation
$$K(\xi,f) = K\,\mathbb E\Big[e^{p\rho\tau}|\xi|^{2p} + \Big(\int_0^{\tau}e^{(\rho/2)s}\,\big|f(s,0,0)\big|\,ds\Big)^{2p}\Big],$$
we have
$$\sup_{n\in\mathbb N}\ \mathbb E\Big[\sup_{t\ge0}e^{p\rho(t\wedge\tau)}|Y^n_t|^{2p} + \Big(\int_0^{\tau}e^{\rho s}|Y^n_s|^2\,ds\Big)^{p} + \Big(\int_0^{\tau}e^{\rho s}|Z^n_s|^2\,ds\Big)^{p}\Big]\le K(\xi,f), \tag{16}$$
and also, for $\theta=\rho-2\nu$,
$$\mathbb E\Big[\sup_{t\ge0}e^{p\theta(t\wedge\tau)}|\zeta_t|^{2p} + \Big(\int_0^{\tau}e^{\theta s}|\zeta_s|^2\,ds\Big)^{p} + \Big(\int_0^{\tau}e^{\theta s}|\eta_s|^2\,ds\Big)^{p}\Big]\le K\,\mathbb E\big[e^{p\rho\tau}|\xi|^{2p}\big]. \tag{17}$$

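The Jensen argument behind estimate (18) in the proof below is used twice; spelled out (our reconstruction of the computation):

```latex
% With Y^n_{n\wedge\tau} = e^{-\nu(n\wedge\tau)}\,
%   \mathbb{E}\bigl[e^{\nu\tau}\xi \mid \mathcal{F}_{n\wedge\tau}\bigr]
% and \rho/2 - \nu > 0, so that
%   e^{(\rho/2-\nu)(n\wedge\tau)} \le e^{(\rho/2-\nu)\tau}:
e^{p\rho(n\wedge\tau)}\bigl|Y^n_{n\wedge\tau}\bigr|^{2p}
  = \Bigl|\mathbb{E}\bigl[e^{(\rho/2-\nu)(n\wedge\tau)}\,e^{\nu\tau}\xi
      \bigm| \mathcal{F}_{n\wedge\tau}\bigr]\Bigr|^{2p}
  \le \Bigl|\mathbb{E}\bigl[e^{(\rho/2)\tau}\,|\xi|
      \bigm| \mathcal{F}_{n\wedge\tau}\bigr]\Bigr|^{2p}
  \le \mathbb{E}\bigl[e^{p\rho\tau}\,|\xi|^{2p}
      \bigm| \mathcal{F}_{n\wedge\tau}\bigr],
% the last step being Jensen's inequality for x \mapsto x^{2p};
% taking expectations yields (18).
```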
Proof. Firstly, let us remark that $Z^n_t=\eta_t=0$ if $t>\tau$ and that, since $Y^n_t=\xi$ if $t\ge\tau$, we have $\sup_{t\ge0}e^{p\rho(t\wedge\tau)}|Y^n_t|^{2p}=\sup_{t\le\tau}e^{p\rho t}|Y^n_t|^{2p}$. Moreover, since $\rho>\gamma^2-2\mu$, we can find $\varepsilon$ such that $0<\varepsilon<1$ and $\rho>\gamma^2/\varepsilon-2\mu$. Applying Proposition 3.2 (actually a very mild extension of it that deals with bounded stopping times as terminal times), we get
$$\mathbb E\Big[\sup_{t\le n\wedge\tau}e^{p\rho t}|Y^n_t|^{2p} + \Big(\int_0^{n\wedge\tau}e^{\rho s}|Y^n_s|^2\,ds\Big)^p + \Big(\int_0^{n\wedge\tau}e^{\rho s}|Z^n_s|^2\,ds\Big)^p\Big] \le K\,\mathbb E\Big[e^{p\rho(n\wedge\tau)}|Y^n_{n\wedge\tau}|^{2p} + \Big(\int_0^{n\wedge\tau}e^{(\rho/2)s}\,\big|f(s,0,0)\big|\,ds\Big)^{2p}\Big].$$
We have $Y^n_{n\wedge\tau}=Y^n_n=e^{-\nu(n\wedge\tau)}\,\mathbb E\big[e^{\nu\tau}\xi\mid\mathcal F_{n\wedge\tau}\big]$ and, using Jensen's inequality together with the fact that $\rho/2-\nu>0$, we deduce immediately that
$$\mathbb E\Big[e^{p\rho(n\wedge\tau)}|Y^n_{n\wedge\tau}|^{2p}\Big] = \mathbb E\Big[\Big|\mathbb E\big[e^{(\rho/2-\nu)(n\wedge\tau)}\,e^{\nu\tau}\xi\mid\mathcal F_{n\wedge\tau}\big]\Big|^{2p}\Big] \le \mathbb E\big[e^{p\rho\tau}|\xi|^{2p}\big]. \tag{18}$$
Hence, for each integer $n$,
$$\mathbb E\Big[\sup_{t\le n\wedge\tau}e^{p\rho t}|Y^n_t|^{2p} + \Big(\int_0^{n\wedge\tau}e^{\rho s}|Y^n_s|^2\,ds\Big)^p + \Big(\int_0^{n\wedge\tau}e^{\rho s}|Z^n_s|^2\,ds\Big)^p\Big]\le K(\xi,f).$$

It remains to prove that we can find the same upper bound for
$$\mathbb E\Big[\sup_{n\wedge\tau<t\le\tau}e^{p\rho t}|Y^n_t|^{2p} + \Big(\int_{n\wedge\tau}^{\tau}e^{\rho s}|Y^n_s|^2\,ds\Big)^p + \Big(\int_{n\wedge\tau}^{\tau}e^{\rho s}|Z^n_s|^2\,ds\Big)^p\Big].$$
But the expectation is over the set $\{n<\tau\}$ and, coming back to the definition of $(\widehat Y^n,\widehat Z^n)$ for $t>n$, it is enough to check that
$$\mathbb E\Big[\sup_{t\ge0}e^{p(\rho-2\nu)(t\wedge\tau)}|\zeta_t|^{2p} + \Big(\int_0^{\tau}e^{(\rho-2\nu)s}|\zeta_s|^2\,ds\Big)^p + \Big(\int_0^{\tau}e^{(\rho-2\nu)s}|\eta_s|^2\,ds\Big)^p\Big] \le K\,\mathbb E\big[e^{p\rho\tau}|\xi|^{2p}\big]$$
to get the inequality (16) of the lemma and thus to complete the proof, since, in view of the definition of $\theta$, the previous inequality is nothing but the inequality (17). But, for each $n$, $(\zeta,\eta)$ solves the following BSDE:
$$\zeta_t = \mathbb E\big[e^{\nu\tau}\xi\mid\mathcal F_{n\wedge\tau}\big] - \int_{t}^{n\wedge\tau}\eta_s\,dW_s,\qquad 0\le t\le n,$$
and, by Proposition 3.2, since $\theta=\rho-2\nu>0$,
$$\mathbb E\Big[\sup_{t\le n\wedge\tau}e^{p\theta t}|\zeta_t|^{2p} + \Big(\int_0^{n\wedge\tau}e^{\theta s}|\zeta_s|^2\,ds\Big)^p + \Big(\int_0^{n\wedge\tau}e^{\theta s}|\eta_s|^2\,ds\Big)^p\Big] \le K\,\mathbb E\big[e^{p\theta(n\wedge\tau)}|\zeta_{n\wedge\tau}|^{2p}\big].$$
We have already seen (cf. (18)) that $\mathbb E\big[e^{p\theta(n\wedge\tau)}|\zeta_{n\wedge\tau}|^{2p}\big]\le\mathbb E\big[e^{p\rho\tau}|\xi|^{2p}\big]$, and thus the proof of this rather technical lemma is complete. $\Box$

With the help of this lemma we can construct a solution to the BSDE (13). This is the aim of the following theorem.

Theorem 4.3 Under the assumptions (A 3) and (A 4), the BSDE (13) has a unique solution $(Y,Z)$ in the space $\mathcal S^{2,\rho}\times\mathcal H^{2,\rho}$, which satisfies moreover
$$\mathbb E\Big[\sup_{t\ge0}e^{p\rho(t\wedge\tau)}|Y_t|^{2p} + \Big(\int_0^{\tau}e^{\rho s}|Y_s|^2\,ds\Big)^p + \Big(\int_0^{\tau}e^{\rho s}|Z_s|^2\,ds\Big)^p\Big]\le K(\xi,f).$$

Proof. The uniqueness part of this claim is already proved in Proposition 4.1. We concentrate on the existence part. We split the proof into the following two steps: first we show that the sequence $(Y^n,Z^n)_{n\in\mathbb N}$ is a Cauchy sequence in the space $\mathcal S^{2,\rho}\times\mathcal H^{2,\rho}$, and then we prove that the limiting process is indeed a solution.

Let us first recall that, for each integer $n$, the process $(Y^n,Z^n)$ satisfies $Y^n_{t\wedge\tau}=Y^n_t$ and $Z^n_t=0$ on $\{t>\tau\}$, and solves the BSDE (15) whose generator $f_n$ is defined in the following way: $f_n(t,y,z)=\mathbf 1_{t\le n}\,f(t,y,z)+\mathbf 1_{t>n}\,\nu\,y$. If we fix $m\ge n$, Itô's formula gives, since we have also $Y^m_{m\wedge\tau}=Y^m_m=Y^n_{m\wedge\tau}=Y^n_m=e^{-\nu(m\wedge\tau)}\zeta_{m\wedge\tau}$, for $t\le m$,
$$e^{\rho(t\wedge\tau)}|\delta Y_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}|\delta Z_s|^2\,ds = 2\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\big(f_m(s,Y^m_s,Z^m_s)-f_n(s,Y^n_s,Z^n_s)\big)\,ds - \rho\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}|\delta Y_s|^2\,ds - 2\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\delta Z_s\,dW_s,$$
where we have set $\delta Y=Y^m-Y^n$ and $\delta Z=Z^m-Z^n$. It follows from the definition of $f_n$ that
$$e^{\rho(t\wedge\tau)}|\delta Y_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}|\delta Z_s|^2\,ds = 2\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\big(f(s,Y^m_s,Z^m_s)-f(s,Y^n_s,Z^n_s)\big)\,ds + 2\int_{(t\vee n)\wedge\tau}^{m\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\big(f(s,Y^n_s,Z^n_s)-\nu Y^n_s\big)\,ds - \rho\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}|\delta Y_s|^2\,ds - 2\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\delta Z_s\,dW_s.$$

Since $\rho>\gamma^2-2\mu$, we can find $\varepsilon$ such that $0<\varepsilon<1$ and $\lambda:=\rho-\gamma^2/\varepsilon+2\mu>0$. Using the inequality (14) with this $\varepsilon$, we deduce from the previous identity that
$$e^{\rho(t\wedge\tau)}|\delta Y_{t\wedge\tau}|^2 + (1-\varepsilon)\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}|\delta Z_s|^2\,ds \le -\lambda\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}|\delta Y_s|^2\,ds - 2\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\delta Z_s\,dW_s + 2\int_{(t\vee n)\wedge\tau}^{m\wedge\tau}e^{\rho s}\,|\delta Y_s|\,\big|f(s,Y^n_s,Z^n_s)-\nu Y^n_s\big|\,ds.$$
Now, using the inequality $2ab\le\vartheta a^2+b^2/\vartheta$, with $\vartheta<\lambda$, for the last term of the right-hand side of the previous inequality, we get, for each $t\le m$, setting $\beta=\min(1-\varepsilon,\lambda-\vartheta)>0$,
$$e^{\rho(t\wedge\tau)}|\delta Y_{t\wedge\tau}|^2 + \beta\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}\big(|\delta Y_s|^2+|\delta Z_s|^2\big)\,ds \le \frac1\vartheta\int_{n\wedge\tau}^{m\wedge\tau}e^{\rho s}\big|f(s,Y^n_s,Z^n_s)-\nu Y^n_s\big|^2\,ds - 2\int_{t\wedge\tau}^{m\wedge\tau}e^{\rho s}\,\delta Y_s\cdot\delta Z_s\,dW_s. \tag{19}$$
In particular, since the expectation of the stochastic integral vanishes (cf. Lemma 4.2), we have
$$\mathbb E\Big[\int_0^{m\wedge\tau}e^{\rho s}\big(|\delta Y_s|^2+|\delta Z_s|^2\big)\,ds\Big] \le K\,\mathbb E\Big[\int_{n\wedge\tau}^{m\wedge\tau}e^{\rho s}\big|f(s,Y^n_s,Z^n_s)-\nu Y^n_s\big|^2\,ds\Big].$$
Coming back to the inequality (19), Burkholder's inequality yields
$$\mathbb E\Big[\sup_{t\le m\wedge\tau}e^{\rho t}|\delta Y_t|^2\Big] \le K\,\mathbb E\Big[\int_{n\wedge\tau}^{m\wedge\tau}e^{\rho s}\big|f(s,Y^n_s,Z^n_s)-\nu Y^n_s\big|^2\,ds\Big] + K\,\mathbb E\Big[\Big(\int_0^{m\wedge\tau}e^{2\rho s}|\delta Y_s|^2|\delta Z_s|^2\,ds\Big)^{1/2}\Big].$$
But, by an argument already used,
$$K\,\mathbb E\Big[\Big(\int_0^{m\wedge\tau}e^{2\rho s}|\delta Y_s|^2|\delta Z_s|^2\,ds\Big)^{1/2}\Big] \le \frac12\,\mathbb E\Big[\sup_{t\le m\wedge\tau}e^{\rho t}|\delta Y_t|^2\Big] + \frac{K^2}2\,\mathbb E\Big[\int_0^{m\wedge\tau}e^{\rho s}|\delta Z_s|^2\,ds\Big].$$
As a consequence, we obtain the inequality
$$\mathbb E\Big[\sup_{t\le m\wedge\tau}e^{\rho t}|\delta Y_t|^2 + \int_0^{m\wedge\tau}e^{\rho s}\big(|\delta Y_s|^2+|\delta Z_s|^2\big)\,ds\Big] \le K\,\Gamma_n,$$
and, since $Y^m_t=Y^n_t$ and $Z^m_t=Z^n_t$ as soon as $t\ge m$, $Y^i_t=\xi$ on $\{t\ge\tau\}$ for each $i$, and $\delta Z_t=0$ on $\{t>\tau\}$, we deduce from the previous inequality that
$$\mathbb E\Big[\sup_{t\ge0}e^{\rho(t\wedge\tau)}|\delta Y_t|^2 + \int_0^{\tau}e^{\rho s}\big(|\delta Y_s|^2+|\delta Z_s|^2\big)\,ds\Big] \le K\,\Gamma_n, \tag{20}$$
where we have set $\Gamma_n=\mathbb E\big[\int_{n\wedge\tau}^{\tau}e^{\rho s}\,\big|f(s,Y^n_s,Z^n_s)-\nu Y^n_s\big|^2\,ds\big]$. But the growth assumption on $f$, (A 3).3, implies that, up to a constant, $\Gamma_n$ is bounded from above by
$$\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{\rho s}\Big(\big|f(s,0,0)\big|^2 + \alpha^2 + |Y^n_s|^2 + |Z^n_s|^2 + |Y^n_s|^{2p}\Big)\,ds\Big].$$
Since, by assumption (A 4), $\mathbb E\big[\int_0^{\tau}e^{\rho s}|f(s,0,0)|^2\,ds\big]$ and $\mathbb E\big[e^{\rho\tau}\alpha^2\big]$ are finite, the first two terms of the previous upper bound tend to $0$ as $n$ goes to $\infty$ by dominated convergence. Moreover, coming back to the definition of $(\widehat Y^n,\widehat Z^n)$ for $t>n$, we have
$$\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{\rho s}\big(|Y^n_s|^2+|Z^n_s|^2\big)\,ds\Big] = \mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{(\rho-2\nu)s}\big(|\zeta_s|^2+|\eta_s|^2\big)\,ds\Big],$$

and by Lemma 4.2 (cf. (17)) this quantity also tends to $0$ as $n$ goes to $\infty$. It remains to check that the same is true for
$$\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{\rho s}|Y^n_s|^{2p}\,ds\Big] = \mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{(\rho-2p\nu)s}|\zeta_s|^{2p}\,ds\Big],$$
where, let us recall it, $\zeta_s$ means $\mathbb E\big[e^{\nu\tau}\xi\mid\mathcal F_s\big]$. By Jensen's inequality, it is enough to show the following:
$$\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{(\rho-2p\nu)s}\,\Big(\mathbb E\big[e^{p\nu\tau}|\xi|^p\mid\mathcal F_s\big]\Big)^2\,ds\Big] \longrightarrow 0,\qquad\text{as }n\to\infty.$$
If $\rho>2p\nu$, since $\mathbb E\big[e^{2p\nu\tau}|\xi|^{2p}\big]\le\mathbb E\big[e^{p\rho\tau}|\xi|^{2p}\big]<\infty$, Lemma 4.1 in [5] gives
$$\mathbb E\Big[\int_0^{\tau}e^{(\rho-2p\nu)s}\,\Big(\mathbb E\big[e^{p\nu\tau}|\xi|^p\mid\mathcal F_s\big]\Big)^2\,ds\Big]<\infty,$$
from which we get the result. Now, we deal with the case $\rho\le2p\nu$, which implies $0<2\nu<\rho\le2p\nu<p\rho$. Using once more Jensen's inequality, we have
$$\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{(\rho-2p\nu)s}\,\Big(\mathbb E\big[e^{p\nu\tau}|\xi|^p\mid\mathcal F_s\big]\Big)^2\,ds\Big] \le \mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{(\rho-2p\nu)s}\,\mathbb E\big[e^{2p\nu\tau}|\xi|^{2p}\mid\mathcal F_s\big]\,ds\Big] = \mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{(\rho-2p\nu)s}\,\mathbb E\big[e^{(2\nu-\rho)p\tau}\,e^{p\rho\tau}|\xi|^{2p}\mid\mathcal F_s\big]\,ds\Big],$$
and, since $\rho>2\nu$, we have $e^{(2\nu-\rho)p\tau}\le e^{(2\nu-\rho)ps}$ on $\{s\le\tau\}$. Hence, it follows that
$$\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{(\rho-2p\nu)s}\,\Big(\mathbb E\big[e^{p\nu\tau}|\xi|^p\mid\mathcal F_s\big]\Big)^2\,ds\Big] \le \mathbb E\Big[e^{p\rho\tau}|\xi|^{2p}\Big]\,\int_n^{\infty}e^{\rho(1-p)s}\,ds.$$
Since $\rho(1-p)<0$ and $\mathbb E\big[e^{p\rho\tau}|\xi|^{2p}\big]<\infty$, this completes the proof of the last case. Thus we have shown that $\Gamma_n$ converges to $0$ as $n$ tends to $\infty$ and, coming back to the inequality (20), we get
$$\mathbb E\Big[\sup_{t\ge0}e^{\rho(t\wedge\tau)}|\delta Y_t|^2 + \int_0^{\tau}e^{\rho s}|\delta Y_s|^2\,ds + \int_0^{\tau}e^{\rho s}|\delta Z_s|^2\,ds\Big] \longrightarrow 0$$
as $n$ tends to $\infty$, uniformly in $m$. In particular, the sequence $(Y^n,Z^n)_{n\in\mathbb N}$ is a Cauchy sequence in $\mathcal S^{2,\rho}\times\mathcal H^{2,\rho}$ and thus converges in this space to a process $(Y,Z)$. Moreover, taking into account the inequality (16) of Lemma 4.2, Fatou's lemma implies
$$\mathbb E\Big[\sup_{t\ge0}e^{p\rho(t\wedge\tau)}|Y_t|^{2p} + \Big(\int_0^{\tau}e^{\rho s}|Y_s|^2\,ds\Big)^p + \Big(\int_0^{\tau}e^{\rho s}|Z_s|^2\,ds\Big)^p\Big]\le K(\xi,f). \tag{21}$$

It remains to check that the process $(Y,Z)$ solves the BSDE (13). To do this, we follow the discussion of R. W. R. Darling and E. Pardoux [5, pp. 1150–1151]. Let us pick a real number $\sigma$ such that $\sigma<0\wedge(\rho/2)\wedge(p\rho)$ (this implies that $\sigma<\rho$) and let us fix a nonnegative real number $t$. Since $(Y^n,Z^n)$ solves the BSDE (15), we have, from Itô's formula, for $n\ge t$,
$$e^{\sigma(t\wedge\tau)}Y^n_t = e^{\sigma\tau}\xi + \int_{t\wedge\tau}^{\tau}e^{\sigma s}\big(f(s,Y^n_s,Z^n_s)-\sigma Y^n_s\big)\,ds + \int_{n\wedge\tau}^{\tau}e^{\sigma s}\big(\nu Y^n_s-f(s,Y^n_s,Z^n_s)\big)\,ds - \int_{t\wedge\tau}^{\tau}e^{\sigma s}Z^n_s\,dW_s,$$

and we want to pass to the limit in this equation knowing that
$$\mathbb E\Big[\sup_{t\ge0}e^{\rho(t\wedge\tau)}|Y_t-Y^n_t|^2 + \int_0^{\tau}e^{\rho s}|Y_s-Y^n_s|^2\,ds + \int_0^{\tau}e^{\rho s}|Z_s-Z^n_s|^2\,ds\Big] \longrightarrow 0.$$
We have $e^{\sigma(t\wedge\tau)}Y^n_t\to e^{\sigma(t\wedge\tau)}Y_t$ in $L^2$. Moreover, Hölder's inequality gives
$$\mathbb E\Big[\int_0^{\tau}e^{\sigma s}|Y^n_s-Y_s|\,ds\Big] \le \mathbb E\Big[\int_0^{\tau}e^{\rho s}|Y^n_s-Y_s|^2\,ds\Big]^{1/2}\,\Big(\int_0^{\infty}e^{(2\sigma-\rho)s}\,ds\Big)^{1/2},$$
from which we deduce, since $2\sigma<\rho$, that $\int_{t\wedge\tau}^{\tau}e^{\sigma s}\,\sigma Y^n_s\,ds$ tends to $\int_{t\wedge\tau}^{\tau}e^{\sigma s}\,\sigma Y_s\,ds$ in $L^1$. We remark also that $\int_{t\wedge\tau}^{\tau}e^{\sigma s}Z^n_s\,dW_s$ converges to $\int_{t\wedge\tau}^{\tau}e^{\sigma s}Z_s\,dW_s$ in $L^2$ since, thanks to $2\sigma<\rho$,
$$\mathbb E\Big[\Big|\int_{t\wedge\tau}^{\tau}e^{\sigma s}\big(Z^n_s-Z_s\big)\,dW_s\Big|^2\Big] \le \mathbb E\Big[\int_0^{\tau}e^{\rho s}\big|Z^n_s-Z_s\big|^2\,ds\Big].$$
Using Hölder's inequality, we have
$$\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{\sigma s}\big|\nu Y^n_s-f(s,Y^n_s,Z^n_s)\big|\,ds\Big] \le \frac1{\sqrt{\rho-2\sigma}}\,\mathbb E\Big[\int_{n\wedge\tau}^{\tau}e^{\rho s}\big|\nu Y^n_s-f(s,Y^n_s,Z^n_s)\big|^2\,ds\Big]^{1/2},$$
and we have already proved that the right-hand side tends to $0$ (see the definition of $\Gamma_n$). It remains to study the term $\int_{t\wedge\tau}^{\tau}e^{\sigma s}f(s,Y^n_s,Z^n_s)\,ds$. But, since $f$ is Lipschitz in $z$, we have
$$\mathbb E\Big[\int_{t\wedge\tau}^{\tau}e^{\sigma s}\big|f(s,Y^n_s,Z^n_s)-f(s,Y^n_s,Z_s)\big|\,ds\Big] \le \frac{\gamma}{\sqrt{\rho-2\sigma}}\,\mathbb E\Big[\int_0^{\tau}e^{\rho s}\big|Z^n_s-Z_s\big|^2\,ds\Big]^{1/2},$$
and thus this term goes to $0$ with $n$. So now, it suffices to show that
$$\mathbb E\Big[\int_{t\wedge\tau}^{\tau}e^{\sigma s}\big|f(s,Y^n_s,Z_s)-f(s,Y_s,Z_s)\big|\,ds\Big] \longrightarrow 0$$
to control the limit in the equation. We prove this by showing that each subsequence has a further subsequence along which the above convergence holds. Indeed, if we pick a subsequence (still denoted by $Y^n$), since $\mathbb E\big[\sup_{t\ge0}e^{\rho(t\wedge\tau)}|Y_t-Y^n_t|^2\big]\to0$, there exists a further subsequence, still denoted in the same way, such that $\mathbb P$-a.s., $\forall t$, $Y^n_t\to Y_t$. By the continuity of the function $f$ with respect to $y$, $\mathbb P$-a.s., $f(t,Y^n_t,Z_t)\to f(t,Y_t,Z_t)$ for every $t$. If we prove that
$$\sup_{n\in\mathbb N}\ \mathbb E\Big[\int_0^{\tau}e^{\sigma s}\big|f(s,Y^n_s,Z_s)-f(s,Y_s,Z_s)\big|^2\,ds\Big]<\infty,$$

then the sequence $\big(|f(\cdot,Y^n_\cdot,Z_\cdot)-f(\cdot,Y_\cdot,Z_\cdot)|\big)_{n\in\mathbb N}$ will be uniformly integrable for the finite measure $e^{\sigma s}\mathbf 1_{s\le\tau}\,ds\otimes d\mathbb P$ (remember that $\sigma<0$) and thus convergent in $L^1(e^{\sigma s}\mathbf 1_{s\le\tau}\,ds\otimes d\mathbb P)$, which is the desired result. But from the growth assumption on $f$, we have
$$\mathbb E\Big[\int_0^{\tau}e^{\sigma s}\big|f(s,Y^n_s,Z_s)-f(s,Y_s,Z_s)\big|^2\,ds\Big] \le K\,\mathbb E\Big[\int_0^{\tau}e^{\sigma s}\big(|f(s,0,0)|^2+\alpha^2+|Z_s|^2\big)\,ds\Big] + K\,\mathbb E\Big[\int_0^{\tau}e^{\sigma s}\big(|Y^n_s|^{2p}+|Y_s|^{2p}\big)\,ds\Big].$$
Since $\sigma<\rho$, assumption (A 4) together with the inequalities (16) and (21) implies that
$$\sup_{n\in\mathbb N}\ \mathbb E\Big[\int_0^{\tau}e^{\sigma s}\big(|f(s,0,0)|^2+\alpha^2+|Z_s|^2\big)\,ds\Big]$$
is finite. Moreover,
$$\mathbb E\Big[\int_0^{\tau}e^{\sigma s}|Y^n_s|^{2p}\,ds\Big] \le \mathbb E\Big[\sup_{t\ge0}e^{p\rho(t\wedge\tau)}|Y^n_t|^{2p}\Big]\,\int_0^{\infty}e^{(\sigma-p\rho)s}\,ds.$$
Since $\sigma<p\rho$, we conclude the proof of the convergence of the last term by using the first part of the inequalities (16) and (21). Passing to the limit as $n$ goes to infinity, we get, for each $t$,
$$e^{\sigma(t\wedge\tau)}Y_t = e^{\sigma\tau}\xi + \int_{t\wedge\tau}^{\tau}e^{\sigma s}\big(f(s,Y_s,Z_s)-\sigma Y_s\big)\,ds - \int_{t\wedge\tau}^{\tau}e^{\sigma s}Z_s\,dW_s.$$
It then follows by Itô's formula that $(Y,Z)$ solves the BSDE (13). $\Box$
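As a final illustration (our own toy experiment, not from the paper), consider the linear generator $f(t,y,z)=-\mu y$ with $\mu>0$, $\xi=1$, and $\tau$ the exit time of a one-dimensional Brownian motion from $(-a,a)$. The solution of (13) is then $Y_t=\mathbb E\big[e^{-\mu(\tau-t\wedge\tau)}\mid\mathcal F_t\big]$, so that $Y_0=\mathbb E[e^{-\mu\tau}]=1/\cosh(a\sqrt{2\mu})$, a classical formula. A crude Euler simulation (grid size, path count and estimator are our choices; the discrete monitoring of the exit biases the estimate slightly downward) recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
a, mu = 1.0, 0.5
dt, n_steps, n_paths = 0.004, 4000, 2000   # horizon 16, far beyond typical exit times

# simulate Brownian paths and locate the first exit time from (-a, a)
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
exited = np.abs(W) >= a
first = np.where(exited.any(axis=1), exited.argmax(axis=1), n_steps - 1)
tau = (first + 1) * dt

estimate = np.exp(-mu * tau).mean()           # Monte Carlo value of Y_0
exact = 1.0 / np.cosh(a * np.sqrt(2.0 * mu))  # closed form, about 0.648
print(abs(estimate - exact))                  # small (discretization + MC error)
```

This is the random-terminal-time analogue of the nonlinear Feynman–Kac representation that motivates the paper: here $\tau$ is unbounded yet a.s. finite, exactly the setting of (A 4) with a possibly negative $\rho$.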

References
[1] G. Barles, R. Buckdahn, and E. Pardoux, Backward stochastic differential equations and integral-partial differential equations, Stochastics Stochastics Rep. 60 (1997), no. 1-2, 57-83.
[2] G. Barles, H. M. Soner, and P. E. Souganidis, Front propagation and phase field theory, SIAM J. Control Optim. 31 (1993), no. 2, 439-469.
[3] Ph. Briand, BSDE's and viscosity solutions of semilinear PDE's, Stochastics Stochastics Rep. 64 (1998), no. 1-2, 1-32.
[4] Ph. Briand and Y. Hu, Stability of BSDEs with random terminal time and homogenization of semilinear elliptic PDEs, J. Funct. Anal. 155 (1998), no. 2, 455-494.
[5] R. W. R. Darling and E. Pardoux, Backwards SDE with random terminal time and applications to semilinear elliptic PDE, Ann. Probab. 25 (1997), no. 3, 1135-1159.
[6] N. El Karoui, S. Peng, and M.-C. Quenez, Backward stochastic differential equations in finance, Math. Finance 7 (1997), no. 1, 1-71.
[7] M. Escobedo, O. Kavian, and H. Matano, Large time behavior of solutions of a dissipative semilinear heat equation, Comm. Partial Differential Equations 20 (1995), no. 7-8, 1427-1452.
[8] S. Hamadène, Équations différentielles stochastiques rétrogrades : le cas localement lipschitzien, Ann. Inst. H. Poincaré Probab. Statist. 32 (1996), no. 5, 645-659.
[9] M. Kobylanski, Résultats d'existence et d'unicité pour des équations différentielles stochastiques rétrogrades avec des générateurs à croissance quadratique, C. R. Acad. Sci. Paris Sér. I Math. 324 (1997), no. 1, 81-86.
[10] J.-P. Lepeltier and J. San Martín, Backward stochastic differential equations with continuous coefficients, Statist. Probab. Lett. 32 (1997), no. 4, 425-430.
[11] J.-P. Lepeltier and J. San Martín, Existence for BSDE with superlinear-quadratic coefficient, Stochastics Stochastics Rep. 63 (1998), no. 3-4, 227-240.
[12] É. Pardoux, Backward stochastic differential equations and viscosity solutions of systems of semilinear parabolic and elliptic PDEs of second order, Stochastic Analysis and Related Topics VI (The Geilo Workshop, 1996) (L. Decreusefond, J. Gjerde, B. Øksendal, and A. S. Üstünel, eds.), Progr. Probab., vol. 42, Birkhäuser Boston, Boston, MA, 1998, pp. 79-127.
[13] É. Pardoux, BSDEs, weak convergence and homogenization of semilinear PDEs, Nonlinear Analysis, Differential Equations and Control (Montréal, QC, 1998), Kluwer Acad. Publ., Dordrecht, 1999, pp. 503-549.

[14] É. Pardoux and S. Peng, Adapted solution of a backward stochastic differential equation, Systems Control Lett. 14 (1990), no. 1, 55-61.
[15] É. Pardoux and S. Peng, Backward stochastic differential equations and quasilinear parabolic partial differential equations, Stochastic Partial Differential Equations and Their Applications (Charlotte, NC, 1991) (B. L. Rozovskii and R. B. Sowers, eds.), Lecture Notes in Control and Inform. Sci., vol. 176, Springer, Berlin, 1992, pp. 200-217.
[16] É. Pardoux, F. Pradeilles, and Z. Rao, Probabilistic interpretation of a system of semi-linear parabolic partial differential equations, Ann. Inst. H. Poincaré Probab. Statist. 33 (1997), no. 4, 467-490.
[17] É. Pardoux and S. Zhang, Generalized BSDEs and nonlinear Neumann boundary value problems, Probab. Theory Related Fields 110 (1998), no. 4, 535-558.
[18] S. Peng, Probabilistic interpretation for systems of quasilinear parabolic partial differential equations, Stochastics Stochastics Rep. 37 (1991), no. 1-2, 61-74.
[19] S. Peng, Backward stochastic differential equations and applications to optimal control, Appl. Math. Optim. 27 (1993), no. 2, 125-144.

Philippe Briand
Institut de Recherche Mathématique, Université Rennes 1
35 042 Rennes Cedex, France
pbriand@maths.univ-rennes1.fr
http://www.maths.univ-rennes1.fr/~pbriand/

René Carmona
Statistics & Operations Research Program, Princeton University
Princeton, NJ 08544, USA
rcarmona@princeton.edu
http://www.princeton.edu/~rcarmona/
