
Computing multiple integrals involving matrix exponentials

F. Carbonell∗, J.C. Jimenez and L.M. Pedroso.

Abstract
In this paper, a generalization of a formula proposed by Van Loan in [1978, IEEE Trans. Automat. Control, 23, 395-404] for the computation of multiple integrals of exponential matrices is
introduced. In this way, the numerical evaluation of such integrals is reduced to the use of a conventional algorithm for computing matrix exponentials. The formula is applied to the evaluation of some kinds
of integrals that frequently emerge in a number of classical mathematical subjects in the framework of
differential equations, numerical methods and control engineering applications.

AMS (2000): 15A15, 65F30


Key words: multiple integrals, matrix exponential.
1 Introduction
This note deals with the computation of multiple integrals involving matrix exponentials, which frequently emerge in a number of classical mathematical subjects in the framework of differential equations,
numerical methods, control engineering applications, etc. Specifically, integrals of the form

$$\int_0^t \int_0^{s_1} \cdots \int_0^{s_{k-2}} e^{A_{11}(t-s_1)} A_{12}\, e^{A_{22}(s_1-s_2)} A_{23} \cdots e^{A_{kk} s_{k-1}}\, ds_{k-1} \cdots ds_1 \qquad (1)$$

will be considered, where the blocks A_{ij} are d_i × d_j constant matrices, t > 0 and k = 1, 2, ...
At first glance, the analytical calculation of these integrals seems to be difficult. Nevertheless, in [19]
a simple explicit formula in terms of a certain matrix exponential was given. In that way, a number of
distinctive integrals such as
$$\int_0^t e^{A_{11} s}\, A_{12}\, e^{A_{22} s}\, ds$$

and
$$\int_0^t \int_0^{s_1} e^{A_{11} s_1}\, A_{12}\, e^{A_{22}(s_1-s_2)}\, ds_2\, ds_1$$

can be easily computed as particular cases of the mentioned formula [19]. However, that formula is restricted to integrals with multiplicity k ≤ 4, which obviously limits its range of applicability.

Instituto de Cibernética, Matemática y Física, Departamento de Matemática Interdisciplinaria, Calle 15, No. 551, e/
C y D, Vedado, La Habana 4, C.P. 10400, Cuba

The purpose of this note is to generalize the formula introduced in [19] to any positive value of k.
This has been strongly motivated by the need of computing the integrals
$$\int_0^t e^{F(t-s)}\, G(s)\, ds \qquad (2)$$
and
$$\int_0^t e^{F(t-s)}\, G(s)\, G^{\top}(s)\, e^{F^{\top}(t-s)}\, ds, \qquad (3)$$
where F is an r × r constant matrix and G : R → R^{r×d} is a smooth function. Integrals of the type (2)
appear as a term in the analytical solution of linear time-dependent control systems [16] and of linear ordinary
and delay differential equations [6]. Integrals like (3) are associated with the notions of controllability
and observability Gramians of linear control systems [16]. They also appear as the covariance matrix
of the solution of linear stochastic differential equations with time-dependent additive noise [1] and, in
the context of filtering theory, as the system noise covariance matrix of the extended Kalman filter for
continuous-discrete state-space models with additive noise [10]. In addition, they arise in a number of
numerical schemes for the integration of ordinary differential equations that are based on polynomial
approximations of the remainder term of the variation of constants formula [9], [8], [7], [3], [4].
The remainder of the paper has three sections. In the first one, the generalized formula is derived, whereas in the
second one the formula is applied to the computation of the integrals (2) and (3). The last section deals
with some computational aspects of implementing such a formula.
2 Main result
Theorem 1 in [19] provides a simple way to compute single, double and triple integrals of the form
$$B_{12}(t) \equiv \int_0^t e^{A_{11}(t-u)}\, A_{12}\, e^{A_{22} u}\, du,$$

$$B_{13}(t) \equiv \int_0^t \int_0^u e^{A_{11}(t-u)}\, A_{12}\, e^{A_{22}(u-r)}\, A_{23}\, e^{A_{33} r}\, dr\, du$$

and

$$B_{14}(t) \equiv \int_0^t \int_0^u \int_0^r e^{A_{11}(t-u)}\, A_{12}\, e^{A_{22}(u-r)}\, A_{23}\, e^{A_{33}(r-w)}\, A_{34}\, e^{A_{44} w}\, dw\, dr\, du,$$
in terms of a single matrix exponential, but not integrals of the form (1) with k ≥ 5. The next theorem
overcomes this restriction.
Theorem 1 Let d_1, d_2, ..., d_n be positive integers and let the n × n block triangular matrix A = [(A_{lj})]_{l,j=1:n} be defined by

$$A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ 0 & A_{22} & \cdots & A_{2n} \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & 0 & A_{nn} \end{pmatrix},$$

where the A_{lj}, l, j = 1, ..., n, are d_l × d_j matrices. Then, for t > 0,

$$e^{At} = \begin{pmatrix} B_{11}(t) & B_{12}(t) & \cdots & B_{1n}(t) \\ 0 & B_{22}(t) & \cdots & B_{2n}(t) \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & 0 & B_{nn}(t) \end{pmatrix},$$

with

$$B_{ll}(t) = e^{A_{ll} t}, \quad l = 1, \ldots, n, \qquad (4)$$

$$B_{lj}(t) = \int_0^t M_{(l,j)}(t, s_1)\, ds_1 + \sum_{k=1}^{j-l-1} \int_0^t \int_0^{s_1} \cdots \int_0^{s_k} \sum_{l<i_1<\cdots<i_k<j} M_{(l,i_1,\ldots,i_k,j)}(t, s_1, \ldots, s_{k+1})\, ds_{k+1} \cdots ds_1, \qquad (5)$$

$$l = 1, \ldots, n-1, \quad j = l+1, \ldots, n,$$

where for any multi-index (i_1, ..., i_k) ∈ N^k and vector (s_1, ..., s_k) ∈ R^k the matrices M_{(i_1,...,i_k)}(s_1, ..., s_k)
are defined by

$$M_{(i_1,\ldots,i_k)}(s_1,\ldots,s_k) = \left( \prod_{r=1}^{k-1} e^{A_{i_r i_r}(s_r - s_{r+1})}\, A_{i_r i_{r+1}} \right) e^{A_{i_k i_k} s_k}.$$
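For instance, instantiating (5) with n = 3 and (l, j) = (1, 3) gives

$$B_{13}(t) = \int_0^t e^{A_{11}(t-s_1)}\, A_{13}\, e^{A_{33} s_1}\, ds_1 + \int_0^t \int_0^{s_1} e^{A_{11}(t-s_1)}\, A_{12}\, e^{A_{22}(s_1-s_2)}\, A_{23}\, e^{A_{33} s_2}\, ds_2\, ds_1,$$

which, when A_{13} = 0, reduces to the double integral B_{13}(t) written above.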

Proof. The proof proceeds by induction on n. The identities (4)-(5) hold for
n = 1, 2, 3, 4 by Theorem 1 in [19]. Suppose that they also hold for n = m > 4. Let us show that
(4)-(5) then hold for any (m + 1) × (m + 1) block triangular matrix A = [(A_{lj})]_{l,j=1:m+1}.
Let us rewrite the matrix A as the 2 × 2 block triangular matrix $\widetilde{A} = [(\widetilde{A}_{lj})]_{l,j=1:2}$ defined by

$$\widetilde{A} = \begin{pmatrix} [(A_{lj})]_{l,j=1:m} & [(A_{l,m+1})]_{l=1:m} \\ 0 & A_{m+1,m+1} \end{pmatrix},$$

and let B(t) = [(B_{lj}(t))]_{l,j=1:m+1} be the (m + 1) × (m + 1) block triangular matrix given by

$$B(t) = e^{At} = e^{\widetilde{A} t}.$$
Then, by applying the identities (4)-(5) with n = 2 to the matrix $\widetilde{A}$, we obtain

$$[(B_{lj}(t))]_{l,j=1:m} = e^{\widetilde{A}_{11} t}, \qquad (6)$$

$$B_{m+1,m+1}(t) = e^{A_{m+1,m+1} t}, \qquad (7)$$

$$[(B_{l,m+1}(t))]_{l=1:m} = \int_0^t e^{\widetilde{A}_{11}(t-s_1)}\, \widetilde{A}_{12}\, e^{A_{m+1,m+1} s_1}\, ds_1. \qquad (8)$$

Since $\widetilde{A}_{11} = [(A_{lj})]_{l,j=1:m}$, (6) and (7) imply that

$$B_{ll}(t) = e^{A_{ll} t}, \quad l = 1, \ldots, m+1, \qquad (9)$$

$$B_{lj}(t) = \int_0^t M_{(l,j)}(t, s_1)\, ds_1 + \sum_{k=1}^{j-l-1} \int_0^t \int_0^{s_1} \cdots \int_0^{s_k} \sum_{l<i_1<\cdots<i_k<j} M_{(l,i_1,\ldots,i_k,j)}(t, s_1, \ldots, s_{k+1})\, ds_{k+1} \cdots ds_1, \qquad (10)$$

$$l = 1, \ldots, m-1, \quad j = l+1, \ldots, m.$$
On the other hand, since $e^{\widetilde{A}_{11}(t-s_1)} = [(B_{lj}(t-s_1))]_{l,j=1:m}$ and $\widetilde{A}_{12} = [(A_{l,m+1})]_{l=1:m}$, the identity (8)
implies the following expression for each block B_{l,m+1}(t), l = 1, ..., m:

$$B_{l,m+1}(t) = \sum_{j=l}^{m} \int_0^t B_{lj}(t-s_1)\, A_{j,m+1}\, e^{A_{m+1,m+1} s_1}\, ds_1,$$

which by (9) and (10) gives

$$B_{l,m+1}(t) = \int_0^t M_{(l,m+1)}(t, s_1)\, ds_1 + \sum_{j=l+1}^{m} \int_0^t \int_0^{t-s_1} M_{(l,j,m+1)}(t, s_1+s_2, s_1)\, ds_2\, ds_1 \qquad (11)$$

$$+ \sum_{j=l+2}^{m} \sum_{k=1}^{j-l-1} \int_0^t \int_0^{t-s_1} \int_0^{s_2} \cdots \int_0^{s_{k+1}} \sum_{l<i_1<\cdots<i_k<j} M_{(l,i_1,\ldots,i_k,j,m+1)}(t, s_1+s_2, \ldots, s_1+s_{k+2}, s_1)\, ds_{k+2} \cdots ds_1.$$

For each k = 0, 1, ..., the change of variables u_1 = s_1 + s_2, u_2 = s_1 + s_3, ..., u_{k+1} = s_1 + s_{k+2}, u_{k+2} = s_1
in each of the multiple integrals that appear in (11) yields

$$B_{l,m+1}(t) = \int_0^t M_{(l,m+1)}(t, u_1)\, du_1 + \sum_{k=1}^{m-l} \int_0^t \int_0^{u_1} \cdots \int_0^{u_k} \sum_{l<i_1<\cdots<i_k<m+1} M_{(l,i_1,\ldots,i_k,m+1)}(t, u_1, \ldots, u_{k+1})\, du_{k+1} \cdots du_1,$$

which combined with (9) and (10) shows that the identities (4) and (5) hold for n = m + 1, and the
proof concludes.
Now we are able to evaluate the integrals B_{12}(t), B_{13}(t), B_{14}(t) and the integrals

$$B_{1k}(t) = \int_0^t \int_0^{s_1} \cdots \int_0^{s_{k-2}} e^{A_{11}(t-s_1)}\, A_{12}\, e^{A_{22}(s_1-s_2)}\, A_{23} \cdots e^{A_{kk} s_{k-1}}\, ds_{k-1} \cdots ds_1$$

for k ≥ 5 as well, all of them obtained by computing just a single matrix exponential. That is, by applying the
theorem above, it follows that the blocks B_{12}(t), B_{13}(t), ..., B_{1k}(t) are easily obtained from e^{tA}, with
$$A = \begin{pmatrix} A_{11} & A_{12} & 0 & \cdots & 0 \\ 0 & A_{22} & A_{23} & \cdots & 0 \\ 0 & 0 & A_{33} & \ddots & \vdots \\ 0 & 0 & 0 & \ddots & A_{k-1,k} \\ 0 & 0 & 0 & 0 & A_{kk} \end{pmatrix}.$$
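As a concrete illustration, the following minimal sketch (not code from the paper; it assumes SciPy's expm routine and arbitrary example blocks) evaluates B_{12}(t) and B_{13}(t) for k = 3 by forming the block matrix A above and reading off the first block row of e^{tA}:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d1, d2, d3 = 2, 3, 2                 # arbitrary block sizes
A11 = rng.standard_normal((d1, d1))
A22 = rng.standard_normal((d2, d2))
A33 = rng.standard_normal((d3, d3))
A12 = rng.standard_normal((d1, d2))
A23 = rng.standard_normal((d2, d3))
t = 0.5

# Assemble the block upper-bidiagonal matrix A displayed above (k = 3).
n = d1 + d2 + d3
A = np.zeros((n, n))
A[:d1, :d1] = A11
A[:d1, d1:d1 + d2] = A12
A[d1:d1 + d2, d1:d1 + d2] = A22
A[d1:d1 + d2, d1 + d2:] = A23
A[d1 + d2:, d1 + d2:] = A33

E = expm(t * A)
B12 = E[:d1, d1:d1 + d2]   # = int_0^t e^{A11(t-u)} A12 e^{A22 u} du
B13 = E[:d1, d1 + d2:]     # = the double integral B_13(t) of this section
```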
3 Application of the generalized formula
This section is devoted to the computation of integrals of the form (2) and (3) by means of the theorem stated in
the previous section.
Firstly, suppose that G(s) is an r × d polynomial matrix function of degree p. That is,

$$G(s) = \sum_{i=0}^{p} G_i\, s^i,$$

or equivalently,

$$G(s) = G_0 + G_1 \int_0^s du + \sum_{i=2}^{p} \int_0^s \int_0^{u_1} \cdots \int_0^{u_{i-1}} (i!\, G_i)\, du_i\, du_{i-1} \cdots du_1.$$
Then, by Theorem 1 it is easy to check that
$$\int_0^t e^{F(t-s)}\, G(s)\, ds = B_{1,p+2}(t), \qquad (12)$$

where

$$B(t) = [(B_{lj}(t))]_{l,j=1:p+2} = e^{At},$$

and the block triangular matrix A = [(A_{lj})]_{l,j=1:p+2} is given by

$$A = \begin{pmatrix} F & p!\,G_p & (p-1)!\,G_{p-1} & \cdots & G_1 & G_0 \\ 0 & 0_{d\times d} & I_d & \cdots & 0 & 0 \\ 0 & 0 & 0_{d\times d} & \ddots & \vdots & \vdots \\ \vdots & \vdots & \vdots & \ddots & I_d & 0 \\ 0 & 0 & \cdots & 0 & 0_{d\times d} & I_d \\ 0 & 0 & \cdots & 0 & 0 & 0_{d\times d} \end{pmatrix}.$$
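A minimal sketch of formula (12) follows (the helper name int_expF_G and the example data are our own, not from the paper; SciPy's expm is assumed as the matrix exponential routine). For polynomial G the result is exact up to the accuracy of the matrix exponential itself.

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

def int_expF_G(F, G_coeffs, t):
    """Compute int_0^t expm(F*(t-s)) @ G(s) ds for G(s) = sum_i G_coeffs[i] * s**i."""
    r = F.shape[0]
    d = G_coeffs[0].shape[1]
    p = len(G_coeffs) - 1
    n = r + (p + 1) * d
    A = np.zeros((n, n))
    A[:r, :r] = F
    # First block row: p!*G_p, (p-1)!*G_{p-1}, ..., 1!*G_1, G_0.
    for i in range(p, -1, -1):
        col = r + (p - i) * d
        A[:r, col:col + d] = factorial(i) * G_coeffs[i]
    # Lower-right part: zero diagonal blocks and identity blocks on the superdiagonal.
    for j in range(p):
        row = r + j * d
        A[row:row + d, row + d:row + 2 * d] = np.eye(d)
    # B_{1,p+2}(t) is the top-right r x d block of expm(t*A).
    return expm(t * A)[:r, -d:]

# Example usage: F random 3x3, G(s) = G0 + G1*s + G2*s**2 with 3x2 coefficients.
rng = np.random.default_rng(1)
F = rng.standard_normal((3, 3))
G_coeffs = [rng.standard_normal((3, 2)) for _ in range(3)]
integral_value = int_expF_G(F, G_coeffs, t=0.4)
```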

In a similar way, one obtains

$$\int_0^t e^{F(t-s)}\, G(s)\, G^{\top}(s)\, e^{F^{\top}(t-s)}\, ds = B_{1,2p+2}(t)\, B_{11}^{\top}(t), \qquad (13)$$

where B(t) = e^{At} and the matrix A in this case is given by

$$A = \begin{pmatrix} F & H_{2p} & H_{2p-1} & \cdots & H_1 & H_0 \\ 0 & -F^{\top} & I_r & \cdots & 0 & 0 \\ 0 & 0 & -F^{\top} & \ddots & \vdots & \vdots \\ \vdots & \vdots & \vdots & \ddots & I_r & 0 \\ 0 & 0 & \cdots & 0 & -F^{\top} & I_r \\ 0 & 0 & \cdots & 0 & 0 & -F^{\top} \end{pmatrix},$$
with

$$H_i = \sum_{l+j=i} (l+j)!\; G_l\, G_j^{\top}, \quad i = 0, \ldots, 2p.$$
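In the simplest case p = 0 (constant G(s) ≡ G_0), the matrix A above has just two block rows, and (13) can be evaluated as in the following minimal sketch (SciPy's expm is assumed; the example data are arbitrary):

```python
# Sketch of formula (13) for constant G(s) = G0 (p = 0):
# int_0^t e^{F(t-s)} G0 G0^T e^{F^T (t-s)} ds = B12(t) @ B11(t).T,
# where [[B11, B12], [0, B22]] = expm(t * [[F, G0 @ G0.T], [0, -F.T]]).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
r, d = 4, 2
F = rng.standard_normal((r, r))
G0 = rng.standard_normal((r, d))
t = 0.3

A = np.block([[F, G0 @ G0.T],
              [np.zeros((r, r)), -F.T]])
E = expm(t * A)
B11, B12 = E[:r, :r], E[:r, r:]
Q = B12 @ B11.T     # the Gramian-type integral (an r x r symmetric matrix)
```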

In case G : R → R^{r×d} is a function with p continuous derivatives, it can be approximated by
means of its truncated Taylor expansion. That is,

$$G(s) \approx \sum_{i=0}^{p} \frac{G_i}{i!}\, s^i,$$

where the coefficient G_i is the i-th derivative of G at s = 0. In this case, the above procedure for computing the integrals (12) and (13) for polynomial G provides a convenient way to approximate integrals
like (2) and (3). In [11], such an approximation for (3) was considered earlier and an upper bound for it
was also given. However, no closed formula in terms of a single matrix exponential was given for that
approximation. This is the main improvement of the present paper in comparison with [11]. At this point
it is worth remarking that such approximations have been successfully applied to the computation of
the predictions provided by the Local Linearization filters for nonlinear continuous-discrete state space
models [12, 14], as well as in [2, 15] for computing other types of multiple integrals involving matrix
exponentials. This demonstrates the practical usefulness of the result achieved in this paper.
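As a small illustration of this approximation (the example function G(s) = (sin s, cos s)^⊤ and all variable names are our own, not from the paper), the degree-2 Taylor polynomial of G can be plugged into the matrix of formula (12) and compared against a crude quadrature reference:

```python
import numpy as np
from scipy.linalg import expm

F = np.array([[0.0, 1.0], [-1.0, -0.5]])
t = 0.2
# Monomial coefficients of the degree-2 Taylor polynomial, G_i^{(deriv)}(0)/i!:
# sin s ~ s, cos s ~ 1 - s^2/2, hence G0 = (0,1), G1 = (1,0), G2 = (0,-1/2).
G0 = np.array([[0.0], [1.0]])
G1 = np.array([[1.0], [0.0]])
G2 = np.array([[0.0], [-0.5]])
d = 1

# Augmented matrix of formula (12) with p = 2: first block row [F, 2!*G2, 1!*G1, G0].
A = np.zeros((2 + 3 * d, 2 + 3 * d))
A[:2, :2] = F
A[:2, 2:3] = 2.0 * G2
A[:2, 3:4] = G1
A[:2, 4:5] = G0
A[2:3, 3:4] = np.eye(d)
A[3:4, 4:5] = np.eye(d)

approx = expm(t * A)[:2, -d:]          # ~ int_0^t e^{F(t-s)} G(s) ds

# Crude Riemann-sum reference for comparison.
ss = np.linspace(0.0, t, 2001)
ref = sum(expm(F * (t - s)) @ np.array([[np.sin(s)], [np.cos(s)]]) for s in ss) * (ss[1] - ss[0])
```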

4 Computational Aspects
It is obvious that the main computational task in the practical application of Theorem 1 is the use
of an appropriate numerical algorithm to compute matrix exponentials, for instance, those based on
rational Padé approximations, the Schur decomposition or Krylov subspace methods (see [18] and [17]
for excellent reviews on effective methods to compute matrix exponentials). The choice among them
will mainly depend on the size and structure of the matrix A in Theorem 1. In many cases, it is
enough to use the algorithms developed in [5], which take advantage of the special structure of the
matrix A. For a high dimensional matrix A, Krylov subspace methods are strongly recommended.
Nowadays, a number of professional mathematical software packages such as MATLAB 7.0 provide efficient and
precise codes for computing matrix exponentials. Therefore, the numerical evaluation of the integrals
under consideration can be carried out in an effective, accurate and simple way. In fact, some numerical
experiments have been performed for an application of the classical result of Van Loan (i.e. integrals of
type (1) for k ≤ 4). Specifically, in [13] different numerical algorithms were compared for the evaluation
of some integrals of type (1) with k ≤ 4. Those results could easily be extrapolated to our general
setting since that comparison was mainly focused on the dimension of the block triangular matrix A.
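For instance, a brief sketch (assuming SciPy as a stand-in for the software mentioned above): scipy.linalg.expm implements a Padé-based scaling-and-squaring method, while for a large sparse A one may compute only the action of e^{tA} on the few columns needed to read off B_{1k}(t), e.g. with scipy.sparse.linalg.expm_multiply, in the same spirit as the Krylov approach of never forming the full exponential:

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(3)
n = 200
# Generic sparse matrix standing in for a large block triangular A.
A = sp.random(n, n, density=0.02, random_state=3, format="csr")
t = 0.1

E_full = expm(t * A.toarray())                      # dense Pade scaling-and-squaring
cols = np.eye(n)[:, -5:]
E_last = expm_multiply(t * A, cols)                  # action of e^{tA} on the last 5 columns
# E_last approximates E_full[:, -5:], i.e. exactly the columns needed for B_{1k}(t).
```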

Acknowledgment
This work was partially supported by the Research Grant 03-059 RG/MATHS/LA from the Third
World Academy of Sciences.

References
[1] L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley Interscience Publi-
cations, New York, 1974.
[2] F. Carbonell, J. C. Jimenez and R. Biscay, Weak local linear discretizations for stochastic
differential equations: convergence and numerical schemes, J. Comput. Appl. Math., 197 (2006)
578-596.
[3] S. M. Cox and P. C. Matthews, Exponential time differencing for stiff systems. J. Comput. Phys,
176 (2002) 430-455.
[4] H. de la Cruz, R. J. Biscay, F. Carbonell, T. Ozaki and J. C. Jimenez, A higher order Local Linearization method for solving ordinary differential equations, to appear in Appl. Math. Comput.,
2007 (published online August 3, 2006; doi: 10.1016/j.amc.2006.06.096).
[5] L. Dieci and A. Papini, Padé approximations for the exponential of a block triangular matrix, Linear
Algebra Appl., 308 (2000) 103-202.
[6] R. D. Driver, Ordinary and delay differential equations, Springer-Verlag, 1977.
[7] R. A. Friesner, L. S. Tuckerman, B. C. Dornblaser and T. V. Russo, A method for exponential
propagation of large systems of stiff nonlinear differential equations, J. Sci. Comput., 4 (1989)
327-354.
[8] A. Iserles, Quadrature methods for stiff ordinary differential systems. Math. Comp, 36 (1981) 171-
182.
[9] R. K. Jain, Some A-stable methods for stiff ordinary differential equations. Math. Comp, 26 (1972)
71-77.
[10] A. H. Jazwinski, Stochastic processes and filtering theory, Academic Press, San Diego, 1970.
[11] J. C. Jimenez, P. Valdes, L. M. Rodriguez, J. Riera, and R. J. Biscay, Computing the noise covari-
ance matrix of the Local Linearization scheme for the numerical solution of stochastic differential
equations, Appl. Math. Lett., 11 (1998) 19-23.
[12] J.C. Jimenez and T. Ozaki, Linear estimation of continuous-discrete linear state space models with
multiplicative noise, Syst. Control Letters, 47 (2002) 91-101.
[13] J.C. Jimenez, A simple algebraic expression to evaluate the local linearization schemes for stochastic
differential equations, Appl. Math. Lett., 15 (2002) 775-780.
[14] J. C. Jimenez and T. Ozaki, Local Linearization filters for nonlinear continuous-discrete state space
models with multiplicative noise, Int. J. Control, 76 (2003) 1159-1170.
[15] J.C. Jimenez, L.M. Pedroso, F. Carbonell and V. Hernandez, Local linearization method for nu-
merical integration of delay differential equations, SIAM J. Numer. Anal., to appear.
[16] P. S. Maybeck, Stochastic models, estimation, and control, Academic Press, New York, 1982.
[17] C. B. Moler and C. F. Van Loan, Nineteen dubious ways to compute the exponential of a matrix,
twenty-five years later, SIAM Rev., 45 (2003) 3-49.
[18] R. B. Sidje, Expokit: a software package for computing matrix exponentials, ACM Trans. Math.
Software, 24 (1998) 130-156.
[19] C. F. Van Loan, Computing integrals involving the matrix exponential, IEEE Transactions on
Automatic Control, 23 (1978) 395-404.
