
Schur complement

From Wikipedia, the free encyclopedia


In linear algebra and the theory of matrices, the Schur complement of a matrix block
(i.e., a submatrix within a larger matrix) is defined as follows. Suppose A, B, C, D are
respectively p×p, p×q, q×p and q×q matrices, and D is invertible. Let

M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}

so that M is a (p+q)×(p+q) matrix. Then the Schur complement of the block D of the
matrix M is the p×p matrix

M/D = A - BD^{-1}C.
It is named after Issai Schur, who used it to prove Schur's lemma, although it had been
used previously.[1] Emilie Haynsworth was the first to call it the Schur complement.[2]
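As a concrete illustration of the definition, the following sketch computes the Schur complement of D numerically with NumPy; the specific blocks (a 3×3 matrix with p = 2, q = 1) are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical example with p = 2, q = 1
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # p x p
B = np.array([[2.0], [0.0]])             # p x q
C = np.array([[1.0, 1.0]])               # q x p
D = np.array([[5.0]])                    # q x q, invertible

M = np.block([[A, B], [C, D]])           # the (p+q) x (p+q) matrix

# Schur complement of D in M: M/D = A - B D^{-1} C
S = A - B @ np.linalg.inv(D) @ C
print(S)
```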
Contents
1 Background
2 Application to solving linear equations
3 Applications to probability theory and statistics
4 Schur complement condition for positive definiteness
5 See also
6 References
Background
The Schur complement arises as the result of performing a block Gaussian elimination
by multiplying the matrix M from the right with the "block lower triangular" matrix

L = \begin{bmatrix} I_p & 0 \\ -D^{-1}C & I_q \end{bmatrix}.

Here I_p denotes a p×p identity matrix. After multiplication with the matrix L the Schur
complement appears in the upper p×p block. The product matrix is

ML = \begin{bmatrix} A - BD^{-1}C & B \\ 0 & D \end{bmatrix}.
This is analogous to an LDU decomposition. That is, we have shown that

M = \begin{bmatrix} I_p & BD^{-1} \\ 0 & I_q \end{bmatrix}
    \begin{bmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{bmatrix}
    \begin{bmatrix} I_p & 0 \\ D^{-1}C & I_q \end{bmatrix},

and the inverse of M thus may be expressed involving D^{-1} and the inverse of the Schur
complement (if it exists) only as

M^{-1} = \begin{bmatrix}
(A - BD^{-1}C)^{-1} & -(A - BD^{-1}C)^{-1}BD^{-1} \\
-D^{-1}C(A - BD^{-1}C)^{-1} & D^{-1} + D^{-1}C(A - BD^{-1}C)^{-1}BD^{-1}
\end{bmatrix}.

Cf. the matrix inversion lemma, which illustrates relationships between the above and the
equivalent derivation with the roles of A and D interchanged.
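The elimination step and the block inverse above can be verified numerically. This sketch reuses the hypothetical 3×3 example from before (p = 2, q = 1); it checks that multiplying by L zeroes out the lower-left block and that the block formula reproduces M^{-1}:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[5.0]])
M = np.block([[A, B], [C, D]])

p, q = 2, 1
Dinv = np.linalg.inv(D)
S = A - B @ Dinv @ C                      # Schur complement of D in M

# "Block lower triangular" elimination matrix L
L = np.block([[np.eye(p), np.zeros((p, q))],
              [-Dinv @ C, np.eye(q)]])

ML = M @ L
print(np.allclose(ML[:p, :p], S))         # True: S appears in the upper p x p block
print(np.allclose(ML[p:, :p], 0))         # True: lower-left block eliminated

# Inverse of M expressed via D^{-1} and S^{-1} only
Sinv = np.linalg.inv(S)
Minv = np.block([[Sinv,             -Sinv @ B @ Dinv],
                 [-Dinv @ C @ Sinv, Dinv + Dinv @ C @ Sinv @ B @ Dinv]])
print(np.allclose(Minv, np.linalg.inv(M)))  # True
```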
If M is a positive-definite symmetric matrix, then so is the Schur complement of D in
M.
If p and q are both 1 (i.e., A, B, C and D are all scalars), we get the familiar formula for
the inverse of a 2-by-2 matrix:

M^{-1} = \frac{1}{AD - BC} \begin{bmatrix} D & -B \\ -C & A \end{bmatrix},

provided that AD − BC is non-zero.

Moreover, the determinant of M is also clearly seen to be given by

\det(M) = \det(D) \det\left(A - BD^{-1}C\right),

which generalizes the determinant formula for 2×2 matrices.
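The determinant identity is easy to check numerically; the following sketch again uses the hypothetical 3×3 example blocks from above:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[5.0]])
M = np.block([[A, B], [C, D]])

S = A - B @ np.linalg.inv(D) @ C
# det M = det D * det(A - B D^{-1} C)
lhs = np.linalg.det(M)
rhs = np.linalg.det(D) * np.linalg.det(S)
print(np.isclose(lhs, rhs))   # True
```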
Application to solving linear equations
The Schur complement arises naturally in solving a system of linear equations such as

A x + B y = a
C x + D y = b,

where x, a are p-dimensional column vectors, y, b are q-dimensional column vectors,
and A, B, C, D are as above. Multiplying the bottom equation by BD^{-1} and then
subtracting from the top equation one obtains

\left(A - BD^{-1}C\right) x = a - BD^{-1} b.

Thus if one can invert D as well as the Schur complement of D, one can solve for x,
and then by using the equation Cx + Dy = b one can solve for y. This reduces the
problem of inverting a (p+q)×(p+q) matrix to that of inverting a p×p matrix
and a q×q matrix. In practice one needs D to be well-conditioned in order for this
algorithm to be numerically accurate.
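This two-step solution procedure can be sketched as follows (again with the hypothetical example blocks; the right-hand sides a and b are made up for illustration), and checked against a direct solve of the full system:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[5.0]])
a = np.array([1.0, 2.0])
b = np.array([3.0])

# Step 1: solve (A - B D^{-1} C) x = a - B D^{-1} b  (a p x p system)
S = A - B @ np.linalg.solve(D, C)        # Schur complement of D
x = np.linalg.solve(S, a - B @ np.linalg.solve(D, b))

# Step 2: back-substitute into C x + D y = b  (a q x q system)
y = np.linalg.solve(D, b - C @ x)

# Check against solving the full (p+q) x (p+q) system directly
M = np.block([[A, B], [C, D]])
xy = np.linalg.solve(M, np.concatenate([a, b]))
print(np.allclose(np.concatenate([x, y]), xy))  # True
```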
Applications to probability theory and statistics
Suppose the random column vectors X, Y live in R^n and R^m respectively, and the
vector (X, Y) in R^{n+m} has a multivariate normal distribution whose covariance is the
symmetric positive-definite matrix

\Sigma = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix},

where A ∈ R^{n×n} is the covariance matrix of X, C ∈ R^{m×m} is the covariance
matrix of Y and B ∈ R^{n×m} is the covariance matrix between X and Y.

Then the conditional covariance of X given Y is the Schur complement of C in Σ:

\operatorname{Cov}(X \mid Y) = A - BC^{-1}B^T.

If we take the matrix Σ above to be, not a covariance of a random vector, but a sample
covariance, then it may have a Wishart distribution. In that case, the Schur
complement of C in Σ also has a Wishart distribution.
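The conditional-covariance formula can be cross-checked deterministically via a standard identity for the Gaussian precision matrix: the X-block of Σ^{-1} is the inverse of the Schur complement A − BC^{-1}B^T. The covariance blocks below (n = 2, m = 1) are hypothetical, chosen to make Σ positive definite:

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # Cov(X), n x n
B = np.array([[0.3], [0.2]])             # Cov(X, Y), n x m
C = np.array([[1.5]])                    # Cov(Y), m x m
Sigma = np.block([[A, B], [B.T, C]])

# Conditional covariance Cov(X | Y): Schur complement of C in Sigma
cond_cov = A - B @ np.linalg.inv(C) @ B.T

# Cross-check: the X-block of the precision matrix is its inverse
P = np.linalg.inv(Sigma)
print(np.allclose(np.linalg.inv(P[:2, :2]), cond_cov))  # True
```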
Schur complement condition for positive definiteness
Let X be a symmetric matrix given by

X = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix}.

Let S be the Schur complement of A in X, that is:

S = C - B^T A^{-1} B.

Then

X is positive definite if and only if A and S are both positive definite:
X ≻ 0 ⇔ A ≻ 0, C − B^T A^{-1} B ≻ 0.

X is positive definite if and only if C and A − BC^{-1}B^T are both positive
definite:
X ≻ 0 ⇔ C ≻ 0, A − BC^{-1}B^T ≻ 0.

If A is positive definite, then X is positive semidefinite if and only if S is
positive semidefinite:
if A ≻ 0, then X ⪰ 0 ⇔ C − B^T A^{-1} B ⪰ 0.

If C is positive definite, then X is positive semidefinite if and only if
A − BC^{-1}B^T is positive semidefinite:
if C ≻ 0, then X ⪰ 0 ⇔ A − BC^{-1}B^T ⪰ 0.

The first and third statements can be derived[3] by considering the minimizer of the
quantity

u^T A u + 2 v^T B^T u + v^T C v

as a function of v (for fixed u).
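The first equivalence can be tested numerically: X is positive definite exactly when A and the Schur complement S are. The blocks below are the same hypothetical example used in the statistics section, and positive definiteness is checked via eigenvalues:

```python
import numpy as np

def is_pd(X):
    # A symmetric matrix is positive definite iff all eigenvalues are positive
    return bool(np.all(np.linalg.eigvalsh(X) > 0))

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[0.3], [0.2]])
C = np.array([[1.5]])
X = np.block([[A, B], [B.T, C]])

S = C - B.T @ np.linalg.inv(A) @ B       # Schur complement of A in X
print(is_pd(X) == (is_pd(A) and is_pd(S)))  # True
```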
See also
Woodbury matrix identity
Quasi-Newton method
Haynsworth inertia additivity formula
Gaussian process
Total least squares
References
1. ^ Zhang, Fuzhen (2005). The Schur Complement and Its Applications. Springer.
doi:10.1007/b105056 (http://dx.doi.org/10.1007%2Fb105056). ISBN 0-387-24271-6.
2. ^ Haynsworth, E. V., "On the Schur Complement", Basel Mathematical Notes, #BNB 20,
17 pages, June 1968.
3. ^ Boyd, S. and Vandenberghe, L. (2004), "Convex Optimization", Cambridge University
Press (Appendix A.5.5)
Retrieved from "http://en.wikipedia.org/w/index.php?title=Schur_complement&oldid=603485716"
Categories: Linear algebra
This page was last modified on 9 April 2014 at 18:40.
Text is available under the Creative Commons Attribution-ShareAlike License;
additional terms may apply.
