Q∗′Q∗ = lim Ql′Ql = lim Qn′Qn = Ik    (3)

and so Λ∗ = Q∗−1ΣQ∗. We deduce that the diagonal elements of Λ∗ are the eigenvalues of Σ.
Moreover, since the diagonal elements of Λl are ranked in decreasing order, the diagonal elements of
Λ∗ must be ranked in decreasing order as well. Hence Λ∗ = Λ. We have shown that the only cluster
point of the sequence Λn is Λ, and so by fact 2 we have shown that Λn → Λ.
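As a numerical sanity check (illustrative only, not part of the argument), the eigenvalue conclusion Λn → Λ can be observed directly: perturb a symmetric Σ by a vanishing symmetric error and re-sort the eigenvalues in decreasing order. The matrix and the 1/n perturbation rate below are my own illustrative choices.

```python
import numpy as np

# Illustrative check that the decreasingly ordered eigenvalues of a
# perturbed symmetric matrix approach those of Sigma (Lambda_n -> Lambda).
rng = np.random.default_rng(0)

k = 4
A = rng.standard_normal((k, k))
Sigma = A + A.T                                  # real symmetric k x k matrix
lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]   # eigenvalues, decreasing order

def perturbed_eigs(n):
    E = rng.standard_normal((k, k))
    E = (E + E.T) / n                            # symmetric perturbation of size O(1/n)
    return np.sort(np.linalg.eigvalsh(Sigma + E))[::-1]

errs = [np.max(np.abs(perturbed_eigs(n) - lam)) for n in (10, 100, 1000)]
print(errs)  # errors shrink as the perturbation vanishes
```

By Weyl's inequality the eigenvalue error is bounded by the norm of the perturbation, which is why no eigengap condition is needed for this half of the theorem.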
Next we show convergence of Qn. Suppose the eigenvalues of Σ are distinct. Then the columns
of Q are unique up to a sign. Choose a sign for each column and let Q be the resulting matrix.
Since Λn − Λ → 0 and ∥Qn∥ → √k, we have Qn(Λn − Λ) → 0. Therefore (i) implies

ΣQn − QnΛ → 0

Next, for each n, let Rn = Q−1Qn. Then Rn is a sequence of k × k matrices. Using the eigende-
composition of Σ, continuity of matrix multiplication and the previous result we get

ΛRn − RnΛ = ΛQ−1Qn − Q−1QnΛ = Q−1ΣQn − Q−1QnΛ = Q−1(ΣQn − QnΛ) → 0
Let Rnij with i ≠ j be any off-diagonal element of the matrix Rn in position (i, j). Then the previous
calculation shows that
(λi − λj )Rnij → 0
and, because eigenvalues are assumed to be distinct, we conclude Rnij → 0. Also, by (ii) we have
that
Rn′Rn = Qn′(Q−1)′Q−1Qn = Qn′Qn → Ik
so we deduce that, for each i,
(Rnii )2 → 1
Now define the ith diagonal element of Dn, denoted din, as follows

din = 1{Rnii ≥ 0} − 1{Rnii < 0}

Then din = ±1 in any state. Furthermore, continuity of the square root function implies

din Rnii = |Rnii| = √(Rnii)2 → 1

for each i. We have shown that Rn Dn → Ik. Finally, continuity of matrix multiplication allows us
to conclude
Qn Dn = QRn Dn → Q
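The sign-fixing device in the proof can be sketched numerically. In the code below the randomly generated Σ, the small perturbation used to build Qn, and the arbitrary column-sign flips are my illustrative assumptions; the construction din = 1{Rnii ≥ 0} − 1{Rnii < 0} is the one from the proof.

```python
import numpy as np

# Sketch of the sign-fixing step: eigenvector matrices are determined only
# up to column signs, and D_n with entries sign(R_n^ii), R_n = Q^{-1} Q_n,
# flips the columns of Q_n into agreement with Q.
rng = np.random.default_rng(1)

k = 3
A = rng.standard_normal((k, k))
Sigma = A + A.T                          # distinct eigenvalues almost surely
lam, Q = np.linalg.eigh(Sigma)
Q = Q[:, np.argsort(lam)[::-1]]          # decreasing order; fixes one sign choice

# Q_n: eigenvectors of a slightly perturbed matrix, with random sign flips
E = rng.standard_normal((k, k))
lam_n, Qn = np.linalg.eigh(Sigma + (E + E.T) / 1000)
Qn = Qn[:, np.argsort(lam_n)[::-1]]
Qn = Qn * rng.choice([-1.0, 1.0], size=k)    # arbitrary column signs

Rn = np.linalg.inv(Q) @ Qn
Dn = np.diag(np.sign(np.diag(Rn)))       # d_in = 1 if R_n^ii >= 0 else -1
err = np.max(np.abs(Qn @ Dn - Q))
print(err)  # small: Q_n D_n is close to Q
```

Without the sign correction Dn, a column of Qn that happens to point opposite to the corresponding column of Q would keep ∥Qn − Q∥ bounded away from zero, which is exactly why the theorem's conclusion is stated for QnDn rather than Qn.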
Theorem 2. Let Σ be a k × k real symmetric matrix with eigendecomposition QΛQ′, where diagonal
elements of Λ are ranked in decreasing order. Let Qn be a sequence of random k × k matrices and Λn
be a sequence of random k × k diagonal matrices with elements ranked in decreasing order. Suppose
that
(i) ΣQn − Qn Λn → 0 almost surely
(ii) Qn′Qn → Ik almost surely
where convergence is with respect to the Frobenius norm. Then Λn → Λ almost surely. Moreover,
if the eigenvalues of Σ are distinct, then given a choice of sign for the columns of Q, there exists a
sequence of random diagonal matrices Dn with elements ±1 such that Qn Dn → Q almost surely.
Proof. ΣQn − QnΛn → 0 almost surely and Qn′Qn → Ik almost surely implies (ΣQn − QnΛn, Qn′Qn) →
(0, Ik ) jointly almost surely. Therefore, by Theorem 1, we get Λn → Λ almost surely. Suppose the
eigenvalues of Σ are distinct and choose a sign for the columns of Q. Let Dn be as in Theorem 1
for each state such that ΣQn − QnΛn → 0 and Qn′Qn → Ik hold, and let Dn = Ik in other states.
Then Qn Dn → Q almost surely.
Theorem 3. Let Σ be a k × k real symmetric matrix with eigendecomposition QΛQ′, where diagonal
elements of Λ are ranked in decreasing order. Let Qn be a sequence of random k × k matrices and Λn
be a sequence of random k × k diagonal matrices with elements ranked in decreasing order. Suppose
that
(i) ΣQn − Qn Λn → 0 in probability
(ii) Qn′Qn → Ik in probability
where convergence is with respect to the Frobenius norm. Then Λn → Λ in probability. Moreover,
if the eigenvalues of Σ are distinct, then given a choice of sign for the columns of Q, there exists a
sequence of random diagonal matrices Dn with elements ±1 such that Qn Dn → Q in probability.
Proof. I will use the following characterization of convergence in probability.
Fact 3. Xn converges to X in probability if and only if every subsequence of Xn has a sub-subsequence that converges to X almost surely.
ΣQn − QnΛn → 0 in probability and Qn′Qn → Ik in probability implies (ΣQn − QnΛn, Qn′Qn) →
(0, Ik) jointly in probability. Let Λm be a subsequence of Λn, and consider the corresponding
joint subsequence (ΣQm − QmΛm, Qm′Qm). Then fact 3 implies that there exists a sub-subsequence
(ΣQl − QlΛl, Ql′Ql) such that (ΣQl − QlΛl, Ql′Ql) → (0, Ik) almost surely. Then Theorem 2 implies
Λl → Λ almost surely. We conclude by fact 3 that Λn → Λ in probability. Suppose the eigenvalues
of Σ are distinct and choose a sign for the columns of Q. We now reuse the arguments of Theorem
1, with all limits taken as probability limits. Since Λn − Λ → 0 and ∥Qn∥ → √k in probability, we
have Qn(Λn − Λ) → 0 in probability. Therefore (i) implies

ΛRn − RnΛ = ΛQ−1Qn − Q−1QnΛ = Q−1ΣQn − Q−1QnΛ = Q−1(ΣQn − QnΛ) → 0

in probability, where Rn = Q−1Qn as in Theorem 1. Let Rnij with i ≠ j be any off-diagonal element of the matrix Rn in position (i, j).
Then the previous calculation shows that
(λi − λj )Rnij → 0
in probability and, because eigenvalues are assumed to be distinct, we conclude Rnij → 0 in probability. Also, by (ii) we have that
Rn′Rn = Qn′(Q−1)′Q−1Qn = Qn′Qn → Ik
so we deduce that, for each i,
(Rnii )2 → 1
in probability. Now define the ith diagonal element of Dn, denoted din, as follows

din = 1{Rnii ≥ 0} − 1{Rnii < 0}

Then din = ±1 in any state. Furthermore, continuity of the square root function implies

din Rnii = |Rnii| = √(Rnii)2 → 1

in probability for each i. We have shown that Rn Dn → Ik in probability. Finally, continuity of matrix multiplication allows us to conclude
Qn Dn = QRn Dn → Q
in probability.
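A typical setting where the hypotheses of Theorem 3 arise (my assumption; the text does not name an application) is PCA on a sample covariance matrix: there Qn is exactly orthogonal, so (ii) holds trivially, and (i) follows because the sample covariance converges to Σ in probability. A minimal Monte-Carlo sketch:

```python
import numpy as np

# Monte-Carlo sketch: Lambda_n from the eigendecomposition of a sample
# covariance matrix converges to the population eigenvalues Lambda.
rng = np.random.default_rng(2)

k = 3
B = rng.standard_normal((k, k))
Sigma = B @ B.T + np.eye(k)              # population covariance (positive definite)
lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]
L = np.linalg.cholesky(Sigma)            # Sigma = L L'

def sample_eigs(n):
    X = rng.standard_normal((n, k)) @ L.T    # n iid draws from N(0, Sigma)
    Sigma_hat = X.T @ X / n                  # sample covariance (mean is zero)
    return np.sort(np.linalg.eigvalsh(Sigma_hat))[::-1]

errs = [np.max(np.abs(sample_eigs(n) - lam)) for n in (100, 10_000, 1_000_000)]
print(errs)  # Lambda_n -> Lambda in probability: errors shrink as n grows
```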
Theorem 4. Let Σ be a k × k real symmetric matrix with eigendecomposition QΛQ′, where diagonal
elements of Λ are ranked in decreasing order. Let Qn,T be a double sequence of random k×k matrices
and Λn,T be a double sequence of random k × k diagonal matrices with elements ranked in decreasing
order. Suppose that
(i) ΣQn,T − Qn,T Λn,T → 0 in probability
(ii) Qn,T′Qn,T → Ik in probability
where convergence is with respect to the Frobenius norm. Then Λn,T → Λ in probability. Moreover,
if the eigenvalues of Σ are distinct, then given a choice of sign for the columns of Q, there exists a
double sequence of random diagonal matrices Dn,T with elements ±1 such that Qn,T Dn,T → Q in
probability.
Proof. I will use the following characterization of convergence in probability for double sequences.
Fact 4. Xn,T converges to X in probability as n, T → ∞ if and only if for any two strictly increasing
sequences of natural numbers (nl ) and (Tl ) we have that Xnl ,Tl converges to X in probability as
l → ∞.
Let (nl ) and (Tl ) be strictly increasing sequences of natural numbers. Then (i), (ii) and fact 4
imply that ΣQnl,Tl − Qnl,Tl Λnl,Tl → 0 and Qnl,Tl′Qnl,Tl → Ik in probability as l → ∞. Therefore by
Theorem 3 we deduce that Λnl ,Tl → Λ in probability as l → ∞. By fact 4 again we conclude that
Λn,T → Λ in probability as n, T → ∞. Suppose the eigenvalues of Σ are distinct and choose a sign
for the columns of Q. We now reuse the arguments of Theorem 3, with all probability limits taken
as n, T → ∞. Since Λn,T − Λ → 0 and ∥Qn,T∥ → √k in probability, we have Qn,T(Λn,T − Λ) → 0
in probability. Therefore (i) implies
ΛRn,T − Rn,T Λ = ΛQ−1 Qn,T − Q−1 Qn,T Λ = Q−1 ΣQn,T − Q−1 Qn,T Λ = Q−1 (ΣQn,T − Qn,T Λ) → 0
in probability, where Rn,T = Q−1Qn,T. Let Rn,Tij with i ≠ j be any off-diagonal element of the matrix Rn,T in position (i, j).
Then the previous calculation shows that
(λi − λj)Rn,Tij → 0
in probability and, because eigenvalues are assumed to be distinct, we conclude Rn,Tij → 0 in probability. Also, by (ii) we have that
Rn,T′Rn,T = Qn,T′(Q−1)′Q−1Qn,T = Qn,T′Qn,T → Ik
so we deduce that, for each i,
(Rn,Tii)2 → 1
in probability. Now define the ith diagonal element of Dn,T , denoted din,T , as follows
din,T = 1{Rn,Tii ≥ 0} − 1{Rn,Tii < 0}
Then din,T = ±1 in any state. Furthermore, continuity of the square root function implies
din,T Rn,Tii = |Rn,Tii| = √(Rn,Tii)2 → 1
in probability for each i. We have shown that Rn,T Dn,T → Ik in probability. Finally, continuity of
matrix multiplication allows us to conclude

Qn,T Dn,T = QRn,T Dn,T → Q

in probability.
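The double-index statement can also be sketched numerically. Below, Qn,T comes from Σ perturbed at the rate 1/n + 1/T, so that the error vanishes only when both indices grow; this perturbation model, like the matrices themselves, is an illustrative assumption rather than anything specified in the text.

```python
import numpy as np

# Sketch of Theorem 4: sign-corrected eigenvector matrices Q_{n,T} D_{n,T}
# approach Q as both indices n and T grow.
rng = np.random.default_rng(3)

k = 3
A = rng.standard_normal((k, k))
Sigma = A + A.T                          # distinct eigenvalues almost surely
lam, Q = np.linalg.eigh(Sigma)
Q = Q[:, np.argsort(lam)[::-1]]          # decreasing order; fixes column signs

def aligned_error(n, T):
    E = rng.standard_normal((k, k))
    E = (E + E.T) * (1.0 / n + 1.0 / T)  # error vanishes only as n, T -> infinity
    lam_nT, QnT = np.linalg.eigh(Sigma + E)
    QnT = QnT[:, np.argsort(lam_nT)[::-1]]
    R = np.linalg.inv(Q) @ QnT           # R_{n,T} = Q^{-1} Q_{n,T}
    D = np.diag(np.sign(np.diag(R)))     # d_{n,T}^i = +/-1 sign fix
    return np.max(np.abs(QnT @ D - Q))

errs = [aligned_error(n, T) for n, T in ((10, 10), (100, 200), (1000, 5000))]
print(errs)  # Q_{n,T} D_{n,T} -> Q along growing (n, T)
```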