Solutions 1
a) Show that
\[
\max\Bigl(1 - p + \sum_{i=1}^{p} F_i(r_i),\, 0\Bigr) \;\le\; F(r) \;\le\; \min_{i=1,2,\dots,p} F_i(r_i)
\]
for arbitrary $r \in \mathbb{R}^p$.
Proof: Let us first look at the right inequality. Since for any $r \in \mathbb{R}^p$ we have $\{X \le r\} \subset \{X_i \le r_i\}$ for all $i \in \{1,\dots,p\}$, taking probabilities yields $P(X \le r) \le P(X_i \le r_i) = F_i(r_i)$ for all $i \in \{1,\dots,p\}$. In summary, we have just established that $F(r) \le \min_{1 \le i \le p} F_i(r_i)$.
Coming to the left inequality, it is clearly sufficient to prove that $1 - p + \sum_{i=1}^{p} F_i(r_i) \le F(r)$. Now, elementary algebraic manipulations lead to the equivalent inequality $1 - F(r) \le \sum_{i=1}^{p} \bigl(1 - F_i(r_i)\bigr)$, or in other terms $P\bigl(\bigcup_{i=1}^{p} \{X_i > r_i\}\bigr) \le \sum_{i=1}^{p} P(X_i > r_i)$. The latter obviously holds, by sub-additivity of the measure $P$.
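As a quick sanity check of part (a), the following Python sketch estimates $F(r)$ by Monte Carlo for a bivariate normal vector and compares it with both bounds; the correlation, threshold vector, and sample size are arbitrary illustrative choices.

```python
# Monte Carlo check of the Frechet-Hoeffding bounds for p = 2 (illustrative
# choices: standard normal marginals, correlation 0.7, r = (0.3, -0.5)).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, rho = 200_000, 0.7

# X = (X1, X2) with standard normal marginals and correlation rho
Z = rng.standard_normal((n, 2))
X1 = Z[:, 0]
X2 = rho * Z[:, 0] + np.sqrt(1 - rho**2) * Z[:, 1]

r = np.array([0.3, -0.5])
F_joint = np.mean((X1 <= r[0]) & (X2 <= r[1]))  # estimate of F(r)
F_marg = norm.cdf(r)                            # exact F_i(r_i)

lower = max(1 - 2 + F_marg.sum(), 0.0)          # 1 - p + sum_i F_i(r_i)
upper = F_marg.min()
print(lower <= F_joint <= upper)                # True (up to MC error)
```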
b) For $0 < u < 1$ let $F_i^{-1}(u) := \min\{r \in \mathbb{R} : F_i(r) \ge u\}$. Determine the distribution function $F$ of the random vector
\[
X := \bigl(F_i^{-1}(U)\bigr)_{i=1}^{p}, \qquad U \sim \mathrm{Unif}[0,1].
\]
Proof: Let $X := (F_i^{-1}(U))_{i=1}^{p}$ with $U \sim \mathrm{Unif}[0,1]$. Let us mention a simple but essential result that will help us solve the current question without needing any additional assumption on $F$ (such as continuity of the $F_i$'s): for any $u \in (0,1)$ and $x \in \mathbb{R}$, we have the equivalence $F_i^{-1}(u) \le x \iff F_i(x) \ge u$. From there, we directly obtain
\[
F(r) = P(X \le r) = P\bigl(U \le F_i(r_i) \text{ for all } i\bigr) = P\Bigl(U \le \min_{i \in \{1,\dots,p\}} F_i(r_i)\Bigr) = \min_{i \in \{1,\dots,p\}} F_i(r_i),
\]
so the upper bound from part (a) is attained for every $r \in \mathbb{R}^p$.
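To make the generalized inverse concrete, here is a small Python sketch (the discrete distributions and thresholds are illustrative choices) implementing $F_i^{-1}(u) = \min\{r : F_i(r) \ge u\}$ and confirming by simulation that the comonotone vector $(F_1^{-1}(U), F_2^{-1}(U))$ has joint CDF $\min_i F_i(r_i)$, even for non-continuous $F_i$'s.

```python
# Generalized inverse F^{-1}(u) = min{r : F(r) >= u} for discrete marginals,
# plus a simulation check of P(X <= r) = min_i F_i(r_i); all distribution
# choices below are illustrative.
import numpy as np

def quantile(support, probs, u):
    """Generalized inverse of a discrete CDF, evaluated at u (vectorized)."""
    cdf = np.cumsum(probs)
    cdf[-1] = 1.0                      # guard against round-off in the cumsum
    return support[np.searchsorted(cdf, u)]  # first r with F(r) >= u

rng = np.random.default_rng(1)
U = rng.uniform(size=400_000)

support = np.array([0.0, 1.0, 2.0])
p1 = np.array([0.2, 0.5, 0.3])         # marginal distribution of X1
p2 = np.array([0.6, 0.3, 0.1])         # marginal distribution of X2

X1 = quantile(support, p1, U)
X2 = quantile(support, p2, U)

r1 = r2 = 1.0
F1, F2 = p1[:2].sum(), p2[:2].sum()    # F_1(1) = 0.7, F_2(1) = 0.9
print(np.mean((X1 <= r1) & (X2 <= r2)), min(F1, F2))   # both approx 0.7
```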
c) Show that the lower bound for $F(r)$ in part (a) is sharp (pointwise). That means, for arbitrary univariate distribution functions $F_1, F_2, \dots, F_p$ and any vector $r \in \mathbb{R}^p$ there exists a random vector $X$ with marginal distribution functions $F_1, F_2, \dots, F_p$ such that
\[
P(X \le r) = \max\Bigl(1 - p + \sum_{i=1}^{p} F_i(r_i),\, 0\Bigr).
\]
Proof: Consider first the case $p = 2$ and set $X := (F_1^{-1}(U), F_2^{-1}(1-U))$ with $U \sim \mathrm{Unif}[0,1]$. Since $1 - U \sim \mathrm{Unif}[0,1]$ as well, both components have the prescribed marginal distributions by part (b). Moreover,
\begin{align*}
P(X \le r) &= P\bigl(F_1^{-1}(U) \le r_1,\; F_2^{-1}(1-U) \le r_2\bigr) \\
&= P\bigl(U \le F_1(r_1),\; 1 - U \le F_2(r_2)\bigr) \\
&= P\bigl(1 - F_2(r_2) \le U \le F_1(r_1)\bigr).
\end{align*}
Note that if $1 - F_2(r_2) \ge F_1(r_1)$, the latter probability is zero. However, in case $1 - F_2(r_2) < F_1(r_1)$, the latter probability is
\[
F_1(r_1) - \bigl(1 - F_2(r_2)\bigr) = 1 - 2 + \sum_{i=1}^{2} F_i(r_i),
\]
so in both cases $P(X \le r) = \max\bigl(1 - 2 + \sum_{i=1}^{2} F_i(r_i), 0\bigr)$, as desired. For general $p$, an analogous construction based on a single uniform random variable and tailored to the fixed vector $r$ yields the claim; note that the sharpness is only pointwise, since for $p \ge 3$ the lower bound is in general not a distribution function itself.
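The same kind of simulation illustrates the countermonotone construction for $p = 2$; the marginals (standard normal and exponential) and the thresholds below are again arbitrary illustrative choices.

```python
# Simulation check that X = (F1^{-1}(U), F2^{-1}(1 - U)) attains the lower
# Frechet-Hoeffding bound; marginals and thresholds are illustrative.
import numpy as np
from scipy.stats import expon, norm

rng = np.random.default_rng(2)
U = rng.uniform(size=400_000)
X1 = norm.ppf(U)                       # F1^{-1}(U), standard normal marginal
X2 = expon.ppf(1 - U)                  # F2^{-1}(1 - U), exponential marginal

r1, r2 = 0.8, 1.5
lower = max(norm.cdf(r1) + expon.cdf(r2) - 1, 0.0)
print(np.mean((X1 <= r1) & (X2 <= r2)), lower)   # both approx 0.565
```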
Deduce from these facts the following formulae for random matrices: Let $M, \tilde{M} \in \mathbb{R}^{p \times q}$ be random matrices such that $\mathbb{E}\|M\|, \mathbb{E}\|\tilde{M}\| < \infty$. Then
\begin{align*}
\mathbb{E}(M^\top) &= \mathbb{E}(M)^\top, \\
\mathbb{E}(M + \tilde{M}) &= \mathbb{E}(M) + \mathbb{E}(\tilde{M}), \\
\mathbb{E}(M \tilde{M}^\top) &= \mathbb{E}(M)\,\mathbb{E}(\tilde{M})^\top \quad \text{if } M, \tilde{M} \text{ are stochastically independent.}
\end{align*}
Furthermore, for fixed matrices $A \in \mathbb{R}^{k \times \ell}$, $B \in \mathbb{R}^{k \times p}$ and $C \in \mathbb{R}^{q \times \ell}$,
\[
\mathbb{E}(A + B M C) = A + B\,\mathbb{E}(M)\,C.
\]
Proof: Let $M = (m_{ij})_{i \le p,\, j \le q}$ and $\tilde{M} = (\tilde{m}_{ij})_{i \le p,\, j \le q} \in \mathbb{R}^{p \times q}$ be random matrices such that $\mathbb{E}\|M\|, \mathbb{E}\|\tilde{M}\| < \infty$. Since the expectation of a random matrix is taken entrywise,
\[
\mathbb{E}(M^\top) = \bigl(\mathbb{E}(m_{ji})\bigr)_{i \le q,\, j \le p} = \Bigl(\bigl(\mathbb{E}(m_{ij})\bigr)_{i \le p,\, j \le q}\Bigr)^\top = \mathbb{E}(M)^\top
\]
and
\begin{align*}
\mathbb{E}(M + \tilde{M}) &= \bigl(\mathbb{E}(m_{ij} + \tilde{m}_{ij})\bigr)_{i \le p,\, j \le q} \\
&= \bigl(\mathbb{E}(m_{ij}) + \mathbb{E}(\tilde{m}_{ij})\bigr)_{i \le p,\, j \le q} \\
&= \bigl(\mathbb{E}(m_{ij})\bigr)_{i \le p,\, j \le q} + \bigl(\mathbb{E}(\tilde{m}_{ij})\bigr)_{i \le p,\, j \le q} \\
&= \mathbb{E}(M) + \mathbb{E}(\tilde{M}),
\end{align*}
where the second equality uses the linearity of expectation for real-valued random variables.
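A short Monte Carlo sketch can corroborate these rules, in particular the product formula for independent matrices; the dimensions, means, and sample size below are arbitrary illustrative choices.

```python
# Monte Carlo check of IE(M Mt^T) = IE(M) IE(Mt)^T for independent random
# matrices M and Mt; all dimensions and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(3)
p, q, n = 3, 4, 200_000
A = rng.standard_normal((p, q))            # mean of M
B = rng.standard_normal((p, q))            # mean of Mt

M = A + rng.standard_normal((n, p, q))     # n i.i.d. copies of M
Mt = B + rng.standard_normal((n, p, q))    # n independent copies of Mt

lhs = np.einsum('nij,nkj->ik', M, Mt) / n  # estimate of IE(M Mt^T)
rhs = M.mean(axis=0) @ Mt.mean(axis=0).T   # estimate of IE(M) IE(Mt)^T
print(np.max(np.abs(lhs - rhs)))           # small, up to MC error
```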
Proof: We utilize the facts that for arbitrary random matrices $M, \tilde{M}$ and fixed matrices $A, B, C$ of suitable dimensions,
\begin{align}
\mathbb{E}(M^\top) &= \mathbb{E}(M)^\top, \tag{i} \\
\mathbb{E}(M + \tilde{M}) &= \mathbb{E}(M) + \mathbb{E}(\tilde{M}), \tag{ii} \\
\mathbb{E}(A + B M C) &= A + B\,\mathbb{E}(M)\,C. \tag{iii}
\end{align}
Then (i) implies that
\begin{align*}
\mathrm{Cov}(Y, X) &= \mathbb{E}\bigl((Y - \mathbb{E}Y)(X - \mathbb{E}X)^\top\bigr) \\
&= \mathbb{E}\Bigl(\bigl((X - \mathbb{E}X)(Y - \mathbb{E}Y)^\top\bigr)^\top\Bigr) \\
&= \mathbb{E}\bigl((X - \mathbb{E}X)(Y - \mathbb{E}Y)^\top\bigr)^\top \\
&= \mathrm{Cov}(X, Y)^\top.
\end{align*}
Moreover, (ii) implies that
\begin{align*}
\mathrm{Cov}(X + \tilde{X}, Y) &= \mathbb{E}\bigl((X + \tilde{X} - \mathbb{E}(X + \tilde{X}))(Y - \mathbb{E}Y)^\top\bigr) \\
&= \mathbb{E}\bigl((X - \mathbb{E}X + \tilde{X} - \mathbb{E}\tilde{X})(Y - \mathbb{E}Y)^\top\bigr) \\
&= \mathbb{E}\bigl((X - \mathbb{E}X)(Y - \mathbb{E}Y)^\top\bigr) + \mathbb{E}\bigl((\tilde{X} - \mathbb{E}\tilde{X})(Y - \mathbb{E}Y)^\top\bigr) \\
&= \mathrm{Cov}(X, Y) + \mathrm{Cov}(\tilde{X}, Y).
\end{align*}
The next equality follows from the two calculations above:
\begin{align*}
\mathrm{Var}(X + \tilde{X}) &= \mathrm{Cov}(X + \tilde{X}, X + \tilde{X}) \\
&= \mathrm{Cov}(X, X + \tilde{X}) + \mathrm{Cov}(\tilde{X}, X + \tilde{X}) \\
&= \mathrm{Cov}(X + \tilde{X}, X)^\top + \mathrm{Cov}(X + \tilde{X}, \tilde{X})^\top \\
&= \mathrm{Cov}(X, X)^\top + \mathrm{Cov}(\tilde{X}, X)^\top + \mathrm{Cov}(X, \tilde{X})^\top + \mathrm{Cov}(\tilde{X}, \tilde{X})^\top \\
&= \mathrm{Var}(X) + \mathrm{Cov}(X, \tilde{X}) + \mathrm{Cov}(\tilde{X}, X) + \mathrm{Var}(\tilde{X}).
\end{align*}
Next, (iii) implies that
\begin{align*}
\mathrm{Cov}(a + BX, Y) &= \mathbb{E}\bigl((a + BX - \mathbb{E}(a + BX))(Y - \mathbb{E}Y)^\top\bigr) \\
&= \mathbb{E}\bigl(B(X - \mathbb{E}X)(Y - \mathbb{E}Y)^\top\bigr) \\
&= B\,\mathbb{E}\bigl((X - \mathbb{E}X)(Y - \mathbb{E}Y)^\top\bigr) \\
&= B\,\mathrm{Cov}(X, Y).
\end{align*}
Finally, (i) and (iii) together imply that
\begin{align*}
\mathrm{Var}(a + BX) &= \mathrm{Cov}(a + BX, a + BX) \\
&= B\,\mathrm{Cov}(X, a + BX) \\
&= B\,\mathrm{Cov}(a + BX, X)^\top \\
&= B\,\bigl(B\,\mathrm{Cov}(X, X)\bigr)^\top \\
&= B\,\mathrm{Cov}(X, X)^\top B^\top = B\,\mathrm{Var}(X)\,B^\top,
\end{align*}
using in the last step that $\mathrm{Var}(X) = \mathrm{Cov}(X, X)$ is symmetric.
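The identity $\mathrm{Var}(a + BX) = B\,\mathrm{Var}(X)\,B^\top$ is easy to corroborate numerically; the following sketch uses arbitrary dimensions and an arbitrary covariance structure for illustration.

```python
# Empirical check of Var(a + BX) = B Var(X) B^T; the dimensions, a, B and
# the covariance structure L L^T are illustrative choices.
import numpy as np

rng = np.random.default_rng(4)
p, k, n = 4, 3, 300_000
a = rng.standard_normal(k)
B = rng.standard_normal((k, p))
L = rng.standard_normal((p, p))        # X = L Z has Var(X) = L L^T

X = rng.standard_normal((n, p)) @ L.T  # rows are i.i.d. copies of X
Y = a + X @ B.T                        # rows are i.i.d. copies of a + B X

lhs = np.cov(Y, rowvar=False)          # empirical Var(a + BX)
rhs = B @ (L @ L.T) @ B.T              # B Var(X) B^T, exact
print(np.max(np.abs(lhs - rhs)))       # small, up to MC error
```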
4. Let $X \in \mathbb{R}^n$ be a random vector which is exchangeable in the sense that $X$ has the same distribution as $(X_{\pi(i)})_{i=1}^{n}$ for any fixed permutation $\pi$ of $\{1, 2, \dots, n\}$.
By exchangeability, the covariance matrix of $X$ (assuming it exists) has the form $\Sigma = \alpha I_n + \beta 1_n 1_n^\top$ with $\alpha := \mathrm{Var}(X_1) - \mathrm{Cov}(X_1, X_2)$ and $\beta := \mathrm{Cov}(X_1, X_2)$. To invert $\Sigma$ (assuming $\alpha \ne 0$ and $\alpha + n\beta \ne 0$), we make the ansatz $\Sigma^{-1} = \gamma I_n + \delta 1_n 1_n^\top$. Then,
\[
I_n = \bigl(\alpha I_n + \beta 1_n 1_n^\top\bigr)\bigl(\gamma I_n + \delta 1_n 1_n^\top\bigr) = \alpha\gamma\, I_n + (\beta\gamma + \alpha\delta + n\beta\delta)\, 1_n 1_n^\top,
\]
so
\[
\alpha\gamma = 1 \quad \text{and} \quad \beta\gamma + \alpha\delta + n\beta\delta = 0.
\]
This is true if and only if $\gamma = 1/\alpha$ and $\delta = -\beta/(\alpha(\alpha + n\beta))$. Therefore,
\[
\Sigma^{-1} = \frac{1}{\alpha}\, I_n - \frac{\beta}{\alpha(\alpha + n\beta)}\, 1_n 1_n^\top.
\]
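The closed form is straightforward to verify numerically; the values of $n$, $\alpha$, $\beta$ below are arbitrary illustrative choices (subject to $\alpha \ne 0$ and $\alpha + n\beta \ne 0$).

```python
# Check that (alpha I + beta 1 1^T)^{-1} = I/alpha - beta/(alpha(alpha + n beta)) 1 1^T;
# n, alpha, beta are arbitrary values with alpha != 0 and alpha + n*beta != 0.
import numpy as np

n, alpha, beta = 5, 2.0, 0.7
I, J = np.eye(n), np.ones((n, n))      # J = 1_n 1_n^T

Sigma = alpha * I + beta * J
Sigma_inv = I / alpha - beta / (alpha * (alpha + n * beta)) * J
print(np.allclose(Sigma @ Sigma_inv, I))   # True
```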