In the discrete case,

E(X + Y) = \sum_i \sum_j (x_i + y_j) f_{xy}(x_i, y_j)
         = \sum_i \sum_j x_i f_{xy}(x_i, y_j) + \sum_i \sum_j y_j f_{xy}(x_i, y_j)
         = \sum_i x_i f_x(x_i) + \sum_j y_j f_y(y_j)
         = E(X) + E(Y).

In the continuous case,

E(X + Y) = \int\int (x + y) f_{xy}(x, y)\, dx\, dy
         = \int\int x f_{xy}(x, y)\, dx\, dy + \int\int y f_{xy}(x, y)\, dx\, dy
         = \int x f_x(x)\, dx + \int y f_y(y)\, dy
         = E(X) + E(Y).
Similarly, when X is independent of Y, so that f_{xy}(x, y) = f_x(x) f_y(y), we have E(XY) = E(X)E(Y). In the discrete case,

E(XY) = \sum_i \sum_j x_i y_j f_{xy}(x_i, y_j) = \sum_i \sum_j x_i y_j f_x(x_i) f_y(y_j)
      = \Big( \sum_i x_i f_x(x_i) \Big) \Big( \sum_j y_j f_y(y_j) \Big) = E(X)E(Y),

and in the continuous case,

E(XY) = \int\int xy f_{xy}(x, y)\, dx\, dy = \int\int xy f_x(x) f_y(y)\, dx\, dy
      = \Big( \int x f_x(x)\, dx \Big) \Big( \int y f_y(y)\, dy \Big) = E(X)E(Y).
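Both results can be checked numerically. A minimal sketch, assuming NumPy is available and using arbitrarily chosen distributions:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X and Y are drawn independently here, so both identities should hold.
x = rng.normal(loc=2.0, scale=1.0, size=n)    # E(X) = 2
y = rng.exponential(scale=3.0, size=n)        # E(Y) = 3

print(np.mean(x + y), np.mean(x) + np.mean(y))   # E(X + Y) vs. E(X) + E(Y)
print(np.mean(x * y), np.mean(x) * np.mean(y))   # E(XY) vs. E(X)E(Y), valid under independence

The first identity holds for any joint distribution; the second relies on the independence of X and Y.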
Theorem: Cov(X, Y) = E(XY) - E(X)E(Y).
Proof:
Cov(X, Y) = E\big( (X - \mu_x)(Y - \mu_y) \big)
          = E(XY - \mu_x Y - \mu_y X + \mu_x \mu_y)
          = E(XY) - E(\mu_x Y) - E(\mu_y X) + \mu_x \mu_y
          = E(XY) - \mu_x E(Y) - \mu_y E(X) + \mu_x \mu_y
          = E(XY) - \mu_x \mu_y - \mu_y \mu_x + \mu_x \mu_y
          = E(XY) - \mu_x \mu_y
          = E(XY) - E(X)E(Y).
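A small numerical sketch of this identity (assuming NumPy; the correlated sample below is an arbitrary example):

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)                 # deliberately correlated with X

lhs = np.mean((x - x.mean()) * (y - y.mean()))   # Cov(X, Y) from the definition
rhs = np.mean(x * y) - x.mean() * y.mean()       # E(XY) - E(X)E(Y)
print(lhs, rhs)                                  # identical up to floating-point rounding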
The correlation coefficient between X and Y, denoted by \rho_{xy}, is defined as:

\rho_{xy} = \dfrac{Cov(X, Y)}{\sigma_x \sigma_y} = \dfrac{Cov(X, Y)}{\sqrt{V(X)\, V(Y)}}.

Theorem: \rho_{xy} = 0 when X is independent of Y.
Proof:
When X is independent of Y, we have E(XY) = E(X)E(Y), so that Cov(X, Y) = E(XY) - E(X)E(Y) = 0. Therefore:

\rho_{xy} = \dfrac{Cov(X, Y)}{\sqrt{V(X)\, V(Y)}} = 0.

However, note that \rho_{xy} = 0 does not mean the independence between X and Y.
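The caveat is easy to see numerically: with X standard normal and Y = X^2, Y is a deterministic function of X (so clearly not independent of X), yet the correlation is approximately zero. A sketch assuming NumPy:

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)
y = x ** 2                       # completely determined by X, hence not independent of X

print(np.corrcoef(x, y)[0, 1])   # approximately 0: zero correlation without independence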
Theorem: V(X \pm Y) = V(X) \pm 2\, Cov(X, Y) + V(Y).
Proof:
V(X \pm Y) is rewritten as follows:

V(X \pm Y) = E\Big( \big( (X \pm Y) - E(X \pm Y) \big)^2 \Big)
           = E\Big( \big( (X - \mu_x) \pm (Y - \mu_y) \big)^2 \Big)
           = E\big( (X - \mu_x)^2 \pm 2(X - \mu_x)(Y - \mu_y) + (Y - \mu_y)^2 \big)
           = V(X) \pm 2\, Cov(X, Y) + V(Y).
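A quick numerical check of both signs of the identity (assuming NumPy; the correlated sample is an arbitrary example):

import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.normal(size=n)
y = -0.7 * x + rng.normal(size=n)

cov = np.mean((x - x.mean()) * (y - y.mean()))
print(np.var(x + y), np.var(x) + np.var(y) + 2 * cov)   # V(X + Y)
print(np.var(x - y), np.var(x) + np.var(y) - 2 * cov)   # V(X - Y)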
8. Theorem: -1 \le \rho_{xy} \le 1.
Proof:
Consider the following function of t: f(t) = V(Xt - Y), which is always greater than or equal to zero because of the definition of variance. Therefore, for all t, we have f(t) \ge 0. f(t) is rewritten as follows:
f(t) = V(Xt - Y) = t^2 V(X) - 2t\, Cov(X, Y) + V(Y)
     = V(X) \Big( t - \dfrac{Cov(X, Y)}{V(X)} \Big)^2 + V(Y) - \dfrac{(Cov(X, Y))^2}{V(X)} \ge 0.

Because the first term in the last equality is nonnegative, we need:

V(Y) - \dfrac{(Cov(X, Y))^2}{V(X)} \ge 0,

that is, (Cov(X, Y))^2 \le V(X)\, V(Y). Therefore, we have:

-1 \le \dfrac{Cov(X, Y)}{\sqrt{V(X)\, V(Y)}} \le 1.
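The completing-the-square step used above can be verified symbolically. A sketch assuming SymPy, with v_x, v_y and c standing for V(X), V(Y) and Cov(X, Y):

import sympy as sp

t, c = sp.symbols('t c')
vx, vy = sp.symbols('v_x v_y', positive=True)

f = vx * t**2 - 2 * c * t + vy                       # f(t) = V(Xt - Y)
completed = vx * (t - c / vx)**2 + vy - c**2 / vx    # completed-square form

print(sp.simplify(f - completed))                    # 0: the two expressions coincide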
Theorem: V(X \pm Y) = V(X) + V(Y), when X is independent of Y.
Proof:
From the previous theorem, V(X \pm Y) = V(X) \pm 2\, Cov(X, Y) + V(Y), and Cov(X, Y) = 0 when X is independent of Y. Therefore, V(X \pm Y) = V(X) + V(Y).

Theorem: For random variables X_1, X_2, \cdots, X_n with E(X_i) = \mu_i and constants a_1, a_2, \cdots, a_n,

E\Big( \sum_i a_i X_i \Big) = \sum_i a_i \mu_i,
\qquad
V\Big( \sum_i a_i X_i \Big) = \sum_i \sum_j a_i a_j\, Cov(X_i, X_j),

and, when X_1, X_2, \cdots, X_n are mutually independent,

V\Big( \sum_i a_i X_i \Big) = \sum_i a_i^2\, V(X_i).
Proof:
For the mean of \sum_i a_i X_i, the following is obtained from the theorems on the mean:

E\Big( \sum_i a_i X_i \Big) = \sum_i E(a_i X_i) = \sum_i a_i E(X_i) = \sum_i a_i \mu_i.

For the variance of \sum_i a_i X_i, we can rewrite as follows:

V\Big( \sum_i a_i X_i \Big) = E\Big( \Big( \sum_i a_i (X_i - \mu_i) \Big)^2 \Big)
= E\Big( \Big( \sum_i a_i (X_i - \mu_i) \Big) \Big( \sum_j a_j (X_j - \mu_j) \Big) \Big)
= E\Big( \sum_i \sum_j a_i a_j (X_i - \mu_i)(X_j - \mu_j) \Big)
= \sum_i \sum_j a_i a_j\, E\big( (X_i - \mu_i)(X_j - \mu_j) \big)
= \sum_i \sum_j a_i a_j\, Cov(X_i, X_j).

When X_1, X_2, \cdots, X_n are mutually independent, Cov(X_i, X_j) = 0 for all i \ne j, so that V\big( \sum_i a_i X_i \big) = \sum_i a_i^2\, V(X_i).
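In matrix form this is the quadratic form a' \Sigma a, with \Sigma the covariance matrix of (X_1, \cdots, X_n). A numerical sketch assuming NumPy, with an arbitrary weight vector and simulated correlated data:

import numpy as np

rng = np.random.default_rng(4)
n_obs = 1_000_000
a = np.array([1.0, -2.0, 0.5])                  # arbitrary weights a_1, a_2, a_3

# Three correlated variables built by mixing independent shocks.
z = rng.normal(size=(n_obs, 3))
x = z @ np.array([[1.0, 0.3, 0.0],
                  [0.0, 1.0, 0.5],
                  [0.0, 0.0, 1.0]])

sigma = np.cov(x, rowvar=False)                 # sample Cov(X_i, X_j)
print(np.var(x @ a, ddof=1), a @ sigma @ a)     # V(sum a_i X_i) vs. sum_ij a_i a_j Cov(X_i, X_j)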
11. Theorem: n random variables X_1, X_2, \cdots, X_n are mutually independently and identically distributed with mean \mu and variance \sigma^2. That is, for all i = 1, 2, \cdots, n, E(X_i) = \mu and V(X_i) = \sigma^2 are assumed. Consider the arithmetic average \bar{X} = (1/n) \sum_{i=1}^n X_i. Then, the mean and variance of \bar{X} are given by:

E(\bar{X}) = \mu, \qquad V(\bar{X}) = \dfrac{\sigma^2}{n}.
Proof:
The mathematical expectation of \bar{X} is given by:

E(\bar{X}) = E\Big( \dfrac{1}{n} \sum_{i=1}^n X_i \Big) = \dfrac{1}{n} E\Big( \sum_{i=1}^n X_i \Big) = \dfrac{1}{n} \sum_{i=1}^n E(X_i)
           = \dfrac{1}{n} \sum_{i=1}^n \mu = \dfrac{1}{n}\, n\mu = \mu.

The variance of \bar{X} is obtained as follows, using the mutual independence of X_1, X_2, \cdots, X_n:

V(\bar{X}) = V\Big( \dfrac{1}{n} \sum_{i=1}^n X_i \Big) = \dfrac{1}{n^2} V\Big( \sum_{i=1}^n X_i \Big) = \dfrac{1}{n^2} \sum_{i=1}^n V(X_i)
           = \dfrac{1}{n^2} \sum_{i=1}^n \sigma^2 = \dfrac{1}{n^2}\, n\sigma^2 = \dfrac{\sigma^2}{n}.
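A Monte Carlo sketch of the theorem (assuming NumPy; the normal distribution and the constants below are arbitrary choices): draw many samples of size n, form the arithmetic average of each, and compare its empirical mean and variance with \mu and \sigma^2/n.

import numpy as np

rng = np.random.default_rng(5)
mu, sigma2, n, reps = 1.5, 4.0, 25, 200_000

# reps independent samples of size n, each with mean mu and variance sigma2
samples = rng.normal(loc=mu, scale=np.sqrt(sigma2), size=(reps, n))
xbar = samples.mean(axis=1)                    # one arithmetic average per sample

print(xbar.mean(), mu)                         # E(Xbar) is mu
print(xbar.var(), sigma2 / n)                  # V(Xbar) is sigma^2 / n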
Transformation of Variables:

Distribution of Y = \psi^{-1}(X):

f_y(y) = |\psi'(y)|\, f_x(\psi(y)).

We can derive the above transformation of variables from X to Y as follows. Let f_x(x) and F_x(x) be the probability density function and the cumulative distribution function of X. First, consider the case \psi' > 0, i.e., \psi is a monotonically increasing function. F_y(y) is rewritten as follows:

F_y(y) = P(Y \le y) = P\big( \psi(Y) \le \psi(y) \big) = P\big( X \le \psi(y) \big) = F_x(\psi(y)).

The first equality is the definition of the cumulative distribution function. The second equality holds because of \psi' > 0. Differentiating F_y(y) with respect to y, we obtain:

f_y(y) = F_y'(y) = \psi'(y) F_x'(\psi(y)) = \psi'(y) f_x(\psi(y)).   (4)

Next, consider the case \psi' < 0, i.e., \psi is a monotonically decreasing function. F_y(y) is rewritten as follows:

F_y(y) = P(Y \le y) = P\big( \psi(Y) \ge \psi(y) \big) = P\big( X \ge \psi(y) \big)
       = 1 - P\big( X < \psi(y) \big) = 1 - F_x(\psi(y)).

Thus, in the case of \psi' < 0, pay attention to the second equality, where the inequality sign is reversed. Differentiating F_y(y) with respect to y, we obtain the following result:

f_y(y) = F_y'(y) = -\psi'(y) F_x'(\psi(y)) = -\psi'(y) f_x(\psi(y)).   (5)

Thus, summarizing the above two cases, i.e., \psi' > 0 and \psi' < 0, equations (4) and (5) indicate the following result:

f_y(y) = |\psi'(y)|\, f_x(\psi(y)),

which is called the transformation of variables.
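As a numerical sketch of the derivation (assuming NumPy and SciPy): take X \sim N(0, 1) and Y = e^X, so that X = \psi(Y) = \log Y with \psi' > 0; the relation F_y(y) = F_x(\psi(y)) from the first case can then be checked by simulation.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(size=1_000_000)
y = np.exp(x)                                      # one-to-one increasing transformation of X

# F_y(y) = F_x(psi(y)) with psi(y) = log(y), checked at a few points
for point in (0.5, 1.0, 2.0, 4.0):
    empirical = np.mean(y <= point)                # P(Y <= point) from the simulated sample
    formula = stats.norm.cdf(np.log(point))        # F_x(log(point))
    print(point, empirical, formula)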
Example 1.9:
As an example, when X is distributed as the standard normal random variable, i.e., X \sim N(0, 1), consider the distribution of Y = \mu + \sigma X. Since we have:

X = \psi(Y) = \dfrac{Y - \mu}{\sigma}, \qquad \psi'(y) = \dfrac{1}{\sigma},

the transformation of variables yields:

f_y(y) = |\psi'(y)|\, f_x(\psi(y)) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\Big( -\dfrac{1}{2\sigma^2} (y - \mu)^2 \Big),

which indicates the normal distribution with mean \mu and variance \sigma^2, denoted by N(\mu, \sigma^2).

On Distribution of Y = X^2:
Because the transformation Y = X^2 is not one-to-one, the distribution of Y is derived from the cumulative distribution function:

F_y(y) = P(Y \le y) = P(X^2 \le y) = P(-\sqrt{y} \le X \le \sqrt{y}) = F_x(\sqrt{y}) - F_x(-\sqrt{y}).

The probability density function of Y is obtained as follows:

f_y(y) = F_y'(y) = \dfrac{1}{2\sqrt{y}} \Big( f_x(\sqrt{y}) + f_x(-\sqrt{y}) \Big).
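For instance, when X \sim N(0, 1), this density is the \chi^2(1) density, which can be confirmed pointwise (a sketch assuming SciPy):

import numpy as np
from scipy import stats

y = np.linspace(0.1, 6.0, 60)

# f_y(y) = ( f_x(sqrt(y)) + f_x(-sqrt(y)) ) / (2 sqrt(y)) with f_x the N(0, 1) density
fy = (stats.norm.pdf(np.sqrt(y)) + stats.norm.pdf(-np.sqrt(y))) / (2 * np.sqrt(y))

print(np.max(np.abs(fy - stats.chi2.pdf(y, df=1))))   # essentially zero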
In the bivariate case, the Jacobian of the transformation, denoted by J, is defined as:

J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[6pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix}.
Multivariate Case:
Let f_x(x_1, x_2, \cdots, x_n) denote the joint probability density function of X_1, X_2, \cdots, X_n. Suppose that a one-to-one transformation from (X_1, X_2, \cdots, X_n) to (Y_1, Y_2, \cdots, Y_n) is given by:

X_1 = \psi_1(Y_1, Y_2, \cdots, Y_n),
X_2 = \psi_2(Y_1, Y_2, \cdots, Y_n),
\vdots
X_n = \psi_n(Y_1, Y_2, \cdots, Y_n).

The Jacobian of the transformation is defined as:

J = \begin{vmatrix}
\dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} & \cdots & \dfrac{\partial x_1}{\partial y_n} \\[6pt]
\dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} & \cdots & \dfrac{\partial x_2}{\partial y_n} \\[6pt]
\vdots & \vdots & \ddots & \vdots \\[6pt]
\dfrac{\partial x_n}{\partial y_1} & \dfrac{\partial x_n}{\partial y_2} & \cdots & \dfrac{\partial x_n}{\partial y_n}
\end{vmatrix}.
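As a worked sketch of these definitions (assuming SymPy), the Jacobian of the polar-coordinate transformation x = r cos(theta), y = r sin(theta), and of the three-variable spherical transformation, can be computed directly:

import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Bivariate case: (x, y) as functions of (r, theta)
xy = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
print(sp.simplify(xy.jacobian([r, theta]).det()))        # r

# Multivariate case (n = 3): spherical coordinates
xyz = sp.Matrix([r * sp.sin(phi) * sp.cos(theta),
                 r * sp.sin(phi) * sp.sin(theta),
                 r * sp.cos(phi)])
print(sp.simplify(xyz.jacobian([r, theta, phi]).det()))  # -r**2*sin(phi), so |J| = r**2*sin(phi)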