Question 1
The estimator is given as:
$$\hat{\sigma}^2 = \frac{1}{N} \sum_{n=0}^{N-1} x^2[n] \tag{1}$$
First, we find the expected value (it should equal $\sigma^2$ if the estimator is unbiased):
$$E\{\hat{\sigma}^2\} = \frac{1}{N} \sum_{n=0}^{N-1} E\{x^2[n]\} = \frac{1}{N} \sum_{n=0}^{N-1} \left( \operatorname{var}\{x[n]\} + E\{x[n]\}^2 \right) = \frac{1}{N} \sum_{n=0}^{N-1} \left( \sigma^2 + 0^2 \right) = \frac{N\sigma^2}{N} = \sigma^2 \tag{2}$$
Next, the variance:
$$\operatorname{var}\{\hat{\sigma}^2\} = \frac{1}{N^2} \sum_{n=0}^{N-1} \operatorname{var}\{x^2[n]\} = \frac{1}{N} \operatorname{var}\{x^2[n]\} \tag{3}$$
$$\operatorname{var}\{x^2[n]\} = E\{x^4[n]\} - \left( E\{x^2[n]\} \right)^2 \tag{4}$$
Using our knowledge that $x[n]$ is normally distributed, we use the moment generating function to compute $E\{x^4[n]\}$. The moment generating function, $\varphi(t)$, for a Normal distribution is given as:
$$\varphi(t) = \exp\left\{ \mu t + \frac{\sigma^2 t^2}{2} \right\} \tag{5}$$
It is important to note that, in general,
$$\varphi^{(n)}(0) = E\{X^n\}, \quad n \geq 1 \tag{6}$$
Since we are interested in $E\{x^4[n]\}$, we take the fourth derivative of $\varphi(t)$ and evaluate it at $t = 0$:
$$\varphi''''(t) = 3\sigma^4 \exp\left\{ \frac{\sigma^2 t^2}{2} + \mu t \right\} + \exp\left\{ \frac{\sigma^2 t^2}{2} + \mu t \right\} (t\sigma^2 + \mu)^4 + 6\sigma^2 \exp\left\{ \frac{\sigma^2 t^2}{2} + \mu t \right\} (t\sigma^2 + \mu)^2 \tag{7}$$
Evaluating at $t = 0$ with $\mu = 0$:
$$E\{x^4[n]\} = \varphi''''(0) = 3\sigma^4 \tag{8}$$
$$\operatorname{var}\{\hat{\sigma}^2\} = \frac{1}{N} \left( 3\sigma^4 - \sigma^4 \right) = \frac{2\sigma^4}{N} \tag{9}$$
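As a sanity check, the moments derived above can be verified with a short Monte Carlo simulation (a sketch; the sample sizes, seed, and value of $\sigma^2$ below are illustrative choices, not part of the solution):

```python
import numpy as np

# Monte Carlo check of the Question 1 results:
#   E{x^4[n]} = 3*sigma^4,  E{sigma2-hat} = sigma^2,  var{sigma2-hat} = 2*sigma^4/N
rng = np.random.default_rng(0)
sigma2 = 2.0        # true variance sigma^2 (illustrative)
N = 50              # samples per estimate
trials = 50_000     # Monte Carlo repetitions

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
est = (x**2).mean(axis=1)        # sigma^2-hat for each trial

fourth_moment = (x**4).mean()    # should approach 3*sigma^4
est_mean = est.mean()            # should approach sigma^2 (unbiased)
est_var = est.var()              # should approach 2*sigma^4 / N
```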
Question 2
Question 3
In example 2.1, we saw that x[n] = A + w[n], where w[n] is WGN. The estimator then was:
$$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n] \tag{11}$$
This implies that the estimator is a linear combination of the samples: the $x[n]$ values are simply summed and averaged. Because a linear combination of jointly Gaussian random variables is itself Gaussian, the estimator has the same type of distribution as $x[n]$, or as $w[n]$: Gaussian (Normal).
We also know that the estimator is unbiased, and so its expected value is simply A. The variance is easily
found:
$$\operatorname{var}\{\hat{A}\} = \frac{1}{N^2} \sum_{n=0}^{N-1} \operatorname{var}\{x[n]\} = \frac{1}{N^2} \sum_{n=0}^{N-1} \operatorname{var}\{w[n]\} = \frac{1}{N^2} \sum_{n=0}^{N-1} \sigma^2 = \frac{\sigma^2}{N} \tag{12}$$
Hence we can say that the estimator is normally distributed with a mean of $A$ and a variance of $\sigma^2/N$.
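This result is easy to confirm numerically (the values of $A$, $\sigma^2$, $N$, and the trial count below are illustrative assumptions):

```python
import numpy as np

# Check Question 3: the sample mean of x[n] = A + w[n], with w[n] WGN of
# variance sigma^2, has mean A and variance sigma^2/N.
rng = np.random.default_rng(1)
A, sigma2, N, trials = 3.0, 4.0, 25, 100_000

x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
A_hat = x.mean(axis=1)

mean_err = abs(A_hat.mean() - A)          # should be near 0 (unbiased)
var_err = abs(A_hat.var() - sigma2 / N)   # should be near 0
```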
Question 4
An averaging estimator in this problem is defined as
$$\hat{h} = \frac{1}{N} \sum_{i=0}^{N-1} h_i \tag{13}$$
$$E\{\hat{h}\} = \frac{1}{N} \sum_{i=0}^{N-1} E\{h_i\} = E\{h_i\} \tag{14}$$
$$\operatorname{var}\{\hat{h}\} = \frac{1}{N^2} \sum_{i=0}^{N-1} \operatorname{var}\{h_i\} = \frac{\operatorname{var}\{h_i\}}{N} \tag{15}$$
Mean: For $\alpha = 1$, $E\{\hat{h}\} = h$ and $E\{h_i\} = h$. Similarly, for $\alpha = 0.5$, $E\{\hat{h}\} = 0.5h$ and $E\{h_i\} = 0.5h$. So averaging does not improve the estimation of the mean: when $\alpha = 0.5$, the estimate is biased no matter how many terms are averaged.

Variance: For $\alpha = 1$, $\operatorname{var}\{\hat{h}\} = 0.1$ and $\operatorname{var}\{h_i\} = 1$. Similarly, for $\alpha = 0.5$, $\operatorname{var}\{\hat{h}\} = 0.1$ and $\operatorname{var}\{h_i\} = 1$. So averaging reduces the variance (which should be expected). However, when $\alpha = 0.5$, the estimate actually becomes worse, as the distribution narrows around the wrong value.
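The effect can be sketched in a quick simulation. The model below assumes the measurements have mean $\alpha h$ and unit variance, with $N = 10$ to match the quoted numbers; the value of $h$ and the trial count are illustrative:

```python
import numpy as np

# Question 4 sketch: averaging shrinks the variance by N, but cannot
# remove the bias introduced when the gain alpha is 0.5.
rng = np.random.default_rng(2)
h, N, trials = 2.0, 10, 200_000

results = {}
for alpha in (1.0, 0.5):
    # Each measurement h_i has mean alpha*h and var{h_i} = 1 (assumed model).
    hi = alpha * h + rng.normal(0.0, 1.0, size=(trials, N))
    h_bar = hi.mean(axis=1)
    results[alpha] = (h_bar.mean(), h_bar.var())  # (mean, var) of the average
```

For $\alpha = 1$ the average concentrates around $h$; for $\alpha = 0.5$ it concentrates, just as tightly, around $0.5h$.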
Question 5
We are told that the estimator $\hat{\sigma}^2$ is unbiased, which means that $E\{\hat{\sigma}^2\} = \sigma^2$.
Next, $\hat{\sigma}^2$ is expressed as a scaled sum of two squares, $\frac{1}{2}\left(x^2[0] + x^2[1]\right)$, where the $x[n]$ terms are normally distributed.
The chi-squared distribution has a similar form; a statistic $Y$ is chi-squared-distributed when
$$Y = \sum_{i=1}^{k} \left( \frac{X_i - \mu_i}{\sigma_i} \right)^2 \tag{16}$$
with pdf
$$p(y) = \frac{1}{2^{k/2}\,\Gamma(k/2)} \, y^{k/2 - 1} e^{-y/2} \tag{17}$$
In our case, with $x[0], x[1] \sim \mathcal{N}(0, \sigma^2)$,
$$Y = \frac{x^2[0]}{\sigma^2} + \frac{x^2[1]}{\sigma^2} \tag{18}$$
is chi-squared with $k = 2$ degrees of freedom, and
$$\hat{\sigma}^2 = \frac{\sigma^2}{2} Y \tag{19}$$
Next, if we know the pdf of a random variable $X$, it is possible to calculate the pdf of another variable $Y$ that is related to $X$. The basic relationship is:
$$|f_Y(y)\,dy| = |f_X(x)\,dx| \tag{20}$$
$$|f_Y(y)| = \frac{|f_X(x)|}{|dy/dx|} \tag{21}$$
So in our case,
$$p(\hat{\sigma}^2) = \frac{1}{\sigma^2} \, e^{-\hat{\sigma}^2/\sigma^2} \tag{22}$$
The chi-squared distribution is only defined for $\hat{\sigma}^2 \geq 0$. From the equation it is evident that the pdf is a decaying exponential, which is not symmetric.
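Since the pdf in (22) is that of an exponential distribution with mean $\sigma^2$, the estimator's sample mean and sample standard deviation should both approach $\sigma^2$. A quick simulation confirms this (the value of $\sigma^2$ and the trial count are illustrative):

```python
import numpy as np

# Question 5 check: with x[0], x[1] ~ N(0, sigma^2) i.i.d., the estimator
# (x^2[0] + x^2[1]) / 2 follows (1/sigma^2) exp(-s/sigma^2), an exponential
# whose mean and standard deviation both equal sigma^2.
rng = np.random.default_rng(3)
sigma2, trials = 1.5, 500_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, 2))
s = (x**2).sum(axis=1) / 2.0   # the estimator sigma^2-hat

mean_s = s.mean()   # exponential mean  -> sigma^2
std_s = s.std()     # exponential std equals its mean -> sigma^2
```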
Question 6
For the given estimator, we can define the mean and the variance:
$$\hat{A} = \sum_{n=0}^{N-1} a_n x[n] \tag{23}$$
$$E\{\hat{A}\} = \sum_{n=0}^{N-1} E\{a_n x[n]\} = \sum_{n=0}^{N-1} a_n A \tag{24}$$
$$\operatorname{var}\{\hat{A}\} = \sum_{n=0}^{N-1} a_n^2 \operatorname{var}\{x[n]\} = \sum_{n=0}^{N-1} a_n^2 \sigma^2 \tag{25}$$
For the estimator to be unbiased we require
$$\sum_{n=0}^{N-1} a_n = 1 \tag{26}$$
Minimizing the variance subject to this constraint with a Lagrange multiplier $\lambda$:
$$J = \operatorname{var}\{\hat{A}\} + \lambda \left( \sum_{n=0}^{N-1} a_n - 1 \right) = \sigma^2 \sum_{n=0}^{N-1} a_n^2 + \lambda \left( \sum_{n=0}^{N-1} a_n - 1 \right) \tag{27}$$
$$\frac{dJ}{da_i} = 2 a_i \sigma^2 + \lambda = 0 \tag{28}$$
$$a_i = -\frac{\lambda}{2\sigma^2} \tag{29}$$
The value of $a_i$ is a constant, which means all $a_i$ are equal. From this we can say that $N a_i = 1$, or $a_i = 1/N$.
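The Lagrange-multiplier result can be checked numerically: among weight vectors that sum to 1, the uniform choice $a_n = 1/N$ should minimize $\sum_n a_n^2$ (and hence the variance). This is an illustrative numerical check, not a proof; $N$ and the perturbation size are arbitrary:

```python
import numpy as np

# Question 6 check: for weights with sum(a) = 1, the uniform weights
# a_n = 1/N minimize sum(a_n^2), which is proportional to var{A-hat}.
rng = np.random.default_rng(4)
N = 8
uniform = np.full(N, 1.0 / N)
best = (uniform**2).sum()          # = 1/N for the uniform weights

all_larger = True
for _ in range(1000):
    a = uniform + rng.normal(0.0, 0.1, N)  # random perturbation
    a /= a.sum()                           # re-impose sum(a) = 1
    if (a**2).sum() < best - 1e-12:
        all_larger = False                 # would contradict the derivation
```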
Question 7
When we are interested in evaluating something of the form $P\{|\hat{\theta} - \theta| > \epsilon\}$, we can look at z-scores (a.k.a. standard scores). The z-score is a normalized metric, so for two estimators $\hat{\theta}_1$ and $\hat{\theta}_2$ with $\operatorname{var}(\hat{\theta}_1) < \operatorname{var}(\hat{\theta}_2)$ we can write our expression as:
$$\Pr\left\{ \frac{|\hat{\theta}_1 - \theta|}{\sqrt{\operatorname{var}(\hat{\theta}_1)}} > \frac{\epsilon}{\sqrt{\operatorname{var}(\hat{\theta}_1)}} \right\} < \Pr\left\{ \frac{|\hat{\theta}_2 - \theta|}{\sqrt{\operatorname{var}(\hat{\theta}_2)}} > \frac{\epsilon}{\sqrt{\operatorname{var}(\hat{\theta}_2)}} \right\} \tag{30}$$
The probability ($\Pr$ above) can be calculated as:
$$\Pr(x > a) = \int_{a}^{\infty} \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2} \, dx \tag{31}$$
Here, with $a = \dfrac{\epsilon}{\sqrt{\operatorname{var}(\hat{\theta}_1)}}$ and $b = \dfrac{\epsilon}{\sqrt{\operatorname{var}(\hat{\theta}_2)}}$, we have $a > b$, and so
$$\int_{a}^{\infty} \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2} \, dx < \int_{b}^{\infty} \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2} \, dx \tag{32}$$
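The inequality can be illustrated with the standard normal tail function $Q(a) = \Pr(x > a)$, computable via the complementary error function (the two variances below are illustrative assumptions):

```python
import math

# Question 7 illustration: a smaller estimator variance raises the
# normalized threshold eps/sqrt(var), and so lowers the tail probability.
def Q(a):
    """Upper-tail probability of a standard normal: Pr(x > a)."""
    return 0.5 * math.erfc(a / math.sqrt(2.0))

eps = 1.0
var1, var2 = 0.25, 1.0             # var(theta1-hat) < var(theta2-hat)
a = eps / math.sqrt(var1)          # larger normalized threshold
b = eps / math.sqrt(var2)          # smaller normalized threshold
p1, p2 = 2 * Q(a), 2 * Q(b)        # two-sided tail probabilities (eq. 30)
```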
Question 8
Similarly to the previous question, we can normalize the probability expression and rewrite it in terms of an integral:
$$\Pr\left\{ \frac{|\hat{A} - A|}{\sqrt{\operatorname{var}(\hat{A})}} > \frac{\epsilon}{\sqrt{\operatorname{var}(\hat{A})}} \right\} = 2 \int_{\epsilon/\sqrt{\operatorname{var}(\hat{A})}}^{\infty} \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2} \, dx \tag{33}$$
(The factor of 2 is true since we are looking at absolute values.) With $\operatorname{var}(\hat{A}) = \sigma^2/N$, the lower limit of the integral is $\epsilon\sqrt{N}/\sigma$. As $N \to \infty$, $\epsilon\sqrt{N}/\sigma \to \infty$, and hence the probability $\to 0$.
Next, we look at the estimator $\check{A} = \frac{1}{2N} \sum_{n=0}^{N-1} x[n]$. The variance of this estimator is:
$$\operatorname{var}(\check{A}) = \frac{\sigma^2}{4N} \tag{34}$$
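The vanishing of the tail probability for the sample mean can be seen empirically: as $N$ grows, the fraction of trials with $|\hat{A} - A| > \epsilon$ collapses toward zero (the constants below are illustrative):

```python
import numpy as np

# Question 8 illustration: Pr{|A-hat - A| > eps} shrinks as N grows,
# because var{A-hat} = sigma^2/N drives eps*sqrt(N)/sigma to infinity.
rng = np.random.default_rng(5)
A, sigma, eps, trials = 1.0, 1.0, 0.5, 20_000

probs = []
for N in (4, 16, 64):
    x = A + rng.normal(0.0, sigma, size=(trials, N))
    p = (np.abs(x.mean(axis=1) - A) > eps).mean()  # empirical tail probability
    probs.append(p)
```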
Question 9
In Example 2.1 the estimator was:
$$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n] \tag{35}$$
$$E\{\hat{A}\} = A \tag{36}$$
Now, for $\hat{\theta} = \hat{A}^2$,
$$E\{\hat{\theta}\} = \frac{1}{N^2} E\left\{ \left( \sum_{n=0}^{N-1} x[n] \right)^2 \right\} = \frac{1}{N^2} \left( E\left\{ \sum_{n=0}^{N-1} x[n] \right\}^2 + \operatorname{var}\left\{ \sum_{n=0}^{N-1} x[n] \right\} \right) \tag{37}$$
$$E\{\hat{\theta}\} = \frac{1}{N^2} \left( (NA)^2 + N\sigma^2 \right) = A^2 + \frac{\sigma^2}{N} \tag{38}$$
Since the expected value isn't equal to $\theta = A^2$, the estimator is biased.
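The bias term $\sigma^2/N$ predicted by (38) shows up directly in simulation (the constants below are illustrative assumptions):

```python
import numpy as np

# Question 9 check: squaring the sample mean gives a biased estimator
# of A^2, with E{A-hat^2} = A^2 + sigma^2/N.
rng = np.random.default_rng(6)
A, sigma2, N, trials = 2.0, 9.0, 30, 100_000

x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
theta_hat = x.mean(axis=1) ** 2

bias_observed = theta_hat.mean() - A**2   # empirical bias
bias_predicted = sigma2 / N               # = 0.3 here, from eq. (38)
```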
Question 10
We already know that the estimator $\hat{A}$ is unbiased. We only need to show the expected value of $\hat{\sigma}^2 = \frac{1}{N-1} \sum_{n=0}^{N-1} (x[n] - \hat{A})^2$:
$$E\{\hat{\sigma}^2\} = \frac{N}{N-1} \, E\{(x[n] - \hat{A})^2\} = \frac{N}{N-1} \left( E\{x[n] - \hat{A}\}^2 + \operatorname{var}\{x[n] - \hat{A}\} \right)$$
$$= \frac{N}{N-1} \left( \left( E\{x[n]\} - \frac{1}{N} \sum_{m=0}^{N-1} E\{x[m]\} \right)^2 + \operatorname{var}\left\{ \frac{N-1}{N} x[n] - \frac{1}{N} \sum_{m=0,\, m \neq n}^{N-1} x[m] \right\} \right)$$
$$= \frac{N}{N-1} \left( (A - A)^2 + \frac{(N-1)^2 \sigma^2}{N^2} + \frac{(N-1)\sigma^2}{N^2} \right) = \frac{N}{N-1} \cdot \frac{N-1}{N} \, \sigma^2 = \sigma^2$$
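The unbiasedness of the $1/(N-1)$ normalization, versus the $\frac{N-1}{N}\sigma^2$ expectation of the $1/N$ version, can be verified directly (the constants below are illustrative):

```python
import numpy as np

# Question 10 check: dividing the squared deviations from the sample
# mean by N-1 gives an unbiased estimate of sigma^2, while dividing by
# N gives expectation (N-1)/N * sigma^2.
rng = np.random.default_rng(7)
A, sigma2, N, trials = 5.0, 2.0, 10, 200_000

x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
s2_unbiased = x.var(axis=1, ddof=1)   # divides by N-1
s2_biased = x.var(axis=1, ddof=0)     # divides by N

err_unbiased = abs(s2_unbiased.mean() - sigma2)                  # ~ 0
err_biased = abs(s2_biased.mean() - (N - 1) / N * sigma2)        # ~ 0
```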
Question 11
We are dealing with a uniform distribution on $[0, 1/\theta]$, the pdf of which is defined as:
$$f_X(x) = \frac{1}{b - a} \tag{39}$$
$$= \frac{1}{1/\theta - 0} = \theta \tag{40}$$
For an estimator $\hat{\theta} = g(x[0])$ to be unbiased we would need:
$$E\{\hat{\theta}\} = \theta = \int g(x) f(x) \, dx \tag{41}$$
$$= \theta \int_{0}^{1/\theta} g(x[0]) \, d(x[0])$$
$$= \theta \int_{0}^{1/\theta} g(u) \, du$$
which requires $\int_{0}^{1/\theta} g(u)\,du = 1$.
Next we need to prove that a function $g(x[0])$ cannot be found that satisfies this condition for all $\theta > 0$. Let's take two values of $\theta$, $\theta_1$ and $\theta_2$, that are not equal. Then,
$$1 = \int_{0}^{1/\theta_1} g(u) \, du$$
$$1 = \int_{0}^{1/\theta_2} g(u) \, du$$
Subtracting (taking $\theta_1 < \theta_2$, so that $1/\theta_2 < 1/\theta_1$):
$$0 = \int_{1/\theta_2}^{1/\theta_1} g(u) \, du$$
But since $\theta_1$ and $\theta_2$ are arbitrary, this is only possible when $g(u)$ is identically $0$, which doesn't represent $\theta$ and makes the estimator biased. Hence no unbiased estimator exists.