
Fundamentals of Statistical Signal Processing: Estimation Theory (Steven M. Kay): Chapter 2 Detailed Solutions

Question 1
The estimator is given as:
$$\hat{\sigma}^2 = \frac{1}{N}\sum_{n=0}^{N-1} x^2[n] \tag{1}$$

First, we find the expected value (it should equal $\sigma^2$ if the estimator is unbiased):
$$E\{\hat{\sigma}^2\} = \frac{1}{N}\sum_{n=0}^{N-1} E\{x^2[n]\} = \frac{1}{N}\sum_{n=0}^{N-1}\left(\operatorname{var}\{x[n]\} + E\{x[n]\}^2\right) = \frac{1}{N}\sum_{n=0}^{N-1}\left(\sigma^2 + 0^2\right) = \frac{N\sigma^2}{N} = \sigma^2 \tag{2}$$

This implies that the estimator is unbiased.


Next, we calculate the variance:
$$\operatorname{var}\{\hat{\sigma}^2\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\operatorname{var}\{x^2[n]\} = \frac{N}{N^2}\operatorname{var}\{x^2[n]\} = \frac{1}{N}\operatorname{var}\{x^2[n]\} \tag{3}$$

In the above equation,


$$\operatorname{var}\{x^2[n]\} = E\{x^4[n]\} - E\{x^2[n]\}^2 = E\{x^4[n]\} - \sigma^4 \tag{4}$$

Using our knowledge that $x[n]$ is normally distributed, we use the moment generating function to compute $E\{x^4[n]\}$.
The moment generating function $\varphi(t)$ of a Normal distribution is given as:
$$\varphi(t) = \exp\left\{\mu t + \frac{\sigma^2 t^2}{2}\right\} \tag{5}$$
It is important to note that, in general,
$$\varphi^{(n)}(0) = E\{X^n\}, \qquad n \geq 1 \tag{6}$$

Since we are interested in $E\{x^4[n]\}$, we take the fourth derivative of $\varphi(t)$ and evaluate it at $t = 0$:
$$\varphi^{(4)}(t) = 3\sigma^4 \exp\left\{\frac{\sigma^2 t^2}{2} + \mu t\right\} + \exp\left\{\frac{\sigma^2 t^2}{2} + \mu t\right\}(t\sigma^2 + \mu)^4 + 6\sigma^2 \exp\left\{\frac{\sigma^2 t^2}{2} + \mu t\right\}(t\sigma^2 + \mu)^2 \tag{7}$$
$$\varphi^{(4)}(0) = 3\sigma^4 = E\{x^4[n]\} \qquad (\text{since } \mu = 0) \tag{8}$$

Substituting the obtained expressions, we obtain:


$$\operatorname{var}\{\hat{\sigma}^2\} = \frac{1}{N}\left(3\sigma^4 - \sigma^4\right) = \frac{2\sigma^4}{N} \tag{9}$$
Evidently, as $N \to \infty$, the variance decreases and the estimator becomes better.
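As a quick sanity check on (2) and (9), here is a minimal Monte Carlo sketch; the choices of $N$, $\sigma$, and the trial count are illustrative assumptions, not values from the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, trials = 50, 2.0, 100_000           # assumed values for illustration

x = rng.normal(0.0, sigma, size=(trials, N))  # x[n] ~ N(0, sigma^2)
est = (x ** 2).mean(axis=1)                   # one sigma^2-hat per trial

print("mean of estimates    :", est.mean(), "  theory:", sigma ** 2)
print("variance of estimates:", est.var(), "  theory:", 2 * sigma ** 4 / N)
```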


Question 2

For a uniform distribution over $(a, b)$, $E\{x[n]\} = \frac{a+b}{2}$. For this question we have $a = 0$ and $b = \theta$, so $E\{x[n]\} = \frac{\theta}{2}$. Since an unbiased estimator requires $E\{\hat{\theta}\} = \theta$, we can simply average the samples and multiply the outcome by 2, so that:
$$\hat{\theta} = \frac{2}{N}\sum_{n=0}^{N-1} x[n] \tag{10}$$
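A minimal simulation sketch ($\theta$, $N$, and the trial count are assumed values) confirms that the estimator in (10) is unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, N, trials = 3.0, 100, 100_000           # assumed values

x = rng.uniform(0.0, theta, size=(trials, N))  # x[n] ~ U(0, theta)
theta_hat = 2.0 * x.mean(axis=1)               # estimator (10)

print("E{theta_hat} ~=", theta_hat.mean(), "  true theta:", theta)
```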

Question 3
In Example 2.1 we saw that $x[n] = A + w[n]$, where $w[n]$ is WGN. The estimator then was:
$$\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n] \tag{11}$$

This implies that the estimator is a linear combination: the $x[n]$ values are simply summed and averaged. Because a linear combination of jointly Gaussian random variables is itself Gaussian, the estimator has the same type of distribution as $x[n]$, or as $w[n]$: Gaussian (Normal).
We also know that the estimator is unbiased, so its expected value is simply $A$. The variance is easily found:
$$\operatorname{var}\{\hat{A}\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\operatorname{var}\{x[n]\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\operatorname{var}\{w[n]\} = \frac{1}{N^2}\sum_{n=0}^{N-1}\sigma^2 = \frac{\sigma^2}{N} \tag{12}$$

Hence we can say that the estimator is normally distributed with mean $A$ and variance $\sigma^2/N$, i.e. $\hat{A} \sim \mathcal{N}(A, \sigma^2/N)$.
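A short sketch, again with assumed values for $A$, $\sigma$, and $N$, illustrates this sampling distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma, N, trials = 5.0, 1.5, 25, 100_000       # assumed values

x = A + rng.normal(0.0, sigma, size=(trials, N))  # x[n] = A + w[n]
A_hat = x.mean(axis=1)

print("mean    :", A_hat.mean(), "  theory:", A)
print("variance:", A_hat.var(), "  theory:", sigma ** 2 / N)
# Gaussianity check: about 68.3% of estimates fall within one standard deviation
inside = np.abs(A_hat - A) < sigma / np.sqrt(N)
print("P(|A_hat - A| < sigma/sqrt(N)) ~=", inside.mean(), "  theory: ~0.683")
```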

Question 4
An averaging estimator in this problem is defined as
$$\hat{h} = \frac{1}{N}\sum_{i=0}^{N-1} h_i \tag{13}$$
$$E\{\hat{h}\} = \frac{1}{N}\sum_{i=0}^{N-1} E\{h_i\} = \alpha h \tag{14}$$
$$\operatorname{var}\{\hat{h}\} = \frac{1}{N^2}\sum_{i=0}^{N-1}\operatorname{var}\{h_i\} = \frac{\operatorname{var}\{h_i\}}{N} = \frac{1}{N} \tag{15}$$

Mean: For $\alpha = 1$, $E\{\hat{h}\} = h$ and $E\{h_i\} = h$. Similarly, for $\alpha = 0.5$, $E\{\hat{h}\} = 0.5h$ and $E\{h_i\} = 0.5h$. So averaging does not improve the estimation of the mean: when $\alpha = 0.5$, the estimate is biased no matter how many measurements are averaged.
Variance: For $\alpha = 1$, $\operatorname{var}\{\hat{h}\} = 0.1$ and $\operatorname{var}\{h_i\} = 1$. Similarly, for $\alpha = 0.5$, $\operatorname{var}\{\hat{h}\} = 0.1$ and $\operatorname{var}\{h_i\} = 1$. So averaging reduces the variance, which should be expected. However, when $\alpha = 0.5$, the estimate in a sense becomes worse: the distribution narrows around the wrong value.
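The following sketch illustrates both effects; the value of $h$, the unit-variance Gaussian noise model, and $N = 10$ are assumptions chosen to reproduce the numbers quoted above:

```python
import numpy as np

rng = np.random.default_rng(2)
h, N, trials = 10.0, 10, 100_000   # N = 10 so that var{h_hat} = 1/N = 0.1

for alpha in (1.0, 0.5):
    h_i = alpha * h + rng.normal(0.0, 1.0, size=(trials, N))  # var{h_i} = 1
    h_hat = h_i.mean(axis=1)
    print(f"alpha={alpha}: E{{h_hat}} ~= {h_hat.mean():.3f} (true h = {h}), "
          f"var ~= {h_hat.var():.4f} (theory 0.1)")
```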



Question 5
We are told that the estimator $\hat{\sigma}^2$ is unbiased, which means that $E\{\hat{\sigma}^2\} = \sigma^2$.
Next, $\hat{\sigma}^2$ is expressed as a scaled sum of two squares, $\frac{1}{2}\left(x^2[0] + x^2[1]\right)$, where the $x[n]$ terms are normally distributed.
The chi-squared distribution has a similar form; a statistic $Y$ of the following form is chi-squared distributed:
$$Y = \sum_{i=1}^{k}\left(\frac{X_i - \mu_i}{\sigma_i}\right)^2 \tag{16}$$

The chi-squared distribution has a pdf (for $k = 2$) of:
$$p(y) = \frac{1}{2\,\Gamma(1)}\, e^{-y/2} = \frac{1}{2}\, e^{-y/2} \tag{17}$$

Using our knowledge of $x[n]$ (zero mean, variance $\sigma^2$), we can write:
$$Y = \frac{x^2[0]}{\sigma^2} + \frac{x^2[1]}{\sigma^2} \tag{18}$$

Comparing this to the expression for $\hat{\sigma}^2$, we can see that
$$\hat{\sigma}^2 = \frac{Y\sigma^2}{2} \tag{19}$$

Next, if we know the pdf of a random variable $X$, it is possible to calculate the pdf of another variable $Y$ that is related to $X$. The basic relationship is:
$$|f_Y(y)\, dy| = |f_X(x)\, dx| \tag{20}$$
$$|f_Y(y)| = \frac{|f_X(x)|}{|dy/dx|} \tag{21}$$
So in our case,
$$p(\hat{\sigma}^2) = \frac{1}{\sigma^2}\,\exp\left\{-\frac{\hat{\sigma}^2}{\sigma^2}\right\} \tag{22}$$

The chi-squared distribution is only defined for $\hat{\sigma}^2 \geq 0$. From the equation it is evident that the pdf is a decaying exponential, which is not symmetric.
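A brief simulation sketch ($\sigma$ and the trial count are assumed values) compares the empirical density of $\hat{\sigma}^2$ against the pdf in (22):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, trials = 2.0, 500_000               # assumed values

x = rng.normal(0.0, sigma, size=(trials, 2))
est = 0.5 * (x[:, 0] ** 2 + x[:, 1] ** 2)  # sigma^2-hat = (1/2)(x^2[0] + x^2[1])

counts, edges = np.histogram(est, bins=20, range=(0.0, 8.0 * sigma ** 2),
                             density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = np.exp(-centers / sigma ** 2) / sigma ** 2  # pdf from (22)
for c, emp, th in list(zip(centers, counts, theory))[:5]:
    print(f"sigma^2-hat ~ {c:5.2f}: empirical {emp:.4f}, theory {th:.4f}")
```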

Question 6
For the given estimator, we can define the mean and the variance:
$$\hat{A} = \sum_{n=0}^{N-1} a_n x[n] \tag{23}$$
$$E\{\hat{A}\} = \sum_{n=0}^{N-1} a_n E\{x[n]\} = \sum_{n=0}^{N-1} a_n A \tag{24}$$
$$\operatorname{var}\{\hat{A}\} = \sum_{n=0}^{N-1} a_n^2\,\operatorname{var}\{x[n]\} = \sum_{n=0}^{N-1} a_n^2 \sigma^2 \tag{25}$$
Given the unbiasedness constraint, we can say that
$$\sum_{n=0}^{N-1} a_n = 1 \tag{26}$$



Then we minimize the variance using a Lagrange multiplier:
$$J = \operatorname{var}\{\hat{A}\} + \lambda\left(\sum_{n=0}^{N-1} a_n - 1\right) = \sum_{n=0}^{N-1} a_n^2 \sigma^2 + \lambda\left(\sum_{n=0}^{N-1} a_n - 1\right) \tag{27}$$
$$\frac{\partial J}{\partial a_i} = 2 a_i \sigma^2 + \lambda = 0 \tag{28}$$
$$a_i = -\frac{\lambda}{2\sigma^2} \tag{29}$$
The value of $a_i$ is a constant independent of $i$, which means all $a_i$ are equal. From the constraint we can then say that $N a_i = 1$, or $a_i = 1/N$.
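A small numerical sketch ($N$, $\sigma^2$, and the alternative weightings are arbitrary assumptions) illustrates that among weights satisfying (26), the equal weights $a_n = 1/N$ give the smallest variance:

```python
import numpy as np

N, sigma2 = 10, 1.0                        # assumed values

candidates = {
    "equal a_n = 1/N": np.full(N, 1.0 / N),
    "linear ramp":     np.arange(1, N + 1) / np.arange(1, N + 1).sum(),
    "single sample":   np.eye(N)[0],
}
for name, a in candidates.items():
    assert abs(a.sum() - 1.0) < 1e-12      # constraint (26): weights sum to 1
    print(f"{name:16s}: var = {sigma2 * (a ** 2).sum():.4f}")
# equal weights attain the minimum, sigma^2 / N = 0.1
```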

Question 7
When we are interested in evaluating something of the form $\Pr\{|\hat{\theta} - \theta| > \epsilon\}$, we can look at z-scores (a.k.a. standard scores).
The z-score is a normalized metric, so we can write our expression as:
$$\Pr\left\{\frac{|\hat{\theta} - \theta|}{\sqrt{\operatorname{var}(\hat{\theta})}} > \frac{\epsilon}{\sqrt{\operatorname{var}(\hat{\theta})}}\right\} < \Pr\left\{\frac{|\check{\theta} - \theta|}{\sqrt{\operatorname{var}(\check{\theta})}} > \frac{\epsilon}{\sqrt{\operatorname{var}(\check{\theta})}}\right\} \tag{30}$$
The probability ($\Pr$ above) can be calculated as:
$$\Pr(x > a) = \int_a^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx \tag{31}$$

This rewrites the expression as:
$$\int_a^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx < \int_b^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx \tag{32}$$
Here, $a = \frac{\epsilon}{\sqrt{\operatorname{var}(\hat{\theta})}}$ and $b = \frac{\epsilon}{\sqrt{\operatorname{var}(\check{\theta})}}$, and $a > b$. Since we are integrating a decaying exponential, shifting the lower limit out to $a$ makes the probability associated with $\hat{\theta}$ smaller than that for $\check{\theta}$.
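A sketch of (32) in terms of the standard normal tail $Q(a) = \frac{1}{2}\operatorname{erfc}(a/\sqrt{2})$; the two variances and $\epsilon$ below are assumed values, chosen so that $\operatorname{var}(\hat{\theta}) < \operatorname{var}(\check{\theta})$:

```python
from math import erfc, sqrt

def Q(a: float) -> float:
    """Standard normal tail probability P(X > a)."""
    return 0.5 * erfc(a / sqrt(2.0))

eps, var_hat, var_check = 0.5, 0.04, 0.25  # assumed; var_hat < var_check
a = eps / sqrt(var_hat)                    # larger lower limit -> smaller tail
b = eps / sqrt(var_check)
print("2*Q(a) =", 2 * Q(a), " <  2*Q(b) =", 2 * Q(b))
```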

Question 8
Similarly to the previous question, we can normalize the probability expression and rewrite it in terms of an integral:
$$\Pr\left\{\frac{|\hat{A} - A|}{\sqrt{\operatorname{var}(\hat{A})}} > \frac{\epsilon}{\sqrt{\operatorname{var}(\hat{A})}}\right\} = 2\int_{\epsilon/\sqrt{\operatorname{var}(\hat{A})}}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx \tag{33}$$
The lower limit on the integral is $\frac{\epsilon}{\sqrt{\operatorname{var}(\hat{A})}}$, where $\operatorname{var}(\hat{A}) = \sigma^2/N$, making the limit $\frac{\epsilon\sqrt{N}}{\sigma}$. The value of the integral reduces with increasing lower limit, as the integrand is a decaying exponential past $x = 0$ (and this always holds here since we are looking at absolute values). As $N \to \infty$, $\frac{\epsilon\sqrt{N}}{\sigma} \to \infty$ and hence the probability $\to 0$.
Next, we look at the estimator $\check{A} = \frac{1}{2N}\sum_{n=0}^{N-1} x[n]$. The variance of this estimator is:
$$\operatorname{var}(\check{A}) = \frac{\sigma^2}{4N} \tag{34}$$


At first glance this might look like it will reduce the probability faster than the previous estimator, but if we analyze $\check{A}$ more closely, we notice that it is biased (its expected value is $A/2$), and hence $\Pr \to 1$ as $N \to \infty$.
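A short sketch with assumed $A$, $\sigma$, and $\epsilon$ shows the tail probability of $\check{A}$ saturating near 1 rather than decaying, because $\check{A}$ concentrates around $A/2$:

```python
import numpy as np

rng = np.random.default_rng(4)
A, sigma, eps, trials = 4.0, 1.0, 0.5, 5_000  # assumed values

for N in (10, 100, 1000):
    x = A + rng.normal(0.0, sigma, size=(trials, N))
    A_check = x.sum(axis=1) / (2 * N)         # concentrates around A/2, not A
    p = (np.abs(A_check - A) > eps).mean()
    print(f"N={N:5d}: P(|A_check - A| > eps) ~= {p:.3f}")
```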

Question 9
In Example 2.1 the estimator was:
$$\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n] \tag{35}$$
$$E\{\hat{A}\} = A \tag{36}$$

Now, for $\hat{\theta} = \hat{A}^2$,
$$E\{\hat{\theta}\} = \frac{1}{N^2}\, E\left\{\left(\sum_{n=0}^{N-1} x[n]\right)^2\right\} = \frac{1}{N^2}\left(E\left\{\sum_{n=0}^{N-1} x[n]\right\}^2 + \operatorname{var}\left\{\sum_{n=0}^{N-1} x[n]\right\}\right) \tag{37}$$
$$E\{\hat{\theta}\} = \frac{(NA)^2 + N\sigma^2}{N^2} = A^2 + \frac{\sigma^2}{N} \tag{38}$$
Since the expected value isn't equal to $\theta = A^2$, the estimator is biased.
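A quick Monte Carlo sketch ($A$, $\sigma$, $N$, and the trial count are assumed values) reproduces the bias $\sigma^2/N$ from (38):

```python
import numpy as np

rng = np.random.default_rng(5)
A, sigma, N, trials = 3.0, 2.0, 20, 200_000   # assumed values

x = A + rng.normal(0.0, sigma, size=(trials, N))
theta_hat = x.mean(axis=1) ** 2               # A-hat squared

print("E{theta_hat} ~=", theta_hat.mean())
print("A^2 + sigma^2/N =", A ** 2 + sigma ** 2 / N)  # bias sigma^2/N, -> 0 as N grows
```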

Question 10
We already know that the estimator $\hat{A}$ is unbiased, so we only need to find the expected value of $\hat{\sigma}^2$:
$$\begin{aligned}
E\{\hat{\sigma}^2\} &= \frac{N}{N-1}\, E\{(x[n] - \hat{A})^2\} \\
&= \frac{N}{N-1}\left(E\{x[n] - \hat{A}\}^2 + \operatorname{var}\{x[n] - \hat{A}\}\right) \\
&= \frac{N}{N-1}\left(\left(E\{x[n]\} - \frac{1}{N}\sum_{m=0}^{N-1} E\{x[m]\}\right)^2 + \operatorname{var}\left\{\frac{N-1}{N}\, x[n] - \frac{1}{N}\sum_{m=0,\, m\neq n}^{N-1} x[m]\right\}\right) \\
&= \frac{N}{N-1}\left((A - A)^2 + \frac{(N-1)^2\sigma^2}{N^2} + \frac{N-1}{N^2}\,\sigma^2\right) \\
&= \frac{N}{N-1}\cdot\frac{N-1}{N}\,\sigma^2 = \sigma^2
\end{aligned}$$
Thus the estimator is unbiased.
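A minimal check with assumed values confirms that dividing by $N - 1$ yields an unbiased variance estimate:

```python
import numpy as np

rng = np.random.default_rng(6)
A, sigma, N, trials = 1.0, 3.0, 8, 300_000    # assumed values

x = A + rng.normal(0.0, sigma, size=(trials, N))
A_hat = x.mean(axis=1, keepdims=True)
s2 = ((x - A_hat) ** 2).sum(axis=1) / (N - 1) # divide by N-1, not N

print("E{s2} ~=", s2.mean(), "  theory:", sigma ** 2)
```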

Question 11
We are dealing with a uniform distribution, the pdf of which is defined as:
$$f_X(x) = \frac{1}{b - a} = \frac{1}{1/\theta - 0} = \theta \tag{39}$$

Because we want an unbiased estimator, we also know that
$$E\{\hat{\theta}\} = \theta \tag{40}$$


In general we can also write the expected value as
$$E\{\hat{\theta}\} = \theta = \int g(x)\, f(x)\, dx \tag{41}$$
Here $g(x)$ is a measurable function of $x$, and $f(x)$ is the pdf of $x$.


$$\theta = \int g(x)\, f(x)\, dx = \int_0^{1/\theta} g(x[0])\,\theta\, d(x[0])$$
Cancelling out $\theta$, we get:
$$1 = \int_0^{1/\theta} g(x[0])\, d(x[0]) = \int_0^{1/\theta} g(u)\, du$$

Next we need to prove that a function $g(x[0])$ cannot be found that satisfies this condition for all $\theta > 0$. Let's take two values of $\theta$, $\theta_1$ and $\theta_2$, that are not equal, with $\theta_1 < \theta_2$. Then,
$$1 = \int_0^{1/\theta_1} g(u)\, du$$
$$1 = \int_0^{1/\theta_2} g(u)\, du$$
Subtracting these two results in:
$$0 = \int_{1/\theta_2}^{1/\theta_1} g(u)\, du$$
But since this must hold for every pair $\theta_1 \neq \theta_2$, the integral of $g(u)$ over every such interval must vanish, which forces $g(u) = 0$ almost everywhere. That contradicts $E\{\hat{\theta}\} = \theta$, so no unbiased estimator of $\theta$ exists.
