
University of Illinois Fall 2011

ECE534: Final Exam


Friday, December 16, 2011
7:00 p.m. – 10:00 p.m.
103 Talbot Laboratory
1. (a) For $-\infty < t < \infty$,
$$P\{At^2 + t + 1 \le 0\} = P\left\{A \le -\frac{1+t}{t^2}\right\} = P\left\{\frac{A}{3} \le -\frac{1+t}{3t^2}\right\} = Q\left(\frac{1+t}{3t^2}\right).$$
(b) By the quadratic formula, a function of the form $at^2 + bt + c$ with $c > 0$ is nonnegative
for all $t$ if and only if $b^2 - 4ac \le 0$. So the event in question is the same as $\{1 - 4A \le 0\}$,
or $\{A \ge \tfrac{1}{4}\}$, or $\{\tfrac{A}{3} \ge \tfrac{1}{12}\}$. The event thus has probability $Q\left(\tfrac{1}{12}\right)$.
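A quick Monte Carlo check of part (b) (a sketch only; it assumes $A \sim N(0, 3^2)$, which is consistent with the division by 3 in the solution above):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    A = rng.normal(0.0, 3.0, size=1_000_000)   # assumed: A ~ N(0, 3^2)

    # At^2 + t + 1 >= 0 for all t  iff  1 - 4A <= 0  iff  A >= 1/4
    p_sim = np.mean(A >= 0.25)
    p_exact = norm.sf(1 / 12)                   # Q(1/12)
    print(p_sim, p_exact)                       # both approximately 0.467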
2. By the balance equations for birth-death processes, $\lambda_{k-1}\pi_{k-1} = \mu_k \pi_k$, so
$$\frac{\mu_k}{\lambda_{k-1}} = \frac{\pi_{k-1}}{\pi_k}
 = \frac{\binom{n}{k-1} p^{k-1} (1-p)^{n-k+1}}{\binom{n}{k} p^{k} (1-p)^{n-k}}
 = \frac{n!\, k!\, (n-k)!\, (1-p)}{(k-1)!\, (n-k+1)!\, n!\, p}
 = \frac{k(1-p)}{(n-k+1)p}.$$
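A short numerical check of this ratio (a sketch; it assumes unit birth rates, so that $\mu_k = \pi_{k-1}/\pi_k$, and a Binomial$(n,p)$ equilibrium distribution; the values $n = 10$, $p = 0.3$ are arbitrary):

    import numpy as np
    from scipy.stats import binom

    n, p = 10, 0.3
    pi = binom.pmf(np.arange(n + 1), n, p)        # Binomial(n, p) equilibrium pmf
    k = np.arange(1, n + 1)
    mu = k * (1 - p) / ((n - k + 1) * p)          # derived death rates mu_k

    # detailed balance with birth rates lambda_k = 1:  pi_{k-1} = pi_k * mu_k
    print(np.allclose(pi[:-1], pi[1:] * mu))      # True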
3. (a) $R_Y(s,t) = E[Y_s Y_t] = E[X_s \cos(2\pi f_o s)\, X_t \cos(2\pi f_o t)] = R_X(s-t)\cos(2\pi f_o s)\cos(2\pi f_o t)$.
(b) No. $R_Y(s,t)$ above is not a function of $s - t$ alone. For example, $R_Y(t,t) = R_X(0)\cos^2(2\pi f_o t)$
is not constant.
(c) Yes, because $R_Y$ is a continuous function.
(d) Since $X$, and therefore $Y$, is a mean zero process, $\mathrm{Var}(Y_t) = E[Y_t^2] = R_Y(t,t) = \cos^2(2\pi f_o t)$, and therefore
$$P\{Y_t \ge 3\} = P\left\{\frac{Y_t}{|\cos(2\pi f_o t)|} \ge \frac{3}{|\cos(2\pi f_o t)|}\right\} = Q\left(\frac{3}{|\cos(2\pi f_o t)|}\right).$$
This probability is maximized by any $t$ that maximizes $|\cos(2\pi f_o t)|$, such as $t = 0$ (or any $t$ of
the form $\frac{n}{2f_o}$). So $\max_t P\{Y_t \ge 3\} = P\{Y_0 \ge 3\} = Q(3)$.
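A small numerical check of part (d) (a sketch; the value of $f_o$ below is arbitrary):

    import numpy as np
    from scipy.stats import norm

    fo = 2.0                                      # assumed carrier frequency
    t = np.linspace(-1.0, 1.0, 20001)
    c = np.abs(np.cos(2 * np.pi * fo * t))
    prob = norm.sf(3.0 / np.maximum(c, 1e-12))    # Q(3/|cos(2 pi fo t)|); ~0 where cos ~ 0
    print(prob.max(), norm.sf(3.0))               # both approximately 1.35e-3; max attained at t = 0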
4. (a) The process $Y$ can be viewed as the output of a linear, time-invariant system with
impulse response function $h(t) = \sum_{n=0}^{\infty} \alpha^n \delta(t-n)$, and transfer function
$$H(\omega) = \int_{-\infty}^{\infty} \sum_{n=0}^{\infty} \alpha^n \delta(t-n)\, e^{-j\omega t}\, dt
 = \sum_{n=0}^{\infty} \alpha^n e^{-j\omega n}
 = \sum_{n=0}^{\infty} (\alpha e^{-j\omega})^n
 = \frac{1}{1 - \alpha e^{-j\omega}}.$$
(b) Since $e^{-j\omega} = \cos(\omega) - j\sin(\omega)$,
$$|H(\omega)|^2 = \frac{1}{(1-\alpha\cos(\omega))^2 + \alpha^2\sin^2(\omega)} = \frac{1}{1 - 2\alpha\cos(\omega) + \alpha^2}.$$
Another approach is to start with $|H(\omega)|^2 = H(\omega)H^{*}(\omega)$ and use $H^{*}(\omega) = \frac{1}{1-\alpha e^{j\omega}}$.
(c) Thinking in the time domain, by inspection, $X_t = Y_t - \alpha Y_{t-1}$, which describes the LTI
system with impulse response function $k(t) = \delta(t) - \alpha\delta(t-1)$. In the frequency domain,
$Y$ is obtained from $X$ using transfer function $H$, so $X$ can be recovered from $Y$ using
the LTI system with transfer function $K(\omega) = \frac{1}{H(\omega)} = 1 - \alpha e^{-j\omega}$, corresponding in the
time domain to the impulse response function $k$ already given.
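A brief numerical sketch of parts (b) and (c); the value of $\alpha$ is an arbitrary assumption with $|\alpha| < 1$, and the input is treated as a discrete sequence:

    import numpy as np

    alpha = 0.6                                       # assumed, |alpha| < 1
    w = np.linspace(-np.pi, np.pi, 1001)
    H = 1.0 / (1.0 - alpha * np.exp(-1j * w))
    print(np.allclose(np.abs(H) ** 2,
                      1.0 / (1.0 - 2 * alpha * np.cos(w) + alpha ** 2)))   # True

    # part (c): pass a sequence through h, then recover it with k(t) = delta(t) - alpha*delta(t-1)
    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = np.array([sum(alpha ** n * x[t - n] for n in range(t + 1)) for t in range(len(x))])
    x_rec = y.copy()
    x_rec[1:] = y[1:] - alpha * y[:-1]                # X_t = Y_t - alpha Y_{t-1}
    print(np.allclose(x, x_rec))                      # True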
5. (a) The orthogonality condition is that $X - \widehat{X}$ should be orthogonal to any linear combination
of the $Y$'s, or equivalently, it is to be orthogonal to $Y_{i_o,j_o}$ for any $i_o, j_o$. That is, for all
$i_o, j_o$, $E\left[\left(X - \sum_{i=1}^{N}\sum_{j=1}^{N} h_{i,j}(X + U_i + V_j)\right)(X + U_{i_o} + V_{j_o})\right] = 0$, or equivalently,
$$\sigma^2 = \sigma^2\left(\sum_{i=1}^{N}\sum_{j=1}^{N} h_{i,j}\right) + \left(\sum_{j=1}^{N} h_{i_o,j}\right) + \left(\sum_{i=1}^{N} h_{i,j_o}\right) \quad \text{for all } i_o, j_o.$$
(b) In words, $\sigma^2$ should equal $\sigma^2$ times the sum of all the $h_{i,j}$'s, plus the $i_o$-th row
sum of $h$, plus the $j_o$-th column sum of $h$. Let $c$ be the constant such that $Nc$ is the sum of
all the $h_{i,j}$'s. Then all row sums of $h$ and all column sums of $h$ must equal $c$, and all $N^2$
equations to be satisfied reduce to the single equation $\sigma^2 = (N\sigma^2 + 2)c$, or $c = \frac{\sigma^2}{N\sigma^2+2}$.
Any choice of $h$ such that all row sums of $h$ and all column sums of $h$ are equal to $c$
yields the same optimal estimator, $\widehat{X} = \widehat{E}[X|Y]$. Two specific choices of $h$ are $h_{i,j} \equiv \frac{c}{N}$,
or $h_{i,j} = cI_{\{i=j\}}$. (The orthogonality principle implies that $\widehat{X}$ is unique, but because of
the strong dependencies among the $Y$'s for this problem, $h$ is not uniquely determined.)
(c) By part (b), $\widehat{X} = cNX + c\left(\sum_{i=1}^{N} U_i + \sum_{j=1}^{N} V_j\right)$, so
$E[\widehat{X}^2] = c^2(N^2\sigma^2 + 2N) = \frac{\sigma^4 N}{N\sigma^2+2}$. Thus, $\mathrm{MMSE} = E[X^2] - E[\widehat{X}^2] = \sigma^2 - \frac{\sigma^4 N}{N\sigma^2+2} = \frac{2\sigma^2}{N\sigma^2+2}$.
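A Monte Carlo sketch of this result; it assumes, consistently with the coefficients above, that $X$ has variance $\sigma^2$, the $U_i$'s and $V_j$'s have unit variance, and all are mean zero and independent (the values $N = 4$ and $\sigma^2 = 2$ are arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    N, sigma2, trials = 4, 2.0, 200_000
    c = sigma2 / (N * sigma2 + 2)                        # c = sigma^2 / (N sigma^2 + 2)

    X = rng.normal(0.0, np.sqrt(sigma2), size=trials)
    U = rng.normal(size=(trials, N))
    V = rng.normal(size=(trials, N))
    Y = X[:, None, None] + U[:, :, None] + V[:, None, :]    # Y_{i,j} = X + U_i + V_j

    Xhat = (c / N) * Y.sum(axis=(1, 2))                  # h_{i,j} = c/N: all row/column sums equal c
    mse_sim = np.mean((X - Xhat) ** 2)
    mse_theory = 2 * sigma2 / (N * sigma2 + 2)           # 2 sigma^2 / (N sigma^2 + 2)
    print(mse_sim, mse_theory)                           # both approximately 0.4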
6. (a) Yes, because it is defined by a recursion driven by a sequence of independent random
variables.
(b) The sequence $X_n(\omega)$ is monotone nondecreasing in $n$ for each $\omega$. Also, by induction on
$n$, $X_n(\omega) \le 1$ for all $n$ and $\omega$. Since bounded monotone sequences have finite limits,
$\lim_{n\to\infty} X_n$ exists in the a.s. sense and the limit is less than or equal to one with
probability one.
(c) Since a.s. convergence of bounded sequences implies m.s. convergence, $\lim_{n\to\infty} X_n$ also
exists in the m.s. sense.
(d) Since $(X_n)$ converges a.s., it also converges in probability to the same random variable,
so $Z = \lim_{n\to\infty} X_n$ a.s. It can be shown that $P\{Z = 1\} = 1$. Here is one of several
proofs. Let $0 < \epsilon < 1$. Let $a_0 = 0$ and $a_k = \frac{a_{k-1} + 1 - \epsilon}{2}$ for $k \ge 1$. By induction,
$a_k = (1-\epsilon)(1 - 2^{-k})$. Consider the sequence of events $\{U_i \ge 1 - \epsilon\}$ for $i \ge 1$. These
events are independent and each has probability $\epsilon$. So, for any $k \ge 1$, with probability
one at least $k$ of these events happen. If at least $k$ of these events happen, then $Z \ge a_k$.
So $P\{(1-\epsilon)(1-2^{-k}) \le Z \le 1\} = 1$. Since $\epsilon$ can be arbitrarily close to zero and $k$ can
be arbitrarily large, it follows that $P\{Z = 1\} = 1$.
ANOTHER APPROACH is to calculate that $E[X_n \mid X_{n-1} = v] = v + \frac{(1-v)^2}{2}$. Thus,
$E[X_n] = E[X_{n-1}] + \frac{E[(1-X_{n-1})^2]}{2} \ge E[X_{n-1}] + \frac{(1 - E[X_{n-1}])^2}{2}$, where the inequality is Jensen's.
Since $E[X_n] \to E[Z]$, it follows that $E[Z] \ge E[Z] + \frac{(1-E[Z])^2}{2}$. So $E[Z] = 1$. In view of
the fact $P\{Z \le 1\} = 1$, it follows that $P\{Z = 1\} = 1$.
7. (a) $p(z_1, z_2, z_3, z_4)$ has the form $c2^{-i}$ for any possible sequence $z_1, z_2, z_3, z_4$, so it is sufficient
to track the exponent $i$. Smaller values of $i$ correspond to larger probabilities. Give a
transition label $j$ if the transition has conditional probability $2^{-j}$. Since $c$ is common to all
paths, it is omitted.
[Trellis diagram for part (a): states at times t = 1, 2, 3, 4, edge labels giving the exponents
of the transition probabilities, each node labeled with the minimum weight of paths into it,
and the winning node marked.]
To show how the Viterbi algorithm progresses, we use bold edges to denote the last
edge of the minimum weight path into each node, and inside the circle for each node we
denote the minimum weight of paths up to that node. Tracing back from the winner,
we see (1,0,2,2) is the most likely path.
(b) The joint likelihood $p(z, y)$ is given by
$$p(z, y) = p(z)p(y|z) = c2^{-i}(2\pi\sigma^2)^{-2}\exp\left(-\frac{\sum_{t=1}^{4}(y_t - z_t)^2}{2\sigma^2}\right) = c'\, 2^{-\left(i + \sum_{t=1}^{4}(y_t - z_t)^2\right)},$$
where $i$ depends on the sequence $z$ as in part (a), and $c' = c(2\pi\sigma^2)^{-2}$. So for $y$ given,
the MAP estimate $\widehat{Z}_{MAP}(y)$ is the path $z$ minimizing $i + \sum_{t=1}^{4}(y_t - z_t)^2$. The number
$i$ is the sum of edge weights for the trellis below, including the weights on the edges entering
the trellis, which correspond to the prior distribution. A weight $(y_t - z)^2$ is placed on node
$(t, z)$ for $1 \le t \le 4$ and $z \in \{0, 1, 2\}$. The MAP estimate is then the path that minimizes
the sum of all edge weights and node weights along the path.
[Trellis diagram for part (b): same states and times as in part (a), with edge weights as before,
node weights $(y_t - z)^2$, each node labeled with the minimum total weight of paths up to and
including that node, and the winning node marked.]
To show how the Viterbi algorithm progresses, we use bold edges to denote the last
edge of the minimum weight path into each node, and inside the circle for each node we
denote the minimum weight of paths up to that node, including the weight associated
with that node. Tracing back from the winner, we see $\widehat{Z}_{MAP}(y) = (1, 0, 1, 0)$.
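The minimum-weight search described above can be written as a generic min-sum Viterbi recursion. Below is a minimal sketch; the function name and every numerical value in the example (initial-edge weights, transition weights, observations) are hypothetical placeholders, not the exam's actual trellis:

    import numpy as np

    def viterbi_min_sum(init_w, edge_w, node_w):
        """init_w[z]: weight of the initial edge into state z at t = 1.
           edge_w[z, z2]: weight of the transition edge z -> z2.
           node_w[t, z]: weight placed on node (t, z), e.g. (y_t - z)^2.
           Returns the path minimizing the total edge plus node weight."""
        T, S = node_w.shape
        cost = init_w + node_w[0]                  # best weight into each state at t = 1
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            cand = cost[:, None] + edge_w + node_w[t][None, :]   # via each predecessor
            back[t] = cand.argmin(axis=0)
            cost = cand.min(axis=0)
        path = [int(cost.argmin())]                # trace back from the winner
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # hypothetical example with states {0, 1, 2} and T = 4 (not the exam's trellis)
    init_w = np.array([0.0, 1.0, 2.0])
    edge_w = np.array([[1.0, 2.0, 2.0], [1.0, 1.0, 2.0], [2.0, 1.0, 1.0]])
    y = np.array([0.9, 0.2, 1.1, 0.1])
    node_w = (y[:, None] - np.arange(3)[None, :]) ** 2   # (y_t - z)^2 node weights
    print(viterbi_min_sum(init_w, edge_w, node_w))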