
STATS 200B Homework 8

Ying Wang 403797428


March 1st, 2011

1 Problem 1
Let $X_1, \dots, X_n \sim p(x; \theta_{\text{true}})$ independently. Consider the estimating equation
\[
\frac{1}{n} \sum_{i=1}^{n} h(X_i; \theta) = 0,
\]
where $E_\theta(h(X; \theta)) = 0$ for all $\theta$. Let $\hat{\theta}$ be the solution to the above estimating equation.

1.1 Please show that both the method of moments and the maximum likelihood estimators are special cases.
The original method of moments is as follows:
\[
E_\theta(X^j) = \frac{1}{n} \sum_{i=1}^{n} X_i^j, \qquad j = 1, \dots, p.
\]
Let
\[
h(x) = \begin{pmatrix} x \\ x^2 \\ \vdots \\ x^p \end{pmatrix}
\]
and write $m(\theta) = E_\theta(h(X))$. Then the moment equations say
\[
m(\theta) = \frac{1}{n} \sum_{i=1}^{n} h(X_i),
\]
which is equivalent to
\[
\frac{1}{n} \sum_{i=1}^{n} \bigl( h(X_i) - m(\theta) \bigr) = 0.
\]
Let $\tilde{h}(X; \theta) = h(X) - m(\theta)$. Then the moment equations take the form
\[
\frac{1}{n} \sum_{i=1}^{n} \tilde{h}(X_i; \theta) = 0
\]
and
\[
E_\theta(\tilde{h}(X; \theta)) = E_\theta(h(X)) - m(\theta) = m(\theta) - m(\theta) = 0. \qquad (1)
\]
Therefore the method of moments is a special case.
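As a concrete illustration (my own, not part of the assignment): assuming an Exponential model with rate $\theta$, the first-moment equation can be written with $h(x; \theta) = x - 1/\theta$, which satisfies $E_\theta(h(X; \theta)) = 0$, and solved numerically as an estimating equation. A minimal Python sketch:

import numpy as np
from scipy.optimize import brentq

# Hypothetical example (assumption: X ~ Exponential with rate theta), where
# h(x; theta) = x - 1/theta and E_theta[h(X; theta)] = 0 for every theta.
rng = np.random.default_rng(0)
theta_true = 2.0
x = rng.exponential(scale=1.0 / theta_true, size=5000)

def estimating_equation(theta):
    # (1/n) * sum_i h(X_i; theta)
    return np.mean(x) - 1.0 / theta

theta_hat = brentq(estimating_equation, 1e-6, 1e6)  # root of the empirical estimating equation
print(theta_hat)  # close to theta_true = 2.0; here the root is simply 1 / x.mean()

In this case the root has the closed form $1/\bar{X}$, but the same root-finding template applies to any choice of $h$.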
Assume $X_1, X_2, \dots, X_n \sim p(x; \theta)$; $\hat{\theta}_{MLE}$ is the solution of
\[
\max_\theta \; L(\theta) = \prod_{i=1}^{n} p(X_i; \theta),
\]
or equivalently
\[
\max_\theta \; l(\theta) = \sum_{i=1}^{n} \log p(X_i; \theta),
\]
so $\hat{\theta}_{MLE}$ solves
\[
l'(\theta) = \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log p(X_i; \theta) = 0. \qquad (2)
\]
Let $h(X; \theta) = \frac{\partial}{\partial \theta} \log p(X; \theta)$. Then the likelihood equation (2) becomes
\[
\frac{1}{n} \sum_{i=1}^{n} h(X_i; \theta) = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log p(X_i; \theta) = 0,
\]
and, interchanging differentiation and integration,
\[
E_\theta(h(X; \theta)) = E_\theta\!\left( \frac{\partial}{\partial \theta} \log p(X; \theta) \right)
= \int \frac{1}{p(X; \theta)} \cdot \frac{\partial p(X; \theta)}{\partial \theta} \cdot p(X; \theta) \, dX
= \frac{\partial}{\partial \theta} \int p(X; \theta) \, dX
= \frac{\partial}{\partial \theta} \, 1 = 0. \qquad (3)
\]
Therefore the maximum likelihood estimator is a special case.
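Similarly (again my own sketch, assuming a Poisson($\theta$) model), the likelihood equation can be treated as an estimating equation with $h(x; \theta) = \frac{\partial}{\partial \theta} \log p(x; \theta) = x/\theta - 1$:

import numpy as np
from scipy.optimize import brentq

# Hypothetical example (assumption: X ~ Poisson(theta)), where the estimating
# function is the score h(x; theta) = d/dtheta log p(x; theta) = x/theta - 1.
rng = np.random.default_rng(1)
theta_true = 3.0
x = rng.poisson(lam=theta_true, size=5000)

def score_equation(theta):
    # (1/n) * sum_i h(X_i; theta)
    return np.mean(x / theta - 1.0)

theta_mle = brentq(score_equation, 1e-6, 1e6)
print(theta_mle, x.mean())  # the two agree: for the Poisson mean the MLE is the sample mean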

1.2 Please argue that $\hat{\theta} \to \theta_{\text{true}}$ as $n \to \infty$

$\hat{\theta}_n$ is the solution to $\frac{1}{n} \sum_{i=1}^{n} h(X_i; \theta) = 0$. By the Law of Large Numbers, as $n$ goes to infinity, $\frac{1}{n} \sum_{i=1}^{n} h(X_i; \theta)$ goes to $E_{\theta_{\text{true}}}(h(X; \theta))$ for each $\theta$. So as $n$ goes to infinity, $\hat{\theta}_n$ solves the limiting equation $E_{\theta_{\text{true}}}(h(X; \theta)) = 0$. Since $E_{\theta_{\text{true}}}(h(X; \theta_{\text{true}})) = 0$ by assumption, $\theta_{\text{true}}$ satisfies this limiting equation, and hence $\hat{\theta}_n \to \theta_{\text{true}}$.
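This law-of-large-numbers argument can also be seen numerically; a quick sketch (my own, reusing the hypothetical Poisson score $h(x; \theta) = x/\theta - 1$ from above) that solves the estimating equation for growing sample sizes:

import numpy as np
from scipy.optimize import brentq

# Illustration (assumption: X ~ Poisson(theta_true)): the root of the empirical
# estimating equation approaches theta_true as n grows.
rng = np.random.default_rng(2)
theta_true = 3.0

for n in [10, 100, 1_000, 10_000, 100_000]:
    x = rng.poisson(lam=theta_true, size=n)
    theta_hat = brentq(lambda t: np.mean(x / t - 1.0), 1e-6, 1e6)
    print(n, theta_hat)  # theta_hat -> theta_true = 3.0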


1.3 Please derive the asymptotic distribution of $\sqrt{n}(\hat{\theta} - \theta_{\text{true}})$.

Expand the estimating equation
\[
\frac{1}{n} \sum_{i=1}^{n} h(X_i; \theta) = 0
\]
to first order around $\theta = \theta_{\text{true}}$. The intercept at $\theta_{\text{true}}$ is
\[
\frac{1}{n} \sum_{i=1}^{n} h(X_i; \theta_{\text{true}})
\]
and the slope is
\[
\frac{1}{n} \sum_{i=1}^{n} h'(X_i; \theta_{\text{true}}),
\]
where $h'$ denotes the derivative with respect to $\theta$. Evaluating the linearized equation at $\theta = \hat{\theta}$ gives
\[
\frac{1}{n} \sum_{i=1}^{n} h(X_i; \theta_{\text{true}}) + \frac{1}{n} \sum_{i=1}^{n} h'(X_i; \theta_{\text{true}}) \, (\hat{\theta} - \theta_{\text{true}}) \approx 0,
\]
so
\[
\hat{\theta} - \theta_{\text{true}} = - \frac{(1/n) \sum_{i=1}^{n} h(X_i; \theta_{\text{true}})}{(1/n) \sum_{i=1}^{n} h'(X_i; \theta_{\text{true}})}
\]
and therefore
\[
\sqrt{n}\,(\hat{\theta} - \theta_{\text{true}}) = - \frac{(1/\sqrt{n}) \sum_{i=1}^{n} h(X_i; \theta_{\text{true}})}{(1/n) \sum_{i=1}^{n} h'(X_i; \theta_{\text{true}})}. \qquad (4)
\]
Since $E_{\theta_{\text{true}}}(h(X; \theta_{\text{true}})) = 0$, the Central Limit Theorem (CLT) gives
\[
\frac{1}{\sqrt{n}} \sum_{i=1}^{n} h(X_i; \theta_{\text{true}}) \to N\bigl(0, \operatorname{Var}_{\theta_{\text{true}}}(h(X; \theta_{\text{true}}))\bigr). \qquad (5)
\]
At the same time, as $n$ goes to infinity, the Law of Large Numbers (LLN) gives
\[
\frac{1}{n} \sum_{i=1}^{n} h'(X_i; \theta_{\text{true}}) \to E_{\theta_{\text{true}}}(h'(X; \theta_{\text{true}})). \qquad (6)
\]
Combining (4), (5), and (6) by Slutsky's theorem, we have
\[
\sqrt{n}\,(\hat{\theta} - \theta_{\text{true}}) \to N\!\left(0, \; \frac{\operatorname{Var}_{\theta_{\text{true}}}(h(X; \theta_{\text{true}}))}{E_{\theta_{\text{true}}}(h'(X; \theta_{\text{true}}))^2}\right). \qquad (7)
\]
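The formula in (7) can be checked by simulation. A small Monte Carlo sketch (my own, assuming a Poisson($\theta$) model with the moment function $h(x; \theta) = x - \theta$, so that $\hat{\theta} = \bar{X}$, $\operatorname{Var}_{\theta}(h) = \theta$, and $E_{\theta}(h') = -1$, which makes the predicted asymptotic variance equal to $\theta_{\text{true}}$):

import numpy as np

# Monte Carlo check of (7). Assumption: X ~ Poisson(theta_true) with
# h(x; theta) = x - theta, so theta_hat is the sample mean, Var(h) = theta_true,
# E[h'] = -1, and (7) predicts Var(sqrt(n) * (theta_hat - theta_true)) ~ theta_true.
rng = np.random.default_rng(3)
theta_true, n, n_reps = 3.0, 200, 20_000

x = rng.poisson(lam=theta_true, size=(n_reps, n))
theta_hat = x.mean(axis=1)                   # root of (1/n) * sum_i (X_i - theta) = 0
z = np.sqrt(n) * (theta_hat - theta_true)

print(z.var())      # empirical variance, approximately 3.0
print(theta_true)   # value predicted by (7): Var(h) / E[h']^2 = theta_true / 1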

1.4 Please show that the optimal estimating equation is the likelihood equation.

Let $V$ be the asymptotic variance of $\sqrt{n}(\hat{\theta} - \theta_{\text{true}})$ from (7), so that
\[
V = \frac{\operatorname{Var}_{\theta_{\text{true}}}(h(X; \theta_{\text{true}}))}{E_{\theta_{\text{true}}}(h'(X; \theta_{\text{true}}))^2}.
\]
Differentiating the identity $E_\theta(h(X; \theta)) = 0$ with respect to $\theta$ gives $E_{\theta_{\text{true}}}(h') = -\operatorname{Cov}_{\theta_{\text{true}}}\bigl(h, \frac{\partial}{\partial\theta} \log p(X; \theta)\big|_{\theta_{\text{true}}}\bigr)$, hence
\[
V = \frac{\operatorname{Var}_{\theta_{\text{true}}}(h)}{\operatorname{Cov}_{\theta_{\text{true}}}\bigl(h, \frac{\partial}{\partial\theta} \log p\bigr)^2}
= \frac{\operatorname{Var}_{\theta_{\text{true}}}(h)}{\operatorname{Corr}_{\theta_{\text{true}}}\bigl(h, \frac{\partial}{\partial\theta} \log p\bigr)^2 \operatorname{Var}_{\theta_{\text{true}}}(h) \operatorname{Var}\bigl(\frac{\partial}{\partial\theta} \log p \big| \theta_{\text{true}}\bigr)}
= \frac{1}{\operatorname{Corr}_{\theta_{\text{true}}}\bigl(h, \frac{\partial}{\partial\theta} \log p\bigr)^2 \operatorname{Var}\bigl(\frac{\partial}{\partial\theta} \log p \big| \theta_{\text{true}}\bigr)}. \qquad (8)
\]
To minimize the variance $V$, we maximize $\operatorname{Corr}_{\theta_{\text{true}}}\bigl(h, \frac{\partial}{\partial\theta} \log p\bigr)^2$ by making
\[
h(X; \theta) \propto \frac{\partial}{\partial \theta} \log p(X; \theta). \qquad (9)
\]
Therefore the optimal estimating equation is the likelihood equation
\[
\frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log p(X_i; \theta) = 0.
\]
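As a quick consistency check (my own remark, not part of the original solution): substituting the score itself for $h$ in (7) and (8) gives $\operatorname{Corr}_{\theta_{\text{true}}}\bigl(h, \frac{\partial}{\partial\theta} \log p\bigr)^2 = 1$, so
\[
V = \frac{1}{\operatorname{Var}\bigl(\frac{\partial}{\partial\theta} \log p(X; \theta) \,\big|\, \theta_{\text{true}}\bigr)} = \frac{1}{I(\theta_{\text{true}})},
\]
which is exactly the information bound derived in Problem 2 below.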

2 Problem 2
Let $X \sim p(x; \theta)$ and let $\delta(X)$ be an unbiased estimator of $\theta$. Please find the lower bound on the variance of $\delta(X)$.

Answer: Since $\delta(X)$ is an unbiased estimator of $\theta$, we have
\[
E_\theta(\delta(X)) = \theta. \qquad (10)
\]
\[
\therefore \int \delta(X)\, p(X; \theta)\, dX = \theta
\]
\[
\therefore \frac{\partial}{\partial \theta} \int \delta(X)\, p(X; \theta)\, dX = \frac{\partial}{\partial \theta}\, \theta = 1
\]
\[
\therefore \int \delta(X)\, p'(X; \theta)\, dX = 1
\]
\[
\therefore \int \delta(X)\, \frac{\partial}{\partial \theta} \log p(X; \theta)\, p(X; \theta)\, dX = 1
\]
\[
\therefore E\!\left[\delta(X)\, \frac{\partial}{\partial \theta} \log p(X; \theta)\right] = 1.
\]
Using (3), which shows that $E\bigl(\frac{\partial}{\partial\theta} \log p(X; \theta)\bigr) = 0$, we have
\[
\operatorname{Cov}\!\left(\delta(X), \frac{\partial}{\partial \theta} \log p(X; \theta)\right)
= E\!\left(\delta(X) \cdot \frac{\partial}{\partial \theta} \log p(X; \theta)\right) - E(\delta(X)) \cdot E\!\left(\frac{\partial}{\partial \theta} \log p(X; \theta)\right)
= 1 - E(\delta(X)) \cdot 0 = 1. \qquad (11)
\]

By the Cauchy–Schwarz inequality, we have
\[
1 = \operatorname{Cov}\!\left(\delta(X), \frac{\partial}{\partial \theta} \log p(X; \theta)\right)^2 \le \operatorname{Var}(\delta(X)) \cdot \operatorname{Var}\!\left(\frac{\partial}{\partial \theta} \log p(X; \theta)\right) \qquad (12)
\]
\[
\therefore \operatorname{Var}(\delta(X)) \ge \frac{1}{\operatorname{Var}\bigl(\frac{\partial}{\partial\theta} \log p(X; \theta)\bigr)} = I^{-1}(\theta). \qquad (13)
\]
Therefore the lower bound on the variance of $\delta(X)$ is $I^{-1}(\theta)$, the Cramér–Rao lower bound.
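For intuition, a small numerical sketch (my own, assuming a Poisson($\theta$) model, in which $\delta(X) = X$ is unbiased for $\theta$) comparing $\operatorname{Var}(\delta(X))$ with $1/\operatorname{Var}\bigl(\frac{\partial}{\partial\theta} \log p(X; \theta)\bigr)$:

import numpy as np

# Numerical check of the bound Var(delta(X)) >= 1 / I(theta).
# Assumption: X ~ Poisson(theta); delta(X) = X is unbiased for theta,
# and the score is d/dtheta log p(X; theta) = X/theta - 1.
rng = np.random.default_rng(4)
theta = 3.0
x = rng.poisson(lam=theta, size=1_000_000)

var_delta = x.var()                    # Var(delta(X)) = Var(X) = theta
fisher_info = (x / theta - 1.0).var()  # I(theta) = Var(score) = 1/theta
print(var_delta, 1.0 / fisher_info)    # both approximately 3.0: the bound is attained here

In this example the bound is attained because $X$ itself is the efficient (maximum likelihood) estimator of the Poisson mean.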
