
Suppose E_θ₀(S²) > 0. Let

c₀ = E_θ₀(ST) / E_θ₀(S²)

and consider T − c₀S:

E_θ₀[(T − c₀S)²] = E_θ₀(T²) − [E_θ₀(ST)]² / E_θ₀(S²) < E_θ₀(T²).

But T − c₀S and T are both unbiased for g(θ), so T cannot be a UMVUE — a contradiction.
(ii) ⇒ (i)
Assume (ii): E_θ(ST) = 0 for every S with E_θ(S) = 0, for all θ ∈ Θ.
Claim: T is a UMVUE.
Consider any other unbiased T′. We know E_θ(T − T′) = 0, so by (ii)

E_θ((T − T′)T) = 0,

i.e. E_θ(T²) = E_θ(TT′). By the Cauchy–Schwarz inequality,

E_θ(T²) = E_θ(TT′) ≤ √(E_θ(T²)) √(E_θ(T′²)),

so E_θ(T²) ≤ E_θ(T′²), hence

var_θ(T) ≤ var_θ(T′), since T and T′ have the same mean.
In fact, if a UMVUE exists, it is essentially unique.
Note: The last theorem says

E_θ(ST) = 0 ∀ S with E_θ(S) = 0 ⟺ cov_θ(S, T) = 0 for all such S.

If T is a UMVUE then for any unbiased estimator S of zero,

cov_θ(T, S) = 0, i.e. for any other unbiased T′,

cov_θ(T, T − T′) = 0, i.e. cov_θ(T, T) = cov_θ(T, T′).

If T′ is also a UMVUE then var_θ(T) = var_θ(T′) ∀ θ ∈ Θ, so ρ_θ(T, T′) = 1 ∀ θ ∈ Θ, hence

T = a_θ T′ + b_θ a.e. P_θ.

T and T′ are statistics, so T = aT′ + b; and since T and T′ have the same mean and variance, a = 1, b = 0.
Review
Comparison of estimates using mean squared error: the MSE accounts for both variance and bias.
Relative efficiency.
Let T₁ and T₂ be two square-integrable estimates of g(θ) (often T₁ and T₂ are functions of the data x ~ F_X(x; θ), θ ∈ Θ).
The relative efficiency of T₁ with respect to T₂ is denoted e(T₁, T₂):

e(T₁, T₂) = mse_θ(T₂) / mse_θ(T₁).

It is usually a function of θ.
T₁ is preferred to T₂ if e(T₁, T₂) ≥ 1 for all θ ∈ Θ, with strict inequality for some θ in Θ.
In such a case T₂ is said to be inadmissible.
T₁ is admissible if no preferred estimate exists.
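The relative-efficiency definition above can be checked numerically. A minimal Monte Carlo sketch (the estimators, sample size, and N(0,1) model are illustrative choices, not from the notes): comparing the sample mean and sample median as estimates of a normal mean, where the mean is known to be more efficient.

```python
import random
import statistics

def relative_efficiency(t1, t2, sample, g_theta, reps=20000, seed=0):
    """Monte Carlo estimate of e(T1, T2) = mse(T2) / mse(T1).

    t1, t2  -- estimators (functions of one sample)
    sample  -- function rng -> one simulated sample
    g_theta -- true value of g(theta) being estimated
    """
    rng = random.Random(seed)
    se1 = se2 = 0.0
    for _ in range(reps):
        x = sample(rng)
        se1 += (t1(x) - g_theta) ** 2
        se2 += (t2(x) - g_theta) ** 2
    return se2 / se1  # > 1 means T1 is preferred

# N(0, 1) samples of size 20: the mean beats the median
# (asymptotically e(mean, median) -> pi/2 ~ 1.57)
e = relative_efficiency(
    statistics.mean, statistics.median,
    lambda rng: [rng.gauss(0.0, 1.0) for _ in range(20)],
    g_theta=0.0)
```

Since e > 1 for every θ here, the mean is preferred to the median in this model.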
The class of admissible estimates includes some which don't seem so good.
E.g. x₁, x₂, ..., x₁₀₀ iid N(μ, 1). Consider the constant estimator T ≡ 17; T is admissible (no estimator can beat it at μ = 17).
E.g. (1)
X ~ N(μ, 1), μ ≥ 0.
Consider T = max(X, 0): it improves on X pointwise, but it is discontinuous and itself NOT admissible.
E.g. (2)
x₁, ..., xₙ iid N(μ, σ²), μ ∈ ℝ, σ² > 0.
To estimate μ: T = x̄ is known to be admissible.
To estimate σ²:

T₁ = (1/(n+1)) Σᵢ₌₁ⁿ (xᵢ − x̄)²

T₃ = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)² is the UMVUE

T₂ = (1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)² is the MLE

T₂ is preferred to T₃, so T₃ is not admissible.
Stein (1964) showed that T₁ is ALSO not admissible; a better estimate is

T₅ = min( T₁, (1/(n+2)) Σᵢ₌₁ⁿ xᵢ² )

T₅ improves on T₁ but is also not admissible.
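The MSE ordering among the three scaled variance estimators can be computed exactly, since Σ(xᵢ − x̄)² ~ σ²χ²ₙ₋₁ (mean (n−1)σ², variance 2(n−1)σ⁴). A short sketch (the sample size n = 10 is an arbitrary illustration):

```python
def mse_scaled_variance(n, c, sigma2=1.0):
    """Exact MSE of T = sum((x_i - xbar)^2) / c for N(mu, sigma2) samples,
    using sum((x_i - xbar)^2) ~ sigma2 * chi^2_{n-1}."""
    bias = (n - 1) / c * sigma2 - sigma2
    var = 2 * (n - 1) / c**2 * sigma2**2
    return var + bias**2

n = 10
mse_T3 = mse_scaled_variance(n, n - 1)  # divisor n-1: UMVUE
mse_T2 = mse_scaled_variance(n, n)      # divisor n:   MLE
mse_T1 = mse_scaled_variance(n, n + 1)  # divisor n+1
```

This reproduces the chain T₁ < T₂ < T₃ in MSE, matching the inadmissibility claims above.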
T. A. Bancroft (1944) discussed preliminary-test estimates and PT hypotheses.
x₁, ..., xₙ iid N_p(θ, I) (p-dimensional), with risk

E( Σᵢ (θ̂ᵢ − θᵢ)² ).

For a sequence of estimates {Tₙ} (n ≥ 1) we expect

m.s.e.(Tₙ) → 0 as n → ∞.

To compare consistent sequences of estimates, look at the rates at which their m.s.e.s go to zero.
Compare m.s.e_θ(T₁:n) and m.s.e_θ(T₂:n):

e_θ(T₁:n, T₂:n) = mse_θ(T₂:n) / mse_θ(T₁:n).

x₁, x₂, ... iid F_X(x; θ), θ ∈ ℝ.
(What does a consistent sequence look like? E.g., is Yₙ = I(xᵢ ≤ n) consistent for anything?)
For two sequences of estimates {T₁:n} and {T₂:n} of g(θ), the lower bound on the asymptotic efficiency of {T₁:n} with respect to {T₂:n} is

a.e.(T₁:n, T₂:n) = lim inf_{n→∞} e(T₁:n, T₂:n),

and the upper bound is

a.l.(T₁:n, T₂:n) = lim sup_{n→∞} e(T₁:n, T₂:n).

{T₁:n} is asymptotically more efficient than {T₂:n} relative to Θ if

a.e.(T₁:n, T₂:n) ≥ 1 ∀ θ ∈ Θ,

with

a.l.(T₁:n, T₂:n) > 1 for some θ ∈ Θ.

When the comparison is over all of Θ, the reference to Θ is omitted.
E.g. (1): the superefficiency example (Hodges; see Lehmann).
x₁, ..., xₙ iid N(θ, 1), θ ∈ ℝ; we wish to estimate θ.

T₁:n = (1/n) Σᵢ₌₁ⁿ xᵢ

T₂:n = (1/(2n)) Σᵢ₌₁ⁿ xᵢ if |Σᵢ₌₁ⁿ xᵢ| < √(n log n)
     = (1/n) Σᵢ₌₁ⁿ xᵢ otherwise.

Then

a.l.(T₁:n, T₂:n) = 1 if θ ≠ 0
                 = 1/4 if θ = 0,

since when θ = 0, |Σ xᵢ| / n < √(log n / n) eventually (in probability), so T₂:n halves x̄ and quarters its variance.
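The superefficiency at θ = 0 shows up directly in simulation. A sketch (sample size, repetition count, and the test points θ = 0 and θ = 2 are illustrative choices):

```python
import math
import random

def hodges(x):
    """Hodges-type estimate as in the notes: shrink xbar by 1/2 when
    |sum(x)| < sqrt(n log n), i.e. when the data look like theta = 0."""
    n, s = len(x), sum(x)
    if abs(s) < math.sqrt(n * math.log(n)):
        return s / (2 * n)
    return s / n

def mse(est, theta, n, reps=5000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = [rng.gauss(theta, 1.0) for _ in range(n)]
        total += (est(x) - theta) ** 2
    return total / reps

xbar = lambda x: sum(x) / len(x)
n = 200
ratio_at_0 = mse(hodges, 0.0, n) / mse(xbar, 0.0, n)  # well below 1
ratio_at_2 = mse(hodges, 2.0, n) / mse(xbar, 2.0, n)  # essentially 1
```

At θ = 0 the shrunk estimate wins by roughly the predicted factor; away from 0 the two estimates coincide with overwhelming probability.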
Observation
If a consistent sequence of estimates exists, then any efficient sequence must also be consistent.
E.g.
x₁, x₂, ... iid U(0, θ). We wish to estimate θ.

T₁:n = (2/n) Σᵢ₌₁ⁿ xᵢ ; unbiased, consistent
T₂:n = x_{n:n} ; biased, consistent
T₃:n = T₁:n if n even, T₂:n if n odd ; consistent

E_θ(T₁:n) = θ, var_θ(T₁:n) = θ²/(3n)

E_θ(T₂:n) = (n/(n+1)) θ, E_θ(T₂:n²) = (n/(n+2)) θ²

m.s.e_θ(T₁:n) = θ²/(3n)

m.s.e_θ(T₂:n) = 2θ² / ((n+1)(n+2))

e(T₁:n, T₂:n) = mse_θ(T₂:n)/mse_θ(T₁:n) = 6n / ((n+1)(n+2)) < 1 for n > 2,

so T₂:n is preferred to T₁:n (and is asymptotically infinitely more efficient). (Assignment: compare T₃:n with T₂:n.)
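The two closed-form MSEs above can be verified by Monte Carlo. A sketch (θ = 1, n = 10, and the repetition count are arbitrary illustrative choices):

```python
import random

def mc_mse(est, theta, n, reps=50000, seed=2):
    """Monte Carlo MSE of an estimator on U(0, theta) samples of size n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = [rng.uniform(0.0, theta) for _ in range(n)]
        total += (est(x) - theta) ** 2
    return total / reps

theta, n = 1.0, 10
mse1 = mc_mse(lambda x: 2.0 * sum(x) / len(x), theta, n)  # T1 = 2 * xbar
mse2 = mc_mse(max, theta, n)                               # T2 = x_{n:n}

exact1 = theta**2 / (3 * n)                  # theta^2 / (3n)
exact2 = 2 * theta**2 / ((n + 1) * (n + 2))  # 2 theta^2 / ((n+1)(n+2))
```

Both simulated MSEs land close to the exact formulas, and mse2 < mse1 for n = 10 > 2, as the efficiency calculation predicts.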
Some restrictions for getting a good estimate:
unbiased
asymptotically unbiased
We speak of optimality within a restricted class of estimates (e.g., unbiased, asymptotically unbiased, asymptotically normal, ...).
When Fisher (or Rohatgi) speaks of an efficient estimate he means one that is unbiased (or asymptotically unbiased), more specifically, one that achieves the CRF bound (or achieves it asymptotically).
The efficiency of an estimate T (note: before we had the relative efficiency of two estimates) is its efficiency relative to the CRF bound.
* Note: UMVUEs are often not efficient (only if they achieve the CRF bound); this can only happen in an exponential family when we estimate the right functional of the parameter.
BAN estimate: best asymptotically normal.
CAN estimate: consistent asymptotically normal.
We say that {Tₙ} is a CAN sequence of estimates of g(θ) if there exists a function σ(θ) such that

√n (Tₙ − g(θ)) / σ(θ) →d Z ~ N(0, 1) ∀ θ ∈ Θ.

σ²(θ) is called the asymptotic variance.
We say that {Tₙ} is BAN if it is CAN and there does not exist another CAN sequence with smaller asymptotic variance (on some interval).
N.B.: a BAN sequence is not unique.
Suppose {Tₙ} is a BAN sequence and choose {Zₙ} such that √n Zₙ → 0 in probability ∀ θ ∈ Θ; then

Tₙ + Zₙ is also BAN.
E.g.:
x₁, x₂, ..., xₙ iid N(θ, 1):
{x̄ₙ}ₙ₌₁^∞ is a BAN estimate of θ.
Suppose
x₁, ... iid U(0, θ); we wish to estimate θ.
Consider

Tₙ = x_{n:n} or Uₙ = ((n+1)/n) x_{n:n}.

All of these are super-efficient: they converge at rate n rather than √n, with a non-normal limit.
Theorem of types (Gnedenko):
If {Tₙ} is a sequence of rvs such that there exist constants {aₙ} and {bₙ} with

aₙ Tₙ + bₙ →d Z,

then any other normalization a′ₙ Tₙ + b′ₙ →d Z′ forces Z′ to be of the same type as Z (i.e., Z′ =d αZ + β for some α > 0, β).
Ancillary statistics
x ~ f_X(x; θ), θ ∈ Θ.
T = T(x) is an ancillary statistic
if f_T(t) does not depend on θ.
E.g.
x₁, ..., xₙ iid N(θ, 1):

T = S² = Σ (xᵢ − x̄)² ~ Γ((n−1)/2, 2) (i.e. χ²_{n−1}),

which does not depend on θ, so T is ancillary.
Basu's theorem (SBES):
If x ~ f_X(x; θ), θ ∈ Θ, suppose T is a complete sufficient statistic for θ based on x. Let Z = w(x) (not a function of T only), and suppose that Z has a distribution which does not depend on θ (i.e., it is ancillary).
It follows that Z and T are stochastically independent.
Proof: SBES.
E.g.:
x₁, ..., xₙ iid N(μ, σ²), μ, σ unknown.

T₁ = Σ xᵢ, T₂ = (1/(n−1)) Σ (xᵢ − x̄)².

Claim: T₁ and T₂ are independent.
Fix σ > 0: T₁ is complete and sufficient for μ, and clearly T₂ has a distribution that does not depend on μ; by Basu, T₁ and T₂ are independent for every fixed σ, hence always.
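The independence of x̄ and S² asserted via Basu can be seen in simulation: their sample correlation over many replications should be near zero. A sketch (sample size, parameter values, and repetition count are illustrative):

```python
import random
import statistics

def corr(a, b):
    """Plain sample correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

rng = random.Random(3)
means, variances = [], []
for _ in range(20000):
    x = [rng.gauss(1.0, 2.0) for _ in range(5)]
    means.append(statistics.mean(x))
    variances.append(statistics.variance(x))  # divisor n-1

r = corr(means, variances)  # near 0: xbar and S^2 are independent
```

Correlation zero is of course weaker than independence, but for jointly determined statistics it is a quick sanity check of the theorem.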
Where do estimates come from?
1. Maximum likelihood (Fisher)
2. Method of moments (Pearson)
3. Minimum χ²
4. Best linear unbiased
5. Least squares
6. Minimax
7. Bayesian estimation, and fiducial and structural inference
* Not all best estimates are linear.
Example of fiducial reasoning
x ~ N(θ, 1), θ ∈ ℝ (one observation for simplicity).
We observe x = 17; then fiducially

θ ~ N(17, 1).

The reasoning: write x = θ + z, where z ~ N(0, 1); inverting, θ = x − z, and since z is standard normal, θ is "distributed" N(x, 1) once x is observed.
If instead we observe x = 28 and have no prior information on θ, the same inversion applies.
MAXIMUM LIKELIHOOD
x ~ bin(n, p), where (n, p) ∈ {(2, 1/2), (2, 1/3), (3, 1/2), (3, 1/3)}; suppose we observe x = 2.
Note: likelihood function

(n, p):          (2, 1/2)  (2, 1/3)  (3, 1/2)  (3, 1/3)
P_{n,p}(X = 2):    1/4       1/9       3/8      6/27

The maximum likelihood choice is (n, p) = (3, 1/2).
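The finite-parameter-set MLE above is just an argmax over four candidate likelihoods, which can be computed directly:

```python
from math import comb

candidates = [(2, 1/2), (2, 1/3), (3, 1/2), (3, 1/3)]
x = 2  # observed number of successes

def likelihood(n, p, x):
    """Binomial pmf P_{n,p}(X = x), zero when x > n."""
    if x > n:
        return 0.0
    return comb(n, x) * p**x * (1 - p)**(n - x)

L = {(n, p): likelihood(n, p, x) for n, p in candidates}
mle = max(L, key=L.get)  # candidate pair maximizing the likelihood
```

This reproduces the table: (3, 1/2) attains 3/8, the largest of the four values.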
Midterm: February 17 (Wednesday) (02/5/10)
If x ~ bin(n, p), p ∈ (0, 1), x = number of heads:
if we observe x = 0, the likelihood (1 − p)ⁿ increases as p ↓ 0, so the "maximum" is at p = 0, but 0 ∉ (0, 1).
To avoid this problem we need p ∈ [0, 1].
So the maximum likelihood estimate is not always well defined (on an open parameter space).
For each x, the likelihood is

C(5, x) pˣ (1 − p)^(5−x),

and the maximum is at

x:     0   1    2    3    4   5
p̂(x):  0  1/5  2/5  3/5  4/5  1

p̂ = x/5.
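The closed form p̂ = x/5 can be checked against a brute-force grid maximization of the binomial likelihood:

```python
from math import comb

def binom_lik(p, x, n=5):
    """Likelihood C(n, x) p^x (1-p)^(n-x) as a function of p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

grid = [i / 1000 for i in range(1001)]  # p in [0, 1], step 0.001
p_hats = {x: max(grid, key=lambda p: binom_lik(p, x)) for x in range(6)}
```

For every observed x the grid maximizer lands on x/5, matching the table.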
* Sometimes the MLE is not intuitively sensible (the value that maximizes the likelihood need not look reasonable).
* Sometimes we must distinguish f_X(x; θ) as a function of θ (the likelihood) from f_X(x; θ) as a function of x (the density).
* The density of a variable does not always exist.
Definition.
Let
x ~ f_X(x; θ), θ ∈ Θ.
The random function f_X(x; θ), viewed as a function of θ, is called the likelihood function of x and is usually denoted by L(θ).
Often x = (x₁, ..., xₙ) where the xᵢ's are iid, in which case

L(θ) = Πᵢ₌₁ⁿ f_{Xᵢ}(xᵢ; θ).
Definition of θ̂ (the MLE of θ based on x):
For each ω, θ̂(ω) ∈ Θ satisfies

f_X(x(ω), θ̂(ω)) = max_{θ∈Θ} f_X(x(ω), θ),

provided such a choice of θ̂(ω) exists for almost all ω; otherwise the MLE is not defined.
Alternatively, if x = x is observed, then θ̂ takes the value θ̂_x, where

f_X(x; θ̂_x) = max_{θ∈Θ} f_X(x; θ)

for almost all x. Otherwise the MLE is not defined.
If Θ ⊆ ℝ then we often identify the MLE by considering

∂f_X(x; θ)/∂θ:

when is it positive? when negative? etc.
Often we can find θ̂ by solving the equation

∂f_X(x; θ)/∂θ = 0.

Solutions of this equation are only candidate MLEs. It is usually easier to work with the log-likelihood:

l(θ) = log L(θ) = log Π f(xᵢ; θ) = Σᵢ₌₁ⁿ log f(xᵢ; θ).

If Θ ⊆ ℝᵏ we often try to identify θ̂ by solving the likelihood equations

∂l(θ)/∂θᵢ = 0, i = 1, 2, ..., k. (*)

We must check that a solution is in fact a maximum; solutions of (*) are only candidate MLEs.
e.g. (1)
x₁, ..., xₙ iid P(λ), λ ∈ ℝ⁺.
To find the MLE of λ:

f_X(x; λ) = Πᵢ₌₁ⁿ [e^{−λ} λ^{xᵢ} / xᵢ!] I(λ ∈ ℝ⁺)
          = e^{−nλ} λ^{Σ xᵢ} / Π xᵢ!

log f_X(x; λ) = −nλ + (Σ xᵢ) log λ − log Π xᵢ!

∂/∂λ log f_X(x; λ) = −n + Σ xᵢ / λ = 0

∂²/∂λ² log f_X(x; λ) = −Σ xᵢ / λ² < 0.

So l(λ) is concave on (0, ∞), and so we can find its maximum by solving

∂l(λ)/∂λ = 0,

giving λ̂ = Σ xᵢ / n = x̄.
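The closed form λ̂ = x̄ can be cross-checked by maximizing the Poisson log-likelihood on a grid (the data below are a hypothetical sample of counts, not from the notes):

```python
import math
import statistics

data = [2, 3, 1, 4, 0, 2]  # hypothetical Poisson counts

def log_lik(lam, xs):
    """log L(lambda) = -n*lambda + (sum x_i) log(lambda) - sum log(x_i!)"""
    n, s = len(xs), sum(xs)
    return -n * lam + s * math.log(lam) - sum(math.lgamma(x + 1) for x in xs)

grid = [i / 100 for i in range(1, 1001)]  # lambda in (0, 10]
lam_hat = max(grid, key=lambda lam: log_lik(lam, data))
```

The grid maximizer agrees with the sample mean, as the concavity argument guarantees.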
Note
x ~ f_X(x; θ), θ ∈ Θ.
If we wish to estimate g(θ), where

g : Θ → Θ* ⊆ ℝˡ,

the MLE of g(θ), denoted ĝ(θ), is defined to be g(θ̂) (invariance of the MLE).
Note: if

L(θ) = g(T(x); θ) h(x),

the MLE depends on the data only through T(x).
e.g. (1), continued: x₁, ..., xₙ iid P(λ).

L(λ) = e^{−nλ} λ^{Σ xᵢ} / Π xᵢ!

l(λ) = log L(λ) = −nλ + (Σ xᵢ) log λ − Σ log(xᵢ!)

∂l(λ)/∂λ = −n + Σ xᵢ / λ

∂²l(λ)/∂λ² = −Σ xᵢ / λ² < 0 if Σ xᵢ > 0.

So l(λ) is concave on (0, ∞), and we can find its maximum by solving ∂l(λ)/∂λ = 0; then λ̂ = Σ xᵢ / n.
Note:

∂l(λ)/∂λ > 0 if Σ xᵢ / n > λ, and < 0 if Σ xᵢ / n < λ.
e.g. (2)
x₁, ..., xₙ iid N(μ, σ²), μ ∈ ℝ, σ² ∈ (0, ∞).
If n = 1, there is no way to estimate σ².
Let θ₁ = μ and θ₂ = σ². The MLE is

(θ̂₁, θ̂₂) = ( (1/n) Σ xᵢ, (1/n) Σ (xᵢ − x̄)² ),

i.e.

θ̂₁ = (1/n) Σ xᵢ

θ̂₂ = (1/n) Σ (xᵢ − x̄)².
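The two normal MLEs are direct to compute; a small sketch on hypothetical data (note the variance MLE uses divisor n, i.e. the population variance, not the n−1 sample variance):

```python
import statistics

data = [4.1, 5.3, 3.8, 4.9, 5.0]  # hypothetical sample
n = len(data)

mu_hat = sum(data) / n                                 # (1/n) sum x_i
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n  # (1/n) sum (x_i - xbar)^2
```

These match `statistics.mean` and `statistics.pvariance` (the divisor-n variance) exactly.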
Approach: solve

∂L(θ)/∂θ₁ = 0, ∂L(θ)/∂θ₂ = 0

simultaneously, then check that the solution is a global maximum.
Alternatively, profile: for fixed θ₂, maximize L(θ₁, θ₂) as a function of θ₁ (the maximizer is θ̂₁ = x̄ for every θ₂); then consider L(x̄, θ₂) as a function of θ₂ alone.
Example
x₁, x₂, ..., xₙ iid

f_X(x) = (1/θ₂) e^{−(x−θ₁)/θ₂} I(θ₁ < x < ∞)

(the shifted exponential; worked out below).
February 8, 2010
Aside: a deck has 4 suits, 13 cards per suit. Draw k cards at random.
P(all four suits are represented)?
Let X = number of suits represented among the k selected; find P(X = j).
Maximum likelihood estimate: the value of the parameter that maximizes the density (likelihood) at the observed data.
Example
x₁, ..., xₙ iid

f_X(x; θ₁, θ₂) = (1/θ₂) e^{−(x−θ₁)/θ₂} I(x ≥ θ₁),

where θ₁ ∈ ℝ and θ₂ ∈ (0, ∞).
Note: xᵢ =d θ₁ + θ₂ Uᵢ, where the Uᵢ's are iid Γ(1, 1) (standard exponential).

L(θ₁, θ₂) = (1/θ₂ⁿ) e^{−Σᵢ₌₁ⁿ (xᵢ − θ₁)/θ₂} I(θ₁ ≤ x_{1:n}).
If we fix θ₂, then L(θ₁, θ₂) is increasing in θ₁ as long as θ₁ ≤ x_{1:n}, so the maximum will be on the boundary

θ̂₁ = x_{1:n}.

Consider

L(x_{1:n}, θ₂) = (1/θ₂ⁿ) e^{−Σᵢ₌₁ⁿ (xᵢ − x_{1:n})/θ₂}

∂/∂θ₂ log L(x_{1:n}, θ₂) = −n/θ₂ + (1/θ₂²) Σ (xᵢ − x_{1:n}) = 0

θ̂₂ = Σ (xᵢ − x_{1:n}) / n.

So an m.l.e. is

(θ̂₁, θ̂₂) = ( x_{1:n}, (1/n) Σ (xᵢ − x_{1:n}) ).

By Basu's theorem we can prove that x_{1:n} and (1/n) Σ (xᵢ − x_{1:n}) are independent.
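The shifted-exponential MLE above has a closed form; a short sketch on hypothetical observations:

```python
data = [3.2, 5.1, 4.0, 3.5]  # hypothetical observations

# theta1_hat = x_{1:n}, the sample minimum
theta1_hat = min(data)

# theta2_hat = (1/n) * sum(x_i - x_{1:n}), the mean excess over the minimum
theta2_hat = sum(x - theta1_hat for x in data) / len(data)
```

For this sample the minimum is 3.2 and the mean excess is (0 + 1.9 + 0.8 + 0.3)/4 = 0.75.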
(3)
x₁, ..., xₙ (n > 1) iid U(θ − 1/2, θ + 1/2), θ ∈ ℝ.
Note that xᵢ can be represented as

xᵢ = θ − 1/2 + Uᵢ, Uᵢ ~ U(0, 1).

L(θ) = I(x_{1:n} ≥ θ − 1/2) I(x_{n:n} ≤ θ + 1/2)
     = I(θ ≤ x_{1:n} + 1/2) I(θ ≥ x_{n:n} − 1/2)
     = I(x_{n:n} − 1/2 ≤ θ ≤ x_{1:n} + 1/2).

Is x_{n:n} − 1/2 ≤ x_{1:n} + 1/2? Yes:

x_{n:n} − x_{1:n} ≤ 1.
Any statistic T(x) satisfying

x_{n:n} − 1/2 ≤ T(x) ≤ x_{1:n} + 1/2

is an m.l.e. of θ based on x, e.g.:

T₁ = x_{1:n} + 1/2
T₂ = x_{n:n} − 1/2
T₃ = (T₁ + T₂)/2 = (x_{n:n} + x_{1:n})/2

What about

T₄ = (x_{2:n} + x_{n−1:n})/2 ?
T₅ = (x_{n:2n} + x_{n+1:2n})/2 (the sample median when the sample size is 2n)?
An m.l.e. is not unique. Properties of m.l.e.s: under suitable regularity conditions one can verify that:
(i) m.l.e.s are always functions of the minimal sufficient statistic.
Why? Because of the factorization theorem:

L(θ) = f_X(x; θ) = g(T(x); θ) h(x).

If you maximize L(θ), you maximize g, which depends on the data only through T(x).
(i′) If an m.l.e. exists, it is always possible to find an m.l.e. which is a function of the minimal sufficient statistic of x₁, ..., xₙ iid f_X(x; θ).
(ii) m.l.e.s are consistent (often mean-square consistent).
Neyman–Scott showed that m.l.e.s are not always consistent.
(iii) m.l.e.s are asymptotically efficient and usually BAN (exceptions: U(0, θ), ..., depending on the parameters).
Claim: If an estimate achieving the CRF lower bound exists, maximum likelihood will find it.
Proof: For equality in the CRF lower bound we need (supposing θ is the natural parameter)

∂ log f_X(x; θ)/∂θ = c(θ) (T(x) − θ),

and setting this to zero gives

θ̂ = T(x).

Note:

f_X(x; θ) = c(θ) r(x) e^{Q(θ) T(x)}

(an exponential family).