© University of New South Wales
School of Risk and Actuarial Studies
Parameter Estimation
Chebyshev's Inequality
Convergence concepts
Chebyshev's Inequality
- Chebyshev's inequality states that for any random variable $X$ with mean $\mu$ and variance $\sigma^2$, the following probability inequality holds for all $\varepsilon > 0$:
  $$\Pr(|X - \mu| > \varepsilon) \le \frac{\sigma^2}{\varepsilon^2}.$$
- Note that this applies to all distributions, hence also non-symmetric ones! This implies that:
  $$\Pr(X - \mu > \varepsilon) \le \frac{\sigma^2}{\varepsilon^2} \quad \text{and} \quad \Pr(X - \mu < -\varepsilon) \le \frac{\sigma^2}{\varepsilon^2}.$$
- Interesting example: set $\varepsilon = k\sigma$; then:
  $$\Pr(|X - \mu| > k\sigma) \le \frac{1}{k^2}.$$
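The bound can be sanity-checked against an empirical tail probability. A minimal sketch in Python; the Exponential(1) distribution (with $\mu = \sigma = 1$) is my own choice of a non-symmetric example:

```python
import random

def chebyshev_check(k, n_sims=200_000, seed=42):
    """Compare the empirical Pr(|X - mu| > k*sigma) against the Chebyshev
    bound 1/k^2 for X ~ Exponential(1), where mu = sigma = 1."""
    rng = random.Random(seed)
    mu = sigma = 1.0
    hits = sum(abs(rng.expovariate(1.0) - mu) > k * sigma for _ in range(n_sims))
    return hits / n_sims, 1.0 / k**2

empirical, bound = chebyshev_check(k=2)
# for Exp(1) the exact tail is Pr(X > 3) = e^{-3} ~ 0.0498, well below the bound 0.25
```

The gap between the empirical tail and $1/k^2$ illustrates that Chebyshev's inequality is universal but usually far from tight.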
Convergence concepts

Almost sure convergence

- $X_n$ converges almost surely to $X$ if
  $$\Pr\left(\omega : X_n(\omega) \to X(\omega), \text{ as } n \to \infty\right) = 1,$$
  and we write $X_n \xrightarrow{a.s.} X$, as $n \to \infty$.
- Sometimes called strong convergence. It means that beyond some point in the sequence, the difference $|X_n(\omega) - X(\omega)|$ will always be less than some positive $\varepsilon$, but that point is random.
Convergence in probability

- $X_n$ converges in probability to $X$ if, for every $\varepsilon > 0$,
  $$\Pr(|X_n - X| > \varepsilon) \to 0, \text{ as } n \to \infty,$$
  and we write $X_n \xrightarrow{p} X$, as $n \to \infty$.
- Difference between convergence in probability and convergence almost surely: $\Pr(|X_n - X| > \varepsilon)$ goes to zero instead of equalling zero as $n$ goes to infinity (hence $\xrightarrow{p}$ is weaker than $\xrightarrow{a.s.}$).
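The definition can be seen numerically: for the sample mean of Uniform(0, 1) draws (an illustrative choice of mine, with $\mu = 0.5$), the tail probability $\Pr(|\overline{X}_n - \mu| > \varepsilon)$ shrinks as $n$ grows. A sketch:

```python
import random

def tail_prob(n, eps=0.05, trials=2_000, seed=1):
    """Estimate Pr(|Xbar_n - mu| > eps) for the mean of n Uniform(0,1) draws
    (mu = 0.5); convergence in probability says this probability tends to 0."""
    rng = random.Random(seed)
    bad = sum(
        abs(sum(rng.random() for _ in range(n)) / n - 0.5) > eps
        for _ in range(trials)
    )
    return bad / trials

probs = [tail_prob(n) for n in (10, 100, 1_000)]  # decreasing towards 0
```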
Convergence in distribution

- $X_n$ converges in distribution to the random variable $X$ as $n \to \infty$ if and only if, for every $x$ at which $F_X$ is continuous,
  $$F_{X_n}(x) \to F_X(x), \text{ as } n \to \infty,$$
  and we write $X_n \xrightarrow{d} X$, as $n \to \infty$.
Application of strong convergence: Law of Large Numbers
- Then, the LLN tells us that the amount each person will end up paying becomes more predictable as the size of the group increases. In effect, this amount will become closer to $\mu$, the average loss each individual expects.
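The pooling effect can be illustrated with simulated losses. A minimal sketch, assuming (purely for illustration) Exponential losses with mean 1000:

```python
import random

def per_person_share(group_size, seed=7):
    """Simulate i.i.d. losses ~ Exponential(mean 1000) and return each person's
    equal share of the pooled total; the LLN says it stabilises near mu = 1000."""
    rng = random.Random(seed)
    total = sum(rng.expovariate(1 / 1000) for _ in range(group_size))
    return total / group_size

shares = {n: per_person_share(n) for n in (10, 100, 10_000)}  # settles near 1000
```

For the small group the share is noisy; for the large group it sits close to $\mu = 1000$, which is exactly the predictability the slide describes.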
Central Limit Theorem
- The CLT states that
  $$\frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} N(0, 1), \text{ as } n \to \infty.$$
  This holds for all r.v. with finite mean and variance, not only normal r.v.!
- Equivalently, in terms of the sum $S_n = X_1 + \ldots + X_n$:
  $$Z_n = \frac{S_n - n\mu}{\sigma\sqrt{n}}.$$
- Assumptions:
  $$M_{X_i}(t) = f(t, \mu, \sigma^2); \quad \mathbb{E}[X_i] = \mu; \quad \operatorname{Var}(X_i) = \sigma^2 < \infty.$$
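A simulation makes the statement concrete. In the sketch below, Exp(1) summands (so $\mu = \sigma = 1$) are my own choice; it standardises $S_n$ and compares a tail frequency with the standard normal value $\Phi(1) \approx 0.841$:

```python
import math
import random

def standardized_sum(n, rng):
    """Z_n = (S_n - n*mu) / (sigma * sqrt(n)) for S_n a sum of n Exp(1) draws
    (mu = sigma = 1); the CLT says Z_n is approximately N(0, 1) for large n."""
    s = sum(rng.expovariate(1.0) for _ in range(n))
    return (s - n * 1.0) / (1.0 * math.sqrt(n))

rng = random.Random(0)
zs = [standardized_sum(500, rng) for _ in range(4_000)]
frac = sum(z <= 1.0 for z in zs) / len(zs)  # should be close to Phi(1) ~ 0.841
```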
- Recall that $M_X(0) = \mathbb{E}[e^{0 \cdot X}] = 1$. Working with the centred variables (so that $\mathbb{E}[X_i] = 0$), we have:
  $$M'_{X_i}(t)\big|_{t=0} = \mathbb{E}[X_i] = 0, \tag{1}$$
  $$M''_{X_i}(t)\big|_{t=0} = \mathbb{E}\left[X_i^2\right] = \operatorname{Var}(X_i) + (\mathbb{E}[X_i])^2 = \sigma^2. \tag{2}$$
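These derivative values can be checked numerically with finite differences on a concrete m.g.f. The sketch below uses the centred normal m.g.f. $M(t) = e^{\sigma^2 t^2/2}$, with $\sigma = 1.5$ an illustrative value of my own:

```python
import math

SIGMA = 1.5  # illustrative value, not from the slides

def mgf(t):
    """m.g.f. of N(0, SIGMA^2): M(t) = exp(SIGMA^2 * t^2 / 2)."""
    return math.exp(0.5 * SIGMA**2 * t**2)

h = 1e-4
first = (mgf(h) - mgf(-h)) / (2 * h)             # ~ M'(0)  = E[X]   = 0
second = (mgf(h) - 2 * mgf(0) + mgf(-h)) / h**2  # ~ M''(0) = E[X^2] = SIGMA^2
```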
- Now we can align the results from the previous two slides.
- We take the following limit:
  $$\lim_{n \to \infty} \log M_{Z_n}(t) = \lim_{n \to \infty} n \log M_{X_i}\!\left(\frac{t}{\sigma\sqrt{n}}\right) = \frac{t^2}{2},$$
  so that $M_{Z_n}(t) \to e^{t^2/2}$, the m.g.f. of the standard normal distribution.
* using $\log(1 + a) = \sum_{i=1}^{\infty} \frac{(-1)^{i+1} a^i}{i} = a + O(a^2)$, with $a = \frac{t^2}{2n} + O\left(n^{-3/2}\right)$.
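The footnote's expansion is easy to verify numerically: $n \log(1 + t^2/(2n))$ should approach $t^2/2$ as $n$ grows. A small check:

```python
import math

def log_mgf_term(t, n):
    """Leading term n * log(1 + t^2 / (2n)) from the CLT derivation;
    by the series expansion it tends to t^2 / 2 as n grows."""
    return n * math.log(1.0 + t * t / (2.0 * n))

target = 0.5  # t^2 / 2 for t = 1
gaps = [abs(log_mgf_term(1.0, n) - target) for n in (10, 100, 10_000)]
```

The gaps shrink roughly like $O(1/n)$, consistent with the $O(a^2)$ remainder.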
Application of the CLT
Solution: Using the CLT (why may the sample s.d. be used in place of $\sigma$?),
$$\frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} N(0, 1), \text{ as } n \to \infty,$$
so that, approximately,
$$n\overline{X}_n \approx N\!\left(n\mu, n\sigma^2\right) \quad \text{and} \quad \overline{X}_n \approx N\!\left(\mu, \sigma^2/n\right),$$
where $S_n = n\overline{X}_n = X_1 + X_2 + \ldots + X_n$.
[Figure: probability mass function $\Pr(X = k)$ of the Binomial(30, 0.1) (left, $n = 30$, $p = 0.1$) and Binomial(200, 0.1) (right, $n = 200$, $p = 0.1$) distributions.]
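The figure's point, that the Binomial$(n, p)$ p.m.f. approaches the $N(np, np(1-p))$ density as $n$ grows, can be checked directly. A sketch for the $n = 200$, $p = 0.1$ panel:

```python
import math

def binom_pmf(n, p, k):
    """Exact Binomial(n, p) probability mass at k."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 200, 0.1
mean, var = n * p, n * p * (1 - p)  # 20 and 18
gaps = [abs(binom_pmf(n, p, k) - normal_pdf(k, mean, var)) for k in range(15, 26)]
```

Around the mode the two curves agree to within a few parts in a thousand, which is why the right-hand panel looks so close to a normal bell.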
We have, for $X_n \sim \text{Poisson}(n\lambda)$:
$$\mathbb{E}[X_n] = n\lambda, \qquad \operatorname{Var}(X_n) = n\lambda,$$
$$Z_n = \frac{X_n - \mathbb{E}[X_n]}{\sqrt{\operatorname{Var}(X_n)}} = \frac{X_n - n\lambda}{\sqrt{n\lambda}} \xrightarrow{d} Z \sim N(0, 1).$$
Applications of Convergence in Distribution
[Figure: probability mass functions of the Poisson(10) (left) and Poisson(100) (right) distributions.]
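The same normal approximation lies behind the Poisson panels: for large $\lambda$, Poisson$(\lambda)$ is approximately $N(\lambda, \lambda)$. A sketch comparing an exact Poisson(100) tail probability with the continuity-corrected normal value:

```python
import math

def poisson_pmf(lam, k):
    """Exact Poisson(lam) p.m.f., computed in log space to avoid overflow."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_cdf(z):
    """Standard normal c.d.f. via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

lam = 100
exact = sum(poisson_pmf(lam, k) for k in range(0, 121))  # Pr(X <= 120)
approx = normal_cdf((120.5 - lam) / math.sqrt(lam))      # CLT with continuity correction
```

The two values agree to a few parts in a thousand, matching the visibly bell-shaped Poisson(100) panel.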