
Topic #5:

The Weak Law of Large Numbers

Randy Cogill
SYS 6005 - Stochastic Systems
Fall 2010
Relative frequency
Suppose I have a coin that lands on heads with probability p
If I flip n times, let k denote the number of heads
Intuitively,
k/n → p as n → ∞
However, making this notion precise is tricky
The weak law of large numbers gives one way of doing this
Setting up the weak law
The weak law says the following:
For large n, k/n is likely to be close to p
What do we mean by "close" and "likely"?
For any measure of closeness ε > 0, k/n is likely to satisfy
|k/n − p| ≤ ε
for large enough n
For the "likely" part, let's precisely specify the random quantity...
Bernoulli coin flips
Let X_i be a Bernoulli RV indicating the outcome of flip i
For n flips, we have independent, identically distributed X_1, ..., X_n
Each X_i has parameter p
The frequency of heads in n flips is the random variable
Z_n = (1/n) Σ_{i=1}^n X_i
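As a quick sanity check on this definition (my own sketch, not part of the slides; the helper name z_n and the choice p = 0.3 are arbitrary), the empirical frequency can be simulated directly:

```python
import random

def z_n(n, p, seed=0):
    """Z_n = (1/n) * (number of heads in n Bernoulli(p) flips)."""
    rng = random.Random(seed)
    flips = [1 if rng.random() < p else 0 for _ in range(n)]
    return sum(flips) / n

# The empirical frequency drifts toward p as n grows
for n in (10, 1000, 100000):
    print(n, z_n(n, p=0.3))
```

Fixing a seed makes each run reproducible; increasing n pulls the printed frequency toward 0.3, which is exactly the intuition the weak law makes precise.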
Setting up the weak law (cont.)
For any ε > 0, P(|Z_n − p| ≤ ε) is close to 1 for large n
Precisely, for any ε > 0,
lim_{n→∞} P(|Z_n − p| ≤ ε) = 1
More generally, the weak law of large numbers says the following:
Let X_1, ..., X_n be IID random variables
Each X_i has E[X_i] = μ and var(X_i) < ∞
Let Z_n = (1/n) Σ_{i=1}^n X_i
For any ε > 0, lim_{n→∞} P(|Z_n − μ| ≤ ε) = 1
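The general statement applies to any IID family with finite variance, not just coins. A minimal Monte Carlo sketch (my own illustration; the Exponential distribution with rate 0.5, so μ = 2 and σ² = 4, and the helper names are arbitrary choices):

```python
import random

def sample_mean(n, seed=1):
    """Z_n for n IID Exponential(rate=0.5) draws; mu = 2, var = 4 (both finite)."""
    rng = random.Random(seed)
    return sum(rng.expovariate(0.5) for _ in range(n)) / n

def hit_rate(n, eps=0.2, runs=200):
    """Fraction of independent runs with |Z_n - mu| <= eps,
    an estimate of P(|Z_n - mu| <= eps)."""
    return sum(abs(sample_mean(n, seed=s) - 2.0) <= eps for s in range(runs)) / runs
```

As the weak law predicts, hit_rate(n) climbs toward 1 as n grows, even though the underlying samples are not Bernoulli.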
Probability bounds
Suppose we know the CDF of each X_i...
...how can we evaluate P(|Z_n − μ| ≤ ε)?
In general, this can get messy
Suppose instead we could find some sequence b_1, b_2, ... with:
P(|Z_n − μ| ≤ ε) ≥ b_n for all n
lim_{n→∞} b_n = 1
How do we find the bounds P(|Z_n − μ| ≤ ε) ≥ b_n?
We can find such bounds by a surprisingly simple and powerful technique...
Probability bounds (cont.)
Suppose I had functions g_1 and g_2 with g_1(x) ≤ g_2(x) for all x
For any random variable X, we have E[g_1(X)] ≤ E[g_2(X)]
We can get some useful bounds from clever choices of g_1 and g_2
Markov's inequality
Suppose X is a random variable taking nonnegative values
For some given x > 0, consider the function
g_2(z) = 0 if z < x, and 1 if z ≥ x
For this function, E[g_2(X)] = P(X ≥ x)
For given x, consider the function g_1(z) = (1/x) z
Since g_1(z) ≥ g_2(z) for all z ≥ 0,
P(X ≥ x) ≤ (1/x) E[X]
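The bound can be checked numerically. In the sketch below (my own illustration, not from the slides), X is Exponential(1), an arbitrary nonnegative example, and both sides of Markov's inequality are estimated from the same samples:

```python
import random

def markov_check(x, n=100000, seed=2):
    """Estimate P(X >= x) and its Markov bound E[X]/x for X ~ Exponential(1)."""
    rng = random.Random(seed)
    xs = [rng.expovariate(1.0) for _ in range(n)]
    tail = sum(v >= x for v in xs) / n   # empirical P(X >= x)
    bound = (sum(xs) / n) / x            # empirical E[X] / x
    return tail, bound
```

For x = 3 the true tail is e^{−3} ≈ 0.05 while the bound is about 1/3, so the inequality holds but is often quite loose.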
Chebyshev's inequality
Markov's inequality gives a bound in terms of the mean
Chebyshev's inequality gives a related bound in terms of the variance
Now suppose X can take any real value
Suppose X has mean E[X] = μ and variance var(X) = σ²
Chebyshev's inequality will give a bound on P(|X − μ| > ε)
Chebyshev's inequality (cont.)
For some given ε > 0, consider the function
g_2(z) = 1 if z < μ − ε, 1 if z > μ + ε, and 0 if μ − ε ≤ z ≤ μ + ε
For this function, E[g_2(X)] = P(|X − μ| > ε)
For given ε, consider the function g_1(z) = (1/ε²)(z − μ)²
Since g_1(z) ≥ g_2(z) for all z,
P(|X − μ| > ε) ≤ σ²/ε²
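A numerical check in the same spirit (my own sketch; the normal distribution with μ = 1, σ = 2 is an arbitrary choice used only to have a known mean and variance):

```python
import random

def chebyshev_check(eps, n=100000, seed=3):
    """Estimate P(|X - mu| > eps) and the Chebyshev bound sigma^2/eps^2
    for X ~ Normal(mu=1, sigma=2)."""
    mu, sigma = 1.0, 2.0
    rng = random.Random(seed)
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    tail = sum(abs(v - mu) > eps for v in xs) / n
    return tail, (sigma ** 2) / (eps ** 2)
```

At eps = 2σ the true tail is about 0.046 while the bound is 0.25: valid for every distribution with this mean and variance, but not tight for any particular one.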
The Weak Law of Large Numbers
Theorem:
Let X_1, ..., X_n be independent, identically distributed RVs
Each X_k has mean μ and variance σ²
Define Z_n as
Z_n = (1/n)(X_1 + ··· + X_n)
For any ε > 0,
lim_{n→∞} P(|Z_n − μ| ≤ ε) = 1
Law of Large Numbers (cont.)
Proof (part 1): The expected value of Z_n is
E[Z_n] = (1/n) E[X_1 + ··· + X_n] = μ
The variance of Z_n is
E[(Z_n − μ)²] = (1/n²) E[((X_1 − μ) + ··· + (X_n − μ))²]
             = (1/n²) Σ_{i=1}^n Σ_{j=1}^n E[(X_i − μ)(X_j − μ)]
             = σ²/n
since, by independence, E[(X_i − μ)(X_j − μ)] = 0 for i ≠ j, leaving n terms each equal to σ²
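The σ²/n conclusion is easy to verify empirically (my own sketch; Uniform(0,1) draws, for which σ² = 1/12, are an arbitrary choice, as are the helper names):

```python
import random

def var_of_mean(n, runs=2000, seed=4):
    """Empirical variance of Z_n across independent runs, for IID Uniform(0,1) draws.
    The computation above predicts var(Z_n) = sigma^2/n = (1/12)/n."""
    rng = random.Random(seed)
    means = [sum(rng.random() for _ in range(n)) / n for _ in range(runs)]
    m = sum(means) / runs
    return sum((z - m) ** 2 for z in means) / runs
```

Doubling n roughly halves the measured variance, matching the 1/n scaling that drives the whole proof.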
Law of Large Numbers (cont.)
Proof (part 2): By Chebyshev's inequality,
P(|Z_n − μ| > ε) ≤ E[(Z_n − μ)²]/ε² = σ²/(nε²)
Therefore,
lim_{n→∞} P(|Z_n − μ| ≤ ε) ≥ lim_{n→∞} (1 − σ²/(nε²)) = 1  ∎
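Putting the pieces together (my own sketch; Uniform(0,1), where μ = 0.5 and σ² = 1/12, is an arbitrary choice), the empirical probability can be compared against the 1 − σ²/(nε²) lower bound from the proof:

```python
import random

def wlln_demo(n, eps=0.1, runs=1000, seed=5):
    """Estimate P(|Z_n - mu| <= eps) for IID Uniform(0,1) draws and compare
    with the Chebyshev lower bound 1 - sigma^2/(n*eps^2) from the proof."""
    mu, var = 0.5, 1.0 / 12.0
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        z = sum(rng.random() for _ in range(n)) / n
        hits += abs(z - mu) <= eps
    return hits / runs, max(0.0, 1.0 - var / (n * eps * eps))
```

For moderate n the empirical probability already sits well above the bound; both tend to 1 as n grows, which is the weak law in action.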