Chapter 5 sections
Chapter 5 5.1 Introduction
Families of distributions
How:
- Parameter and parameter space
- pf/pdf and cdf – new notation: f(x|parameters)
- Mean, variance and the m.g.f. ψ(t)
- Features, connections to other distributions, approximation
- Reasoning behind a distribution
Why:
- Natural justification for certain experiments
- A model for the uncertainty in an experiment
"All models are wrong, but some are useful." – George Box
Chapter 5 5.2 Bernoulli and Binomial distributions
Bernoulli distributions
Def: Bernoulli distributions – Bernoulli(p)
A r.v. X has the Bernoulli distribution with parameter p if P(X = 1) = p
and P(X = 0) = 1 − p. The pf of X is
f(x|p) = p^x (1 − p)^(1−x) for x = 0, 1, and f(x|p) = 0 otherwise
Binomial distributions
Let X ∼ Binomial(n, p)
E(X) = np, Var(X) = np(1 − p)
To find the m.g.f. of X write X = X1 + · · · + Xn where the Xi 's are
i.i.d. Bernoulli(p). Then ψi(t) = pe^t + 1 − p and we get
ψ(t) = ∏_{i=1}^{n} ψi(t) = (pe^t + 1 − p)^n
cdf: F(x|n, p) = Σ_{t=0}^{x} C(n, t) p^t (1 − p)^(n−t) = yikes!
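As a quick numerical sanity check (a sketch, not part of the slides), the Binomial pmf, mean, variance and cdf can be computed directly:

```python
from math import comb

def binom_pmf(t, n, p):
    # C(n, t) p^t (1 - p)^(n - t)
    return comb(n, t) * p**t * (1 - p)**(n - t)

def binom_cdf(x, n, p):
    # Just the partial sum of the pmf -- there is no closed form.
    return sum(binom_pmf(t, n, p) for t in range(x + 1))

n, p = 10, 0.3
mean = sum(t * binom_pmf(t, n, p) for t in range(n + 1))
var = sum(t**2 * binom_pmf(t, n, p) for t in range(n + 1)) - mean**2
# mean should equal n*p and var should equal n*p*(1 - p)
```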
Theorem 5.2.2
If Xi ∼ Binomial(ni , p), i = 1, . . . , k and the Xi 's are independent, then
X = X1 + · · · + Xk ∼ Binomial(Σ_{i=1}^{k} ni , p)
Strategy (611):
(Split the 1000 people into 10 groups of 100 and test one pooled sample per group.)
If all of these pooled tests are negative, then none of the 1000 people
has the disease. Total number of tests needed: 10
If one of the pooled tests is positive, then we test each of the 100
people in that group. Total number of tests needed: 110
...
If all of the 10 tests are positive we end up having to do 1010 tests
Is this strategy better?
What is the expected number of tests needed?
When does this strategy lose?
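The expected number of tests can be sketched in Python. The per-person disease probability p and independence across people are assumptions here, not stated on this slide:

```python
def expected_tests(p, groups=10, group_size=100):
    # Stage 1: one pooled test per group.
    # Stage 2: a group needs group_size individual tests iff its pooled
    # test is positive, i.e. with probability 1 - (1 - p)^group_size
    # (assuming each person has the disease independently with prob. p).
    p_group_positive = 1 - (1 - p)**group_size
    return groups + groups * group_size * p_group_positive

# For very small p this stays near 10 tests; as p grows it approaches
# 1010, which is worse than the 1000 tests of testing everyone directly.
```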
Question: can we go further – a 611-A strategy? Any further improvement?
Chapter 5 5.3 Hypergeometric distributions
Hypergeometric distributions
Def: Hypergeometric distributions
A random variable X has the Hypergeometric distribution with
parameters N, M and n if it has the pf
f(x|N, M, n) = C(N, x) C(M, n − x) / C(N + M, n)
Reasoning:
Say we have a finite population with N items of type I and M items
of type II.
Let X be the number of items of type I when we take n samples
without replacement from that population
Then X has the hypergeometric distribution
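A minimal sketch of this pf in Python (the parameter values below are arbitrary, chosen just for checking):

```python
from math import comb

def hypergeom_pmf(x, N, M, n):
    # Choose x of the N type-I items and n - x of the M type-II items.
    return comb(N, x) * comb(M, n - x) / comb(N + M, n)

N, M, n = 5, 7, 4
total = sum(hypergeom_pmf(x, N, M, n) for x in range(n + 1))
mean = sum(x * hypergeom_pmf(x, N, M, n) for x in range(n + 1))
# total should be 1; the mean works out to n * N / (N + M)
```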
When N + M is large compared to n, the Hypergeometric(N, M, n)
distribution is approximately Binomial with parameters n and p = N/(N + M).
Chapter 5 5.4 Poisson distributions
Poisson distributions
Show that
- f(x|λ) is a pf
- E(X) = λ
- Var(X) = λ
- ψ(t) = e^{λ(e^t − 1)}
The cdf: F(x|λ) = Σ_{k=0}^{x} e^{−λ} λ^k / k! = yikes.
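These facts can be checked numerically (a sketch; the infinite sums are truncated where the tail is negligible):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # f(x | lam) = e^(-lam) lam^x / x!
    return exp(-lam) * lam**x / factorial(x)

lam = 2.5
xs = range(100)  # the tail beyond 100 is negligible for lam = 2.5
total = sum(poisson_pmf(x, lam) for x in xs)
mean = sum(x * poisson_pmf(x, lam) for x in xs)
var = sum(x**2 * poisson_pmf(x, lam) for x in xs) - mean**2
# total ~ 1, mean ~ lam, var ~ lam
```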
Why Poisson?
Poisson Postulates
For t ≥ 0, let Xt be a random variable with possible values in N0
(Think: Xt = number of arrivals from time 0 to time t)
(i) Start with no arrivals: X0 = 0
(ii) Arrivals in disjoint time periods are ind.: Xs and Xt − Xs ind. if s < t
(iii) Number of arrivals depends only on period length:
Xs and Xt+s − Xt are identically distributed
(iv) Arrival probability is proportional to period length, if length is small:
lim_{t→0} P(Xt = 1)/t = λ
(v) No simultaneous arrivals: lim_{t→0} P(Xt > 1)/t = 0
If (i) - (v) hold then for any integer n ≥ 0
P(Xt = n) = e^{−λt} (λt)^n / n!, that is, Xt ∼ Poisson(λt)
Can be defined in terms of spatial areas too.
If n → ∞ and pn → 0 so that n pn → λ, then
lim_{n→∞} fXn (x|n, pn ) = e^{−λ} λ^x / x! = fPoisson (x|λ)
for all x = 0, 1, 2, . . .
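A quick numerical look at this convergence (a sketch, taking pn = λ/n):

```python
from math import comb, exp, factorial

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    return exp(-lam) * lam**x / factorial(x)

lam, x = 3.0, 2
# The gap between Binomial(n, lam/n) and Poisson(lam) shrinks as n grows.
gaps = [abs(binom_pmf(x, n, lam / n) - poisson_pmf(x, lam))
        for n in (10, 100, 1000)]
```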
Chapter 5 5.5 Negative Binomial distributions
Geometric distributions
Chapter 5 5.7 Gamma distributions
Gamma distributions
The Gamma function: Γ(α) = ∫_0^∞ x^(α−1) e^(−x) dx
Γ(1) = 1 and Γ(0.5) = √π
Γ(α) = (α − 1)Γ(α − 1) if α > 1
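All three facts can be verified against Python's built-in gamma function (a quick numerical check, not a proof):

```python
from math import gamma, sqrt, pi

g1 = gamma(1.0)       # should be 1
g_half = gamma(0.5)   # should be sqrt(pi)
alpha = 3.7
# Recurrence: Gamma(alpha) = (alpha - 1) * Gamma(alpha - 1) for alpha > 1
lhs = gamma(alpha)
rhs = (alpha - 1) * gamma(alpha - 1)
```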
Chapter 5 5.8 Beta distributions
Beta distributions
Beta(1, 1) = Uniform(0, 1)
Used to model a r.v. that takes values between 0 and 1.
The Beta distributions are often used as prior distributions for
probability parameters, e.g. the p in the Binomial distribution.
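As a sketch of that prior use: with a Beta(a, b) prior on p and Binomial data, the posterior is again a Beta distribution. This is the standard conjugate-update fact, not derived on this slide, and the counts below are made up:

```python
def beta_binomial_update(a, b, successes, failures):
    # Beta(a, b) prior + Binomial data -> Beta(a + successes, b + failures)
    return a + successes, b + failures

# Start from the flat prior Beta(1, 1) = Uniform(0, 1) and observe
# 7 successes in 10 trials.
a_post, b_post = beta_binomial_update(1, 1, 7, 3)
post_mean = a_post / (a_post + b_post)  # posterior mean a / (a + b)
```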
Chapter 5 5.6 Normal distributions
Why Normal?
- Works well in practice: many physical experiments have distributions
that are approximately normal.
- Central Limit Theorem: the sum of many i.i.d. random variables is
approximately normally distributed.
- Mathematically convenient – especially the multivariate normal
distribution.
- The distribution of many functions of a normally distributed random
variable can be obtained explicitly.
- Marginal and conditional distributions of a multivariate normal are
also normal (multivariate or univariate).
Normal distributions
f(x|µ, σ²) = (1/(√(2π) σ)) exp(−(x − µ)²/(2σ²)), −∞ < x < ∞
Show:
ψ(t) = exp(µt + ½ σ² t²)
E(X) = µ
Var(X) = σ²
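A crude numerical check of the mean and variance by Riemann sums over a wide grid (a sketch; the tails beyond ±10 standard deviations are negligible):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma2):
    return exp(-(x - mu)**2 / (2 * sigma2)) / (sqrt(2 * pi) * sqrt(sigma2))

mu, sigma2 = 1.0, 4.0
h = 0.001
grid = [mu + (k - 20000) * h for k in range(40001)]  # mu +/- 20, i.e. +/- 10 sd
total = sum(normal_pdf(x, mu, sigma2) for x in grid) * h
mean = sum(x * normal_pdf(x, mu, sigma2) for x in grid) * h
var = sum((x - mu)**2 * normal_pdf(x, mu, sigma2) for x in grid) * h
# total ~ 1, mean ~ mu, var ~ sigma2
```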
Standard normal
F(x|µ, σ²) = Φ((x − µ)/σ) and F^(−1)(p) = µ + σ Φ^(−1)(p)
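These relations can be checked with the standard library's NormalDist (a sketch with arbitrary µ and σ):

```python
from statistics import NormalDist

mu, sigma = 2.0, 3.0
X = NormalDist(mu, sigma)
Z = NormalDist(0.0, 1.0)  # standard normal, so Z.cdf is Phi

x = 4.5
p = X.cdf(x)
p_via_phi = Z.cdf((x - mu) / sigma)   # F(x) = Phi((x - mu)/sigma)
x_back = mu + sigma * Z.inv_cdf(p)    # F^{-1}(p) = mu + sigma * Phi^{-1}(p)
```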
In particular:
The sample mean: X̄n = (1/n) Σ_{i=1}^{n} Xi
If X1 , . . . , Xn are a random sample from a N(µ, σ²), what is the
distribution of the sample mean?
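A small simulation hints at the answer (a sketch with made-up µ, σ and n; the empirical variance of the sample means should be close to σ²/n):

```python
import random

random.seed(0)
mu, sigma, n, reps = 5.0, 2.0, 25, 20000
# Draw many samples of size n and record each sample mean.
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
         for _ in range(reps)]
avg = sum(means) / reps
emp_var = sum((m - avg)**2 for m in means) / reps
# avg ~ mu and emp_var ~ sigma**2 / n = 0.16
```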
Lognormal distributions
Def: Lognormal distributions
If log(X) ∼ N(µ, σ²) then we say that X has the Lognormal distribution
with parameters µ and σ².
Example:
Let X and Y be independent random variables such that
log(X) ∼ N(1.6, 4.5) and log(Y) ∼ N(3, 6). What is the
distribution of the product XY?
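Since log(XY) = log(X) + log(Y) is a sum of independent normals, it is N(1.6 + 3, 4.5 + 6), so XY is Lognormal(4.6, 10.5). A quick simulation of log(XY) is consistent with this (a sketch):

```python
import random
from math import sqrt

random.seed(1)
reps = 100000
# log X ~ N(1.6, 4.5) and log Y ~ N(3, 6) independent, so
# log(XY) = log X + log Y ~ N(4.6, 10.5)
log_xy = [random.gauss(1.6, sqrt(4.5)) + random.gauss(3.0, sqrt(6.0))
          for _ in range(reps)]
avg = sum(log_xy) / reps
var = sum((v - avg)**2 for v in log_xy) / reps
# avg ~ 4.6 and var ~ 10.5
```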
Chapter 5 5.10 Bivariate normal distributions
Contours:
X1 = σ1 Z1 + µ1
X2 = σ2 (ρ Z1 + √(1 − ρ²) Z2 ) + µ2        (2)
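The construction in (2) can be simulated to confirm it produces the intended correlation (a sketch with arbitrary parameter values):

```python
import random
from math import sqrt

random.seed(2)
mu1, mu2, s1, s2, rho = 0.0, 1.0, 1.0, 2.0, 0.6
reps = 100000
xs, ys = [], []
for _ in range(reps):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(s1 * z1 + mu1)
    ys.append(s2 * (rho * z1 + sqrt(1 - rho**2) * z2) + mu2)
mx, my = sum(xs) / reps, sum(ys) / reps
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / reps
corr = cov / (s1 * s2)
# corr ~ rho
```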
E(X2 |X1 = x1 ) = µ2 + ρσ2 (x1 − µ1 )/σ1 and
Var(X2 |X1 = x1 ) = (1 − ρ²)σ2²
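These formulas also drop out of construction (2): fixing X1 = x1 fixes Z1 = (x1 − µ1)/σ1, and only the Z2 term stays random. A small worked check with arbitrary numbers:

```python
from math import sqrt

mu1, mu2, s1, s2, rho = 0.0, 1.0, 1.0, 2.0, 0.6
x1 = 1.5
z1 = (x1 - mu1) / s1                 # fixed once we condition on X1 = x1
cond_mean = mu2 + s2 * rho * z1      # = mu2 + rho * s2 * (x1 - mu1) / s1
cond_sd = s2 * sqrt(1 - rho**2)      # remaining randomness comes from Z2
cond_var = cond_sd**2                # = (1 - rho^2) * s2^2
```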
Example
where
µ = (µ1 , µ2 , . . . , µn )ᵀ, x = (x1 , x2 , . . . , xn )ᵀ and

    ⎡ σ1²   σ1,2  σ1,3  · · ·  σ1,n ⎤
    ⎢ σ2,1  σ2²   σ2,3  · · ·  σ2,n ⎥
Σ = ⎢ σ3,1  σ3,2  σ3²   · · ·  σ3,n ⎥
    ⎢  ..    ..    ..    ..     ..  ⎥
    ⎣ σn,1  σn,2  σn,3  · · ·  σn²  ⎦