
Some useful results for mean and variance

• Let X be a discrete random variable. Then

  E(X) = µ = x1 P(X = x1) + x2 P(X = x2) + · · ·
  var(X) = σ² = E[(X − µ)²] = (x1 − µ)² P(X = x1) + (x2 − µ)² P(X = x2) + · · ·

• Let X1, . . . , Xn be random variables and a0, a1, . . . , an be constants. Then

  E(a0 + a1 X1 + · · · + an Xn) = a0 + a1 E(X1) + · · · + an E(Xn)
  and if X1, . . . , Xn are independent, we have
  var(a0 + a1 X1 + · · · + an Xn) = a1² var(X1) + · · · + an² var(Xn)
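
These two rules can be checked by simulation. Below is a minimal sketch in Python with NumPy (not part of the original notes); the values, probabilities, and the constants a0, a1, a2 are arbitrary choices made only for illustration.

    # Check E(a0 + a1 X1 + a2 X2) = a0 + a1 E(X1) + a2 E(X2) and, for
    # independent X1 and X2, var(a0 + a1 X1 + a2 X2) = a1^2 var(X1) + a2^2 var(X2).
    import numpy as np

    rng = np.random.default_rng(0)
    n_draws = 1_000_000

    # Two independent discrete random variables with made-up values and probabilities.
    x1 = rng.choice([1, 2, 3], size=n_draws, p=[0.2, 0.5, 0.3])
    x2 = rng.choice([0, 10], size=n_draws, p=[0.6, 0.4])

    a0, a1, a2 = 5.0, 2.0, -1.0
    y = a0 + a1 * x1 + a2 * x2

    print(y.mean(), a0 + a1 * x1.mean() + a2 * x2.mean())   # both close to E(Y)
    print(y.var(), a1**2 * x1.var() + a2**2 * x2.var())     # both close to var(Y)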

Sampling distribution of mean

Set up   X1 is a random variable with mean µ and variance σ²
         X2 is a random variable with mean µ and variance σ²
         · · ·
         Xn is a random variable with mean µ and variance σ²
         X1, . . . , Xn are independent
         Let X̄ = (X1 + · · · + Xn)/n
Result   E(X̄) = µ and var(X̄) = σ²/n
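
As an illustration of this result (a simulation sketch, not part of the notes; the Uniform(0, 1) population and the sample size are arbitrary choices), one can draw many samples of size n and compare the mean and variance of the simulated X̄ values with µ and σ²/n.

    # For a Uniform(0, 1) population, mu = 0.5 and sigma^2 = 1/12.
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_reps = 25, 200_000

    samples = rng.uniform(0.0, 1.0, size=(n_reps, n))
    xbar = samples.mean(axis=1)              # one X-bar per simulated sample

    print(xbar.mean(), 0.5)                  # E(X-bar) should be close to mu
    print(xbar.var(), (1 / 12) / n)          # var(X-bar) should be close to sigma^2 / n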

Applying to Binomial

Set up   X ∼ Binomial(n, p)
         where X is the total number of successes in n independent experiments
Result   E(X) = np, var(X) = np(1 − p)
         Let P̂ = X/n
         where P̂ is the proportion of successes
Result   E(P̂) = p, var(P̂) = p(1 − p)/n
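
A quick simulation sketch of these Binomial results (the values n = 40 and p = 0.3 are arbitrary choices for illustration):

    # Check E(X) = np, var(X) = np(1 - p), E(P-hat) = p, var(P-hat) = p(1 - p)/n.
    import numpy as np

    rng = np.random.default_rng(2)
    n, p, n_reps = 40, 0.3, 500_000

    x = rng.binomial(n, p, size=n_reps)
    p_hat = x / n

    print(x.mean(), n * p)                   # E(X) vs np
    print(x.var(), n * p * (1 - p))          # var(X) vs np(1 - p)
    print(p_hat.mean(), p)                   # E(P-hat) vs p
    print(p_hat.var(), p * (1 - p) / n)      # var(P-hat) vs p(1 - p)/n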

Normal Theory

Set up   X1 is a random variable from a normal distribution with mean µ and variance σ²
         X2 is a random variable from a normal distribution with mean µ and variance σ²
         · · ·
         Xn is a random variable from a normal distribution with mean µ and variance σ²
         X1, . . . , Xn are independent
         Let X̄ = (X1 + · · · + Xn)/n

Then from the sampling distribution of the mean, we have

         E(X̄) = µ and var(X̄) = σ²/n

Result   X̄ is distributed as normal with mean E(X̄) = µ and var(X̄) = σ²/n
         or equivalently (X̄ − E(X̄))/√var(X̄) = (X̄ − µ)/√(σ²/n) ∼ N(0, 1)
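
A simulation sketch of this result (the values of µ, σ, and n below are arbitrary choices): for normal data the standardized sample mean follows N(0, 1) exactly, for any n.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, n, n_reps = 10.0, 2.0, 16, 200_000

    samples = rng.normal(mu, sigma, size=(n_reps, n))
    xbar = samples.mean(axis=1)
    z = (xbar - mu) / (sigma / np.sqrt(n))   # (X-bar - mu) / sqrt(sigma^2 / n)

    print(z.mean(), z.std())                 # close to 0 and 1
    print(np.mean(z < 1.96))                 # close to P(Z < 1.96) = 0.975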

Central Limit Theorem (CLT)

Set up   X1 is a random variable with mean µ and variance σ²
         X2 is a random variable with mean µ and variance σ²
         · · ·
         Xn is a random variable with mean µ and variance σ²
         X1, . . . , Xn are independent
         Let X̄ = (X1 + · · · + Xn)/n

Then from the sampling distribution of the mean, we have

         E(X̄) = µ and var(X̄) = σ²/n

Assume n is large
Result   X̄ is distributed approximately as normal with mean E(X̄) = µ and var(X̄) = σ²/n
         or equivalently (X̄ − E(X̄))/√var(X̄) = (X̄ − µ)/√(σ²/n) ≈ N(0, 1)
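
A simulation sketch of the CLT (not part of the notes): here the population is Exponential with mean 1 and variance 1, a deliberately skewed choice, and n = 100 plays the role of a large sample size.

    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma, n, n_reps = 1.0, 1.0, 100, 200_000

    samples = rng.exponential(scale=1.0, size=(n_reps, n))
    xbar = samples.mean(axis=1)
    z = (xbar - mu) / (sigma / np.sqrt(n))

    # Simulated probabilities are close to the standard normal values.
    print(np.mean(z < -1.645), np.mean(z < 1.645))   # roughly 0.05 and 0.95
    print(np.mean(np.abs(z) < 1.96))                 # roughly 0.95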

General version of Central Limit Theorem


Let Y be a random variable that depends on n (for example, a sum or count built from n independent observations, such as a Binomial(n, p) count)
Assume n is large
Result   (Y − E(Y))/√var(Y) ≈ N(0, 1)
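
A sketch of this general statement, taking Y to be the sum X1 + · · · + Xn of independent Uniform(0, 1) variables (a hypothetical choice of Y made only for illustration), so that E(Y) = n/2 and var(Y) = n/12:

    import numpy as np

    rng = np.random.default_rng(5)
    n, n_reps = 50, 200_000

    y = rng.uniform(0.0, 1.0, size=(n_reps, n)).sum(axis=1)
    z = (y - n / 2) / np.sqrt(n / 12)        # (Y - E(Y)) / sqrt(var(Y))

    print(z.mean(), z.std())                 # close to 0 and 1
    print(np.mean(z < 1.28))                 # close to P(Z < 1.28), about 0.90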

Special application to Binomial

Set up   X ∼ Binomial(n, p)
         where X is the total number of successes in n independent experiments
         we have E(X) = np, var(X) = np(1 − p)
Assume n is large; the text also suggests np > 5 and n(1 − p) > 5
Result   By the CLT, we have (X − E(X))/√var(X) = (X − np)/√(np(1 − p)) ≈ N(0, 1)
         The text calls this the normal approximation to the Binomial
         Let P̂ = X/n
         where P̂ is the proportion of successes
         we have E(P̂) = p, var(P̂) = p(1 − p)/n
Assume n is large
Result   By the CLT, we have (P̂ − E(P̂))/√var(P̂) = (P̂ − p)/√(p(1 − p)/n) ≈ N(0, 1)
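
A sketch comparing the normal approximation with the exact Binomial probability (assuming SciPy is available; n = 100, p = 0.3, and the cutoff 35 are arbitrary choices satisfying np > 5 and n(1 − p) > 5):

    import numpy as np
    from scipy.stats import binom, norm

    n, p = 100, 0.3
    mean, sd = n * p, np.sqrt(n * p * (1 - p))

    k = 35
    print(binom.cdf(k, n, p))                # exact P(X <= 35)
    print(norm.cdf((k - mean) / sd))         # approximation via (X - np)/sqrt(np(1 - p))

    # Same approximation on the proportion scale: P(P-hat <= 0.35).
    print(norm.cdf((k / n - p) / np.sqrt(p * (1 - p) / n)))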
