
SPECIAL PARAMETRIC FAMILIES OF UNIVARIATE DISTRIBUTIONS

A. Parametric Families of Discrete Densities

1. Discrete Uniform Distribution


A random variable X is defined to have a Discrete Uniform distribution, denoted by
X~DU(N), if its pmf is as follows:

$$p_X(x) = \frac{1}{N}\, I_{\{1,2,\dots,N\}}(x)$$

where the parameter N ranges over the positive integers.

Theorem. If X has a Discrete Uniform distribution, then

$$E(X) = \frac{N+1}{2} \qquad Var(X) = \frac{N^2-1}{12} \qquad m_X(t) = \frac{1}{N}\sum_{j=1}^{N} e^{jt}$$
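As a sanity check, the mean and variance formulas can be recovered by summing directly over the pmf; a minimal Python sketch (N = 10 is an arbitrary illustrative choice):

```python
# X ~ DU(N): p_X(x) = 1/N on {1, ..., N}.
# Recover E(X) and Var(X) by brute-force summation over the support.
N = 10
support = range(1, N + 1)
mean = sum(x / N for x in support)              # E(X)
var = sum(x**2 / N for x in support) - mean**2  # E(X^2) - E(X)^2

assert abs(mean - (N + 1) / 2) < 1e-9    # (N+1)/2 = 5.5
assert abs(var - (N**2 - 1) / 12) < 1e-9 # (N^2-1)/12 = 8.25
```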

2. Bernoulli Distribution
A random variable X is defined to have a Bernoulli distribution, denoted X~Be(p), if
the pmf of X is given by:

$$p_X(x) = p^x (1-p)^{1-x}\, I_{\{0,1\}}(x)$$

where the parameter p satisfies 0 ≤ p ≤ 1. The quantity 1-p is often denoted by q.

Theorem. If X has a Bernoulli distribution, then

$$E(X) = p \qquad Var(X) = pq \qquad m_X(t) = pe^t + q$$
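The two mass points make the moments easy to verify by hand or in code; a minimal sketch (p = 0.3 is an arbitrary illustrative choice):

```python
# X ~ Be(p): mass q = 1-p at 0 and mass p at 1.
p = 0.3
q = 1 - p
mean = 0 * q + 1 * p
var = (0 - mean) ** 2 * q + (1 - mean) ** 2 * p  # p^2*q + q^2*p = pq(p+q) = pq

assert abs(mean - p) < 1e-12
assert abs(var - p * q) < 1e-12
```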

3. Binomial Distribution
A random variable X is defined to have a Binomial distribution, denoted by X~Bi(n,p), if the pmf
of X is given by:

$$p_X(x) = \binom{n}{x} p^x (1-p)^{n-x}\, I_{\{0,1,2,\dots,n\}}(x)$$

where the two parameters n and p are such that 0 ≤ p ≤ 1 and n ∈ Z⁺. The quantity 1 − p is often
denoted by q.

Theorem. If X has a Binomial distribution, then

$$E(X) = np \qquad Var(X) = npq \qquad m_X(t) = (pe^t + q)^n$$
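The moment formulas can be checked by brute force against the pmf; a minimal sketch (n = 8, p = 0.4 are arbitrary illustrative values):

```python
from math import comb

# X ~ Bi(n, p): tabulate the pmf and sum for the first two moments.
n, p = 8, 0.4
q = 1 - p
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

mean = sum(x * f for x, f in enumerate(pmf))
var = sum(x**2 * f for x, f in enumerate(pmf)) - mean**2

assert abs(sum(pmf) - 1) < 1e-12   # pmf sums to 1
assert abs(mean - n * p) < 1e-9    # np = 3.2
assert abs(var - n * p * q) < 1e-9 # npq = 1.92
```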

4. Hypergeometric Distribution
A random variable X is defined to have a Hypergeometric distribution, denoted by
X~Hyp(n,M,K), if the pmf of X is given by:

$$p_X(x) = \frac{\binom{K}{x}\binom{M-K}{n-x}}{\binom{M}{n}}\, I_{\{0,1,2,\dots,\min(n,K)\}}(x)$$

where M is a positive integer, K is a nonnegative integer that is at most M, and n is a positive
integer that is at most M. Note, however, that when n > M − K, the smallest mass point is
n − (M − K) rather than 0.

Theorem. If X follows a hypergeometric distribution, then

$$E(X) = \frac{nK}{M} \qquad Var(X) = \frac{nK(M-K)(M-n)}{M^2(M-1)}$$
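Both formulas, and the truncated support, can be checked numerically; a minimal sketch (M = 20, K = 7, n = 5 are arbitrary illustrative values):

```python
from math import comb

# X ~ Hyp(n, M, K): support runs from max(0, n-(M-K)) to min(n, K).
M, K, n = 20, 7, 5
lo, hi = max(0, n - (M - K)), min(n, K)
pmf = {x: comb(K, x) * comb(M - K, n - x) / comb(M, n) for x in range(lo, hi + 1)}

mean = sum(x * f for x, f in pmf.items())
var = sum(x**2 * f for x, f in pmf.items()) - mean**2

assert abs(sum(pmf.values()) - 1) < 1e-12
assert abs(mean - n * K / M) < 1e-9  # nK/M = 1.75
assert abs(var - n * K * (M - K) * (M - n) / (M**2 * (M - 1))) < 1e-9
```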

5. Poisson Distribution
A random variable X is defined to have a Poisson distribution, denoted by X~Po(λ), if the pmf
of X is given by:

$$p_X(x) = \frac{e^{-\lambda}\lambda^x}{x!}\, I_{\{0,1,2,\dots\}}(x)$$

where the parameter λ satisfies λ>0.

Theorem. Let X~Po(λ), then

𝐸(𝑋) = λ 𝑉𝑎𝑟(𝑋) = λ 𝑚𝑋 (𝑡) = exp{λ(𝑒 𝑡 − 1)}
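Since the support is infinite, a numeric check must truncate the sum where the tail mass is negligible; a minimal sketch (λ = 3.0 is an arbitrary illustrative value):

```python
from math import exp, factorial

# X ~ Po(lam): truncating at 60 leaves a negligible tail for lam = 3.
lam = 3.0
pmf = [exp(-lam) * lam**x / factorial(x) for x in range(60)]

mean = sum(x * f for x, f in enumerate(pmf))
var = sum(x**2 * f for x, f in enumerate(pmf)) - mean**2

assert abs(mean - lam) < 1e-6  # E(X) = lam
assert abs(var - lam) < 1e-6   # Var(X) = lam
```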

6. Geometric Distribution
A random variable X is defined to have a geometric distribution, denoted by
X~Geo(p), if the pmf of X is given by:

$$p_X(x) = p(1-p)^x\, I_{\{0,1,2,\dots\}}(x)$$

where the parameter p satisfies 0 ≤ p ≤ 1. We define q = 1 − p.

Theorem. Let X~Geo(p), then

$$E(X) = \frac{q}{p} \qquad Var(X) = \frac{q}{p^2} \qquad m_X(t) = \frac{p}{1-qe^t} \text{ for } t < -\ln q$$
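Under this parameterization X counts failures before the first success, so E(X) = q/p and Var(X) = q/p²; a minimal numeric check (p = 0.3 is an arbitrary illustrative value):

```python
# X ~ Geo(p): truncate at 400, where the tail mass q^400 is negligible.
p = 0.3
q = 1 - p
pmf = [p * q**x for x in range(400)]

mean = sum(x * f for x, f in enumerate(pmf))
var = sum(x**2 * f for x, f in enumerate(pmf)) - mean**2

assert abs(mean - q / p) < 1e-6    # q/p = 7/3
assert abs(var - q / p**2) < 1e-6  # q/p^2
```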

7. Negative Binomial Distribution


A random variable X is defined to have a negative binomial distribution, denoted by
X~NB(r,p), if the pmf of X is given by:

$$p_X(x) = \binom{r+x-1}{x} p^r (1-p)^x\, I_{\{0,1,2,\dots\}}(x)$$

where the parameters r and p satisfy r = 1, 2, 3, … and 0 ≤ p ≤ 1. Define q = 1-p

Theorem. Let X~NB(r,p), then

$$E(X) = \frac{rq}{p} \qquad Var(X) = \frac{rq}{p^2} \qquad m_X(t) = \left(\frac{p}{1-qe^t}\right)^r$$
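Here X counts failures before the r-th success, so the moments are r times those of the geometric case: E(X) = rq/p and Var(X) = rq/p². A minimal numeric check (r = 4, p = 0.35 are arbitrary illustrative values):

```python
from math import comb

# X ~ NB(r, p): truncate at 800, far past any non-negligible mass.
r, p = 4, 0.35
q = 1 - p
pmf = [comb(r + x - 1, x) * p**r * q**x for x in range(800)]

mean = sum(x * f for x, f in enumerate(pmf))
var = sum(x**2 * f for x, f in enumerate(pmf)) - mean**2

assert abs(mean - r * q / p) < 1e-6    # rq/p
assert abs(var - r * q / p**2) < 1e-6  # rq/p^2
```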


B. Parametric Families of Continuous Densities

1. Uniform or Rectangular Distribution


A random variable X is defined to have a Uniform distribution, denoted by X~U(θ1,θ2),
if its pdf is given by:

$$f_X(x) = \frac{1}{\theta_2 - \theta_1}\, I_{[\theta_1,\theta_2]}(x)$$

where the parameters θ1 and θ2 satisfy −∞ < θ1 < θ2 < ∞.

Theorem. Let X~U(θ1,θ2), then

$$E(X) = \frac{\theta_1+\theta_2}{2} \qquad Var(X) = \frac{(\theta_2-\theta_1)^2}{12} \qquad m_X(t) = \frac{e^{\theta_2 t}-e^{\theta_1 t}}{(\theta_2-\theta_1)t}$$
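The first continuous family can be checked by numeric integration of the density; a minimal midpoint-rule sketch (θ1 = 2.0, θ2 = 5.0 are arbitrary illustrative values):

```python
# X ~ U(t1, t2): approximate E(X) and E(X^2) with a midpoint-rule
# integral of x * f(x) and x^2 * f(x), where f(x) = 1/(t2 - t1).
t1, t2 = 2.0, 5.0
n = 100_000
h = (t2 - t1) / n
xs = [t1 + (i + 0.5) * h for i in range(n)]

mean = sum(xs) * h / (t2 - t1)
var = sum(x * x for x in xs) * h / (t2 - t1) - mean**2

assert abs(mean - (t1 + t2) / 2) < 1e-6       # (t1+t2)/2 = 3.5
assert abs(var - (t2 - t1) ** 2 / 12) < 1e-6  # (t2-t1)^2/12 = 0.75
```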

2. A.) Normal Distribution


A random variable is defined to follow a Normal distribution, denoted by X~N(μ,σ2),
if its probability density function is as follows:

$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\} I_{(-\infty,\infty)}(x)$$

where the parameters μ and σ2 satisfy -∞< μ <∞ and σ2>0.

Theorem. If X~N(μ,σ2), then

$$E(X) = \mu \qquad Var(X) = \sigma^2 \qquad m_X(t) = \exp\left\{\mu t + \frac{1}{2}\sigma^2 t^2\right\}$$

B.) Standard Normal Distribution


If the Normal random variable has mean 0 and variance 1, it is called a Standard
Normal random variable and is denoted by Z. Thus, its pdf and cdf are as follows:
$$\phi(z) = \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{1}{2}z^2\right\} I_{(-\infty,\infty)}(z)$$
$$\Phi(z) = \int_{-\infty}^{z} \phi(u)\,du$$

Theorem. If X~N(μ,σ2), then $Z = \frac{X-\mu}{\sigma}$ will follow a standard normal distribution.
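Φ has no closed form, but it can be evaluated with the error function via the identity Φ(z) = (1 + erf(z/√2))/2 and checked against a direct numeric integral of φ; a minimal sketch:

```python
from math import erf, exp, pi, sqrt

def phi(z):
    """Standard normal pdf."""
    return exp(-z * z / 2) / sqrt(2 * pi)

def Phi(z):
    """Standard normal cdf, via Phi(z) = (1 + erf(z/sqrt(2)))/2."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Midpoint-rule integral of phi from -8 to z (mass below -8 is negligible).
z = 1.0
n = 20_000
h = (z + 8) / n
approx = sum(phi(-8 + (i + 0.5) * h) for i in range(n)) * h

assert abs(Phi(0.0) - 0.5) < 1e-12
assert abs(approx - Phi(z)) < 1e-6
```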

3. Exponential Distribution
A random variable X is defined to follow an Exponential distribution, denoted by
X~Exp(λ), if its pdf is as follows:

$$f_X(x) = \lambda e^{-\lambda x}\, I_{[0,\infty)}(x)$$

where λ>0.
Theorem. If X~Exp(λ), then
$$E(X) = \frac{1}{\lambda} \qquad Var(X) = \frac{1}{\lambda^2} \qquad m_X(t) = \frac{\lambda}{\lambda-t} \text{ for } t<\lambda \qquad F_X(x) = (1-e^{-\lambda x})\, I_{[0,\infty)}(x)$$
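The closed-form cdf can be confirmed by integrating the pdf numerically; a minimal sketch (λ = 2.0 and x = 1.5 are arbitrary illustrative values):

```python
from math import exp

# X ~ Exp(lam): compare the closed-form cdf with a numeric integral.
lam = 2.0

def cdf(x):
    """Closed-form cdf 1 - exp(-lam*x) on [0, inf)."""
    return 1 - exp(-lam * x) if x >= 0 else 0.0

# Midpoint-rule integral of the pdf from 0 to x.
x = 1.5
n = 100_000
h = x / n
approx = sum(lam * exp(-lam * (i + 0.5) * h) for i in range(n)) * h

assert abs(approx - cdf(x)) < 1e-8
```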

4. Gamma Distribution
A random variable X is defined to follow a Gamma distribution, denoted by X~Ga(r,λ), if
its pdf is as follows:
$$f_X(x) = \frac{\lambda^r}{\Gamma(r)}\, x^{r-1} e^{-\lambda x}\, I_{[0,\infty)}(x)$$

where r>0 and λ>0, and Γ(r) is the gamma function defined by $\Gamma(t) = \int_0^{\infty} x^{t-1} e^{-x}\,dx$ for t>0.

Theorem. If X~Ga(r,λ), then


$$E(X) = \frac{r}{\lambda} \qquad Var(X) = \frac{r}{\lambda^2} \qquad m_X(t) = \left(\frac{\lambda}{\lambda-t}\right)^r \text{ for } t<\lambda.$$

Theorem. If r is an integer,
$$F_X(x) = \left(1 - \sum_{j=0}^{r-1} \frac{e^{-\lambda x}(\lambda x)^j}{j!}\right) I_{[0,\infty)}(x)$$
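This integer-r cdf can be verified against a numeric integral of the pdf; a minimal sketch (r = 3, λ = 2.0, x = 1.2 are arbitrary illustrative values):

```python
from math import exp, factorial, gamma

# X ~ Ga(r, lam) with integer r.
r, lam = 3, 2.0

def pdf(x):
    """Gamma density lam^r * x^(r-1) * exp(-lam*x) / Gamma(r)."""
    return lam**r * x ** (r - 1) * exp(-lam * x) / gamma(r)

def cdf(x):
    """Closed-form cdf for integer r: 1 - Poisson tail sum."""
    return 1 - sum(exp(-lam * x) * (lam * x) ** j / factorial(j) for j in range(r))

# Midpoint-rule integral of the pdf from 0 to x0.
x0 = 1.2
n = 100_000
h = x0 / n
approx = sum(pdf((i + 0.5) * h) for i in range(n)) * h

assert abs(approx - cdf(x0)) < 1e-8
```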

5. Beta Distribution
A random variable X is defined to follow a Beta distribution, denoted by X~Beta(a,b), if its pdf
is as follows:

$$f_X(x) = \frac{1}{B(a,b)}\, x^{a-1}(1-x)^{b-1}\, I_{(0,1)}(x)$$

where a>0 and b>0, and B(a,b) is the beta function defined as follows:
$$B(a,b) = \int_0^1 x^{a-1}(1-x)^{b-1}\,dx = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)} = B(b,a)$$

Theorem. If X~Beta(a,b), then


$$E(X) = \frac{a}{a+b} \qquad Var(X) = \frac{ab}{(a+b+1)(a+b)^2} \qquad E(X^k) = \frac{\Gamma(k+a)\Gamma(a+b)}{\Gamma(a)\Gamma(k+a+b)}$$

Theorem. If a and b are integers, then


$$F_X(x) = \left(\sum_{j=a}^{a+b-1} \binom{a+b-1}{j}\, x^j (1-x)^{a+b-1-j}\right) I_{(0,1)}(x) + I_{[1,\infty)}(x)$$
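For integer a and b the cdf is a binomial tail probability, which can be checked against a numeric integral of the pdf; a minimal sketch (a = 3, b = 4, x = 0.4 are arbitrary illustrative values):

```python
from math import comb, gamma

# X ~ Beta(a, b) with integer a, b.
a, b = 3, 4

def pdf(x):
    # 1/B(a,b) written via the gamma-function identity above.
    return gamma(a + b) / (gamma(a) * gamma(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

def cdf(x):
    """Binomial-sum cdf for integer a, b; valid for 0 < x < 1."""
    m = a + b - 1
    return sum(comb(m, j) * x**j * (1 - x) ** (m - j) for j in range(a, m + 1))

# Midpoint-rule integral of the pdf from 0 to x0.
x0 = 0.4
n = 100_000
h = x0 / n
approx = sum(pdf((i + 0.5) * h) for i in range(n)) * h

assert abs(approx - cdf(x0)) < 1e-8
```

Equivalently, cdf(x) here is P(Bin(a+b−1, x) ≥ a), i.e. a binomial tail probability at success probability x.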
