
Exam 1/P Formula Sheets

WWW.PROBABILITYEXAM.COM

SETS

De Morgan's Laws
$(A \cup B)^c = A^c \cap B^c \qquad (A \cap B)^c = A^c \cup B^c$

Inclusion-Exclusion Principle
$|A \cup B| = |A| + |B| - |A \cap B|$
$|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |B \cap C| - |A \cap C| + |A \cap B \cap C|$

DERIVATIVES

Function → Derivative
$c$ → $0$
$x^r$ → $r x^{r-1}$
$c f(x)$ → $c f'(x)$
$f(x) + g(x)$ → $f'(x) + g'(x)$
$f(x) \cdot g(x)$ → $f'(x) \cdot g(x) + f(x) \cdot g'(x)$
$f(g(x))$ → $f'(g(x)) \cdot g'(x)$
$e^x$ → $e^x$
$\ln x$ → $\frac{1}{x}$
$a^x$ → $a^x \ln a$

INTEGRALS

Properties of Integrals
$\int_a^b c\,dx = c \cdot (b - a)$
$\int_a^b [f(x) + g(x)]\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx$
$\int_a^b c f(x)\,dx = c \int_a^b f(x)\,dx$
$\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$

Applications of the FTC
$\int_a^b x^r\,dx = \frac{x^{r+1}}{r+1}\Big|_a^b \quad (r \neq -1)$
$\int_a^b \frac{1}{x}\,dx = \ln|x|\Big|_a^b$
$\int_a^b e^x\,dx = e^x\Big|_a^b$
$\int_a^b c^x\,dx = \frac{c^x}{\ln c}\Big|_a^b$

INTEGRALS CONT.

Substitution
$\int_a^b f(g(x))\,g'(x)\,dx = \int_{g(a)}^{g(b)} f(u)\,du$

Integration by Parts
$\int_a^b u\,dv = uv\Big|_a^b - \int_a^b v\,du$

Other Useful Identities
$\int_a^b x e^{cx}\,dx = \left[\frac{x e^{cx}}{c} - \frac{e^{cx}}{c^2}\right]_a^b \quad (c \neq 0)$
$\int_0^\infty x^n e^{-cx}\,dx = \frac{n!}{c^{n+1}} \quad (n \in \mathbb{N},\ c > 0)$

COUNTING TECHNIQUES

Multiplication Rule
A compound experiment consisting of 2 sub-experiments with $a$ and $b$ possible outcomes respectively has $a \cdot b$ possible outcomes.

Permutations
An ordered arrangement of $k$ elements from an $n$-element set.
Count: $_nP_k = n(n-1)\cdots(n-(k-1)) = \frac{n!}{(n-k)!}$

Combinations
A $k$-element subset of an $n$-element set.
Count: $_nC_k = \binom{n}{k} = \frac{n!}{k!(n-k)!}$

Properties of Binomial Coefficients
$\sum_{k=0}^{n} \binom{n}{k} = 2^n \qquad \binom{n}{k} = \binom{n}{n-k}$
$n \binom{n-1}{k-1} = k \binom{n}{k} \qquad \binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$
$\binom{m+n}{r} = \sum_{k=0}^{r} \binom{m}{k} \binom{n}{r-k}$

Counting Rules
Number of ways to select $k$ elements from $n$ total elements:

|               | order matters        | order doesn't matter |
| replace       | $n^k$                | $\binom{n+k-1}{k}$   |
| don't replace | $\frac{n!}{(n-k)!}$  | $\binom{n}{k}$       |

PROBABILITY AXIOMS

Probability Function Definition
1. $P(S) = 1$
2. $P(A) \geq 0$ for all $A$
3. For mutually disjoint events $A_1, A_2, \ldots$: $P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)$

Axiom Consequences
$P(\emptyset) = 0$
$P(A^c) = 1 - P(A)$
$A \subseteq B \implies P(B \cap A^c) = P(B) - P(A)$
$P(A \cup B) = P(A) + P(B) - P(A \cap B)$
$P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B \cap C) - P(A \cap C) + P(A \cap B \cap C)$

If $S$ is finite with equally likely outcomes, then $P(A) = \frac{|A|}{|S|}$
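The permutation, combination, and counting-rule formulas above can be checked numerically. A quick sketch (not part of the original sheet) using only the Python standard library's `math.perm` and `math.comb`, with illustrative values $n = 10$, $k = 4$:

```python
import math

n, k = 10, 4

# Permutations: nPk = n! / (n-k)!  (ordered, without replacement)
n_P_k = math.perm(n, k)
assert n_P_k == math.factorial(n) // math.factorial(n - k)

# Combinations: nCk = n! / (k!(n-k)!)  (unordered, without replacement)
n_C_k = math.comb(n, k)
assert n_C_k == n_P_k // math.factorial(k)

# The four counting rules for selecting k elements from n:
ordered_with_replacement   = n ** k
ordered_without            = math.perm(n, k)
unordered_with_replacement = math.comb(n + k - 1, k)   # "stars and bars"
unordered_without          = math.comb(n, k)

print(ordered_with_replacement, ordered_without,
      unordered_with_replacement, unordered_without)
# → 10000 5040 715 210
```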
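The binomial-coefficient identities (row sum, symmetry, Pascal's rule, Vandermonde) can likewise be verified by brute-force enumeration; a small illustrative check, with arbitrary test values:

```python
import math

# Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k)
for n in range(1, 12):
    for k in range(1, n):
        assert math.comb(n, k) == math.comb(n - 1, k - 1) + math.comb(n - 1, k)

# Row sum: sum_k C(n, k) = 2^n, and symmetry: C(n, k) = C(n, n-k)
for n in range(12):
    assert sum(math.comb(n, k) for k in range(n + 1)) == 2 ** n
    assert all(math.comb(n, k) == math.comb(n, n - k) for k in range(n + 1))

# Vandermonde: C(m+n, r) = sum_k C(m, k) * C(n, r-k)
m, n, r = 5, 7, 6
assert math.comb(m + n, r) == sum(math.comb(m, k) * math.comb(n, r - k)
                                  for k in range(r + 1))
print("all binomial identities verified")
```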

CONDITIONAL PROBABILITY

Definition
$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$ provided $P(B) > 0$

Conditional Probabilities are Probabilities
$P(A^c \mid B) = 1 - P(A \mid B)$
$P(A \cup B \mid C) = P(A \mid C) + P(B \mid C) - P(A \cap B \mid C)$

Multiplication Rule ($P(A) > 0$, $P(B) > 0$)
$P(A \cap B) = P(A \mid B) \cdot P(B) = P(B \mid A) \cdot P(A)$

Bayes' Theorem
$P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}$ provided $P(A) > 0$, $P(B) > 0$

Law of Total Probability
If $A_1, A_2, \ldots, A_n$ partition $S$ with $P(A_i) > 0$, then
$P(B) = P(B \mid A_1) \cdot P(A_1) + \cdots + P(B \mid A_n) \cdot P(A_n)$

Independent Events
Definition: $P(A \cap B) = P(A) \cdot P(B)$
$(A, B)$ indep. pair $\implies (A, B^c), (A^c, B), (A^c, B^c)$ also indep. pairs

RANDOM VARIABLES

A random variable, $X$, is a function from the sample space $S$ to $\mathbb{R}$.

Cumulative Distribution Function
$F(x) = P(X \leq x)$
A valid CDF is nondecreasing, right-continuous, and satisfies
$\lim_{x \to -\infty} F(x) = 0, \qquad \lim_{x \to \infty} F(x) = 1$

Discrete Random Variable
$X$ has finite or countably infinite (listable) support.
Probability Mass Function: $p(x) = P(X = x)$
A valid PMF has $p(x) \geq 0$ and $\sum_x p(x) = 1$
Jumps in the CDF are the probabilities (values of the PMF).

Continuous Random Variable
$X$ has a continuous CDF, differentiable except at finitely many points.
Probability Density Function: $f(x) = F'(x)$
A valid PDF has $f(x) \geq 0$ and $\int_{-\infty}^{\infty} f(x)\,dx = 1$
$P(a \leq X \leq b) = \int_a^b f(x)\,dx = F(b) - F(a)$

Mixed Type Random Variable
$X$'s CDF is a weighted average of a continuous and a discrete CDF:
$F(x) = \alpha \cdot F_C(x) + (1 - \alpha) \cdot F_D(x), \quad 0 < \alpha < 1$

SUMMARY STATISTICS

Expected Value
Discrete: $E[X] = \sum_x x \cdot p(x)$ \qquad Continuous: $E[X] = \int_{-\infty}^{\infty} x \cdot f(x)\,dx$

Law of the Unconscious Statistician (LOTUS)
$E[g(X)] = \sum_x g(x) \cdot p(x) \qquad E[g(X)] = \int_{-\infty}^{\infty} g(x) \cdot f(x)\,dx$

Expected Value Linearity
$E[aX + b] = a \cdot E[X] + b \qquad E[X + Y] = E[X] + E[Y]$

Survival Shortcut
If $X$ is nonnegative integer-valued, then $E[X] = \sum_{k=0}^{\infty} P(X > k)$
If $X$ is nonnegative continuous, then $E[X] = \int_0^{\infty} [1 - F(x)]\,dx$

SUMMARY STATISTICS CONT.

Variance
$\mathrm{Var}(X) = E[(X - \mu)^2] = E[X^2] - E[X]^2$

Variance Properties
$\mathrm{Var}(X) \geq 0 \qquad \mathrm{Var}(X) = 0 \iff P(X = \mu) = 1$
$\mathrm{Var}(X + c) = \mathrm{Var}(X) \qquad \mathrm{Var}(cX) = c^2 \cdot \mathrm{Var}(X)$
$X, Y$ independent $\implies \mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$

Standard Deviation: $\sigma_X = \sqrt{\mathrm{Var}(X)}$ \qquad Coefficient of Variation: $\mathrm{CV}(X) = \frac{\sigma_X}{\mu_X}$

Skew
$\mathrm{Skew}(X) = E\!\left[\left(\frac{X - \mu}{\sigma}\right)^3\right] = \frac{E[X^3] - 3\mu\sigma^2 - \mu^3}{\sigma^3}$

Jensen's Inequality
$g''(x) \geq 0 \implies E[g(X)] \geq g(E[X])$
$g''(x) \leq 0 \implies E[g(X)] \leq g(E[X])$
$P(g(X) = a + bX) = 1 \iff E[g(X)] = g(E[X])$

Mode
A mode is an $x$ value which maximizes the PMF/PDF.
It is possible to have $0, 1, 2, \ldots$ or infinitely many modes.
Discrete: any values with the largest probability
Continuous: check endpoints of the interval and where $f'(x) = 0$

Percentile
$c$ is a $(100p)$th percentile of $X$ if $P(X \leq c) \geq p$ and $P(X \geq c) \geq 1 - p$
A 50th percentile is called a median.
Discrete: look for the smallest $c$ with $F(c) \geq p$
Continuous: solve for $c$ in $F(c) = p$
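The expected-value, variance, and survival-shortcut formulas above can be exercised on a small discrete distribution. A sketch with an illustrative PMF (the numbers are made up, not from the sheet):

```python
# A small PMF on {0, 1, 2, 3}
pmf = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}
assert abs(sum(pmf.values()) - 1.0) < 1e-12          # valid PMF

mean = sum(x * p for x, p in pmf.items())            # E[X]
ex2  = sum(x**2 * p for x, p in pmf.items())         # E[X^2] via LOTUS
var  = ex2 - mean**2                                 # Var(X) = E[X^2] - E[X]^2

# Survival shortcut for nonnegative integer-valued X: E[X] = sum_k P(X > k)
survival_sum = sum(sum(p for x, p in pmf.items() if x > k) for k in range(3))
assert abs(mean - survival_sum) < 1e-12

print(round(mean, 6), round(var, 6))
# → 1.6 0.84
```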
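Bayes' theorem and the law of total probability combine in the classic diagnostic-test computation. A worked sketch with hypothetical numbers (1% prevalence, 95% sensitivity, 90% specificity — illustrative, not from the sheet):

```python
p_d = 0.01                      # P(D), prevalence
p_pos_given_d = 0.95            # P(+ | D), sensitivity
p_pos_given_not_d = 0.10        # P(+ | D^c) = 1 - specificity

# Law of total probability: P(+) = P(+|D)P(D) + P(+|D^c)P(D^c)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' theorem: P(D | +) = P(+ | D) P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(round(p_d_given_pos, 4))
# → 0.0876
```

Even with a fairly accurate test, a positive result gives under a 9% posterior probability of disease here, because the prior $P(D)$ is so small.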

COMMON DISCRETE DISTRIBUTIONS

DUniform({a, …, b}) — equally likely values $a, \ldots, b$
$P(X = x) = \frac{1}{b-a+1}$; $E[X] = \frac{a+b}{2}$; $\mathrm{Var}(X) = \frac{(b-a+1)^2 - 1}{12}$; $M(t) = \frac{e^{at} - e^{(b+1)t}}{(b-a+1)(1-e^t)}$

Bernoulli(p) — 1 trial with success chance $p$
$P(X = 1) = p$, $P(X = 0) = 1 - p$; $E[X] = p$; $\mathrm{Var}(X) = p(1-p)$; $M(t) = 1 - p + pe^t$

Binomial(n, p) — # of successes in $n$ indep. Bernoulli($p$) trials
$P(X = x) = \binom{n}{x} p^x (1-p)^{n-x}$; $E[X] = np$; $\mathrm{Var}(X) = np(1-p)$; $M(t) = (1 - p + pe^t)^n$
$np \in \mathbb{N} \implies np = \text{mode} = \text{median}$

HyperGeom(N, K, n) — # with property among $n$ chosen without replacement from $N$, where $K$ have the property
$P(X = x) = \frac{\binom{K}{x}\binom{N-K}{n-x}}{\binom{N}{n}}$; $E[X] = n\frac{K}{N}$; $\mathrm{Var}(X) = n\frac{K}{N}\!\left(1 - \frac{K}{N}\right)\frac{N-n}{N-1}$; MGF ugly
Resembles Binomial($n, \frac{K}{N}$) with large $N$ relative to $n$

Poisson(λ) — common frequency dist.
$P(X = x) = \frac{e^{-\lambda}\lambda^x}{x!}$; $E[X] = \lambda$; $\mathrm{Var}(X) = \lambda$; $M(t) = e^{\lambda(e^t - 1)}$
Approximates Binomial($n, p$) when $\lambda = np$, $n$ large, $p$ small

Geometric(p) — # of failures before first success, with success probability $p$
$P(X = x) = (1-p)^x p$; $E[X] = \frac{1-p}{p}$; $\mathrm{Var}(X) = \frac{1-p}{p^2}$; $M(t) = \frac{p}{1 - (1-p)e^t}$
Only memoryless discrete distribution

NegBin(r, p) — # of failures before $r$th success, with success probability $p$
$P(X = x) = \binom{x+r-1}{r-1} p^r (1-p)^x$; $E[X] = \frac{r(1-p)}{p}$; $\mathrm{Var}(X) = \frac{r(1-p)}{p^2}$; $M(t) = \left(\frac{p}{1 - (1-p)e^t}\right)^r$
Sum of $r$ iid Geometric($p$)

COMMON CONTINUOUS DISTRIBUTIONS

Uniform(a, b) — probabilities are proportional to length
$f(x) = \frac{1}{b-a}$; $F(x) = \frac{x-a}{b-a}$; $E[X] = \frac{a+b}{2}$; $\mathrm{Var}(X) = \frac{(b-a)^2}{12}$; $M(t) = \frac{e^{bt} - e^{at}}{t(b-a)}$

N(µ, σ²)
$f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$; $F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right)$; $E[X] = \mu$; $\mathrm{Var}(X) = \sigma^2$; $M(t) = e^{\mu t + \sigma^2 t^2/2}$
Approximates a sum of $n$ iid rv's with mean $\frac{\mu}{n}$ and variance $\frac{\sigma^2}{n}$

Exp(λ)
$f(x) = \lambda e^{-\lambda x}$; $F(x) = 1 - e^{-\lambda x}$; $E[X] = \frac{1}{\lambda}$; $\mathrm{Var}(X) = \frac{1}{\lambda^2}$; $M(t) = \frac{\lambda}{\lambda - t}$
Only memoryless continuous distribution

Gamma(α, λ)
$f(x) = \frac{\lambda e^{-\lambda x} (\lambda x)^{\alpha - 1}}{\Gamma(\alpha)}$; $F$ ugly; $E[X] = \frac{\alpha}{\lambda}$; $\mathrm{Var}(X) = \frac{\alpha}{\lambda^2}$; $M(t) = \left(\frac{\lambda}{\lambda - t}\right)^\alpha$
Sum of $\alpha$ independent Exp($\lambda$) for integer $\alpha > 0$
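The tabulated Binomial mean, variance, and MGF can be confirmed directly from the PMF. A quick sketch with illustrative parameters $n = 12$, $p = 0.3$:

```python
import math

n, p = 12, 0.3
pmf = [math.comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

assert abs(sum(pmf) - 1.0) < 1e-12                   # valid PMF
mean = sum(x * q for x, q in enumerate(pmf))
var  = sum(x**2 * q for x, q in enumerate(pmf)) - mean**2

assert abs(mean - n * p) < 1e-9                      # E[X] = np
assert abs(var - n * p * (1 - p)) < 1e-9             # Var(X) = np(1-p)

# MGF check at an arbitrary t: M(t) = (1 - p + p e^t)^n
t = 0.1
mgf_direct = sum(math.exp(t * x) * q for x, q in enumerate(pmf))
assert abs(mgf_direct - (1 - p + p * math.exp(t))**n) < 1e-9
print("binomial moments and MGF verified")
```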
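The Poisson approximation to the Binomial noted in the table can be seen numerically. A sketch with illustrative values $n = 500$, $p = 0.01$ (so $\lambda = np = 5$), comparing the two PMFs pointwise:

```python
import math

n, p = 500, 0.01
lam = n * p   # λ = np = 5

def binom_pmf(x):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def pois_pmf(x):
    return math.exp(-lam) * lam**x / math.factorial(x)

# Largest pointwise PMF discrepancy over the bulk of the support
worst = max(abs(binom_pmf(x) - pois_pmf(x)) for x in range(21))
print(f"max PMF gap for x in 0..20: {worst:.5f}")
```

With $n$ large and $p$ small, the gap is already well under 1% per point; it shrinks further as $n$ grows with $np$ held fixed.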

DISCRETE DISTRIBUTIONS CONT.

Bernoulli
A Bernoulli r.v. is also an indicator of the occurrence of $A \subseteq S$.
$X \sim \text{Bernoulli}(p) \implies Y = (a - b)X + b$ takes values $a$ and $b$ with probabilities $p$ and $1 - p$, respectively.
$E[Y] = p \cdot a + (1 - p) \cdot b$
$\mathrm{Var}(Y) = (a - b)^2 \cdot p \cdot (1 - p)$

Binomial
If $X \sim \text{Binomial}(n, p)$ and $Y \sim \text{Binomial}(m, p)$, then
$n - X \sim \text{Binomial}(n, 1 - p)$
$X, Y$ independent $\implies X + Y \sim \text{Binomial}(n + m, p)$
$Z \mid X \sim \text{Binomial}(X, q) \implies Z \sim \text{Binomial}(n, pq)$

Poisson
If $X \sim \text{Poisson}(\lambda)$ and $Y \sim \text{Poisson}(\kappa)$, then
$P(X = x + 1) = P(X = x) \cdot \frac{\lambda}{x + 1}$
$\lambda \in \mathbb{N} \implies \lambda - 1$ and $\lambda$ are both modes
$X, Y$ independent $\implies X + Y \sim \text{Poisson}(\lambda + \kappa)$
$Z \mid X \sim \text{Binomial}(X, p) \implies Z \sim \text{Poisson}(\lambda \cdot p)$

Geometric
If $X \sim \text{Geometric}(p)$, then $X + 1$ counts trials until the first success.
$E[X + 1] = \frac{1}{p} \qquad \mathrm{Var}(X + 1) = \frac{1 - p}{p^2}$
$P(X \geq n + m \mid X \geq m) = P(X \geq n)$ [memoryless]

CONTINUOUS DISTRIBUTIONS CONT.

Uniform
$U_s \sim \text{Uniform}(0, 1) \implies U = (b - a) \cdot U_s + a \sim \text{Uniform}(a, b)$
$(c, d) \subseteq (a, b) \implies P(c \leq U \leq d) = \frac{d - c}{b - a}$, and $U \mid U \in (c, d) \sim \text{Uniform}(c, d)$

Normal
$Z \sim N(0, 1) \implies X = \sigma Z + \mu \sim N(\mu, \sigma^2)$
$\Phi(-z) = 1 - \Phi(z)$
$X \sim N(\mu_X, \sigma_X^2)$, $Y \sim N(\mu_Y, \sigma_Y^2)$ independent, then $X + Y \sim N(\mu_X + \mu_Y, \sigma_X^2 + \sigma_Y^2)$

Central Limit Theorem
$X_1, \ldots, X_n$ iid, each with mean $\mu$ and variance $\sigma^2$, then
$X_1 + \cdots + X_n \;\dot\sim\; N(n\mu, n\sigma^2)$
Approximating consecutive integer-valued $X$ with the CLT (continuity correction):
$P(X \leq x) \approx \Phi\!\left(\frac{x + \frac{1}{2} - \mu}{\sigma}\right), \qquad P(X < x) \approx \Phi\!\left(\frac{x - \frac{1}{2} - \mu}{\sigma}\right)$

Exponential
$X \sim \text{Exp}(\lambda) \implies P(a \leq X \leq b) = e^{-\lambda a} - e^{-\lambda b}$
The continuous analog of the Geometric (rounding & limit)
$P(X \geq s + t \mid X \geq s) = P(X \geq t)$ [memoryless]
Time between events in a Poisson process with rate $\lambda$
$X_i \sim \text{Exp}(\lambda_i)$ indep. $\implies \min\{X_1, \ldots, X_n\} \sim \text{Exp}(\lambda_1 + \cdots + \lambda_n)$

Gamma
With integer $\alpha$, $X \sim \text{Gamma}(\alpha, \lambda)$ is
- a sum of $\alpha$ independent Exp($\lambda$)
- the time until the $\alpha$th event in a Poisson process with rate $\lambda$
The continuous analog of the Negative Binomial

MOMENT GENERATING FUNCTIONS

Definition
$M_X(t) = E[e^{tX}]$

Properties
Uniquely determines a distribution
$E[X^n] = M_X^{(n)}(0)$
$M_{aX+b}(t) = e^{bt} M_X(at)$
$X, Y$ independent $\implies M_{X+Y}(t) = M_X(t) \cdot M_Y(t)$

Variance Shortcut (Cumulant)
$\mathrm{Var}(X) = \frac{d^2}{dt^2} \ln\!\left(M(t)\right) \Big|_{t=0}$

PROBABILITY GENERATING FUNCTIONS

Defined for nonnegative integer-valued random variables

Definition
$G_X(t) = E[t^X]$

Properties
Uniquely determines a distribution
$P(X = k) = G_X^{(k)}(0) / k!$
$G_{aX+b}(t) = t^b G_X(t^a)$
$X, Y$ independent $\implies G_{X+Y}(t) = G_X(t) \cdot G_Y(t)$

Moments
$G_X^{(n)}(1) = E\!\left[\frac{X!}{(X - n)!}\right] = E[X(X-1)\cdots(X-n+1)]$
$E[X] = G_X'(1)$
$\mathrm{Var}(X) = G_X''(1) - (G_X'(1))^2 + G_X'(1)$


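The PGF moment formulas can be checked against a known case. A numeric sketch (finite-difference derivatives, illustrative parameters $n = 10$, $p = 0.4$) using the Binomial PGF $G(t) = (1 - p + pt)^n$:

```python
# PGF of Binomial(n, p): G(t) = (1 - p + p t)^n.
# Derivatives at t = 1 are approximated with central finite differences,
# so the checks below use loose tolerances.
n, p = 10, 0.4

def G(t):
    return (1 - p + p * t) ** n

h = 1e-5
g1 = (G(1 + h) - G(1 - h)) / (2 * h)             # G'(1)  = E[X]
g2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2     # G''(1) = E[X(X-1)]

mean = g1                                        # E[X] = G'(1)
var = g2 + g1 - g1**2                            # Var = G'' + G' - (G')^2

assert abs(mean - n * p) < 1e-4                  # np = 4
assert abs(var - n * p * (1 - p)) < 1e-3         # np(1-p) = 2.4
print(round(mean, 3), round(var, 3))
```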

JOINT DISTRIBUTIONS

Cumulative Distribution Function (CDF)
$F(x, y) = P(X \leq x, Y \leq y)$

Probability Mass Function (PMF): $p(x, y) = P(X = x, Y = y)$
Probability Density Function (PDF): $f(x, y) = \frac{\partial^2}{\partial x \partial y} F(x, y)$

Marginal PMF / Marginal PDF
$p_X(x) = \sum_y p(x, y) \qquad f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy$

Conditional PMF / Conditional PDF
$p_{X|Y}(x \mid Y = y) = \frac{p(x, y)}{p_Y(y)} \qquad f_{X|Y}(x \mid Y = y) = \frac{f(x, y)}{f_Y(y)}$

Independence Criteria (X and Y are independent if any hold)
$F(x, y) = F_X(x) \cdot F_Y(y)$ for all $-\infty < x, y < \infty$
$p(x, y) = p_X(x) \cdot p_Y(y)$, or $f(x, y) = f_X(x) \cdot f_Y(y)$, for all $-\infty < x, y < \infty$
$p_{X|Y}(x \mid Y = y) = p_X(x)$ where $p_Y(y) > 0$, or $f_{X|Y}(x \mid Y = y) = f_X(x)$ where $f_Y(y) > 0$ (similarly with the roles of $X$ and $Y$ swapped)
$p(x, y) = g(x) \cdot h(y)$, or $f(x, y) = g(x) \cdot h(y)$, for some $g, h \geq 0$

2D LOTUS
$E[h(X, Y)] = \sum_x \sum_y h(x, y) \cdot p(x, y) \qquad E[h(X, Y)] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} h(x, y) \cdot f(x, y)\,dx\,dy$

Conditional Expectation
$E[X \mid Y = y] = \sum_x x \cdot p_{X|Y}(x \mid Y = y) \qquad E[X \mid Y = y] = \int_{-\infty}^{\infty} x \cdot f_{X|Y}(x \mid Y = y)\,dx$

Law of Total Expectation/Variance
$E[X] = E[E[X \mid Y]] \qquad \mathrm{Var}(X) = E[\mathrm{Var}(X \mid Y)] + \mathrm{Var}(E[X \mid Y])$

Joint Moment Generating Function
$M_{X,Y}(s, t) = E[e^{sX + tY}] \qquad E[X^n Y^m] = \frac{\partial^{n+m}}{\partial s^n \partial t^m} M_{X,Y}(s, t)\Big|_{s=t=0} \qquad M_X(s) = M_{X,Y}(s, 0)$

JOINT DISTRIBUTIONS CONT.

Covariance
Definition: $\mathrm{Cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - E[X] \cdot E[Y]$
$\mathrm{Cov}(X, X) = \mathrm{Var}(X) \qquad \mathrm{Cov}(X, Y) = \mathrm{Cov}(Y, X) \qquad \mathrm{Cov}(X + c, Y) = \mathrm{Cov}(X, Y)$
$\mathrm{Cov}(X, c) = 0 \qquad \mathrm{Cov}(cX, Y) = c \cdot \mathrm{Cov}(X, Y) \qquad \mathrm{Cov}(X + Y, Z) = \mathrm{Cov}(X, Z) + \mathrm{Cov}(Y, Z)$
$\mathrm{Var}(aX + bY) = a^2 \cdot \mathrm{Var}(X) + b^2 \cdot \mathrm{Var}(Y) + 2ab \cdot \mathrm{Cov}(X, Y)$

Coefficient of Correlation
$\rho_{X,Y} = \mathrm{Cov}\!\left(\frac{X - \mu_X}{\sigma_X}, \frac{Y - \mu_Y}{\sigma_Y}\right) = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \cdot \sigma_Y} \qquad -1 \leq \rho_{X,Y} \leq 1$

Consequences of Independence
$E[g(X) \cdot h(Y)] = E[g(X)] \cdot E[h(Y)] \qquad \mathrm{Cov}(X, Y) = 0$
$M_{X,Y}(s, t) = M_X(s) \cdot M_Y(t) \qquad \rho_{X,Y} = 0$

Bivariate Continuous Uniform
$f(x, y) = \frac{1}{\text{Area of support}}$ — probabilities are proportional to areas

Multinomial
$(X_1, \ldots, X_k) \sim \text{Multinomial}(n, p_1, \ldots, p_k)$ if $n$ indep. trials are performed, each with $k$ possible outcomes (with respective probabilities $p_1, \ldots, p_k$), and $X_i$ is the number of trials resulting in outcome $i$.
$P(X_1 = n_1, \ldots, X_k = n_k) = \frac{n!}{n_1! n_2! \cdots n_k!} \cdot p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}$
$X_i \sim \text{Binomial}(n, p_i) \qquad i \neq j \implies \mathrm{Cov}(X_i, X_j) = -n p_i p_j$

Bivariate Normal
$(X, Y) \sim \text{BivNormal}(\mu_X, \mu_Y, \sigma_X^2, \sigma_Y^2, \rho_{X,Y})$ if $aX + bY$ is normal for all $a, b \in \mathbb{R}$.
$Y \mid X = x \sim N\!\left(\mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(x - \mu_X),\ \sigma_Y^2 (1 - \rho^2)\right)$
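The covariance and correlation definitions above can be exercised on a small joint PMF. A sketch with made-up illustrative probabilities:

```python
# A small joint PMF on {0,1} x {0,1} (illustrative values, not from the sheet)
joint = {(0, 0): 0.2, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.6}
assert abs(sum(joint.values()) - 1.0) < 1e-12

ex  = sum(x * p for (x, y), p in joint.items())        # E[X] (marginal)
ey  = sum(y * p for (x, y), p in joint.items())        # E[Y] (marginal)
exy = sum(x * y * p for (x, y), p in joint.items())    # E[XY] via 2D LOTUS
cov = exy - ex * ey                                    # Cov = E[XY] - E[X]E[Y]

vx = sum(x**2 * p for (x, y), p in joint.items()) - ex**2
vy = sum(y**2 * p for (x, y), p in joint.items()) - ey**2
rho = cov / (vx**0.5 * vy**0.5)                        # correlation coefficient

print(round(cov, 4), round(rho, 4))
# → 0.11 0.5238
```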
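The law of total expectation/variance has a clean exact check on a two-state mixture. A sketch with hypothetical conditional means and variances (the numbers are illustrative only):

```python
# Y ~ Bernoulli(0.3); given Y = y, X has the stated conditional mean/variance.
p_y = {0: 0.7, 1: 0.3}
cond_mean = {0: 1.0, 1: 4.0}    # E[X | Y = y]
cond_var  = {0: 1.0, 1: 2.0}    # Var(X | Y = y)

ex = sum(p_y[y] * cond_mean[y] for y in p_y)                 # E[X] = E[E[X|Y]]
e_var = sum(p_y[y] * cond_var[y] for y in p_y)               # E[Var(X|Y)]
var_e = sum(p_y[y] * cond_mean[y]**2 for y in p_y) - ex**2   # Var(E[X|Y])
vx = e_var + var_e                                           # total variance

print(round(ex, 4), round(vx, 4))
# → 1.9 3.19
```

Note that `vx` exceeds `e_var` alone: the between-group spread of the conditional means contributes `var_e` on top of the within-group variance.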

TRANSFORMATIONS

Transformation of Discrete X
$P(g(X) = y) = \sum_{x : g(x) = y} P(X = x)$

Transformation of Continuous X
Find sets $A_y$ so that $g(X) \leq y \iff X \in A_y$
Compute $F_Y(y) = P(g(X) \leq y) = P(X \in A_y)$
$f_Y(y) = F_Y'(y)$

Strictly Monotone Transformation of Continuous X
$f_Y(y) = f_X(g^{-1}(y)) \cdot |(g^{-1})'(y)|$

1-1 Transformation of Continuous X, Y
If $\begin{cases} g_1(x, y) = u \\ g_2(x, y) = v \end{cases}$ has unique solution $\begin{cases} x = h_1(u, v) \\ y = h_2(u, v) \end{cases}$ and
$J = \begin{vmatrix} \frac{\partial h_1}{\partial u} & \frac{\partial h_1}{\partial v} \\ \frac{\partial h_2}{\partial u} & \frac{\partial h_2}{\partial v} \end{vmatrix} = \frac{\partial h_1}{\partial u}\frac{\partial h_2}{\partial v} - \frac{\partial h_1}{\partial v}\frac{\partial h_2}{\partial u} \neq 0$
then the joint density of $U = g_1(X, Y)$ and $V = g_2(X, Y)$ is
$g(u, v) = f(h_1(u, v), h_2(u, v)) \cdot |J|$

TRANSFORMATIONS CONT.

Convolution Theorem (X and Y independent)
$p_{X+Y}(t) = \sum_x p_X(x) \cdot p_Y(t - x) \qquad f_{X+Y}(t) = \int_{-\infty}^{\infty} f_X(x) \cdot f_Y(t - x)\,dx$

Order Statistics
$X_{(i)}$ is the $i$th smallest of iid continuous $X_1, \ldots, X_n$ with distribution $F$, $f$.
$F_{X_{(j)}}(x) = \sum_{k=j}^{n} \binom{n}{k} F(x)^k (1 - F(x))^{n-k}$
$f_{X_{(j)}}(x) = n \binom{n-1}{j-1} f(x) F(x)^{j-1} (1 - F(x))^{n-j}$
$F_{X_{(1)}}(x) = 1 - (1 - F(x))^n \qquad F_{X_{(n)}}(x) = F(x)^n$

Mixture Distribution
$X$ has PMF/PDF $f(x) = \alpha_1 \cdot f_1(x) + \cdots + \alpha_n \cdot f_n(x)$
($\alpha_i$ positive and sum to 1, $f_i$ are valid PMF/PDFs)
$F_X(x) = \alpha_1 \cdot F_1(x) + \cdots + \alpha_n \cdot F_n(x)$
$S_X(x) = \alpha_1 \cdot S_1(x) + \cdots + \alpha_n \cdot S_n(x)$
$E[X^k] = \alpha_1 \cdot E[X_1^k] + \cdots + \alpha_n \cdot E[X_n^k]$
$M_X(t) = \alpha_1 \cdot M_{X_1}(t) + \cdots + \alpha_n \cdot M_{X_n}(t)$

RISK AND INSURANCE

Ordinary Deductible d
Payment: $(X - d)_+ = \begin{cases} 0, & X \leq d \\ X - d, & X > d \end{cases}$
Mean: $\int_d^{\infty} (x - d) \cdot f_X(x)\,dx = \int_d^{\infty} S_X(x)\,dx$
Payment w/ inflation $r$: $(1 + r) \cdot \left(X - \frac{d}{1+r}\right)_+$

Policy Limit u
Payment: $X \wedge u = \begin{cases} X, & X \leq u \\ u, & X > u \end{cases}$
Mean: $\int_0^{u} x \cdot f_X(x)\,dx + u \cdot S_X(u) = \int_0^{u} S_X(x)\,dx$
Payment w/ inflation $r$: $(1 + r) \cdot \left(X \wedge \frac{u}{1+r}\right)$

Ordinary Deductible d and Policy Limit u Simultaneously
Payment: $X_d^u = \begin{cases} 0, & X \leq d \\ X - d, & d < X \leq u + d \\ u, & X > u + d \end{cases}$
Mean: $\int_d^{u+d} (x - d) \cdot f_X(x)\,dx + u \cdot S_X(u + d) = \int_d^{u+d} S_X(x)\,dx$
Payment w/ inflation $r$: $(1 + r) \cdot X_{d/(1+r)}^{u/(1+r)}$

Loss Given Positive Loss
If $X_C = X \mid X > 0$ and $\alpha = P(X > 0)$, then
$E[X] = \alpha \cdot E[X_C] \qquad E[X^2] = \alpha \cdot E[X_C^2]$
$\mathrm{Var}(X) = \alpha \cdot \mathrm{Var}(X_C) + \alpha \cdot (1 - \alpha) \cdot E[X_C]^2$
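The piecewise payment definitions above reduce to a one-line helper, and the deductible mean formula has a closed form for exponential losses. A sketch with illustrative parameters ($d = 100$, $u = 500$, $\lambda = 0.01$ — the helper name and values are not from the sheet):

```python
import math

# Per-loss payment under ordinary deductible d and policy limit u,
# mirroring the piecewise definitions: min(max(x - d, 0), u)
def payment(x, d=0.0, u=float("inf")):
    return min(max(x - d, 0.0), u)

# Deductible 100, limit 500: 0 up to 100, then x - 100, capped at 500
assert payment(50, d=100, u=500) == 0.0
assert payment(300, d=100, u=500) == 200.0
assert payment(1000, d=100, u=500) == 500.0

# Mean excess for X ~ Exp(λ): ∫_d^∞ S(x) dx with S(x) = e^{-λx}
# evaluates to e^{-λd} / λ.
lam, d = 0.01, 100
mean_excess = math.exp(-lam * d) / lam     # exact E[(X - d)_+]
print(round(mean_excess, 4))
# → 36.7879
```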
