
PROBABILITY THEORY

Sem. 1, Euler's Functions


Euler's Gamma Function: $\Gamma : (0, \infty) \to (0, \infty)$, $\Gamma(a) = \int_0^\infty x^{a-1} e^{-x}\, dx$
1. $\Gamma(1) = 1$; 2. $\Gamma(a + 1) = a\, \Gamma(a)$, $\forall a > 0$; 3. $\Gamma(n + 1) = n!$, $\forall n \in \mathbb{N}$;
4. $\Gamma\left(\dfrac{1}{2}\right) = \sqrt{2} \int_0^\infty e^{-t^2/2}\, dt = \int_{\mathbb{R}} e^{-t^2}\, dt = \sqrt{\pi}$.
Euler's Beta Function: $B : (0, \infty) \times (0, \infty) \to (0, \infty)$, $B(a, b) = \int_0^1 x^{a-1} (1 - x)^{b-1}\, dx$
1. $B(a, 1) = \dfrac{1}{a}$, $\forall a > 0$; 2. $B(a, b) = B(b, a)$, $\forall a, b > 0$; 3. $B(a, b) = \dfrac{a - 1}{b}\, B(a - 1, b + 1)$, $\forall a > 1, b > 0$;
4. $B(a, b) = \dfrac{b - 1}{a + b - 1}\, B(a, b - 1) = \dfrac{a - 1}{a + b - 1}\, B(a - 1, b)$, $\forall a, b > 1$; 5. $B(a, b) = \dfrac{\Gamma(a)\, \Gamma(b)}{\Gamma(a + b)}$, $\forall a, b > 0$.
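A quick numerical sanity check of Gamma property 2 and Beta property 5 (a minimal sketch, not part of the sheet; standard-library Python, the values of $a$ and $b$ are arbitrary):

```python
import math

def beta_numeric(a: float, b: float, steps: int = 200_000) -> float:
    """Midpoint-rule approximation of B(a, b) = integral_0^1 x^(a-1) (1-x)^(b-1) dx."""
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
               for i in range(steps)) * h

a, b = 2.5, 3.0
# Property 2: Gamma(a + 1) = a * Gamma(a)
print(math.gamma(a + 1), a * math.gamma(a))        # both ~3.32335
# Property 5: B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
print(beta_numeric(a, b), math.gamma(a) * math.gamma(b) / math.gamma(a + b))
```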
Sem. 2, Class. Prob., Geom. Prob., Cond. Prob., Indep. Events, Bayes Formula
Classical Probability: $P(A) = \dfrac{\text{nr. of favorable outcomes}}{\text{total nr. of possible outcomes}}$.
Conditional Probability: $P(A|B) = \dfrac{P(A \cap B)}{P(B)}$, $P(B) \neq 0$.
Independent Events: $A, B$ independent $\iff P(A \cap B) = P(A)\, P(B) \iff P(A|B) = P(A)$.
Total Probability Rule: $\{A_i\}_{i \in I}$ a partition of $S$, then $P(A) = \sum_{i \in I} P(A_i)\, P(A|A_i)$.
Multiplication Rule: $P\left(\bigcap_{i=1}^n A_i\right) = P(A_1)\, P(A_2|A_1)\, P(A_3|A_1 \cap A_2) \cdots P\left(A_n \,\Big|\, \bigcap_{i=1}^{n-1} A_i\right)$.
Bayes' Formula: $\{A_i\}_{i \in I}$ a partition of $S$, then $P(A_j|A) = \dfrac{P(A|A_j)\, P(A_j)}{\sum_{i \in I} P(A|A_i)\, P(A_i)}$, $\forall j \in I$.
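A worked instance of the last two formulas (illustrative sketch only; the partition and all probabilities are made-up numbers):

```python
# Two-state partition {A1, A2}: prior P(A1) = 0.01, P(A2) = 0.99,
# and an event A with P(A|A1) = 0.95, P(A|A2) = 0.05.
priors = [0.01, 0.99]
likelihoods = [0.95, 0.05]           # P(A | A_i)

# Total Probability Rule: P(A) = sum_i P(A_i) P(A|A_i)
p_a = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' Formula: P(A_j | A) = P(A|A_j) P(A_j) / P(A)
posteriors = [p * l / p_a for p, l in zip(priors, likelihoods)]
print(p_a, posteriors)               # 0.059, [~0.161, ~0.839]
```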
Sem. 3, Probabilistic Models
Binomial Model: The probability of $k$ successes in $n$ Bernoulli trials, with probability of success $p$, is
$P(n, k) = C_n^k\, p^k q^{n-k}$, $k = \overline{0, n}$.
Multinomial Model: The probability that in $n = n_1 + n_2 + \ldots + n_r$ trials, $E_i$ occurs $n_i$ times, where
$p_i = P(E_i)$, $i = \overline{1, r}$, is $P(n; n_1, \ldots, n_r) = \dfrac{n!}{n_1!\, n_2! \cdots n_r!}\, p_1^{n_1} p_2^{n_2} \cdots p_r^{n_r}$.
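Both formulas evaluate directly with the standard library (a minimal sketch, not from the sheet; the sample arguments are arbitrary):

```python
from math import comb, factorial, prod

def binomial_pmf(n: int, k: int, p: float) -> float:
    """P(n, k) = C(n, k) p^k q^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def multinomial_pmf(counts: list[int], probs: list[float]) -> float:
    """P(n; n_1, ..., n_r) = n! / (n_1! ... n_r!) * p_1^n_1 ... p_r^n_r."""
    n = sum(counts)
    coef = factorial(n) // prod(factorial(c) for c in counts)
    return coef * prod(p**c for p, c in zip(probs, counts))

print(binomial_pmf(10, 3, 0.5))                     # ~0.1172
print(multinomial_pmf([2, 1, 1], [0.5, 0.3, 0.2]))  # 0.18
```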
Bernoulli Model Without Replacement (Hypergeometric): The probability that in $n$ trials, we
get $k$ white balls out of $n_1$ and $n - k$ black balls out of $N - n_1$ ($0 \le k \le n_1$, $0 \le n - k \le N - n_1$), is
$P(n; k) = \dfrac{C_{n_1}^k\, C_{N - n_1}^{n-k}}{C_N^n}$.
Bernoulli Model Without Replacement With r States: The probability that in $M = m_1 + m_2 + \ldots + m_r$ trials, we get $m_i$ balls of color $i$ out of $n_i$, $i = \overline{1, r}$ ($n = n_1 + n_2 + \ldots + n_r$), is
$P(n; m_1, \ldots, m_r) = \dfrac{C_{n_1}^{m_1}\, C_{n_2}^{m_2} \cdots C_{n_r}^{m_r}}{C_n^M}$.
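The two-color case is the $r = 2$ special case of the $r$-state formula, which a short sketch can confirm (illustrative only; the urn sizes are made up):

```python
from math import comb, prod

def hypergeom_pmf(N: int, n1: int, n: int, k: int) -> float:
    """P(n; k) = C(n1, k) C(N - n1, n - k) / C(N, n)."""
    return comb(n1, k) * comb(N - n1, n - k) / comb(N, n)

def hypergeom_r_states(ns: list[int], ms: list[int]) -> float:
    """P(n; m_1, ..., m_r) = prod_i C(n_i, m_i) / C(n, M)."""
    n, M = sum(ns), sum(ms)
    return prod(comb(ni, mi) for ni, mi in zip(ns, ms)) / comb(n, M)

# 2 white out of n1 = 5 and 1 black out of N - n1 = 5, in n = 3 draws from N = 10
print(hypergeom_pmf(10, 5, 3, 2))          # ~0.4167
print(hypergeom_r_states([5, 5], [2, 1]))  # same model, same value
```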
Poisson Model: The probability of $k$ successes ($0 \le k \le n$) in $n$ trials, with probability of success $p_i$ in the
$i$th trial ($q_i = 1 - p_i$), $i = \overline{1, n}$, is
$P(n; k) = \sum_{1 \le i_1 < \ldots < i_k \le n} p_{i_1} \cdots p_{i_k}\, q_{i_{k+1}} \cdots q_{i_n}$, where $\{i_{k+1}, \ldots, i_n\} = \{1, \ldots, n\} \setminus \{i_1, \ldots, i_k\}$
= the coefficient of $x^k$ in the expansion $(p_1 x + q_1)(p_2 x + q_2) \cdots (p_n x + q_n)$.
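The coefficient characterization gives an efficient way to compute all of $P(n; 0), \ldots, P(n; n)$ at once: multiply the factors $(p_i x + q_i)$ iteratively and read off the coefficients. A minimal sketch (plain Python, arbitrary $p_i$):

```python
def poisson_model_pmf(ps: list[float]) -> list[float]:
    """Return [P(n; 0), ..., P(n; n)] as coefficients of prod_i (p_i x + q_i)."""
    coeffs = [1.0]                       # the constant polynomial 1
    for p in ps:
        q = 1.0 - p
        new = [0.0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):   # multiply current poly by (p x + q)
            new[k] += c * q
            new[k + 1] += c * p
        coeffs = new
    return coeffs

pmf = poisson_model_pmf([0.2, 0.5, 0.8])
print(pmf, sum(pmf))   # [0.08, 0.42, 0.42, 0.08], sums to 1
```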
Pascal Model: The probability of the $n$th success occurring after $k$ failures in a sequence of Bernoulli
trials with probability of success $p$ ($q = 1 - p$), is $P(n; k) = C_{n+k-1}^{n-1}\, p^n q^k = C_{n+k-1}^k\, p^n q^k$.
Geometric Model: The probability of the 1st success occurring after $k$ failures in a sequence of Bernoulli
trials with probability of success $p$ ($q = 1 - p$), is $p_k = p q^k$.
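A small check that the two binomial-coefficient forms of the Pascal formula agree, and that $n = 1$ recovers the Geometric Model (illustrative sketch; the sample arguments are arbitrary):

```python
from math import comb

def pascal_pmf(n: int, k: int, p: float) -> float:
    """P(n; k) = C(n+k-1, n-1) p^n q^k."""
    return comb(n + k - 1, n - 1) * p**n * (1 - p)**k

n, k, p = 3, 4, 0.4
print(pascal_pmf(n, k, p), comb(n + k - 1, k) * p**n * (1 - p)**k)  # equal
print(pascal_pmf(1, 2, 0.5), 0.5 * 0.5**2)                          # both 0.125
```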
Sem. 4, Discrete Random Variables and Discrete Random Vectors
Bernoulli Distribution with parameter $p \in (0, 1)$: $X \sim \begin{pmatrix} 0 & 1 \\ 1 - p & p \end{pmatrix}$
Binomial Distribution with parameters $n \in \mathbb{N}$, $p \in (0, 1)$: $X \sim \begin{pmatrix} k \\ C_n^k\, p^k q^{n-k} \end{pmatrix}_{k = \overline{0, n}}$
Hypergeometric Distribution with parameters $N, n_1, n \in \mathbb{N}$, $n, n_1 \le N$: $X \sim \begin{pmatrix} k \\ p_k \end{pmatrix}_{k = \overline{0, n}}$, where
$p_k = \dfrac{C_{n_1}^k\, C_{N - n_1}^{n-k}}{C_N^n}$
Poisson Distribution with parameter $\lambda > 0$: $X \sim \begin{pmatrix} k \\ p_k \end{pmatrix}_{k \in \mathbb{N}}$, where $p_k = \dfrac{\lambda^k}{k!}\, e^{-\lambda}$
Pascal Distribution with parameters $n \in \mathbb{N}$, $p \in (0, 1)$: $X \sim \begin{pmatrix} k \\ C_{n+k-1}^k\, p^n q^k \end{pmatrix}_{k \in \mathbb{N}}$
Geometric Distribution with parameter $p \in (0, 1)$: $X \sim \begin{pmatrix} k \\ p q^k \end{pmatrix}_{k \in \mathbb{N}}$
Discrete Uniform Distribution with parameter $m \in \mathbb{N}$: $X \sim \begin{pmatrix} k \\ \frac{1}{m} \end{pmatrix}_{k = \overline{1, m}}$
Cumulative Distribution Function: $F_X : \mathbb{R} \to \mathbb{R}$, $F_X(x) = P(X < x)$
Discrete Random Vector: $(X, Y) : S \to \mathbb{R}^2$,
pdf $p_{ij} = P(X = x_i, Y = y_j)$, $(i, j) \in I \times J$,
cdf $F = F_{(X,Y)} : \mathbb{R}^2 \to \mathbb{R}$, $F(x, y) = P(X < x, Y < y) = \sum_{x_i < x} \sum_{y_j < y} p_{ij}$, $\forall (x, y) \in \mathbb{R}^2$,
$p_i = P(X = x_i) = \sum_{j \in J} p_{ij}$, $i \in I$, $q_j = P(Y = y_j) = \sum_{i \in I} p_{ij}$, $j \in J$ (marginal densities)
Operations: $X \sim \begin{pmatrix} x_i \\ p_i \end{pmatrix}_{i \in I}$, $Y \sim \begin{pmatrix} y_j \\ q_j \end{pmatrix}_{j \in J}$
$X$ and $Y$ are independent $\iff p_{ij} = P(X = x_i, Y = y_j) = P(X = x_i)\, P(Y = y_j) = p_i q_j$.
$X + Y \sim \begin{pmatrix} x_i + y_j \\ p_{ij} \end{pmatrix}_{(i,j) \in I \times J}$, $\alpha X \sim \begin{pmatrix} \alpha x_i \\ p_i \end{pmatrix}_{i \in I}$, $XY \sim \begin{pmatrix} x_i y_j \\ p_{ij} \end{pmatrix}_{(i,j) \in I \times J}$, $X/Y \sim \begin{pmatrix} x_i / y_j \\ p_{ij} \end{pmatrix}_{(i,j) \in I \times J}$ ($y_j \neq 0$)
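In the $X + Y$ table, equal values $x_i + y_j$ get their probabilities accumulated. A minimal sketch (illustrative only; the marginals are made up and chosen independent):

```python
from collections import defaultdict

xs, ys = [0, 1], [0, 1, 2]
# joint pmf p[i][j] = P(X = x_i, Y = y_j); here p_ij = p_i * q_j (independence)
px, qy = [0.4, 0.6], [0.2, 0.5, 0.3]
p = [[px[i] * qy[j] for j in range(3)] for i in range(2)]

dist_sum = defaultdict(float)
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        dist_sum[x + y] += p[i][j]       # group outcomes with equal x_i + y_j

print(dict(dist_sum))   # {0: 0.08, 1: 0.32, 2: 0.42, 3: 0.18}
```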
Sem. 5, Cont. R. Variables, Cont. R. Vectors, Functions of Cont. R. Variables
$X : S \to \mathbb{R}$ cont. random variable with pdf $f : \mathbb{R} \to \mathbb{R}$, cdf $F : \mathbb{R} \to \mathbb{R}$. Properties:
1. $F$ is absolutely continuous and $F(x) = P(X < x) = \int_{-\infty}^x f(t)\, dt$ 2. $f(x) \ge 0$, $\forall x \in \mathbb{R}$, $\int_{\mathbb{R}} f(x)\, dx = 1$
3. $P(X = x) = 0$, $\forall x \in \mathbb{R}$, $P(a < X < b) = \int_a^b f(t)\, dt$ 4. $F$ is left continuous and increasing
5. $F(-\infty) = 0$, $F(\infty) = 1$
Continuous R. Vector: $(X, Y) : S \to \mathbb{R}^2$, pdf $f = f_{(X,Y)} : \mathbb{R}^2 \to \mathbb{R}$, cdf $F = F_{(X,Y)} : \mathbb{R}^2 \to \mathbb{R}$, $F(x, y) = P(X < x, Y < y) = \int_{-\infty}^x \int_{-\infty}^y f(u, v)\, dv\, du$, $\forall (x, y) \in \mathbb{R}^2$. Properties:
1. $P(a_1 \le X < b_1, a_2 \le Y < b_2) = F(b_1, b_2) - F(a_1, b_2) - F(b_1, a_2) + F(a_1, a_2)$
2. $F$ is left continuous and increasing in each variable 3. $F(\infty, \infty) = 1$, $F(-\infty, y) = F(x, -\infty) = 0$, $\forall x, y \in \mathbb{R}$
4. $X, Y$ independent $\iff F(x, y) = F_X(x)\, F_Y(y) \iff f_{(X,Y)}(x, y) = f_X(x)\, f_Y(y)$, $\forall (x, y) \in \mathbb{R}^2$
5. $F_X(x) = F(x, \infty)$, $F_Y(y) = F(\infty, y)$, $\forall x, y \in \mathbb{R}$ (marginal cdfs) 6. $P((X, Y) \in D) = \iint_D f(x, y)\, dy\, dx$
7. $f_X(x) = \int_{\mathbb{R}} f(x, y)\, dy$, $\forall x \in \mathbb{R}$, $f_Y(y) = \int_{\mathbb{R}} f(x, y)\, dx$, $\forall y \in \mathbb{R}$ (marginal densities)
8. Operations:
Sum: $f_{X+Y}(z) = \int_{\mathbb{R}} f_{(X,Y)}(u, z - u)\, du \overset{X,Y \text{ ind.}}{=} \int_{\mathbb{R}} f_X(u)\, f_Y(z - u)\, du$
Product: $f_{XY}(z) = \int_{\mathbb{R}} f_{(X,Y)}\left(u, \dfrac{z}{u}\right) \dfrac{1}{|u|}\, du \overset{X,Y \text{ ind.}}{=} \int_{\mathbb{R}} f_X(u)\, f_Y\left(\dfrac{z}{u}\right) \dfrac{1}{|u|}\, du$
Quotient: $f_{X/Y}(z) = \int_{\mathbb{R}} f_{(X,Y)}(uz, u)\, |u|\, du \overset{X,Y \text{ ind.}}{=} \int_{\mathbb{R}} f_X(uz)\, f_Y(u)\, |u|\, du$
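A numerical check of the Sum formula (illustrative sketch, standard-library Python; Uniform(0, 1) is chosen only because its convolution is known in closed form, the triangular density $f(z) = z$ on $[0, 1]$ and $f(z) = 2 - z$ on $[1, 2]$):

```python
def f_unif(x: float) -> float:
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_sum(z: float, steps: int = 100_000) -> float:
    """f_{X+Y}(z) = int f_X(u) f_Y(z - u) du, midpoint rule over u in [0, 1]."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += f_unif(u) * f_unif(z - u)
    return total * h

for z in (0.5, 1.0, 1.5):
    print(z, f_sum(z))    # ~0.5, ~1.0, ~0.5: matches the triangular density
```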
Function $Y = g(X)$: $g : \mathbb{R} \to \mathbb{R}$ differentiable, $g' \neq 0$ (strictly monotone): $f_Y(y) = \dfrac{f_X(g^{-1}(y))}{|g'(g^{-1}(y))|}$, $y \in g(\mathbb{R})$
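A Monte Carlo check of the change-of-variables formula (illustrative sketch; the choice $X \sim \mathrm{Exp}(1)$ and $g(x) = \ln x$ is arbitrary, giving $f_Y(y) = f_X(e^y)\, e^y = e^{y - e^y}$):

```python
import math, random

random.seed(2)
ys = [math.log(random.expovariate(1.0)) for _ in range(200_000)]

y0, h = 0.0, 0.1                          # estimate f_Y near y0 from a small bin
empirical = sum(y0 - h/2 <= y < y0 + h/2 for y in ys) / (len(ys) * h)
formula = math.exp(y0 - math.exp(y0))

print(empirical, formula)                 # both ~ e^(-1) = 0.3679
```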
Sem. 6, Numerical Characteristics of Random Variables
Expectation:
$X$ discr. with pdf $X \sim \begin{pmatrix} x_i \\ p_i \end{pmatrix}_{i \in I}$: $E(X) = \sum_{i \in I} x_i p_i$; $X$ cont. with pdf $f : \mathbb{R} \to \mathbb{R}$: $E(X) = \int_{\mathbb{R}} x f(x)\, dx$.
Variance: $V(X) = E\left((X - E(X))^2\right) = E(X^2) - (E(X))^2$.
Standard Deviation: $\sigma(X) = \sqrt{V(X)}$.
Moments of order $k$:
- initial $\nu_k = E\left(X^k\right)$,
- absolute $\beta_k = E\left(|X|^k\right)$,
- central $\mu_k = E\left((X - E(X))^k\right)$.
Covariance: $\mathrm{cov}(X, Y) = E\left((X - E(X))(Y - E(Y))\right) = E(XY) - E(X)E(Y)$
Correlation Coefficient: $\rho(X, Y) = \dfrac{\mathrm{cov}(X, Y)}{\sqrt{V(X)}\, \sqrt{V(Y)}}$
Properties:
1. $E(aX + b) = aE(X) + b$, $V(aX + b) = a^2 V(X)$ 2. $E(X + Y) = E(X) + E(Y)$
3. if $X$ and $Y$ are independent, then $E(XY) = E(X)E(Y)$ and $V(X + Y) = V(X) + V(Y)$
4. $h : \mathbb{R} \to \mathbb{R}$; $X$ discrete: $E(h(X)) = \sum_{i \in I} h(x_i)\, p_i$; $X$ continuous: $E(h(X)) = \int_{\mathbb{R}} h(x) f(x)\, dx$
5. $\mathrm{cov}(X, Y) = E(XY) - E(X)E(Y)$ 6. $V\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2 V(X_i) + 2 \sum_{1 \le i < j \le n} a_i a_j\, \mathrm{cov}(X_i, X_j)$
7. $X, Y$ independent $\Rightarrow \mathrm{cov}(X, Y) = \rho(X, Y) = 0$ ($X$ and $Y$ are uncorrelated)
8. $-1 \le \rho(X, Y) \le 1$; $|\rho(X, Y)| = 1 \iff \exists\, a, b \in \mathbb{R}$, $a \neq 0$, s.t. $Y = aX + b$
9. $(X, Y)$ a cont. r. vector with pdf $f(x, y)$, $h : \mathbb{R}^2 \to \mathbb{R}$, then $E(h(X, Y)) = \iint h(x, y) f(x, y)\, dx\, dy$.
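The definitions above compute directly from a discrete joint pmf (illustrative sketch; the table values are made up):

```python
import math

xs, ys = [0, 1], [-1, 1]
p = {(0, -1): 0.1, (0, 1): 0.3, (1, -1): 0.4, (1, 1): 0.2}   # joint pmf

ex  = sum(x * pr for (x, y), pr in p.items())
ey  = sum(y * pr for (x, y), pr in p.items())
exy = sum(x * y * pr for (x, y), pr in p.items())            # LOTUS, property 4
vx  = sum(x**2 * pr for (x, y), pr in p.items()) - ex**2
vy  = sum(y**2 * pr for (x, y), pr in p.items()) - ey**2

cov = exy - ex * ey
rho = cov / (math.sqrt(vx) * math.sqrt(vy))
print(ex, ey, cov, rho)   # 0.6, 0.0, -0.2, ~-0.4082
```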
Sem. 7, Inequalities, Sequences of Random Variables
Hölder's Inequality: $E(|XY|) \le \left(E(|X|^p)\right)^{1/p} \left(E(|Y|^q)\right)^{1/q}$, $\forall p, q > 1$, $\dfrac{1}{p} + \dfrac{1}{q} = 1$.
Markov's Inequality: $P(|X| \ge a) \le \dfrac{1}{a}\, E(|X|)$, $\forall a > 0$.
Chebyshev's Inequality: $P(|X - E(X)| \ge \varepsilon) \le \dfrac{V(X)}{\varepsilon^2}$, $\forall \varepsilon > 0$.
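A Monte Carlo illustration of Chebyshev's bound (a minimal sketch; $X \sim \mathrm{Uniform}(0, 1)$ with $E(X) = 1/2$, $V(X) = 1/12$, and $\varepsilon = 0.4$ are arbitrary choices):

```python
import random

random.seed(0)
n, eps = 100_000, 0.4
hits = sum(abs(random.random() - 0.5) >= eps for _ in range(n))

print(hits / n)            # exact probability is 0.2
print((1 / 12) / eps**2)   # Chebyshev bound ~0.5208, indeed larger
```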
Convergence:
1) in probability $X_n \overset{p}{\to} X$, if $\lim_{n \to \infty} P\left(|X_n - X| < \varepsilon\right) = 1$, $\forall \varepsilon > 0$;
2) strongly $X_n \overset{s}{\to} X$, if $\lim_{n \to \infty} P\left(\bigcap_{k \ge n} \{|X_k - X| < \varepsilon\}\right) = 1$, $\forall \varepsilon > 0$;
3) almost surely $X_n \overset{a.s.}{\to} X$, if $P\left(\lim_{n \to \infty} X_n = X\right) = 1$;
4) in distribution $X_n \overset{d}{\to} X$, if $\lim_{n \to \infty} F_n(x) = F(x)$, $\forall x \in \mathbb{R}$ continuity point for $F$;
5) in mean of order $r$, $0 < r < \infty$: $X_n \overset{L^r}{\to} X$, if $\lim_{n \to \infty} E\left(|X_n - X|^r\right) = 0$.
Properties: 1. 2) $\iff$ 3) $\Rightarrow$ 1) $\Rightarrow$ 4); 2. 5) $\Rightarrow$ 1).
STATISTICS
$X$ a population characteristic, $X_1, X_2, \ldots, X_n$ a sample of size $n$, i.e. independent and identically distributed, with the same pdf as $X$; $\theta$ target parameter, $\hat{\theta} = \hat{\theta}(X_1, X_2, \ldots, X_n)$ point estimator.
Sample Mean: $\overline{X} = \dfrac{1}{n} \sum_{i=1}^n X_i$,
Sample Moment: $\overline{\nu}_k = \dfrac{1}{n} \sum_{i=1}^n X_i^k$,
Sample Central Moment: $\overline{\mu}_k = \dfrac{1}{n} \sum_{i=1}^n (X_i - \overline{X})^k$,
Sample Variance: $s^2 = \dfrac{1}{n - 1} \sum_{i=1}^n (X_i - \overline{X})^2$.
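The four statistics on hypothetical data (illustrative sketch only; note the $1/(n-1)$ factor in $s^2$, which makes it unbiased for $V(X)$):

```python
data = [2.1, 3.4, 1.9, 4.0, 2.6]   # made-up observations
n = len(data)

mean = sum(data) / n
nu2 = sum(x**2 for x in data) / n                  # sample moment, k = 2
mu2 = sum((x - mean)**2 for x in data) / n         # sample central moment, k = 2
s2 = sum((x - mean)**2 for x in data) / (n - 1)    # sample variance

print(mean, nu2, mu2, s2)
```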
Likelihood Function of a Sample: $L(X_1, \ldots, X_n | \theta) = \prod_{i=1}^n f(X_i | \theta)$.
Fisher's Information: $I_n(\theta) = E\left[\left(\dfrac{\partial \ln L(X_1, \ldots, X_n | \theta)}{\partial \theta}\right)^2\right]$.
- if the range of $X$ does not depend on $\theta$, then $I_n(\theta) = -E\left[\dfrac{\partial^2 \ln L(X_1, \ldots, X_n | \theta)}{\partial \theta^2}\right]$ and $I_n(\theta) = n I_1(\theta)$.
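A Monte Carlo estimate of $I_1(\theta)$ from the squared-score definition (illustrative sketch; the Exponential family $f(x|\theta) = \theta e^{-\theta x}$ is an arbitrary choice, with $\partial_\theta \ln f(x|\theta) = 1/\theta - x$ and exact value $I_1(\theta) = 1/\theta^2$):

```python
import random

random.seed(1)
theta, n = 2.0, 200_000
score_sq = [(1 / theta - random.expovariate(theta)) ** 2 for _ in range(n)]

print(sum(score_sq) / n)   # ~0.25
print(1 / theta**2)        # exact: 0.25
```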
Efficiency of an Absolutely Correct Estimator: $e(\hat{\theta}) = \dfrac{1}{I_n(\theta)\, V(\hat{\theta})}$.
Estimator $\hat{\theta}$ is
- unbiased: $E(\hat{\theta}) = \theta$;
- MVUE: $E(\hat{\theta}) = \theta$ and $V(\hat{\theta}) \le V(\tilde{\theta})$, $\forall \tilde{\theta}$ unbiased estimator;
- absolutely correct: $E(\hat{\theta}) = \theta$ and $\lim_{n \to \infty} V(\hat{\theta}) = 0$;
- efficient: absolutely correct and $e(\hat{\theta}) = 1$.
Statistic $S = S(X_1, X_2, \ldots, X_n)$ is
- sufficient for $\theta$: the cond. pdf $f(X_1, \ldots, X_n | S)$ does not depend on $\theta$ $\overset{\text{Fact. Crit.}}{\iff}$ $L(x_1, \ldots, x_n | \theta) = g(s, \theta)\, h(x_1, \ldots, x_n)$;
- complete for the family of distributions $f(x | \theta)$, $\theta \in A$: $E(g(S)) = 0$, $\forall \theta \in A \Rightarrow g \overset{a.s.}{=} 0$.
Method of Moments: Solve the system $\overline{\nu}_k = \nu_k$ for all unknown parameters.
Method of Maximum Likelihood: Solve the system $\dfrac{\partial L(X_1, \ldots, X_n | \theta)}{\partial \theta_j} = 0$ or $\dfrac{\partial \ln L(X_1, \ldots, X_n | \theta)}{\partial \theta_j} = 0$, $j = \overline{1, m}$, for the unknown parameters $\theta = (\theta_1, \ldots, \theta_m)$.
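A worked instance (illustrative sketch; the Exponential model and the data are arbitrary): for $\mathrm{Exp}(\theta)$, $\ln L = n \ln \theta - \theta \sum X_i$, and the likelihood equation $n/\theta - \sum X_i = 0$ gives the closed form $\hat{\theta} = 1/\overline{X}$. A coarse grid search confirms the maximizer numerically:

```python
import math

data = [0.3, 1.2, 0.8, 0.5, 2.2]   # hypothetical observed sample
n, s = len(data), sum(data)

def log_lik(theta: float) -> float:
    return n * math.log(theta) - theta * s

grid = [0.01 * k for k in range(1, 1000)]
best = max(grid, key=log_lik)

print(best, n / s)   # grid maximizer ~ closed-form MLE 1 / X_bar = 1.0
```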
Lehmann-Scheffé Theorem: Let $\tilde{\theta}$ be an unbiased estimator and $S$ a sufficient and complete statistic
for $\theta$. Then $\hat{\theta} = E(\tilde{\theta} | S)$ is an MVUE.
Rao-Cramér Inequality: Let $\hat{\theta}$ be an absolutely correct estimator for $\theta$. Then $V(\hat{\theta}) \ge \dfrac{1}{I_n(\theta)}$.
Hypothesis Testing: $H_0 : \theta = \theta_0$ with one of the alternatives $H_1 : \begin{cases} \theta < \theta_0 & \text{(left-tailed test)}, \\ \theta > \theta_0 & \text{(right-tailed test)}, \\ \theta \neq \theta_0 & \text{(two-tailed test)}. \end{cases}$
Significance Level: $\alpha = P(\text{type I error}) = P(\text{reject } H_0 \,|\, H_0) = P(TS \in RR \,|\, \theta = \theta_0)$.
Type II Error: $\beta = P(\text{type II error}) = P(\text{accept } H_0 \,|\, H_1) = P(TS \notin RR \,|\, H_1)$.
Power of a Test: $\pi(\theta^*) = P(\text{reject } H_0 \,|\, \theta = \theta^*) = P(TS \in RR \,|\, \theta = \theta^*)$.
Neyman-Pearson Lemma (NPL): Suppose we test two simple hypotheses $H_0 : \theta = \theta_0$ versus $H_1 : \theta = \theta_1$. Let $L(\theta^*)$ denote the likelihood function of the sample, when $\theta = \theta^*$. Then for every $\alpha \in (0, 1)$,
a most powerful test (a test that maximizes the power $\pi(\theta_1)$) is the test with $RR = \left\{\dfrac{L(\theta_1)}{L(\theta_0)} \ge k_\alpha\right\}$, for some constant $k_\alpha > 0$.
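An illustrative sketch of the NPL rejection region (the Bernoulli model and the values $p_0 = 0.5$, $p_1 = 0.7$ are arbitrary): the ratio $L(p_1)/L(p_0)$ is increasing in the number of successes $t$, so $\{L(\theta_1)/L(\theta_0) \ge k_\alpha\}$ reduces to a right-tailed test $\{t \ge c\}$ on the success count.

```python
def lik_ratio(t: int, n: int, p0: float = 0.5, p1: float = 0.7) -> float:
    """L(p1)/L(p0) for a Bernoulli sample with t successes in n trials."""
    return (p1 / p0) ** t * ((1 - p1) / (1 - p0)) ** (n - t)

n = 10
print([round(lik_ratio(t, n), 3) for t in range(n + 1)])  # strictly increasing in t
```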