
Outline

Contents
1 Basic Concepts of Probability

1 Basic Concepts of Probability


Basic Definitions

• We will go through Chapter 1, sections 1-5.


• I’ll ask you to go through sections 1-3. You will find most of these definitions/theorems/inequalities very, very familiar.
• I’ll mention a few things that are not in the appendix.

Some Remarks
• (Page 5) We say two r.v.s X and Y are equal in distribution, denoted X =_d Y, if they have the same distribution function, i.e., P(X ≤ x) = P(Y ≤ x) for all x ∈ R.

• Remember, being equal in distribution is a very weak equality.
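For intuition, here is a minimal Monte Carlo sketch (the sample size, seed, and grid are arbitrary choices): if X ~ N(0, 1), then X and −X are equal in distribution by symmetry, yet they are almost never equal pathwise.

```python
import random

random.seed(0)
n = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
neg = [-x for x in xs]

def ecdf(sample, t):
    """Empirical CDF of `sample` at the point t."""
    return sum(1 for v in sample if v <= t) / len(sample)

# Same distribution: the empirical CDFs of X and -X nearly coincide,
# because the standard normal is symmetric about 0.
grid = [-2.0, -1.0, 0.0, 1.0, 2.0]
max_gap = max(abs(ecdf(xs, t) - ecdf(neg, t)) for t in grid)

# But equality in distribution says nothing about pathwise equality:
# X(omega) == -X(omega) would force X(omega) = 0.
frac_equal = sum(1 for x, y in zip(xs, neg) if x == y) / n

print(max_gap)     # small (sampling noise only)
print(frac_equal)  # essentially zero
```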


• (Page 8) Exercise 1.10. It shows you how to compute the density function of a
transformed random variable. Exercise 1.12 is just an important special case of
1.10.

Proof of 1.10

• Let π, ν, and λ be three measures on B(R), defined as follows: π(A) = P(g(X) ∈ A), ν(A) = P(X ∈ A), and λ is the Lebesgue measure on R. In other words, ν is the distribution of X, and π is the distribution of g(X). The density function of g(X), if it exists, is the Radon-Nikodym derivative dπ/dλ.

• First, we need to show that such a density function exists. By the Radon-Nikodym theorem, it suffices to show that π ≪ λ, i.e., π(A) = 0 whenever λ(A) = 0.
• This is true; I’ll prove it the hard way, using definitions only.
Proof of 1.10 (II)

• Strict monotonicity implies the existence of g⁻¹; continuity implies that g⁻¹ is continuous.
• The pre-image g⁻¹((a, b)) is an open interval, namely (g⁻¹(a), g⁻¹(b)): monotonicity implies that this pre-image must be an interval (no hole in the middle), and continuity implies that this interval must be open.

• Continuity of g⁻¹ further implies that when (a, b) shrinks to a point c (i.e., a ↑ c and b ↓ c), the interval (g⁻¹(a), g⁻¹(b)) shrinks to a point as well.
• Now a Lebesgue null set A has this property: you can find a sequence of open sets B_n approximating it (A ⊆ B_n, λ(B_n) ↓ 0). With a bit more work, you will see that λ(g⁻¹(B_n)) ↓ 0, and hence π(B_n) = ν(g⁻¹(B_n)) ↓ 0, since ν ≪ λ.

Proof of 1.10 (III)

• Denote h(x) = dπ/dλ and f(x) = dν/dλ. By definition, π(A) = ∫_A h(x) dx and ν(A) = ∫_A f(x) dx, for all A ∈ B.
• Let A = (−∞, y]. We get
\[
\int_{-\infty}^{y} h(x)\,dx = P(g(X) \le y) = P(X \le g^{-1}(y)) = \int_{-\infty}^{g^{-1}(y)} f(x)\,dx
\]
\[
= \int_{-\infty}^{y} f(g^{-1}(t))\,d g^{-1}(t) \quad (\text{substituting } x = g^{-1}(t)) = \int_{-\infty}^{y} \frac{f(g^{-1}(t))}{g'(g^{-1}(t))}\,dt.
\]

• Since the two measures agree on all sets of the form (−∞, y], the uniqueness part of the Carathéodory extension theorem implies they agree on all of B(R); hence h(t) = f(g⁻¹(t))/g′(g⁻¹(t)) a.e., and this is the density of g(X).
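As a concrete sanity check of the density formula h(y) = f(g⁻¹(y))/g′(g⁻¹(y)), here is a small simulation. The choice X ~ Exp(1) with g(x) = x², the sample size, and the interval (a, b) are all arbitrary illustration choices, not from the book:

```python
import math
import random

random.seed(1)
n = 200_000

# X ~ Exp(1) with density f(x) = exp(-x) on (0, inf);
# g(x) = x^2 is strictly increasing there, so Y = g(X) has density
# h(y) = f(g^{-1}(y)) / g'(g^{-1}(y)) = exp(-sqrt(y)) / (2 * sqrt(y)).
ys = [random.expovariate(1.0) ** 2 for _ in range(n)]

def h(y):
    s = math.sqrt(y)               # g^{-1}(y)
    return math.exp(-s) / (2.0 * s)

# P(a < Y <= b) three ways: a midpoint-rule integral of h, the closed form
# exp(-sqrt(a)) - exp(-sqrt(b)), and the empirical frequency.
a, b = 0.5, 2.0
m = 1000
width = (b - a) / m
prob_from_h = sum(h(a + (k + 0.5) * width) for k in range(m)) * width
prob_closed = math.exp(-math.sqrt(a)) - math.exp(-math.sqrt(b))
prob_empirical = sum(1 for y in ys if a < y <= b) / n

print(prob_from_h, prob_closed, prob_empirical)  # all approximately 0.25
```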

Inequalities

• Chebyshev’s inequality. Suppose ϕ : R → R is positive. Let i_A = inf{ϕ(y) : y ∈ A}. Then
\[
P(X \in A) \le \frac{1}{i_A} \int_{\{X \in A\}} \varphi(X)\,dP \le \frac{1}{i_A} E\varphi(X).
\]
A special case (take ϕ(x) = x² and A = [a, ∞) with a > 0, so that i_A = a²):
\[
P(X \ge a) \le \frac{EX^2}{a^2}.
\]
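A quick numerical check of the special case (the uniform distribution and the cutoff a = 0.9 are arbitrary choices; the bound is loose here, as Chebyshev-type bounds usually are):

```python
import random

random.seed(2)
n = 100_000
# X uniform on (-1, 1), so EX^2 = 1/3.
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]

a = 0.9
second_moment = sum(x * x for x in xs) / n   # Monte Carlo estimate of EX^2
bound = second_moment / (a * a)              # EX^2 / a^2, about 0.41
tail = sum(1 for x in xs if x >= a) / n      # P(X >= a), truly 0.05 here

print(tail, bound)  # the tail probability sits well below the bound
```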

Misc.

• Random variables are measurable functions. Measurable transformations (which include continuous transformations) of r.v.s are r.v.s.
• Change of variable formula, page 17.
• The kth moment, EX k .

Theorem of Total Probability

• Two-compartment case. Ω = B1 ∪ B2, B1 ∩ B2 = ∅.

• For any A ∈ F we have:


– P (A) = P (A ∩ B1 ) + P (A ∩ B2 ).
– P (A) = P (B1 )P (A|B1 ) + P (B2 )P (A|B2 ).

• You can easily generalize this theorem to the countably infinite case.
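The two-compartment identity is easy to verify on a finite example; the fair die, the even/odd partition, and the event A below are made-up illustration choices:

```python
# Omega = {1,...,6}, a fair die; B1 = even outcomes, B2 = odd outcomes.
omega = {1, 2, 3, 4, 5, 6}
P = {w: 1 / 6 for w in omega}

def prob(event):
    return sum(P[w] for w in event)

def cond(A, B):
    """Conditional probability P(A | B) = P(A and B) / P(B)."""
    return prob(A & B) / prob(B)

B1 = {2, 4, 6}
B2 = {1, 3, 5}
A = {w for w in omega if w >= 4}   # A = {4, 5, 6}

lhs = prob(A)
rhs = prob(B1) * cond(A, B1) + prob(B2) * cond(A, B2)
print(lhs, rhs)  # both equal 1/2, up to float rounding
```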

Homework
Go over all the proofs in the book.
