[Figure: Bayes risk versus the prior π0. The Bayes risk r(δBπ0′, π0) of the fixed rule δBπ0′ is linear in π0, running from R1(δBπ0′) at π0 = 0 to R0(δBπ0′) at π0 = 1; it touches the minimum Bayes risk curve r(δBπ0, π0) at π0 = π0′ and lies above it elsewhere.]
[Figure: Bayes risk versus the prior π0. The Bayes risk line r(δBπlf, π0) of the rule designed for the least favorable prior πlf is horizontal, with R0(δBπlf) = R1(δBπlf), and touches the minimum Bayes risk curve r(δBπ0, π0) at its maximum π0 = πlf; the line r(δBπ0′, π0), with endpoints R1(δBπ0′) and R0(δBπ0′), is shown for comparison.]
Definition: The minimax decision rule minimizes the worst-case Bayes risk over all priors:
ρmm := arg min_ρ max_π r(ρ, π).
Remarks:
◮ No single decision rule minimizes the weighted-average (Bayes) risk
for every possible prior state distribution.
◮ A conservative approach is to minimize the worst case risk over all
possible prior state distributions.
◮ Intuitively, there should be a least favorable prior. Does it always
exist? Is it unique?
◮ Intuitively, the minimax decision rule should be the Bayesian decision
rule with constant Bayesian risk over the priors. Is this always true?
Let V (π) := r(δBπ , π) be the minimum Bayesian risk for the prior π.
Theorem
The minimum Bayesian risk V(π) is concave and continuous over the
space of priors satisfying πj ≥ 0, j = 0, 1, . . . , N − 1, and ∑j πj = 1.
Hence, a least favorable prior always exists.
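As a numerical check of the theorem, the sketch below evaluates V(π0) on a grid for a binary Gaussian shift-in-mean test and confirms concavity and an interior maximizer. The parameters a0 = 0, a1 = 1, σ = 1 and uniform costs are hypothetical choices for illustration, not values taken from these slides.

```python
import math

a0, a1, sigma = 0.0, 1.0, 1.0  # hypothetical test parameters

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def V(pi0):
    """Minimum Bayes risk V(pi0) = r(delta_Bpi0, pi0) for the prior pi0."""
    pi1 = 1.0 - pi0
    # Bayes threshold gamma for uniform costs
    gamma = (a0 + a1) / 2.0 + sigma**2 / (a1 - a0) * math.log(pi0 / pi1)
    R0 = Q((gamma - a0) / sigma)   # conditional risk under H0
    R1 = Q((a1 - gamma) / sigma)   # conditional risk under H1
    return pi0 * R0 + pi1 * R1

grid = [0.01 * k for k in range(1, 100)]   # open interval (0, 1)
vals = [V(p) for p in grid]

# Concavity: second differences of V on the grid are non-positive.
second_diffs = [vals[k - 1] - 2 * vals[k] + vals[k + 1]
                for k in range(1, len(vals) - 1)]
assert all(d <= 1e-9 for d in second_diffs)

# Least favorable prior = maximizer of V; by symmetry it is 1/2 here.
pi_lf = grid[max(range(len(vals)), key=vals.__getitem__)]
print(pi_lf, V(pi_lf))  # → pi_lf ≈ 0.5, V(pi_lf) ≈ 0.3085
```

Because this example is symmetric (equal σ under both hypotheses, uniform costs), the maximizer lands at π0 = 1/2, matching the equalizer argument later in the slides.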
[Figure: four cases of the minimum Bayes risk V(π0) versus π0. In cases 1 and 2 the least favorable prior π0lf is interior and satisfies the equalizer condition R0(δBπlf) = R1(δBπlf); in cases 3 and 4 it lies on the boundary (π0lf = 1 or π0lf = 0), where the conditional risks need not be equal.]
Proof.
Given a π′ satisfying R0(δBπ′) = R1(δBπ′), the Bayes risk r(δBπ′, π) is the same for every prior π. For any δ,
max_π r(δ, π) ≥ r(δ, π′) ≥ r(δBπ′, π′) = max_π r(δBπ′, π),
so an equalizer Bayes rule is minimax. In particular, if
R0(δBπlf) = R1(δBπlf)
then
ρmm = δBπlf
where γ := (a0 + a1)/2 + (σ²/(a1 − a0)) ln(π0/π1).
The conditional risks are
R0(δBπ) = Q((γ − a0)/σ)
R1(δBπ) = Q((a1 − γ)/σ)
where Q(x) := (1/√(2π)) ∫_x^∞ e^(−t²/2) dt.
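These formulas are easy to evaluate directly. The sketch below uses hypothetical values a0 = 0, a1 = 1, σ = 1 and π0 = 0.3 (none taken from the slides) and Python's `math.erfc` for the Q-function; it shows that for an off-center prior the Bayes rule's two conditional risks differ.

```python
import math

def Q(x):
    # Gaussian tail: Q(x) = (1/sqrt(2*pi)) * integral from x to inf of exp(-t^2/2) dt
    return 0.5 * math.erfc(x / math.sqrt(2.0))

a0, a1, sigma = 0.0, 1.0, 1.0   # hypothetical problem parameters
pi0 = 0.3                       # hypothetical prior; pi1 = 1 - pi0
gamma = (a0 + a1) / 2 + sigma**2 / (a1 - a0) * math.log(pi0 / (1 - pi0))

R0 = Q((gamma - a0) / sigma)    # conditional risk under H0
R1 = Q((a1 - gamma) / sigma)    # conditional risk under H1
print(gamma, R0, R1)            # unequal: this Bayes rule is not an equalizer
```

Here the threshold shifts toward a0 (γ < 1/2), so R0 grows and R1 shrinks; equality of the two risks is exactly the question posed next.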
Let’s try the equalizer rule. What value of γ gives us R0 (δBπ ) = R1 (δBπ )?
Worcester Polytechnic Institute D. Richard Brown III 05-February-2009 11 / 21
Minimax Hypothesis Testing
γ = (a0 + a1)/2
[Figure: the observation axis with a0 and a1 marked; the threshold γ lies midway between them, with decision region Y0 on the a0 side and Y1 on the a1 side.]
What does this imply about the least favorable prior?
Answer: π0 = π1 = 1/2.
Given a0 , a1 , and σ, the minimax rule allows you to guarantee a
worst-case risk over all priors.
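The guarantee follows because a rule with R0 = R1 has Bayes risk π0 R0 + (1 − π0) R1 that does not depend on π0 at all. A quick check, again with the hypothetical parameters a0 = 0, a1 = 1, σ = 1:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

a0, a1, sigma = 0.0, 1.0, 1.0   # hypothetical problem parameters
gamma = (a0 + a1) / 2           # minimax (equalizer) threshold
R0 = Q((gamma - a0) / sigma)
R1 = Q((a1 - gamma) / sigma)
assert abs(R0 - R1) < 1e-12     # equalizer: R0 = R1 = Q((a1 - a0)/(2*sigma))

# Bayes risk of this fixed rule is flat in pi0: the worst case is guaranteed.
risks = [p * R0 + (1 - p) * R1 for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
worst = max(risks)
print(worst)  # → Q(1/2) ≈ 0.3085
```

No matter which prior nature picks, this rule's risk stays at Q((a1 − a0)/(2σ)).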
[Figure: Bayes risk versus the prior π0 for four candidate rules (rule 1, rule 2, rule 3, rule 4); each rule's Bayes risk is linear in π0, and the minimum over the four rules is a piecewise-linear concave lower envelope.]
◮ Note that the equalizer rule would give no solution to this problem:
[Figure: the same four Bayes risk lines; none of them is horizontal, i.e. no rule among the four satisfies R0 = R1, so the equalizer approach selects no rule here.]
[Figure: the lower envelope has a kink at the least favorable prior πlf, where two rules δBπlf− and δBπlf+ are both Bayes-optimal, with conditional risks R0(δBπlf−), R1(δBπlf−) and R0(δBπlf+), R1(δBπlf+). The randomized minimax rule ρmm mixes δBπlf− and δBπlf+ so that its Bayes risk is constant in π0 and equals r(ρmm) = V(πlf).]
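This randomized construction can be sketched numerically. The four (R0, R1) pairs below are hypothetical stand-ins for rules 1–4 (the values behind the plotted curves are not recoverable); the sketch maximizes the lower envelope to find πlf, then solves for the randomization weight q that makes the mixture an equalizer.

```python
# Hypothetical conditional risks (R0, R1) for four fixed decision rules.
rules = {"rule 1": (0.1, 0.8), "rule 2": (0.3, 0.4),
         "rule 3": (0.5, 0.2), "rule 4": (0.7, 0.1)}

def bayes_risk(R, pi0):
    R0, R1 = R
    return pi0 * R0 + (1 - pi0) * R1

# Lower envelope V(pi0) = min over rules; least favorable prior maximizes it.
grid = [k / 1000 for k in range(1001)]
envelope = [min(bayes_risk(R, p) for R in rules.values()) for p in grid]
i_lf = max(range(len(envelope)), key=envelope.__getitem__)
pi_lf, V_lf = grid[i_lf], envelope[i_lf]

# The two rules active at the kink: delta- (best just left) and delta+ (just right).
minus = min(rules, key=lambda n: bayes_risk(rules[n], pi_lf - 1e-6))
plus = min(rules, key=lambda n: bayes_risk(rules[n], pi_lf + 1e-6))

# Choose q so that q*delta- + (1-q)*delta+ is an equalizer rule.
d_m = rules[minus][0] - rules[minus][1]
d_p = rules[plus][0] - rules[plus][1]
q = d_p / (d_p - d_m)   # assumes d_m, d_p have opposite signs (a genuine kink)
R0_mix = q * rules[minus][0] + (1 - q) * rules[plus][0]
R1_mix = q * rules[minus][1] + (1 - q) * rules[plus][1]
print(pi_lf, q, R0_mix, R1_mix)  # mixture risks are equal and match V(pi_lf)
```

For these stand-in numbers the kink sits at πlf = 0.5 between rule 3 and rule 2, and randomizing with q = 0.25 equalizes the conditional risks at V(πlf) = 0.35, matching the picture: the randomized rule's risk line is horizontal through the top of the envelope.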