
CONTEMPORARY MANAGEMENT SCIENCE WITH SPREADSHEETS

ANDERSON SWEENEY WILLIAMS

SLIDES PREPARED BY JOHN LOUCKS

© 1999 South-Western College Publishing    Slide 1

Chapter 9 Decision Analysis

Structuring the Decision Problem
Decision Making Without Probabilities
Decision Making with Probabilities
Expected Value of Perfect Information
Decision Analysis with Sample Information
Developing a Decision Strategy
Expected Value of Sample Information

Slide 2

Structuring the Decision Problem

A decision problem is characterized by decision alternatives, states of nature, and resulting payoffs. The decision alternatives are the different possible strategies the decision maker can employ. The states of nature refer to future events, not under the control of the decision maker, which may occur. States of nature should be defined so that they are mutually exclusive and collectively exhaustive. For each decision alternative and state of nature, there is an outcome. These outcomes are often represented in a matrix called a payoff table.
Slide 3

Decision Trees

A decision tree is a chronological representation of the decision problem. Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives. The branches leaving each round node represent the different states of nature, while the branches leaving each square node represent the different decision alternatives. At the end of each limb of a tree are the payoffs attained from the series of branches making up that limb.

Slide 4

Decision Making Without Probabilities

If the decision maker does not know with certainty which state of nature will occur, the decision is said to be made under uncertainty. Three commonly used criteria for decision making under uncertainty, when probability information regarding the likelihood of the states of nature is unavailable, are the optimistic approach, the conservative approach, and the minimax regret approach.

Slide 5

Optimistic Approach

The optimistic approach would be used by an optimistic decision maker. The decision with the largest possible payoff is chosen. If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.

Slide 6

Conservative Approach

The conservative approach would be used by a conservative decision maker. For each decision the minimum payoff is listed, and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.) If the payoffs were in terms of costs, the maximum cost would be determined for each decision, and then the decision corresponding to the minimum of these maximum costs would be selected. (Hence, the maximum possible cost is minimized.)

Slide 7

Minimax Regret Approach

The minimax regret approach requires the construction of a regret table or an opportunity loss table. This is done by calculating for each state of nature the difference between each payoff and the largest payoff for that state of nature. Then, using this regret table, the maximum regret for each possible decision is listed. The decision chosen is the one corresponding to the minimum of the maximum regrets.
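A minimal Python sketch of the three criteria, applied to the example payoff table used on the slides that follow (the code and its variable names are illustrative additions, not part of the chapter's spreadsheet models):

payoffs = {                      # profits; rows = decisions, columns = states s1, s2, s3
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

# Optimistic (maximax): choose the decision with the largest possible payoff.
optimistic = max(payoffs, key=lambda d: max(payoffs[d]))               # d3 (payoff 5)

# Conservative (maximin): maximize the minimum payoff.
conservative = max(payoffs, key=lambda d: min(payoffs[d]))             # d2 (payoff -1)

# Minimax regret: build the regret table, then minimize the maximum regret.
col_max = [max(row[j] for row in payoffs.values()) for j in range(3)]
regret = {d: [col_max[j] - row[j] for j in range(3)] for d, row in payoffs.items()}
minimax_regret = min(regret, key=lambda d: max(regret[d]))             # d1 (max regret 1)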

Slide 8

Example
Consider the following problem with three decision alternatives and three states of nature, with the following payoff table representing profits:

                        States of Nature
  Decision           s1       s2       s3
    d1                4        4       -2
    d2                0        3       -1
    d3                1        5       -3

Slide 9

Example

Optimistic Approach
An optimistic decision maker would use the optimistic approach. All we really need to do is choose the decision that has the largest single value in the payoff table. This largest value is 5, and hence the optimal decision is d3.

              Maximum
  Decision    Payoff
    d1           4
    d2           3
    d3           5    <-- maximum; choose d3

Slide 10

Example

Formula Spreadsheet for Optimistic Approach


     A            B     C     D      E                  F
 1   PAYOFF TABLE
 2
 3   Decision          State of Nature
 4   Alternative  s1    s2    s3     Maximum Payoff     Recommended Decision
 5   d1            4     4    -2     =MAX(B5:D5)        =IF(E5=$E$9,A5,"")
 6   d2            0     3    -1     =MAX(B6:D6)        =IF(E6=$E$9,A6,"")
 7   d3            1     5    -3     =MAX(B7:D7)        =IF(E7=$E$9,A7,"")
 8
 9                       Best Payoff  =MAX(E5:E7)

Slide 11

Example

Spreadsheet for Optimistic Approach


     A            B     C     D      E                  F
 1   PAYOFF TABLE
 2
 3   Decision          State of Nature
 4   Alternative  s1    s2    s3     Maximum Payoff     Recommended Decision
 5   d1            4     4    -2          4
 6   d2            0     3    -1          3
 7   d3            1     5    -3          5              d3
 8
 9                       Best Payoff      5

Slide 12

Example

Conservative Approach
A conservative decision maker would use the conservative approach. List the minimum payoff for each decision. Choose the decision with the maximum of these minimum payoffs.

              Minimum
  Decision    Payoff
    d1          -2
    d2          -1    <-- maximum; choose d2
    d3          -3

Slide 13

Example

Formula Spreadsheet for Conservative Approach


     A            B     C     D      E                  F
 1   PAYOFF TABLE
 2
 3   Decision          State of Nature
 4   Alternative  s1    s2    s3     Minimum Payoff     Recommended Decision
 5   d1            4     4    -2     =MIN(B5:D5)        =IF(E5=$E$9,A5,"")
 6   d2            0     3    -1     =MIN(B6:D6)        =IF(E6=$E$9,A6,"")
 7   d3            1     5    -3     =MIN(B7:D7)        =IF(E7=$E$9,A7,"")
 8
 9                       Best Payoff  =MAX(E5:E7)

Slide 14

Example

Spreadsheet for Conservative Approach


     A            B     C     D      E                  F
 1   PAYOFF TABLE
 2
 3   Decision          State of Nature
 4   Alternative  s1    s2    s3     Minimum Payoff     Recommended Decision
 5   d1            4     4    -2         -2
 6   d2            0     3    -1         -1              d2
 7   d3            1     5    -3         -3
 8
 9                       Best Payoff     -1

Slide 15

Example

Minimax Regret Approach
For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. In this example, in the first column subtract 4, 0, and 1 from 4; in the second column, subtract 4, 3, and 5 from 5; etc. The resulting regret table is:

            s1    s2    s3
    d1       0     1     1
    d2       4     2     0
    d3       3     0     2
Slide 16

Example

Minimax Regret Approach (continued)
For each decision list the maximum regret. Choose the decision with the minimum of these values.

              Maximum
  Decision    Regret
    d1           1    <-- minimum; choose d1
    d2           4
    d3           3

Slide 17

Example

Formula Spreadsheet for Minimax Regret Approach


     A           B                    C                    D                    E                F
 1   PAYOFF TABLE
 2   Decision               State of Nature
 3   Altern.     s1                   s2                   s3
 4   d1           4                    4                   -2
 5   d2           0                    3                   -1
 6   d3           1                    5                   -3
 7
 8   OPPORTUNITY LOSS TABLE
 9   Decision               State of Nature                                     Maximum          Recommended
10   Altern.     s1                   s2                   s3                   Regret           Decision
11   d1          =MAX($B$4:$B$6)-B4   =MAX($C$4:$C$6)-C4   =MAX($D$4:$D$6)-D4   =MAX(B11:D11)    =IF(E11=$E$14,A11,"")
12   d2          =MAX($B$4:$B$6)-B5   =MAX($C$4:$C$6)-C5   =MAX($D$4:$D$6)-D5   =MAX(B12:D12)    =IF(E12=$E$14,A12,"")
13   d3          =MAX($B$4:$B$6)-B6   =MAX($C$4:$C$6)-C6   =MAX($D$4:$D$6)-D6   =MAX(B13:D13)    =IF(E13=$E$14,A13,"")
14                                          Minimax Regret Value                =MIN(E11:E13)

Slide 18

Example

Spreadsheet for Minimax Regret Approach

     A            B     C     D      E              F
 1   PAYOFF TABLE
 2   Decision          State of Nature
 3   Alternative  s1    s2    s3
 4   d1            4     4    -2
 5   d2            0     3    -1
 6   d3            1     5    -3
 7
 8   OPPORTUNITY LOSS TABLE
 9   Decision          State of Nature   Maximum     Recommended
10   Alternative  s1    s2    s3         Regret      Decision
11   d1            0     1     1            1        d1
12   d2            4     2     0            4
13   d3            3     0     2            3
14                  Minimax Regret Value    1

Slide 19

Decision Making with Probabilities

Expected Value Approach
If probabilistic information regarding the states of nature is available, one may use the expected value (EV) approach. Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring. The decision yielding the best expected return is chosen.

Slide 20

Expected Value of a Decision Alternative

The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative. The expected value (EV) of decision alternative di is defined as:

EV(di) = Σ (j = 1 to N) P(sj) Vij

where:

N     = the number of states of nature
P(sj) = the probability of state of nature sj
Vij   = the payoff corresponding to decision alternative di and state of nature sj
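A minimal Python sketch of this formula (an editorial illustration; the function name is not from the text), checked against the Model A figures used in the Burger Prince example that follows:

def expected_value(probs, payoffs):
    # EV(di) = sum over j of P(sj) * Vij
    return sum(p * v for p, v in zip(probs, payoffs))

# Model A payoffs with P(s1) = .4, P(s2) = .2, P(s3) = .4:
expected_value([0.4, 0.2, 0.4], [10000, 15000, 14000])    # 12,600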

Slide 21

Example: Burger Prince


Burger Prince Restaurant is contemplating opening a new restaurant on Main Street. It has three different models, each with a different seating capacity. Burger Prince estimates that the average number of customers per hour will be 80, 100, or 120. The payoff table for the three models is as follows:
                         Average Number of Customers Per Hour
                         s1 = 80      s2 = 100     s3 = 120
  d1 = Model A           $10,000      $15,000      $14,000
  d2 = Model B           $ 8,000      $18,000      $12,000
  d3 = Model C           $ 6,000      $16,000      $21,000

Slide 22

Example: Burger Prince

Expected Value Approach
Calculate the expected value for each decision. The decision tree on the next slide can assist in this calculation. Here d1, d2, d3 represent the decision alternatives of Models A, B, and C, and s1, s2, s3 represent the states of nature of 80, 100, and 120 customers per hour.

Slide 23

Example: Burger Prince

Decision Tree
[Decision tree: decision node 1 branches on d1, d2, and d3 to chance nodes 2, 3, and 4. Each chance node branches on s1 (.4), s2 (.2), and s3 (.4), with payoffs of 10,000 / 15,000 / 14,000 for d1, 8,000 / 18,000 / 12,000 for d2, and 6,000 / 16,000 / 21,000 for d3.]
Slide 24

Example: Burger Prince

Expected Value For Each Decision


Model A (d1): EV = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
Model B (d2): EV = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
Model C (d3): EV = .4(6,000) + .2(16,000) + .4(21,000) = $14,000

Choose the model with the largest expected value -- Model C.


Slide 25

Example: Burger Prince

Formula Spreadsheet for Expected Value Approach


     A            B         C          D          E                             F
 1   PAYOFF TABLE
 2
 3   Decision               State of Nature
 4   Alternative  s1 = 80   s2 = 100   s3 = 120   Expected Value                Recommended Decision
 5   Model A      10,000    15,000     14,000     =$B$8*B5+$C$8*C5+$D$8*D5      =IF(E5=$E$9,A5,"")
 6   Model B       8,000    18,000     12,000     =$B$8*B6+$C$8*C6+$D$8*D6      =IF(E6=$E$9,A6,"")
 7   Model C       6,000    16,000     21,000     =$B$8*B7+$C$8*C7+$D$8*D7      =IF(E7=$E$9,A7,"")
 8   Probability     0.4       0.2        0.4
 9                           Maximum Expected Value   =MAX(E5:E7)

Slide 26

Example: Burger Prince

Spreadsheet for Expected Value Approach


     A            B         C          D          E                F
 1   PAYOFF TABLE
 2
 3   Decision               State of Nature
 4   Alternative  s1 = 80   s2 = 100   s3 = 120   Expected Value   Recommended Decision
 5   Model A      10,000    15,000     14,000        12,600
 6   Model B       8,000    18,000     12,000        11,600
 7   Model C       6,000    16,000     21,000        14,000        Model C
 8   Probability     0.4       0.2        0.4
 9                           Maximum Expected Value   14,000

Slide 27

Expected Value of Perfect Information

Frequently information is available which can improve the probability estimates for the states of nature. The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur. The EVPI provides an upper bound on the expected value of any sample or survey information.

Slide 28

Expected Value of Perfect Information

EVPI Calculation
Step 1: Determine the optimal return corresponding to each state of nature.
Step 2: Compute the expected value of these optimal returns.
Step 3: Subtract the EV of the optimal decision from the amount determined in step (2).
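A minimal Python sketch of these three steps, using the Burger Prince payoff table and prior probabilities from the surrounding slides (the code itself is an editorial illustration):

probs = [0.4, 0.2, 0.4]                        # P(s1 = 80), P(s2 = 100), P(s3 = 120)
payoffs = {
    "Model A": [10000, 15000, 14000],
    "Model B": [ 8000, 18000, 12000],
    "Model C": [ 6000, 16000, 21000],
}

# Step 1: optimal return under each state of nature
best_per_state = [max(row[j] for row in payoffs.values()) for j in range(3)]   # [10000, 18000, 21000]

# Step 2: expected value of these optimal returns (EVwPI)
ev_wpi = sum(p * v for p, v in zip(probs, best_per_state))                     # 16,000

# Step 3: subtract the EV of the optimal decision made without perfect information
ev_best = max(sum(p * v for p, v in zip(probs, row)) for row in payoffs.values())  # 14,000
evpi = ev_wpi - ev_best                                                        # 2,000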

Slide 29

Example: Burger Prince

Expected Value of Perfect Information
Calculate the expected value of the optimum payoff for each state of nature and subtract the EV of the optimal decision.

EVPI = .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000

Slide 30

Example: Burger Prince

Spreadsheet for Expected Value of Perfect Information


     A                B         C          D          E                F
 1   PAYOFF TABLE
 2
 3   Decision                   State of Nature
 4   Alternative      s1 = 80   s2 = 100   s3 = 120   Expected Value   Recommended Decision
 5   d1 = Model A     10,000    15,000     14,000        12,600
 6   d2 = Model B      8,000    18,000     12,000        11,600
 7   d3 = Model C      6,000    16,000     21,000        14,000        d3 = Model C
 8   Probability         0.4       0.2        0.4
 9                               Maximum Expected Value   14,000
10   Maximum Payoff   10,000    18,000     21,000
11   EVwPI            16,000
12   EVPI              2,000

Slide 31

Decision Analysis With Sample Information

Knowledge of sample or survey information can be used to revise the probability estimates for the states of nature. Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities. With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' Theorem. The outcomes of this analysis are called posterior probabilities.

Slide 32

Posterior Probabilities

Posterior Probabilities Calculation
Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator -- this gives the joint probabilities for the states and indicator.
Step 2: Sum these joint probabilities over all states -- this gives the marginal probability for the indicator.
Step 3: For each state, divide its joint probability by the marginal probability for the indicator -- this gives the posterior probability distribution.
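A minimal Python sketch of these steps for the Burger Prince "favorable" survey outcome (the priors and conditionals come from the later slides; the code is an editorial illustration):

priors       = [0.4, 0.2, 0.4]     # P(80), P(100), P(120)
conditionals = [0.2, 0.5, 0.9]     # P(favorable | state)

# Step 1: joint probabilities P(favorable and state)
joints = [p * c for p, c in zip(priors, conditionals)]      # [0.08, 0.10, 0.36]

# Step 2: marginal probability of the indicator
p_favorable = sum(joints)                                    # 0.54

# Step 3: posterior probabilities P(state | favorable)
posteriors = [j / p_favorable for j in joints]               # approx. [0.148, 0.185, 0.667]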
Slide 33

Expected Value of Sample Information

The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.

Slide 34

Expected Value of Sample Information

EVSI Calculation
Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample, using the posterior probabilities for the states of nature.
Step 2: Compute the expected value of these optimal returns.
Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in step (2).
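A minimal Python sketch of these steps, using the indicator probabilities and posterior EMVs worked out in the Burger Prince example later in the chapter (the code is an editorial illustration):

p_favorable, p_unfavorable = 0.54, 0.46
emv_favorable   = 17855        # best decision (Model C) given a favorable survey result
emv_unfavorable = 11433        # best decision (Model A) given an unfavorable survey result

# Steps 1-2: expected value with sample information (EVwSI)
ev_wsi = p_favorable * emv_favorable + p_unfavorable * emv_unfavorable   # 14,900.88

# Step 3: subtract the EV of the optimal decision without sample information
ev_wosi = 14000
evsi = ev_wsi - ev_wosi                                                  # 900.88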
Slide 35

Efficiency of Sample Information

Efficiency of sample information is the ratio of EVSI to EVPI. As the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.
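In Python terms, using the Burger Prince values computed later in the chapter (an editorial illustration):

efficiency = 900.88 / 2000     # EVSI / EVPI = .4504, i.e. the survey captures about 45% of the value of perfect information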

Slide 36

Example: Burger Prince

Sample Information
Burger Prince must decide whether or not to purchase a marketing survey from Stanton Marketing for $1,000. The results of the survey are either "favorable" or "unfavorable". The conditional probabilities are:
P(favorable | 80 customers per hour) = .2
P(favorable | 100 customers per hour) = .5
P(favorable | 120 customers per hour) = .9
Should Burger Prince have the survey performed by Stanton Marketing?

Slide 37

Example: Burger Prince

Posterior Probabilities -- Favorable Survey Results

  State    Prior    Conditional    Joint    Posterior
   80       .4          .2          .08       .148
  100       .2          .5          .10       .185
  120       .4          .9          .36       .667
                           Total:   .54      1.000

P(favorable) = .54

Slide 38

Example: Burger Prince

Posterior Probabilities -- Unfavorable Survey Results

  State    Prior    Conditional    Joint    Posterior
   80       .4          .8          .32       .696
  100       .2          .5          .10       .217
  120       .4          .1          .04       .087
                           Total:   .46      1.000

P(unfavorable) = .46

Slide 39

Example: Burger Prince

Formula Spreadsheet for Posterior Probabilities

     A                  B               C               D                 E
 1   Market Research -- Favorable
 2                      Prior           Conditional     Joint             Posterior
 3   State of Nature    Probabilities   Probabilities   Probabilities     Probabilities
 4   s1 = 80            0.4             0.2             =B4*C4            =D4/$D$7
 5   s2 = 100           0.2             0.5             =B5*C5            =D5/$D$7
 6   s3 = 120           0.4             0.9             =B6*C6            =D6/$D$7
 7                               P(Favorable) =         =SUM(D4:D6)
 8
 9   Market Research -- Unfavorable
10                      Prior           Conditional     Joint             Posterior
11   State of Nature    Probabilities   Probabilities   Probabilities     Probabilities
12   s1 = 80            0.4             0.8             =B12*C12          =D12/$D$15
13   s2 = 100           0.2             0.5             =B13*C13          =D13/$D$15
14   s3 = 120           0.4             0.1             =B14*C14          =D14/$D$15
15                               P(Unfavorable) =       =SUM(D12:D14)

Slide 40

Example: Burger Prince

Spreadsheet for Posterior Probabilities

     A                  B               C               D                 E
 1   Market Research -- Favorable
 2                      Prior           Conditional     Joint             Posterior
 3   State of Nature    Probabilities   Probabilities   Probabilities     Probabilities
 4   s1 = 80            0.4             0.2             0.08              0.148
 5   s2 = 100           0.2             0.5             0.10              0.185
 6   s3 = 120           0.4             0.9             0.36              0.667
 7                               P(Favorable) =         0.54
 8
 9   Market Research -- Unfavorable
10                      Prior           Conditional     Joint             Posterior
11   State of Nature    Probabilities   Probabilities   Probabilities     Probabilities
12   s1 = 80            0.4             0.8             0.32              0.696
13   s2 = 100           0.2             0.5             0.10              0.217
14   s3 = 120           0.4             0.1             0.04              0.087
15                               P(Unfavorable) =       0.46

Slide 41

Example: Burger Prince

Decision Tree (top half)


[Decision tree, top half: a favorable survey result I1 (probability .54) leads to decision node 2, which branches on d1, d2, and d3 to chance nodes 4, 5, and 6. Each chance node branches on s1 (.148), s2 (.185), and s3 (.667), with payoffs of $10,000 / $15,000 / $14,000 for d1, $8,000 / $18,000 / $12,000 for d2, and $6,000 / $16,000 / $21,000 for d3.]

Slide 42

Example: Burger Prince

Decision Tree (bottom half)


[Decision tree, bottom half: an unfavorable survey result I2 (probability .46) leads to decision node 3, which branches on d1, d2, and d3 to chance nodes 7, 8, and 9. Each chance node branches on s1 (.696), s2 (.217), and s3 (.087), with payoffs of $10,000 / $15,000 / $14,000 for d1, $8,000 / $18,000 / $12,000 for d2, and $6,000 / $16,000 / $21,000 for d3.]

Slide 43

Example: Burger Prince


EMV at each chance node, computed with the posterior probabilities:

If the survey result is favorable (I1, probability .54), decision node 2:
d1: EMV = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
d2: EMV = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
d3: EMV = .148(6,000) + .185(16,000) + .667(21,000) = $17,855   <-- best: $17,855

If the survey result is unfavorable (I2, probability .46), decision node 3:
d1: EMV = .696(10,000) + .217(15,000) + .087(14,000) = $11,433   <-- best: $11,433
d2: EMV = .696(8,000) + .217(18,000) + .087(12,000) = $10,518
d3: EMV = .696(6,000) + .217(16,000) + .087(21,000) = $9,475
Slide 44

Example: Burger Prince

Decision Strategy
Assuming the survey is undertaken:
If the outcome of the survey is favorable, choose Model C.
If it is unfavorable, choose Model A.

Slide 45

Example: Burger Prince

Question: Should the survey be undertaken?
Answer: If the expected value with sample information (EVwSI), after deducting the cost of the survey, is greater than the expected value without sample information (EVwoSI), the survey is recommended.

Slide 46

Example: Burger Prince

Expected Value with Sample Information (EVwSI)
EVwSI = .54($17,855) + .46($11,433) = $14,900.88

Expected Value of Sample Information (EVSI)
EVSI = EVwSI - EVwoSI (assuming maximization)
EVSI = $14,900.88 - $14,000 = $900.88

Slide 47

Example: Burger Prince

Conclusion
EVSI = $900.88
Since the EVSI is less than the cost of the survey ($1,000), the survey should not be purchased.

Slide 48

Example: Burger Prince

Efficiency of Sample Information
The efficiency of the survey: EVSI/EVPI = $900.88/$2,000 = .4504

Slide 49

The End of Chapter 9

Slide 50
