
Markov Analysis

Learning Objectives
Students will be able to:
Determine future states or conditions
using Markov analysis.
Compute long-term or steady-state
conditions using only the matrix of
transition probabilities.
Understand the use of absorbing
state analysis in predicting future
conditions.
Outline
16.1 Introduction
16.2 States and State Probabilities
16.3 Matrix of Transition Probabilities
16.4 Predicting Future Market Share
16.5 Markov Analysis of Machine
Operations
16.6 Equilibrium Conditions
16.7 Absorbing States and the
Fundamental Matrix: Accounts
Receivable Applications
Introduction
Markov Analysis
A technique for estimating the probabilities of
future occurrences from currently known
probabilities
Numerous applications, including:
Market share analysis
Bad debt prediction
University enrollment prediction
Machine breakdown prediction

Markov Analysis
Matrix of Transition Probabilities
Shows the likelihood that the system will
change from one state to another from one
time period to the next
This is the Markov process.
It enables the prediction of future states or
conditions.

States and State
Probabilities
States are
used to identify all possible conditions of a
process or system
A system can exist in only one state at a
time. Examples include:
Working and broken states of a machine
Three shops in town, with a customer able to
patronize one at a time
Courses in a student schedule with the
student able to occupy only one class at a
time
Assumptions of
Markov Analysis
1. A finite number of possible states
2. Probability of changing states remains
the same over time
3. Future state predictable from previous
state and the transition matrix
4. Size and states of the system remain
the same during analysis
5. States are collectively exhaustive
All possible states have been identified
6. States are mutually exclusive
Only one state at a time is possible
States and State
Probabilities continued
1. Identify all states.
2. Determine the probability that the
system is in each state.
This information is placed into a vector of
state probabilities:
p(i) = vector of state probabilities for period i
     = (p1, p2, p3, ..., pn)
where
n = number of states
p1, p2, ..., pn = P(being in state 1, state 2, ..., state n)
States and State
Probabilities continued
For example:
If dealing with only 1 machine, it may be
known that it is currently functioning
correctly.
The vector of states can then be shown.
p(1) = (1, 0)
where
p(1) = vector of states for the machine in period 1
p1 = 1 = P(being in state 1) = P(machine working)
p2 = 0 = P(being in state 2) = P(machine broken)
Most of the time, problems deal with more
than one item!
States and State
Probabilities continued
Three Grocery Store example:
100,000 customers monthly for the 3 grocery
stores
State 1 = store 1 = 40,000/100,000 = 40%
State 2 = store 2 = 30,000/100,000 = 30%
State 3 = store 3 = 30,000/100,000 = 30%
Vector of state probabilities:
p(1) = (0.4, 0.3, 0.3)
where
p(1) = vector of state probabilities in period 1
p1 = 0.4 = P(a person being in store 1)
p2 = 0.3 = P(a person being in store 2)
p3 = 0.3 = P(a person being in store 3)

States and State
Probabilities continued
Three Grocery Store example, continued:
Probabilities in the vector of states for the stores
represent market share for the first period.
In period 1, the market shares are
Store 1: 40%
Store 2: 30%
Store 3: 30%
But, every month, customers who frequent one
store have a likelihood of visiting another store.
Customers from each store have different
probabilities for visiting other stores.

States and State
Probabilities continued
Three Grocery Store example, continued:
Store-specific probabilities of a customer
visiting each store in the next month:
Store 1:
Return to Store 1 = 80%
Visit Store 2 = 10%
Visit Store 3 = 10%
Store 2:
Visit Store 1 = 10%
Return to Store 2 = 70%
Visit Store 3 = 20%
Store 3:
Visit Store 1 = 20%
Visit Store 2 = 20%
Return to Store 3 = 60%
States and State
Probabilities continued
Three Grocery Store example, continued:
Combining the starting market share with the
customer probabilities for visiting a store next period
yields the market shares in the next period:
                Initial
                Share    To Store 1  To Store 2  To Store 3
Store 1:        40%      80%         10%         10%
  Next period:           32%          4%          4%
Store 2:        30%      10%         70%         20%
  Next period:            3%         21%          6%
Store 3:        30%      20%         20%         60%
  Next period:            6%          6%         18%
New shares:              41%         31%         28%   = 100%
Matrix of Transition
Probabilities
To calculate periodic changes, it is much more
convenient to use
a matrix of transition probabilities.
a matrix of conditional probabilities of being in
a future state given a current state.
Let Pij = conditional probability of being in state j
in the future given the current state i:
Pij = P(state j at time 1 | state i at time 0)
For example, P12 is the probability of being in
state 2 in the future given the system was in
state 1 in the prior period.

Matrix of Transition
Probabilities continued
Let P = matrix of transition probabilities

    | P11  P12  P13  ...  P1n |
    | P21  P22  P23  ...  P2n |
P = |  .    .    .         .  |
    | Pm1  Pm2  Pm3  ...  Pmn |

Important:
Each row must sum to 1.
But, the columns do NOT necessarily sum to 1.

Matrix of Transition
Probabilities continued
Three Grocery Stores, revisited
The previously identified transitional
probabilities for each of the stores can
now be put into a matrix:
    | 0.8  0.1  0.1 |
P = | 0.1  0.7  0.2 |
    | 0.2  0.2  0.6 |

Row 1 interpretation:
0.8 = P11 = P(in state 1 after being in state 1)
0.1 = P12 = P(in state 2 after being in state 1)
0.1 = P13 = P(in state 3 after being in state 1)
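As a quick sanity check, this matrix can be written out in code. The following is a minimal sketch, assuming NumPy (the slides do not name any software):

import numpy as np

# Grocery-store matrix of transition probabilities;
# row i gives the probabilities for customers currently at store i+1.
P = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.7, 0.2],
    [0.2, 0.2, 0.6],
])

# Each row of a transition matrix must sum to 1
# (the columns need not).
assert np.allclose(P.sum(axis=1), 1.0)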
Predicting Future
Market Shares
Grocery Store example
A purpose of Markov analysis is to predict the future.
Given the
1. vector of state probabilities and
2. matrix of transition probabilities,
it is easy to find the state probabilities in the future.
This type of analysis allows the computation of the
probability that a person will be at one of the grocery
stores in a future period.
Since this probability equals market share, it is
possible to determine the future market shares of the
grocery stores.


Predicting Future Market
Shares continued
Grocery Store example
When the current period is 0, the state
probabilities for the next period (1) are found
using:
p(1) = p(0)P
In general, for any period n, the state probabilities for
period n+1 are computed as:
p(n+1) = p(n)P






Predicting Future
States continued
p(0) = [0.4  0.3  0.3] = state probabilities

    | 0.8  0.1  0.1 |
P = | 0.1  0.7  0.2 |
    | 0.2  0.2  0.6 |

                                | 0.8  0.1  0.1 |
p(1) = p(0)P = [0.4  0.3  0.3]  | 0.1  0.7  0.2 |
                                | 0.2  0.2  0.6 |

     = [(0.4)(0.8) + (0.3)(0.1) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.7) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.2) + (0.3)(0.6)]

     = [0.41  0.31  0.28]
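The same multiplication is one line of code. A minimal sketch, again assuming NumPy:

import numpy as np

p0 = np.array([0.4, 0.3, 0.3])        # p(0): initial market shares
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

p1 = p0 @ P                           # p(1) = p(0)P
print(p1)                             # [0.41 0.31 0.28]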
Predicting Future Market
Shares continued
In general,
p(n) = p(0)P^n
Therefore, the state probabilities n periods in
the future can be obtained from the
current state probabilities and the
matrix of transition probabilities.
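A sketch of the n-period formula, assuming NumPy; np.linalg.matrix_power computes P^n directly:

import numpy as np

p0 = np.array([0.4, 0.3, 0.3])
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

n = 5                                   # any number of periods ahead
pn = p0 @ np.linalg.matrix_power(P, n)  # p(n) = p(0)P^n
print(pn)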



Markov Analysis of
Machine Operations
where
P11 = 0.8 = probability of machine working
this period if working last period
P12 = 0.2 = probability of machine not
working this period if working last period
P21 = 0.1 = probability of machine working
this period if not working last period
P22 = 0.9 = probability of machine not working
this period if not working last period

P = | 0.8  0.2 |
    | 0.1  0.9 |
Markov Analysis of
Machine Operations
continued
What is the probability the machine will be
working next month?
p(1) = p(0)P

     = (1, 0) | 0.8  0.2 |
              | 0.1  0.9 |

     = [(1)(0.8) + (0)(0.1), (1)(0.2) + (0)(0.9)]
     = (0.8, 0.2)
If the machine works this month, then there
is
an 80% chance that it will be working
next month and
a 20% chance it will be broken.

Markov Analysis of
Machine Operations
continued
What is the probability the machine will be
working in two months?
p(2) = p(1)P

     = (0.8, 0.2) | 0.8  0.2 |
                  | 0.1  0.9 |

     = [(0.8)(0.8) + (0.2)(0.1), (0.8)(0.2) + (0.2)(0.9)]
     = (0.66, 0.34)
If the machine is working this month, then in
two months there is
a 66% chance that it will be working
and
a 34% chance it will be broken.

Equilibrium
Conditions
Equilibrium state probabilities are
the long-run average probabilities
for being in each state.

Equilibrium conditions exist if state
probabilities do not change after a
large number of periods.

At equilibrium, the state probabilities
for the next period equal the state
probabilities for the current period.
Equilibrium
Conditions continued
One way to compute the equilibrium
state probabilities is to run the Markov
analysis for a large number of periods
and see if the probabilities approach
stable values.

On the next slide, the Markov analysis is
repeated for 15 periods for the machine
example.

By the 15th period, the share of time the
machine spends working and broken is
around 34% and 66%, respectively.
Machine Example:
Periods to Reach
Equilibrium
Period   State 1    State 2
  1      1.0        0.0
  2       .8         .2
  3       .66        .34
  4       .562       .438
  5       .4934      .5066
  6       .44538     .55462
  7       .411766    .588234
  8       .388236    .611763
  9       .371765    .628234
 10       .360235    .639754
 11       .352165    .647834
 12       .346515    .653484
 13       .342560    .657439
 14       .339792    .660207
 15       .337854    .662145
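This table can be reproduced by repeatedly applying p(n+1) = p(n)P. A minimal sketch, assuming NumPy:

import numpy as np

P = np.array([[0.8, 0.2],   # working -> (working, broken)
              [0.1, 0.9]])  # broken  -> (working, broken)

p = np.array([1.0, 0.0])    # period 1: machine known to be working
for period in range(1, 16):
    print(period, p)        # prints the rows of the table above
    p = p @ P               # advance one period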
The Markov Process:

p(n)  x  P  =  p(n+1)
(current state) x (matrix of transition probabilities) = (new state)

Equilibrium condition: p(n+1) = p(n).
Equilibrium Equations
Assume:
p(i+1) = p(i)P, with p = (p1, p2) and

P = | P11  P12 |
    | P21  P22 |

Then, at equilibrium:

(p1, p2) = (p1, p2) | P11  P12 |
                    | P21  P22 |
or:
p1 = p1P11 + p2P21
p2 = p1P12 + p2P22
Therefore:
p1 = P21 p2 / (1 - P11)   and   p2 = P12 p1 / (1 - P22)
Equilibrium Equations
continued
It is always true that
p(next period) = p(this period)P
or
p(n+1) = p(n)P
At equilibrium:
p(n+1) = p(n) = p(n)P
so
p(n) = p(n)P
Dropping the n term:
p = pP
Equilibrium Equations
continued
Machine Breakdown example
At equilibrium:
p = pP

(p1, p2) = (p1, p2) | 0.8  0.2 |
                    | 0.1  0.9 |

Applying matrix multiplication:
(p1, p2) = [(p1)(0.8) + (p2)(0.1), (p1)(0.2) + (p2)(0.9)]
Multiplying through yields:
p1 = 0.8p1 + 0.1p2
p2 = 0.2p1 + 0.9p2
Equilibrium Equations
continued
Machine Breakdown example
The state probabilities must sum to 1,
therefore:
Σ pi = 1
In this example, then:
p1 + p2 = 1
In a Markov analysis, there are always n state
equilibrium equations and 1 equation of state
probabilities summing to 1.
Equilibrium Equations
continued
Machine Breakdown Example
Summarizing the equilibrium equations:
p1 = 0.8p1 + 0.1p2
p2 = 0.2p1 + 0.9p2
p1 + p2 = 1
Solving by simultaneous equations:
p1 = 0.333333
p2 = 0.666667
Therefore, in the long-run, the machine
will be functioning 33.33% of the time
and broken down 66.67% of the time.
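The same simultaneous equations can be solved numerically. A sketch assuming NumPy: p = pP is rewritten as (P^T - I)p^T = 0 and stacked with the constraint p1 + p2 = 1, then solved by least squares:

import numpy as np

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])
n = P.shape[0]

# Equilibrium equations (P^T - I) p = 0, plus a row of ones
# enforcing p1 + p2 = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0

p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)   # approximately [0.333333 0.666667]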
Absorbing States
A state that has zero probability of moving to
any other state is called an absorbing state.
If an entity is in an absorbing state now,
the probability of being in an absorbing
state in the future is 100%.

An example of such a process is
accounts receivable.
Bills are either paid, delinquent, or written
off as bad debt.
Once paid or written off, the debt stays
paid or written off.
Absorbing States
Accounts Receivable example
The possible states are
Paid
Bad debt
Less than 1 month old
1 to 3 months old
A transition matrix for this would look
similar to:
        Paid   Bad   <1    1-3
Paid     1      0     0     0
Bad      0      1     0     0
<1      0.6     0    0.2   0.2
1-3     0.4    0.1   0.3   0.2
Markov Process
Fundamental Matrix

        | 1    0    0    0   |
P =     | 0    1    0    0   |
        | 0.6  0    0.2  0.2 |
        | 0.4  0.1  0.3  0.2 |

Partition the probability matrix into 4
quadrants to make 4 new sub-matrices:
I, 0, A, and B.
Markov Process
Fundamental Matrix
continued

Let P = | I  0 |
        | A  B |

where I = identity matrix,
and 0 = null matrix.

Then F = (I - B)^-1

Once F is found, multiply it by the A matrix: FA.
The FA matrix indicates the probability that an amount
in one of the non-absorbing states will end up
in one of the absorbing states.
Markov Process
Fundamental Matrix
continued
Once the FA matrix is found, multiply it by
the M vector, which holds the starting values
for the non-absorbing states, giving MFA,
where
M = (M1, M2, M3, ..., Mn)
The resulting vector indicates how many
observations end up in the first absorbing
state and the second absorbing state,
respectively.
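A sketch of the whole accounts-receivable calculation, assuming NumPy. The starting balances in M are hypothetical, since the slides do not assign dollar amounts:

import numpy as np

# Sub-matrices from the partitioned transition matrix above.
A = np.array([[0.6, 0.0],    # <1 month   -> (paid, bad debt)
              [0.4, 0.1]])   # 1-3 months -> (paid, bad debt)
B = np.array([[0.2, 0.2],    # <1 month   -> (<1, 1-3)
              [0.3, 0.2]])   # 1-3 months -> (<1, 1-3)

F = np.linalg.inv(np.eye(2) - B)   # fundamental matrix F = (I - B)^-1
FA = F @ A                         # long-run absorption probabilities

# Hypothetical starting balances for the non-absorbing states.
M = np.array([2000.0, 5000.0])     # ($ <1 month old, $ 1-3 months old)
print(M @ FA)                      # dollars ending up (paid, bad debt)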
