Markov Chains

Chia-Ping Chen

Professor

Department of Computer Science and Engineering

National Sun Yat-sen University

Probability

Dependence between Random Variables

So far, we have assumed that random variables at different times are independent. In general, random variables at different times are often not independent.

A Markov process, also called a Markov chain, allows dependence between random variables. To maintain tractability, a conditional independence property is assumed: random variables are independent conditioned on any random variable between them.

Discrete-time Markov Chains

Conditional Independence

X is independent of Y given Z if

P(X, Y | Z) = P(X | Z) P(Y | Z)

This is conditional independence, denoted by

X ⊥ Y | Z

Markov Assumption

A discrete-time random process S0, S1, . . . is a Markov chain if the future is independent of the past given the present:

{Sn+1, Sn+2, . . . } ⊥ {Sn−1, . . . , S0} | Sn ,  ∀n ∈ N

State

A sample path of a Markov chain is a sequence of states. A state transition occurs from one time to the next time, including the case of self-transition.

Representation

A Markov chain can be represented by a state transition graph or a transition probability matrix.

node: state
edge: state transition
annotation: transition probability

    ⎡ p11 p12 · · · p1m ⎤
T = ⎢ p21 p22 · · · p2m ⎥ ,  pij = P(Sn+1 = j | Sn = i)
    ⎢  ·   ·   · ·   ·  ⎥
    ⎣ pm1 pm2 · · · pmm ⎦

Example 7.1 Catch-up

Alice takes a class. In each week, she is either up-to-date or fallen behind. If she is up-to-date in a given week, the probability that she will be up-to-date in the next week is 0.8. If she is fallen behind in a given week, the probability that she will be up-to-date in the next week is 0.6. Construct a Markov chain for Alice.
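As a concrete sketch of the answer (the state numbering and the NumPy encoding are my own, not from the slides), Alice's chain can be stored as a 2×2 transition probability matrix:

```python
import numpy as np

# Hypothetical encoding: state 0 = up-to-date, state 1 = fallen behind.
# Row i lists the probabilities of moving from state i to states 0 and 1;
# the entries 0.2 and 0.4 are the complements of the probabilities given above.
T = np.array([[0.8, 0.2],
              [0.6, 0.4]])

# Every row of a transition probability matrix sums to 1.
assert np.allclose(T.sum(axis=1), 1.0)
```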

Example 7.2 Flies and Spiders

A fly moves along a straight line in unit increments. At each time period, it moves one unit to the left with probability 0.3, one unit to the right with probability 0.3, and stays in place with probability 0.4. Two spiders are lurking at positions 1 and m: if the fly lands there, it is captured by a spider. Construct a Markov chain for the fly.

Example 7.3 Machines

Consider a machine that is either working or broken down on a given day. If it is working, it will break down the next day with probability b, and will continue working with probability 1 − b. If it breaks down on a given day, it will be repaired and be working on the next day with probability r, and will continue to be broken down with probability 1 − r. Construct a Markov chain for the machine.

Suppose further that a machine that remains broken for a given number of days is replaced by a new working machine. What is the new Markov chain?

Sequence of States

The probability of a sequence of states

S0 = s0 , . . . , Sn = sn

is

P(S0 = s0 , . . . , Sn = sn)
  = P(S0 = s0) P(S1 = s1 | S0 = s0) . . . P(Sn = sn | Sn−1 = sn−1 , . . . )
  = P(S0 = s0) P(S1 = s1 | S0 = s0) . . . P(Sn = sn | Sn−1 = sn−1)
  = π_{s0} ∏_{k=1..n} p_{s_{k−1} s_k}

where

πi = P(S0 = i)

is the probability of an initial state.
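A minimal sketch of this product formula (the function name and the uniform initial distribution are my own choices for illustration):

```python
import numpy as np

def path_probability(pi0, T, path):
    """P(S0=s0, ..., Sn=sn) = pi0[s0] * prod over k of T[s_{k-1}, s_k]."""
    prob = pi0[path[0]]
    for prev, cur in zip(path, path[1:]):
        prob *= T[prev, cur]
    return prob

# Alice's chain from Example 7.1, with an assumed uniform initial distribution.
T = np.array([[0.8, 0.2], [0.6, 0.4]])
pi0 = np.array([0.5, 0.5])
print(path_probability(pi0, T, [0, 0, 1]))  # 0.5 * 0.8 * 0.2 = 0.08
```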

n-Step State Transition Probability

The n-step state transition probability rij(n) is the probability of starting at one state and ending at another state in n steps:

rij(n) = P(Sn = j | S0 = i)

Chapman-Kolmogorov Equation

For n ≥ 2

rij(n) = Σ_{k=1..m} rik(n − 1) pkj

since

rij(n) = Σ_{k=1..m} P(Sn = j, Sn−1 = k | S0 = i)
       = Σ_{k=1..m} P(Sn−1 = k | S0 = i) P(Sn = j | Sn−1 = k, S0 = i)
       = Σ_{k=1..m} P(Sn−1 = k | S0 = i) P(Sn = j | Sn−1 = k)
       = Σ_{k=1..m} rik(n − 1) pkj

1-Step and n-Step Transition Probability Matrix

Let R(n) be the matrix with entries rij(n). Then

R(n) = T^n

By definition

R(1) = T

By induction, using the Chapman-Kolmogorov equation R(n) = R(n − 1) T,

R(n) = T^n
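This identity is easy to check numerically; a sketch using NumPy and the two-state chain of Example 7.1:

```python
import numpy as np

T = np.array([[0.8, 0.2],
              [0.6, 0.4]])

# n-step transition probability matrices R(n) = T^n
R = {n: np.linalg.matrix_power(T, n) for n in range(1, 5)}

# By definition R(1) = T; by Chapman-Kolmogorov, R(n) = R(n-1) T.
assert np.allclose(R[1], T)
for n in range(2, 5):
    assert np.allclose(R[n], R[n - 1] @ T)
```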

Visualization

The evolution of a Markov chain can be visualized on the state transition graph or a trellis.

State transition graph

a sequence of states is a path on the state transition graph
each path on the state transition graph has a probability
an n-step transition probability is the sum of probabilities of the paths with designated start state and end state

Trellis

an expansion of the state transition graph over time

Classification of States

Accessibility between States

State j is accessible from state i, denoted by

i → j

if there is a path from i to j on the state transition graph, with the edge directions along the path. Equivalently:

starting from i, the probability of reaching j is non-zero
starting from i over and over again, j will be reached eventually

Recurrent State and Transient State

A state i is recurrent if every state accessible from i is also accessible to it. That is, i is recurrent if

i → j  ⟹  j → i

for every j. A state i is transient if it is not recurrent, i.e. there exists j such that

i → j  but  j ↛ i

Revisiting Property

Think of a token moving from state to state on the transition graph of a discrete-time Markov chain. The difference between a recurrent state and a transient state is not whether the token will leave, but whether the token will return.

Partition of State Space

The state space of a Markov chain can be partitioned into the set of recurrent states and the set of transient states. The set of recurrent states can be further partitioned into one or more recurrent classes, or simply classes.

Two recurrent states in the same class are accessible to each other.
Two recurrent states in different classes are not accessible to each other.

Periodic Class

A recurrent class C is periodic if it can be partitioned into d > 1 disjoint subsets

C = S1 ∪ · · · ∪ Sd

so that all valid state transitions are from one subset to the next subset cyclically

S1 → S2 → · · · → Sd → S1

Long-term Behavior

Steady-State Probability

For a Markov chain with a single recurrent class that is aperiodic, the steady-state probability of a state is defined by

πj = lim_{n→∞} P(Sn = j)

Probability of a State

P(Xn = j) = Σ_{i=1..m} P(Xn = j | X0 = i) P(X0 = i)
          = Σ_{i=1..m} rij(n) P(X0 = i)

For a large n

P(Xn = j) ≈ Σ_{i=1..m} πj P(X0 = i) = πj

Balance Equations

The steady-state probabilities satisfy

π = πT,  i.e.  πj = Σ_{k=1..m} πk pkj

This follows from the Chapman-Kolmogorov equation

rij(n) = Σ_{k=1..m} rik(n − 1) pkj

Let n → ∞

πj = Σ_{k=1..m} πk pkj

Example 7.5 Two-state Markov Chain

Consider the two-state Markov chain of Example 7.1, with

p11 = 0.8, p12 = 0.2, p21 = 0.6, p22 = 0.4

Solution

The steady-state probabilities must satisfy the balance equations

π1 = π1 p11 + π2 p21 = 0.8 π1 + 0.6 π2
π2 = π1 p12 + π2 p22 = 0.2 π1 + 0.4 π2

and sum to 1

1 = π1 + π2

Solving, π1 = 3 π2 , so π1 = 3/4 and π2 = 1/4.
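A sketch of solving the balance equations numerically (replacing one redundant balance equation with the normalization is a standard trick; the NumPy setup is my own):

```python
import numpy as np

T = np.array([[0.8, 0.2],
              [0.6, 0.4]])

# pi = pi T  <=>  (T' - I) pi' = 0. One balance equation is redundant,
# so replace the last row with the normalization pi_1 + pi_2 = 1.
A = T.T - np.eye(2)
A[-1, :] = 1.0
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # [0.75 0.25]
```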

Example 7.6 Don't Forget Umbrellas

A man has two umbrellas that he uses when commuting from home to office and back. If it rains and an umbrella is available in his location, he takes it. If it is not raining, he always forgets to take an umbrella. Suppose that it rains with probability p each time he commutes. What is the steady-state probability that he gets wet during a commute?

Solution

Let state i represent that i umbrellas are available in his current location. The steady-state probabilities must satisfy the balance equations and sum to 1

π0 = π2 (1 − p)
π1 = π1 (1 − p) + π2 p
π2 = π0 · 1 + π1 p
1 = π0 + π1 + π2

Solving, π1 = π2 = 1/(3 − p) and π0 = (1 − p)/(3 − p). He gets wet when no umbrella is available and it rains, so the steady-state probability of getting wet is π0 p = p(1 − p)/(3 − p).

Example 7.7 Superstition

A superstitious professor works in a circular building with m doors, where m is odd, and never uses the same door twice in a row. Instead, he uses with probability p (or probability 1 − p) the door that is adjacent in the clockwise (or counter-clockwise) direction to the door he used last. What is the probability that a given door will be used on some particular day far into the future?

Solution

Number the doors 0, . . . , m − 1 in the clockwise direction. Let state i represent that door i is used by the professor in a given day. The steady-state probabilities must satisfy the balance equations and sum to 1

1 = π0 + · · · + πm−1

By symmetry

πi = 1/m

Long-term Frequency of a State

For a Markov chain with a single aperiodic recurrent class, the steady-state probability πj is the long-term frequency of state j. Let vij(n) be the number of visits to state j within the first n transitions, starting from state i. Then

vij(n)/n → πj  (a.s.)

Long-term Frequency of a State Transition

Similarly, πj pjk is the long-term frequency of state transitions from j to k. Let vijk(n) be the number of transitions from j to k within the first n transitions, starting from state i. Then

vijk(n)/n = (vijk(n)/vij(n)) (vij(n)/n) → pjk πj

Frequency Interpretation for Balance Equations

πj = Σk πk pkj

left side: the long-term frequency of visits to j
right side: the sum of long-term frequencies of state transitions to j
equality: a transition to j means a visit to j

Birth-Death Process

A birth-death process is a Markov chain with a linear state transition graph, in which only self-transitions or transitions between neighboring states are allowed.

The probability of birth at state i is denoted by bi

P(Sn+1 = i + 1 | Sn = i) = bi

The probability of death at state i is denoted by di

P(Sn+1 = i − 1 | Sn = i) = di

Local Balance Equations

The steady-state probabilities of a birth-death process satisfy the local balance equations

πi−1 bi−1 = πi di ,  i = 2, . . . , m

In any sample path, the numbers of transitions from i − 1 to i and from i to i − 1 differ by at most 1, so their long-term frequencies are equal

πi−1 p_{i−1,i} = πi p_{i,i−1} ,  i.e.  πi−1 bi−1 = πi di

Simplification

The local balance equations are simpler than the balance equations, so the computation of steady-state probabilities is simplified.

πi−1 bi−1 = πi di

πi = (bi−1/di) πi−1 = · · · = (b1 . . . bi−1)/(d2 . . . di) π1

Normalize total probability to 1 to find π1 , then find π2 , . . . , πm

Example 7.8 Random Walk

A person walks along a straight line and, at each time period, takes a step to the right with probability b, and a step to the left with probability 1 − b. She starts in one of the positions 1, . . . , m, but if she reaches the position 0 (or position m + 1), her step is instantly reflected back to 1 (or m, respectively). We introduce a Markov chain model whose states are the positions 1, . . . , m. What are the steady-state probabilities?

Example 7.9 Buffer Size

Packets arrive at a node of a communication network, where they are stored in a buffer and then transmitted. Let the size of the buffer be m. If m packets are already present, any newly arriving packets are discarded. We discretize time in very small periods, and we assume that in each period, at most one event can happen that can change the number of packets stored in the node. We assume that at each period, exactly one of the following occurs:

one new packet arrives, with probability b > 0
one existing packet completes transmission, with probability d > 0 if there is at least one packet in the node, and with probability 0 otherwise
no new packet arrives and no existing packet completes transmission, with probability 1 − b − d if there is at least one packet in the node, and with probability 1 − b otherwise

Solution

The state of the buffer can be modeled as a birth-death process, with birth and death probabilities

b0 = · · · = bm−1 = b, bm = 0
d0 = 0, d1 = · · · = dm = d

The local balance equations

πi−1 b = πi d,  i = 1, . . . , m

lead to

πi = ρ πi−1 = · · · = ρ^i π0 ,  ρ = b/d

Adding normalization condition Σ_{i=0..m} πi = 1, we get

π0 = 1/(1 + ρ + · · · + ρ^m) = (1 − ρ)/(1 − ρ^{m+1})  and  πi = π0 ρ^i = ρ^i (1 − ρ)/(1 − ρ^{m+1})
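The recursion and the closed form agree, as a quick numerical check shows (the values b = 0.3, d = 0.5, m = 4 are arbitrary choices for illustration):

```python
import numpy as np

b, d, m = 0.3, 0.5, 4
rho = b / d

# Local balance: pi_i = rho * pi_{i-1}, so pi_i is proportional to rho^i.
pi = rho ** np.arange(m + 1)
pi /= pi.sum()

# Closed form for rho != 1: pi_i = rho^i (1 - rho) / (1 - rho^(m+1))
closed = rho ** np.arange(m + 1) * (1 - rho) / (1 - rho ** (m + 1))
assert np.allclose(pi, closed)
```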

Short-term Behavior

Absorbing State

A state s is absorbing if

pss = 1

An absorbing state is in a recurrent class by itself. Multiple absorbing states in a Markov chain are possible.

Absorption Probability

Suppose every recurrent state is absorbing, so the chain eventually reaches an absorbing state. Specifically, starting at state i, the absorption probability for absorbing state s is

ai = P( lim_{n→∞} Sn = s | S0 = i )

For a recurrent initial state i

ai = δis

Equation for Absorption Probabilities

For a transient initial state i

ai = Σj pij aj

since

ai = P( lim_{n→∞} Sn = s | S0 = i )
   = Σ_{j=1..m} P( lim_{n→∞} Sn = s, S1 = j | S0 = i )
   = Σ_{j=1..m} P(S1 = j | S0 = i) P( lim_{n→∞} Sn = s | S1 = j )
   = Σ_{j=1..m} P(S1 = j | S0 = i) P( lim_{n→∞} Sn−1 = s | S0 = j )
   = Σ_{j=1..m} pij aj

Example 7.11 Gambler's Ruin

A gambler wins $1 at each round with probability p, and loses $1 with probability 1 − p. He plays continuously until he either accumulates a target amount of $m, or loses all his money.

Solution

Let state i represent that the gambler has i dollars. State m (as well as state 0) is an absorbing state. For a transient state i

ai = Σ_{j=0..m} pij aj = (1 − p) ai−1 + p ai+1

or

(1 − p)(ai − ai−1) = p(ai+1 − ai)

Defining

δi = ai+1 − ai ,  ρ = (1 − p)/p

we have

δi = ρ δi−1

which leads to

δi = ρ δi−1 = · · · = ρ^i δ0

For absorbing states 0 and m

am = 1, a0 = 0

and

am − a0 = (am − am−1) + · · · + (a1 − a0)
        = δm−1 + · · · + δ0
        = δ0 (ρ^{m−1} + · · · + 1)
        = 1

so

δ0 = 1/(ρ^{m−1} + · · · + 1)

Thus

ai = a0 + δ0 + · · · + δi−1 = (ρ^{i−1} + · · · + 1)/(ρ^{m−1} + · · · + 1)
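A sketch that evaluates this closed form and checks it against the absorption-probability recursion (the function name and the values p = 0.4, m = 5 are my own choices):

```python
import numpy as np

def win_probabilities(p, m):
    """a_i = (rho^{i-1} + ... + 1) / (rho^{m-1} + ... + 1), rho = (1 - p)/p."""
    rho = (1 - p) / p
    powers = rho ** np.arange(m)                     # rho^0, ..., rho^{m-1}
    partial = np.concatenate([[0.0], np.cumsum(powers)])
    return partial / powers.sum()                    # a_0 = 0, ..., a_m = 1

p, m = 0.4, 5
a = win_probabilities(p, m)

# Boundary conditions and the recursion a_i = (1-p) a_{i-1} + p a_{i+1}
assert abs(a[0]) < 1e-12 and abs(a[m] - 1) < 1e-12
for i in range(1, m):
    assert abs(a[i] - ((1 - p) * a[i - 1] + p * a[i + 1])) < 1e-12
```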

Expected Time to Absorption

The expected time to absorption starting at state i is

μi = E[A | S0 = i]

where A is the first time a recurrent state is reached

A = min {n ≥ 0 : Sn is recurrent}

Recursion

If i is recurrent

μi = 0

If i is transient

μi = E[A | S0 = i] = E[ E[A | S1] | S0 = i ]
   = Σ_{j=1..m} P(S1 = j | S0 = i) E[A | S1 = j, S0 = i]
   = Σ_{j=1..m} pij E[A | S1 = j]
   = Σ_{j=1..m} pij (μj + 1)
   = 1 + Σ_{j=1..m} pij μj

Example 7.12 Spider-and-Fly

The flies-and-spiders model of Example 7.2 corresponds to a Markov chain. Assume m = 4. What is the expected number of steps until the fly is captured?

Solution

Writing the recursion for the expected time to absorption, we have

μ1 = 0
μ2 = 1 + 0.3 μ1 + 0.4 μ2 + 0.3 μ3
μ3 = 1 + 0.3 μ2 + 0.4 μ3 + 0.3 μ4
μ4 = 0

Thus

μ2 = μ3 = 10/3
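The same answer falls out of solving the linear system μ = 1 + Pμ restricted to the transient states (the NumPy formulation is my own):

```python
import numpy as np

# Transition probabilities among the transient states 2 and 3 of the fly's chain.
P_transient = np.array([[0.4, 0.3],   # from state 2: stay (0.4), go to 3 (0.3)
                        [0.3, 0.4]])  # from state 3: go to 2 (0.3), stay (0.4)

# mu = 1 + P mu  =>  (I - P) mu = 1
mu = np.linalg.solve(np.eye(2) - P_transient, np.ones(2))
print(mu)  # both entries equal 10/3
```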

Continuous-time Markov Chains

From Discrete-time to Continuous-time

In a discrete-time Markov chain, state transitions occur at discrete times. In a continuous-time Markov chain, state transitions can happen at any instant.

Representation of a Continuous-time Markov Chain

A continuous-time Markov chain can be represented by a sequence of states and waiting times

S0 —T1→ S1 —T2→ S2 —T3→ · · · Sn —Tn+1→ Sn+1 · · ·

After waiting in state Sn for time Tn+1 , the process transits to the next state Sn+1 .

Markov Property

In a continuous-time Markov chain, the future is independent of the past given the current state. Given the current state Sn , both

the waiting time Tn+1 for a state transition to occur, and
the next state Sn+1 when the state transition occurs

are independent of the past.

Conditional Description

Given Sn = i

Tn+1 , the time to leave i, is an exponential random variable with some parameter νi
Sn+1 , the next state, is j with probability pij

The exponential waiting time is consistent with the Markov property because of the memorylessness of exponential random variables.

A CTMC is completely specified by a set of state transition probabilities pij and state transition rates νi .

Joint Distribution

P(Tn+1 > t, Sn+1 = j | Sn = i)
  = P(Tn+1 > t | Sn = i) P(Sn+1 = j | Sn = i)
  = e^{−νi t} pij

Note that j ≠ i.

Rate of Transitions Out of a State

The expected waiting time in state i before a state transition is

E[Tn+1 | Sn = i] = ∫_0^∞ t νi e^{−νi t} dt = 1/νi

On average, νi transitions occur per unit time when in i. The rate of transitions out of state i is νi .

Rate of Transitions between States

When in state i

a fraction pij of the state transitions out of i go to state j
on average, the number of state transitions from i to j per unit time is νi pij

The rate of transitions from state i to state j is

qij = νi pij

Example 7.14 Machines

A machine is in production mode until an alarm signal is generated. The time up to the alarm signal is an exponential random variable with parameter 1. Subsequent to the alarm signal, the machine is in test mode for an exponentially distributed amount of time with parameter 5. The test results are positive, with probability 1/2, in which case the machine returns to production mode, or negative, with probability 1/2, in which case the machine is taken for repair. The duration of the repair is exponentially distributed with parameter 3. Construct a continuous-time Markov chain.

Solution

The states are 1 (production), 2 (test), and 3 (repair). The state transition rates are

ν1 = 1, ν2 = 5, ν3 = 3

The state transition probabilities and rates of transitions between states are

    ⎡  0   1   0  ⎤        ⎡  0   1   0  ⎤
P = ⎢ 1/2  0  1/2 ⎥ ,  Q = ⎢ 5/2  0  5/2 ⎥
    ⎣  1   0   0  ⎦        ⎣  3   0   0  ⎦
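The matrix Q is just the rates νi applied row-wise to P, which is easy to reproduce (the NumPy encoding is my own):

```python
import numpy as np

nu = np.array([1.0, 5.0, 3.0])          # rates for production, test, repair
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

# q_ij = nu_i * p_ij: scale row i of P by nu_i
Q = nu[:, None] * P
print(Q)  # rows: [0, 1, 0], [2.5, 0, 2.5], [3, 0, 0]
```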

Starting with State Transition Rates

Conversely, the state transition rates and state transition probabilities can be determined from the rates of transitions between states

qij ,  i ≠ j

Specifically

νi = νi Σj pij = Σj νi pij = Σj qij

pij = qij/νi = qij / Σ_{j′} qij′

Auxiliary Discrete-time Markov Chain

Let δ > 0 be a small time step and let X(t) be a continuous-time Markov chain. The discrete-time random process

Z = Z0 , Z1 , . . .

where

Zn = X(nδ)

is an auxiliary discrete-time Markov chain of X(t).

State Transition Probabilities

The state transition probabilities of Z, denoted by {pij(δ)}, are related to the rates of transitions between states of {X(t)}. For j ≠ i

pij(δ) = P(Zn+1 = j | Zn = i)
       = (νi δ + o(δ)) pij        (transition out of i, then to j)
       = νi pij δ + o(δ)
       = qij δ + o(δ)

For j = i

pii(δ) = 1 − Σ_{j≠i} pij(δ) = 1 − Σ_{j≠i} qij δ + o(δ)

Example 7.14 (continued)

The transition probability matrix for an auxiliary DTMC is

       ⎡ 1 − δ     δ       0    ⎤
P(δ) = ⎢  5δ/2   1 − 5δ   5δ/2  ⎥ + o(δ)
       ⎣  3δ       0     1 − 3δ ⎦

Alternative Characterization of CTMC

If a continuous-time process has the property that, given the current state i, a small time δ later the state is another state j with probability

qij δ + o(δ),  j ≠ i

for some qij ≥ 0, then the process is a CTMC.

Example 7.15 Communication Network

Packets arrive at a node of a communication network according to a Poisson process with rate λ. The packets are stored at a buffer with room for up to m packets, and are then transmitted one at a time. However, if a packet finds a full buffer upon arrival, it is discarded. The time required to transmit a packet is exponentially distributed with parameter μ. Show that this can be modeled as a continuous-time Markov chain.

Solution

Let X(t) be the number of packets in the buffer at time t. For a small δ

P(X(t + δ) = i + 1 | X(t) = i) = λδ + o(δ),  i = 0, . . . , m − 1
P(X(t + δ) = i − 1 | X(t) = i) = μδ + o(δ),  i = 1, . . . , m

so X(t) is a continuous-time Markov chain.

Steady-State Probabilities

For a CTMC X(t) with a single recurrent class, each state has a steady-state probability

lim_{t→∞} P(X(t) = j) = πj

which can be computed through the auxiliary DTMC

Zn = X(nδ)

Balance Equations

The steady-state probabilities satisfy the balance equations

πj Σ_{k≠j} qjk = Σ_{k≠j} πk qkj ,  ∀j

The balance equations of the auxiliary DTMC Zn = X(nδ) are

πj = Σk πk pkj(δ) = πj pjj(δ) + Σ_{k≠j} πk pkj(δ)

Substituting the state transition probabilities of Z

πj = πj ( 1 − Σ_{k≠j} qjk δ + o(δ) ) + Σ_{k≠j} πk ( qkj δ + o(δ) )

Dividing by δ and letting δ → 0

πj Σ_{k≠j} qjk = Σ_{k≠j} πk qkj

Example 7.14 Machines (continued)

Solution

The balance equations are

π1 · 1 = (5/2) π2 + 3 π3
5 π2 = π1
3 π3 = (5/2) π2

Together with the normalization

π1 + π2 + π3 = 1

we get

π1 = 30/41, π2 = 6/41, π3 = 5/41
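These values can be verified against the CTMC balance equations πj Σ qjk = Σ πk qkj (a numerical check, using my own NumPy encoding of Q):

```python
import numpy as np

Q = np.array([[0.0, 1.0, 0.0],
              [2.5, 0.0, 2.5],
              [3.0, 0.0, 0.0]])
pi = np.array([30.0, 6.0, 5.0]) / 41.0

# Total rate of leaving each state j vs. total rate of entering it
leaving = pi * Q.sum(axis=1)
entering = pi @ Q
assert np.allclose(leaving, entering)
assert np.isclose(pi.sum(), 1.0)
```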

Continuous-time Birth-Death Processes

In a continuous-time birth-death process, the states are linearly arranged and only transitions to a neighboring state are allowed. That is

qij = 0, for |i − j| > 1

Local Balance Equations

By the same argument as in the discrete-time case, the steady-state probabilities satisfy the local balance equations

πi qij = πj qji

for j = i ± 1.

Example 7.15 Communication Network (continued)

Solution

The local balance equations give

πi λ = πi+1 μ  ⟹  πi+1 = ρ πi ,  ρ = λ/μ  ⟹  πi = π0 ρ^i

Normalization

π0 + π1 + · · · + πm = 1  ⟹  π0 (1 + ρ + · · · + ρ^m) = 1

π0 = (1 + ρ + · · · + ρ^m)^{−1}

πi = ρ^i / (1 + ρ + · · · + ρ^m)
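A quick numerical sketch of these steady-state probabilities (the rates λ = 2, μ = 3 and buffer size m = 5 are illustrative choices of my own):

```python
import numpy as np

lam, mu_rate, m = 2.0, 3.0, 5
rho = lam / mu_rate

# pi_i proportional to rho^i, normalized over states 0..m
pi = rho ** np.arange(m + 1)
pi /= pi.sum()

# Local balance: pi_i * lam == pi_{i+1} * mu for every neighboring pair
assert np.allclose(pi[:-1] * lam, pi[1:] * mu_rate)
```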
