
Hidden Markov Models
Andrew Davis

Lecture 13: Image Understanding
Computer Vision, December 4, 2007


Markov Model

• Occupies one of finite set of states at any given time (X1, X2, …, Xn)
  – could be characters, phonemes, weather conditions
• Probability of occupying given state determined solely by recent history
  – k-th order model depends on last k states
• Probabilities described by Matrix A
  – aij = P(system in state j | system was in state i)
  – aij are time independent
  – rows add to 1


Markov Model

• Example of probability matrix (weather forecasting)

                 sun     cloud   rain
        sun    ⎡ 0.50    0.375   0.125 ⎤
  A =  cloud   ⎢ 0.25    0.125   0.625 ⎥
        rain   ⎣ 0.25    0.375   0.375 ⎦

• Rows add to 1 → given one weather condition at time t, the next condition at t+1 is one of the three


Markov Model

• Simpler example – traffic light
  – red → green → yellow → red …
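Below is a minimal Python/numpy sketch (not from the original slides; the variable names are illustrative) of the weather transition matrix A above and how one step of the chain is computed with it.

```python
import numpy as np

states = ["sun", "cloud", "rain"]

# Transition matrix A: rows are today's state, columns tomorrow's; each row sums to 1.
A = np.array([
    [0.50, 0.375, 0.125],   # sun   -> sun, cloud, rain
    [0.25, 0.125, 0.625],   # cloud -> sun, cloud, rain
    [0.25, 0.375, 0.375],   # rain  -> sun, cloud, rain
])

# Given today's weather, the corresponding row is the distribution over tomorrow's weather.
today = states.index("cloud")
print(dict(zip(states, A[today])))     # {'sun': 0.25, 'cloud': 0.125, 'rain': 0.625}

# Propagating an uncertain belief one step forward is a vector-matrix product.
p_today = np.array([1/3, 1/3, 1/3])
p_tomorrow = p_today @ A
```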


Hidden Markov Model

• Desired parameters unknown, determined from set of observable parameters, Y
  – words determined from sounds (speech recognition)
  – words determined from lines (optical character recognition)
  – weather determined from secondary phenomena (example)
• Matrix B describes probabilities of Yk
  – bkj = P(state Yk observed | system was in state j)


Hidden Markov Model

• Example – observe condition of seaweed to determine weather (illustrative, not realistic)
• Columns add to 1 → given weather, seaweed will have some property

                  sun     cloud   rain
        dry     ⎡ 0.60    0.25    0.05 ⎤
  B =  dryish   ⎢ 0.20    0.25    0.10 ⎥
        damp    ⎢ 0.15    0.25    0.35 ⎥
        soggy   ⎣ 0.05    0.25    0.50 ⎦
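A matching numpy sketch (again illustrative, with assumed names) of the observation matrix B for the seaweed example; B[k, j] stores bkj = P(observation k | hidden state j), so each column sums to 1.

```python
import numpy as np

states = ["sun", "cloud", "rain"]
observations = ["dry", "dryish", "damp", "soggy"]

# B[k, j] = P(observation k | hidden state j); columns sum to 1.
B = np.array([
    [0.60, 0.25, 0.05],   # dry
    [0.20, 0.25, 0.10],   # dryish
    [0.15, 0.25, 0.35],   # damp
    [0.05, 0.25, 0.50],   # soggy
])

# Given that it is raining, the seaweed condition follows the "rain" column.
rain = states.index("rain")
print(dict(zip(observations, B[:, rain])))   # {'dry': 0.05, 'dryish': 0.1, 'damp': 0.35, 'soggy': 0.5}
```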

Hidden Markov Model

• HMM is a 3-tuple λ = (π, A, B)
  – A is probability matrix of hidden states
  – B is probability matrix of observable states
  – π is n-dimensional vector describing initial probabilities (t = 1)
• Speech processing
  – A is probability one phoneme follows another
  – B relates features of phoneme under analysis
• OCR
  – A is probability of next character
  – B is features of line being analyzed


Hidden Markov Model

• HMM has three principal issues
  – Evaluation
    • probability model generated observations
    • may have more than one model – choose best fit
  – Decoding
    • most likely state sequence given observation sequence
  – Learning
    • what is best λ
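One possible way to bundle the 3-tuple λ = (π, A, B) into a single object, using the weather/seaweed numbers already introduced; this is a sketch, and the HMM class and the name weather_hmm are assumptions, not lecture notation.

```python
from typing import NamedTuple
import numpy as np

class HMM(NamedTuple):
    pi: np.ndarray   # initial state probabilities, shape (n,)
    A:  np.ndarray   # hidden-state transition matrix, shape (n, n)
    B:  np.ndarray   # observation matrix, shape (m, n); B[k, j] = P(obs k | state j)

weather_hmm = HMM(
    pi=np.array([1/3, 1/3, 1/3]),              # uniform initial guess
    A=np.array([[0.50, 0.375, 0.125],
                [0.25, 0.125, 0.625],
                [0.25, 0.375, 0.375]]),
    B=np.array([[0.60, 0.25, 0.05],
                [0.20, 0.25, 0.10],
                [0.15, 0.25, 0.35],
                [0.05, 0.25, 0.50]]),
)
```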


HMM Evaluation

• Want probability that model generated observed sequence
• Evaluate all possible sequences, calculate probabilities

      P(Y^k) = Σ_{X_i} P(Y^k | X_i) P(X_i)

  – superscript indicates a sequence
• Exponential with length of sequence


HMM Evaluation

• Define intermediate probabilities
  – α_t(j) = P(X_j at t)
  – α_{t+1}(j) = [ Σ_{i=1..n} α_t(i) a_ij ] b_{k_{t+1}, j}
    • a_ij is P(moving from state i to state j)
    • b_{k_{t+1}, j} is P(specific observation made at this time)
  – initialize α_1(j) = π(j) b_{k_1, j}
• Then

      P(Y^k) = Σ_{j=1..n} α_T(j)
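The α recursion above translates almost line for line into numpy; the helper below is a sketch (assumed name, not lecture code) that returns P(observation sequence | model).

```python
import numpy as np

def forward(pi, A, B, obs):
    """Evaluation: P(sequence of observation indices | model) via the alpha recursion."""
    alpha = pi * B[obs[0]]              # alpha_1(j) = pi(j) * b_{k_1, j}
    for k in obs[1:]:
        alpha = (alpha @ A) * B[k]      # alpha_{t+1}(j) = [sum_i alpha_t(i) a_ij] * b_{k_{t+1}, j}
    return alpha.sum()                  # P(Y) = sum_j alpha_T(j)

# e.g. probability that the weather model produced dry, dryish, soggy, soggy
# (observation indices 0, 1, 3, 3), reusing the weather_hmm tuple sketched earlier:
#   forward(*weather_hmm, [0, 1, 3, 3])
```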


HMM Evaluation

      P(Y^k) = Σ_{j=1..n} α_T(j)

• Forward algorithm
• Find model that maximizes this probability
  – e.g., OCR, model could be individual word


HMM Decoding

• Need algorithm to determine likely sequence of states that produced sequence of observed states
• Can start at t = 1 and look for most probable next state, given observation
  – noise can result in bad decision
  – error compounded; illegal sequence
• Isolation vs. context
  – may result in different best guesses

HMM Decoding

• Go down observed sequence
  – record likelihood of hidden state being reached
  – keep pointer to most likely predecessor
• At end of sequence, choose final state based on history, then step back through earlier stages
• Viterbi Algorithm


HMM Decoding – Example

• Weather determination based on seaweed
  – guess π = (⅓, ⅓, ⅓) (equal P of sun, cloud, rain)
• Say we observe dry, dryish, soggy, soggy
• What is most likely weather pattern to produce observations?
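A compact numpy sketch of the Viterbi procedure just described (assumed names; probabilities are multiplied directly rather than in log space, to keep it readable).

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Decoding: most likely hidden-state index sequence for the given observation indices."""
    T, n = len(obs), len(pi)
    delta = np.zeros((T, n))                    # best path probability ending in each state
    back = np.zeros((T, n), dtype=int)          # pointer to most likely predecessor

    delta[0] = pi * B[obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A      # scores[i, j] = delta_{t-1}(i) * a_ij
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[obs[t]]

    # Choose the best final state, then step back through the pointers.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# e.g. viterbi(*weather_hmm, [0, 1, 3, 3]) for the dry, dryish, soggy, soggy example
```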


HMM Decoding – Example

• Start with observed state dry
• P(sun and dry) = P(sun) · P(dry | sun) = ⅓ · 0.6 = 0.2
• P(cloud and dry) = P(cloud) · P(dry | cloud) = ⅓ · 0.25 = 0.0833
• P(rain and dry) = P(rain) · P(dry | rain) = ⅓ · 0.05 = 0.0167
• sun is best guess
• Now find probability of dryish, given dry and sun on previous day


HMM Decoding – Example

• P(day 1 sun and day 2 sun and dryish) = 0.2 · 0.5 · 0.2 = 0.02
• P(day 1 cloud and day 2 sun and dryish) = 0.00417
• P(day 1 rain and day 2 sun and dryish) = 0.000833
• Find P for day 2 cloud and day 2 rain
• We eventually find sun is best guess for day 2
• Do same for next two observed states
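The day-1 and day-2 numbers above can be checked in a couple of lines, reusing the weather_hmm tuple sketched earlier (an illustration, not lecture code).

```python
pi, A, B = weather_hmm                     # from the earlier sketch

day1 = pi * B[0]                           # P(state and dry) for sun, cloud, rain
print(day1.round(4))                       # [0.2    0.0833 0.0167]

day2_sun = day1 * A[:, 0] * B[1, 0]        # P(day-1 state, day-2 sun, dryish)
print(day2_sun.round(6))                   # [0.02     0.004167 0.000833]
```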


HMM Decoding – Example

• Eventually you’ll find most likely final state is rain
• Use back pointers to get most likely sequence to produce dry, dryish, soggy, soggy
• You’ll get sun, sun, rain, rain


HMM Learning

• Make initial estimate
• Use forward-backward (e.g., Baum-Welch) algorithm to improve estimate
• Forward – probability of being in a (hidden) state at t, given the observed sequence
• Backward – probability of succeeding observation, given current state and time
• Train with reference observations
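A minimal sketch of the backward probabilities mentioned above, complementing the forward pass sketched earlier; Baum-Welch combines the two to re-estimate π, A and B (the re-estimation step is omitted, and the function name is an assumption).

```python
import numpy as np

def backward(A, B, obs):
    """beta[t, i] = P(observations after time t | state i at time t)."""
    T, n = len(obs), A.shape[0]
    beta = np.ones((T, n))                             # beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[obs[t + 1]] * beta[t + 1])    # sum_j a_ij * b_{k_{t+1}, j} * beta_{t+1}(j)
    return beta
```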

HMM Applications

• Speech recognition
• Optical character recognition
• Handwriting analysis
• Sign language (from video)
• Lip reading


Coupled HMM

• A matrices of two models
  – probabilistically related
• Can couple arbitrarily many models
• Example – speech recognition
  – audio
  – video


References

• Image Processing, Analysis, and Machine Vision by Sonka, Hlavac, Boyle
• http://en.wikipedia.org/wiki/Main_Page
• http://www.comp.leeds.ac.uk/roger/HiddenMarkovModels/html_dev/main.html
