• Desired parameters unknown, determined from set of observable parameters, Y
  – words determined from sounds (speech recognition)
  – words determined from lines (optical character recognition)
  – weather determined from secondary phenomena (example)
• Example – observe condition of seaweed to determine weather (illustrative, not realistic)
• Columns add to 1 → given weather, seaweed will have some property
• Matrix B describes probabilities of observed states Yk
  – bkj = P(state Yk observed | system was in state j)

             sun    cloud   rain
      dry    0.60   0.25    0.05
B  =  dryish 0.20   0.25    0.10
      damp   0.15   0.25    0.35
      soggy  0.05   0.25    0.50
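The emission matrix above can be written down and sanity-checked in a few lines; a minimal sketch, with the state and observation names and values taken directly from the example:

```python
# Emission matrix B from the seaweed example: B[obs][state] = P(obs | state).
states = ["sun", "cloud", "rain"]
observations = ["dry", "dryish", "damp", "soggy"]

B = {
    "dry":    {"sun": 0.60, "cloud": 0.25, "rain": 0.05},
    "dryish": {"sun": 0.20, "cloud": 0.25, "rain": 0.10},
    "damp":   {"sun": 0.15, "cloud": 0.25, "rain": 0.35},
    "soggy":  {"sun": 0.05, "cloud": 0.25, "rain": 0.50},
}

# Each column adds to 1: given the weather, the seaweed is in *some* condition.
for s in states:
    col_sum = sum(B[obs][s] for obs in observations)
    assert abs(col_sum - 1.0) < 1e-9, (s, col_sum)
```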
• Forward algorithm – compute probabilities recursively:

      αt+1(j) = Σ(i=1..n) [αt(i) aij] bk(t+1)j

  where kt+1 indexes the state observed at time t + 1
• Total probability of the observed sequence:

      P(Yk) = Σ(j=1..n) αT(j)

• Find model that maximizes this probability
  – e.g., OCR, model could be individual word
• Need algorithm to determine likely sequence of states that produced sequence of observed states
• Can start at t = 1 and look for most probable next state, given observation
  – noise can result in bad decision
  – error compounded; illegal sequence
• Isolation vs. context
  – may result in different best guesses
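The recurrence and termination step above can be sketched as a small forward-algorithm implementation. Only the emission matrix B appears in the text; the transition matrix A and the uniform initial distribution pi are illustrative assumptions (A["sun"]["sun"] = 0.5 matches the worked numbers later in the example):

```python
# Forward algorithm sketch for the seaweed example.
states = ["sun", "cloud", "rain"]

pi = {s: 1 / 3 for s in states}  # assumed uniform initial distribution

A = {  # A[i][j] = P(next state j | current state i) -- assumed values
    "sun":   {"sun": 0.50, "cloud": 0.25, "rain": 0.25},
    "cloud": {"sun": 0.25, "cloud": 0.50, "rain": 0.25},
    "rain":  {"sun": 0.25, "cloud": 0.25, "rain": 0.50},
}

B = {  # B[obs][j] = P(obs | state j), from the emission matrix above
    "dry":    {"sun": 0.60, "cloud": 0.25, "rain": 0.05},
    "dryish": {"sun": 0.20, "cloud": 0.25, "rain": 0.10},
    "damp":   {"sun": 0.15, "cloud": 0.25, "rain": 0.35},
    "soggy":  {"sun": 0.05, "cloud": 0.25, "rain": 0.50},
}

def forward(obs_seq):
    """Return P(obs_seq) = sum over j of alpha_T(j)."""
    # Initialization: alpha_1(j) = pi(j) * b(o1 | j)
    alpha = {j: pi[j] * B[obs_seq[0]][j] for j in states}
    # Recursion: alpha_{t+1}(j) = [sum_i alpha_t(i) a_ij] * b(o_{t+1} | j)
    for obs in obs_seq[1:]:
        alpha = {j: sum(alpha[i] * A[i][j] for i in states) * B[obs][j]
                 for j in states}
    # Termination: P(obs_seq) = sum_j alpha_T(j)
    return sum(alpha.values())

print(forward(["dry", "dryish", "soggy", "soggy"]))
```

For a one-observation sequence this reduces to the initialization step: forward(["dry"]) is ⅓·(0.60 + 0.25 + 0.05) = 0.3.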
• Start with observed state dry; with a uniform prior (⅓) over the weather:
  – P(sun and dry) = ⅓ · 0.60 = 0.2
  – P(cloud and dry) = ⅓ · 0.25 = 0.0833
  – P(rain and dry) = ⅓ · 0.05 = 0.0167
• sun is best guess for day 1
• Now find probability of dryish on day 2, given dry and the day-1 state:
  – P(day 1 sun and day 2 sun and dryish) = 0.2 · 0.5 · 0.2 = 0.02
  – P(day 1 cloud and day 2 sun and dryish) = 0.00417
  – P(day 1 rain and day 2 sun and dryish) = 0.000833
• Find P for day 2 cloud and day 2 rain the same way
• We eventually find sun is best guess for day 2
• Do same for next two observed states
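The day-1 and day-2 numbers above can be reproduced in a few lines. P(dryish | sun) = 0.2 comes from the emission matrix; the transition probabilities into sun (0.5 from sun, 0.25 from cloud and rain) are implied by the worked numbers rather than stated in the text:

```python
# Reproduce the worked day-1 / day-2 numbers from the seaweed example.
prior = 1 / 3  # assumed uniform prior over sun / cloud / rain

# Day 1, observation "dry": joint probability P(state and dry) = prior * P(dry | state)
p_sun_dry = prior * 0.60    # about 0.2
p_cloud_dry = prior * 0.25  # about 0.0833
p_rain_dry = prior * 0.05   # about 0.0167

# Day 2, observation "dryish", candidate day-2 state "sun":
# P(day-1 state, day-2 sun, dryish) = P(day-1 state and dry's state part)
#   * P(sun | day-1 state) * P(dryish | sun)
p_day2_sun = {
    "from sun":   p_sun_dry * 0.50 * 0.20,    # about 0.02
    "from cloud": p_cloud_dry * 0.25 * 0.20,  # about 0.00417
    "from rain":  p_rain_dry * 0.25 * 0.20,   # about 0.000833
}
print(p_day2_sun)
```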
• Eventually you’ll find the most likely final state is rain
• Use back pointers to get most likely sequence to produce dry, dryish, soggy, soggy
• You’ll get sun, sun, rain, rain

• Make initial estimate of the model parameters
• Use forward–backward (e.g., Baum-Welch) algorithm to improve estimate
  – Forward – probability of the observations so far and of being in a given (hidden) state at time t
  – Backward – probability of the succeeding observations, given current state and time
• Train with reference observations
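The decoding described above – pick the most likely final state, then follow back pointers – can be sketched as a small Viterbi implementation. The transition matrix A is an assumption reconstructed from the worked numbers (sun→sun = 0.5; the remaining entries are illustrative); with these assumed values the sketch does recover sun, sun, rain, rain:

```python
# Viterbi sketch for the seaweed example: most likely weather sequence
# for the observations dry, dryish, soggy, soggy, using back pointers.
states = ["sun", "cloud", "rain"]

pi = {s: 1 / 3 for s in states}  # assumed uniform initial distribution

A = {  # assumed transitions; A["sun"]["sun"] = 0.5 matches the worked numbers
    "sun":   {"sun": 0.50, "cloud": 0.25, "rain": 0.25},
    "cloud": {"sun": 0.25, "cloud": 0.50, "rain": 0.25},
    "rain":  {"sun": 0.25, "cloud": 0.25, "rain": 0.50},
}

B = {  # emission matrix from the text: B[obs][state] = P(obs | state)
    "dry":    {"sun": 0.60, "cloud": 0.25, "rain": 0.05},
    "dryish": {"sun": 0.20, "cloud": 0.25, "rain": 0.10},
    "damp":   {"sun": 0.15, "cloud": 0.25, "rain": 0.35},
    "soggy":  {"sun": 0.05, "cloud": 0.25, "rain": 0.50},
}

def viterbi(obs_seq):
    # delta[j]: probability of the best state path ending in state j
    delta = {j: pi[j] * B[obs_seq[0]][j] for j in states}
    back = []  # back[t][j]: best predecessor of state j at step t + 1
    for obs in obs_seq[1:]:
        ptr, new_delta = {}, {}
        for j in states:
            # best previous state i maximizing delta[i] * a_ij
            best_i = max(states, key=lambda i: delta[i] * A[i][j])
            ptr[j] = best_i
            new_delta[j] = delta[best_i] * A[best_i][j] * B[obs][j]
        back.append(ptr)
        delta = new_delta
    # follow the back pointers from the most likely final state
    last = max(states, key=lambda j: delta[j])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["dry", "dryish", "soggy", "soggy"]))
# With the assumed A above this prints ['sun', 'sun', 'rain', 'rain']
```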
References
• http://en.wikipedia.org/wiki/Main_Page
• http://www.comp.leeds.ac.uk/roger/HiddenMarkovModels/html_dev/main.html