
HIDDEN MARKOV MODEL METHODS

SUBMITTED BY
LOKESH KUMAR(RIT2009013) PRAKHAR JAIN(RIT2009053) ANSHUL JAIN(RIT2009060) SHASHANK MITTAL(RIT2009071)

INTRODUCTION
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but the output, which depends on the state, is visible.

Each state has a probability distribution over the possible output tokens. Therefore the sequence of tokens generated by an HMM gives some information about the sequence of states. Note that the adjective 'hidden' refers to the state sequence through which the model passes, not to the parameters of the model; even if the model parameters are known exactly, the model is still 'hidden'.

Hidden Markov models are especially known for their application in temporal pattern recognition, such as speech, handwriting, and gesture recognition, as well as in bioinformatics.

Definition of a hidden Markov model


A hidden Markov model (HMM) is a triple (π, A, B), where:

π is the vector of the initial state probabilities;

A is the state transition matrix;

B is the confusion (output probability) matrix.
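The triple above can be sketched directly as arrays. This is a minimal illustration with made-up sizes and values (two states, two output symbols); the only real constraint is that each distribution sums to 1.

```python
import numpy as np

# A minimal sketch of an HMM as the triple (pi, A, B).
# All numbers here are illustrative assumptions, not from any real data.

pi = np.array([0.6, 0.4])            # initial state probabilities (the vector pi)

A = np.array([[0.7, 0.3],            # state transition matrix:
              [0.4, 0.6]])           # A[i, j] = P(next state j | current state i)

B = np.array([[0.9, 0.1],            # confusion matrix:
              [0.2, 0.8]])           # B[i, k] = P(output symbol k | state i)

# Each row is a probability distribution, so every row must sum to 1.
assert np.allclose(pi.sum(), 1.0)
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
```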

EXAMPLE
Suppose a person has, say, three coins and is sitting inside a room tossing them in some sequence. The room is closed, and what you are shown (on a display outside the room) is only the outcomes of the tosses, e.g. T T H T H H T T ...; this will be called the observation sequence. You do not know the sequence in which the different coins are tossed, nor do you know the bias of the various coins.

To appreciate how much the outcome depends on the individual biases and on the order of tossing the coins, suppose you are told that the third coin is heavily biased towards heads and that all coins are tossed with equal probability. Then we naturally expect a far greater number of heads than tails in the output sequence.

Now suppose it is also given that, besides the bias, the probability of moving to the third coin (state) from either the first or the second coin (state) is zero. Then, assuming we begin in the first or second state, heads and tails will appear with almost equal probability in spite of the bias. So we see that the output sequence depends very much on the individual biases, on the transition probabilities between the various states, and on which state is chosen to begin the observations.

The three sets, namely the individual biases of the three coins, the transition probabilities from one coin to the next, and the initial probabilities of choosing the states, characterize what is called the HIDDEN MARKOV MODEL (HMM) for this coin-tossing experiment. We shall shortly see the problems, and their solutions, that come under the framework of HMMs.
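The coin experiment can be sketched as a generative HMM: the three coins are the hidden states, and only the H/T outcomes are visible. The biases, transition matrix, and initial distribution below are made-up illustrative values chosen to match the scenario above (coin 3 is heavily biased but unreachable from coins 1 and 2).

```python
import numpy as np

# A sketch of the three-coin experiment as a generative HMM.
# All parameter values are illustrative assumptions.

rng = np.random.default_rng(0)

pi = np.array([0.5, 0.5, 0.0])        # start with coin 1 or 2, never coin 3
A = np.array([[0.5, 0.5, 0.0],        # from coins 1 and 2 the probability of
              [0.5, 0.5, 0.0],        # moving to coin 3 is zero
              [0.3, 0.3, 0.4]])
p_heads = np.array([0.5, 0.5, 0.95])  # coin 3 is heavily biased towards heads

def sample_sequence(length):
    """Sample the hidden coin choices and the observed H/T outcomes."""
    states, obs = [], []
    state = rng.choice(3, p=pi)
    for _ in range(length):
        states.append(int(state))
        obs.append('H' if rng.random() < p_heads[state] else 'T')
        state = rng.choice(3, p=A[state])
    return states, obs

states, obs = sample_sequence(20)
print(''.join(obs))  # only this observation sequence is visible outside the room
```

Because coin 3 can never be reached from this starting distribution, heads and tails come out roughly balanced despite its strong bias, exactly as argued above.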

The state transition and output probabilities of an HMM are indicated by the line opacity in the upper part of the diagram. Given that we have observed the output sequence in the lower part of the diagram, we may be interested in the most likely sequence of states that could have produced it. Based on the arrows present in the diagram, the following state sequences are candidates: 5 3 2 5 3 2, 4 3 2 5 3 2, and 3 1 2 5 3 2. We can find the most likely sequence by evaluating the joint probability of the state sequence and the observations for each case (simply by multiplying the probability values, which here correspond to the opacities of the arrows involved).
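The "multiply the probabilities along each candidate path" idea can be sketched directly. The model below is a small stand-in with assumed parameters (it is not the figure's actual data, which is not reproduced here): the joint probability of a state sequence and an observation sequence is the initial probability times the chain of transition and output probabilities.

```python
import numpy as np

# Scoring candidate state sequences by their joint probability with the
# observations. The parameter values are arbitrary illustrative stand-ins.

def joint_prob(states, obs, pi, A, B):
    """P(states, obs) = pi[s0]*B[s0,o0] * prod_t A[s_{t-1},s_t]*B[s_t,o_t]."""
    p = pi[states[0]] * B[states[0], obs[0]]
    for t in range(1, len(states)):
        p *= A[states[t - 1], states[t]] * B[states[t], obs[t]]
    return p

# Tiny 3-state, 2-symbol model with assumed parameters.
pi = np.array([0.5, 0.3, 0.2])
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
B = np.array([[0.8, 0.2],
              [0.4, 0.6],
              [0.1, 0.9]])

obs = [0, 1, 1, 0]
candidates = [[0, 1, 2, 0], [0, 0, 1, 2], [1, 2, 2, 0]]
best = max(candidates, key=lambda s: joint_prob(s, obs, pi, A, B))
print(best)  # the candidate with the highest joint probability
```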

The Three Problems for HMMs


Most applications of HMMs are finally reduced to solving three main problems. These are:

Problem 1: Given the model λ = (π, A, B), how do we compute P(O | λ), the probability of occurrence of the observation sequence O = O1, O2, ..., OT?

Problem 2: Given the model λ, how do we choose a state sequence I = i1, i2, ..., iT so that P(O, I | λ), the joint probability of the observation sequence O = O1, O2, ..., OT and the state sequence I given the model, is maximized?

Problem 3: How do we adjust the model parameters λ = (π, A, B) so that P(O | λ) (or P(O, I | λ)) is maximized?
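The classical answers to the first two problems are the forward algorithm (Problem 1) and the Viterbi algorithm (Problem 2); Problem 3 is solved iteratively by the Baum-Welch algorithm, not shown here. Below is a minimal sketch of the first two, using assumed toy parameters.

```python
import numpy as np

# Minimal sketches of the standard solutions to Problems 1 and 2.
# The parameter values are toy assumptions for illustration.

def forward(obs, pi, A, B):
    """Forward algorithm: P(O | model), summing over all state paths."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def viterbi(obs, pi, A, B):
    """Viterbi algorithm: the single most likely state sequence for obs."""
    delta = pi * B[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] * A          # scores[i, j]: best path ending with i -> j
        back.append(scores.argmax(axis=0))   # remember the best predecessor of each j
        delta = scores.max(axis=0) * B[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(back):                # trace the best path backwards
        path.append(int(bp[path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 0, 1]

print(forward(obs, pi, A, B))   # P(O | model), Problem 1
print(viterbi(obs, pi, A, B))   # most likely hidden state sequence, Problem 2
```

Note that the forward algorithm sums probabilities over all paths, while Viterbi keeps only the maximum at each step; this is exactly the difference between Problems 1 and 2 above.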
