### Evaluation problem of Hidden Markov Model: the likelihood problem, one of the three fundamental problems to be solved under HMM

## Evaluation problem of Hidden Markov Model

**Evaluation problem (Likelihood)**:

Given an HMM *λ = (A, B, π)* and an observation sequence *O = o_{1}, o_{2}, …, o_{T}*, how do we compute the probability that the observed sequence was produced by the model? In other words, it is about determining the likelihood of the observation sequence.
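To make the definition concrete, the three components of *λ* can be sketched as plain Python dictionaries. The tags, words, and probability values below are illustrative assumptions for a two-tag POS example, not parameters of a trained model:

```python
# A minimal sketch of an HMM λ = (A, B, π) with two hidden states (POS tags).
# All probability values are made-up illustrative numbers.

states = ["VB", "NN"]

# π: initial (start) state probabilities
pi = {"VB": 0.6, "NN": 0.4}

# A: transition probabilities, A[i][j] = P(next state = j | current state = i)
A = {
    "VB": {"VB": 0.2, "NN": 0.8},
    "NN": {"VB": 0.7, "NN": 0.3},
}

# B: emission probabilities, B[i][o] = P(observation = o | state = i)
# (only the two words relevant to the example are shown)
B = {
    "VB": {"drink": 0.5, "water": 0.1},
    "NN": {"drink": 0.2, "water": 0.6},
}
```

Each row of `pi` and `A` sums to one; `B` is truncated to the two words used in the running example.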
### __Explain evaluation problem of HMM with example__

It is just about how to calculate the probability of the observation sequence given the model. The calculation depends on the data given to us. For instance, if you are given an observation sequence *O = o_{1}, o_{2}, …, o_{T}* and

- one particular hidden state sequence *Q = q_{1}, q_{2}, …, q_{T}*, then you compute the likelihood of the observation only for the given state sequence, as in the example below. *For more, refer Hidden Markov Model for POS tagging solved exercises.*

*P(O|Q) = P(drink water | VB NN) = P(VB|start) * P(drink|VB) * P(NN|VB) * P(water|NN)*

- no particular state sequence, then you need to compute the same quantity as above and sum over all possible hidden state sequences. For example, to get *P(drink water)* without any particular hidden tag sequence, we need to compute the likelihood for each possibility and sum all the values as given below. This is because both the words "*drink*" and "*water*" are listed under the POS categories **Noun** and **Verb**.

*P(drink water) = P(drink water | VB NN) + P(drink water | VB VB) + P(drink water | NN VB) + P(drink water | NN NN)*

This way we can
compute the probability for all hidden sequences to find the probability of the
given observation sequence.
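The brute-force computation described above can be sketched in a few lines of Python: score one fixed tag sequence, then enumerate every possible tag sequence for "drink water" and sum. The start, transition, and emission probabilities are illustrative assumptions, not values from the post:

```python
from itertools import product

states = ["VB", "NN"]
pi = {"VB": 0.6, "NN": 0.4}                # assumed start probabilities
A = {"VB": {"VB": 0.2, "NN": 0.8},         # assumed transition probabilities
     "NN": {"VB": 0.7, "NN": 0.3}}
B = {"VB": {"drink": 0.5, "water": 0.1},   # assumed emission probabilities
     "NN": {"drink": 0.2, "water": 0.6}}

obs = ["drink", "water"]

def likelihood_given_path(obs, path):
    """Joint probability P(O, Q) for one fixed hidden tag sequence Q."""
    p = pi[path[0]] * B[path[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= A[path[t - 1]][path[t]] * B[path[t]][obs[t]]
    return p

# Likelihood for the single given tag sequence VB NN
single = likelihood_given_path(obs, ["VB", "NN"])

# Sum over all N^T = 2^2 = 4 possible tag sequences
total = sum(likelihood_given_path(obs, path)
            for path in product(states, repeat=len(obs)))

print(single)  # P(drink water, VB NN)
print(total)   # P(drink water), summed over all 4 tag sequences
```

With these assumed numbers, `product(states, repeat=2)` generates exactly the four sequences (VB VB, VB NN, NN VB, NN NN) listed in the sum above.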

### __What is the drawback in the Evaluation (likelihood) problem of HMM?__

How about in general, for an HMM with **N hidden states** and a **sequence of T observations**? The above calculations have to be done for **N^{T}** different possibilities. Let us do simple calculations to understand the complexity:

- If the number of tags N = 2 and the number of observed words T = 2, then 2^{2} = 4 possible likelihood estimates.

- If N = 6 and T = 4, then 6^{4} = 1296 possible likelihood estimates.

- For longer sentences, the number of possible tag sequences grows exponentially in T, so brute-force enumeration quickly becomes intractable.
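The exponential growth of N^{T} is easy to check with a few lines of arithmetic (the larger T values below are added for illustration):

```python
# Number of hidden state sequences a brute-force evaluation must score: N^T
for n, t in [(2, 2), (6, 4), (6, 10), (6, 20)]:
    print(f"N={n}, T={t}: {n**t} possible tag sequences")
```

Already at N = 6 tags and a 20-word sentence, the count exceeds 10^15 sequences.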

### __What is the solution to handle the Evaluation problem of HMM?__

- The Forward-Backward procedure. Its forward pass computes the likelihood with dynamic programming in O(N^{2}T) time instead of evaluating all N^{T} sequences.
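A minimal sketch of the forward pass, reusing the same illustrative "drink water" model as before (the probability values are assumed). Instead of enumerating every path, it keeps one running probability per state and folds the transitions in at each time step:

```python
states = ["VB", "NN"]
pi = {"VB": 0.6, "NN": 0.4}                # assumed start probabilities
A = {"VB": {"VB": 0.2, "NN": 0.8},         # assumed transition probabilities
     "NN": {"VB": 0.7, "NN": 0.3}}
B = {"VB": {"drink": 0.5, "water": 0.1},   # assumed emission probabilities
     "NN": {"drink": 0.2, "water": 0.6}}

def forward(obs):
    """Forward algorithm: P(O | λ) in O(N^2 * T) time."""
    # alpha[s] = P(o_1 .. o_t, q_t = s): probability of the observations
    # so far AND ending in state s at time t.
    alpha = {s: pi[s] * B[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * A[r][s] for r in states) * B[s][o]
                 for s in states}
    return sum(alpha.values())

print(forward(["drink", "water"]))
```

With these assumed numbers this returns the same value as summing the four per-path likelihoods by hand, but it touches only N states per step rather than N^{T} whole sequences.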

***************

