# Add-1 (Laplace) smoothing in N-gram language models

__Add-1 (Laplace) smoothing__

We have used Maximum Likelihood Estimation (MLE) to train the parameters of an N-gram model. The problem with MLE is that it assigns zero probability to unknown (unseen) words. Because MLE estimates probabilities from a training corpus, any word that appears in the test set but not in the training set has a count of zero, which leads to a zero probability.

To eliminate these zero probabilities, we can do smoothing. **Smoothing takes some probability mass from the events seen in training and assigns it to unseen events**. **Add-1 smoothing** (also called *Laplace smoothing*) is a simple smoothing technique that adds 1 to the count of every n-gram in the training set before normalizing the counts into probabilities.

__Example:__

Recall that the unigram and bigram probabilities for a word w are calculated as follows:

P(w) = C(w)/N

P(w_{n}|w_{n-1}) = C(w_{n-1} w_{n})/C(w_{n-1})

Where P(w) is the unigram probability, P(w_{n}|w_{n-1}) is the bigram probability, C(w) is the count of occurrences of w in the training set, C(w_{n-1} w_{n}) is the count of the bigram (w_{n-1} w_{n}) in the training set, C(w_{n-1}) is the count of the word w_{n-1}, and N is the total number of word tokens in the training set.
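These MLE estimates can be sketched in a few lines of Python. The toy corpus and word choices below are illustrative assumptions, not taken from the article; note how any unseen word gets probability zero, which is exactly the problem smoothing addresses.

```python
from collections import Counter

# Hypothetical toy corpus used only for illustration
tokens = "the cat sat on the mat".split()
N = len(tokens)  # total number of word tokens

unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))

def p_unigram(w):
    # MLE unigram: C(w) / N
    return unigram_counts[w] / N

def p_bigram(w_prev, w):
    # MLE bigram: C(w_{n-1} w_n) / C(w_{n-1})
    if unigram_counts[w_prev] == 0:
        return 0.0
    return bigram_counts[(w_prev, w)] / unigram_counts[w_prev]

print(p_unigram("the"))        # 2/6
print(p_bigram("the", "cat"))  # 1/2
print(p_unigram("dog"))        # 0.0 -- unseen word gets zero probability
```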

__Add-1 smoothing for unigrams__

P_{Laplace}(w) = (C(w)+1)/(N+|V|)

Here, N is the total number of tokens in the training set and |V| is the size of the vocabulary, i.e., the number of unique words in the training set.

Since we added 1 to the numerator for every word, we must also add the number of unique words, |V|, to the denominator so that the probabilities still sum to 1.
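A minimal sketch of add-1 unigram estimation on a toy corpus (the corpus here is an illustrative assumption, not from the article); the unseen word now receives a small, non-zero probability:

```python
from collections import Counter

# Hypothetical toy corpus used only for illustration
tokens = "the cat sat on the mat".split()
N = len(tokens)

counts = Counter(tokens)
V = len(counts)  # |V|: number of unique words in the training set

def p_laplace(w):
    # Add-1 unigram: (C(w) + 1) / (N + |V|)
    return (counts[w] + 1) / (N + V)

print(p_laplace("the"))  # (2+1)/(6+5) = 3/11
print(p_laplace("dog"))  # (0+1)/(6+5) = 1/11 -- unseen word, non-zero mass
```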

__Add-1 smoothing for bigrams__

P_{Laplace}(w_{n}|w_{n-1}) = (C(w_{n-1} w_{n})+1)/(C(w_{n-1})+|V|)
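The bigram version can be sketched the same way (again on an assumed toy corpus): the bigram count gets +1 and the denominator is the count of the preceding word plus |V|.

```python
from collections import Counter

# Hypothetical toy corpus used only for illustration
tokens = "the cat sat on the mat".split()

unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
V = len(unigram_counts)  # |V|: vocabulary size

def p_laplace_bigram(w_prev, w):
    # Add-1 bigram: (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + |V|)
    return (bigram_counts[(w_prev, w)] + 1) / (unigram_counts[w_prev] + V)

print(p_laplace_bigram("the", "cat"))  # (1+1)/(2+5) = 2/7
print(p_laplace_bigram("the", "dog"))  # (0+1)/(2+5) = 1/7 -- unseen bigram, non-zero
```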
