Calculating BLEU score for neural machine translation using Python

WBOY
Release: 2023-09-02 11:09:11

Using NMT (Neural Machine Translation) in NLP, we can translate text from a source language into a target language. To evaluate how well the translation performs, we use the BLEU (Bilingual Evaluation Understudy) score in Python.

The BLEU score works by comparing the n-grams of a machine-translated sentence against those of one or more human-translated reference sentences. A brevity penalty also reduces the score when the candidate translation is shorter than the reference. BLEU scores range from 0 to 1, with higher values indicating better quality, though a perfect score is very rare. Note that the evaluation is based purely on n-gram matching; it does not take into account other aspects of language such as coherence, tense, and grammar.
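
To make the n-gram comparison concrete, here is a minimal sketch of how n-grams are extracted from a tokenized sentence (the `ngrams` helper is our own illustration, not part of any library used below):

```python
# extract all contiguous n-grams from a tokenized sentence
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams(["it", "is", "raining", "today"], 2))
# [('it', 'is'), ('is', 'raining'), ('raining', 'today')]
```

BLEU counts how many of the candidate's n-grams (for n = 1 up to 4, by default) also appear in a reference.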

Formula

BLEU = BP * exp(1/n * sum_{i=1}^{n} log(p_i))

Here, each term has the following meaning -

  • BP is the brevity penalty. It reduces the score when the candidate translation is shorter than the reference, where r is the reference length (in tokens) and c is the candidate length. The formula is -

BP = min(1, exp(1 - (r / c)))
  • n is the maximum n-gram order used for matching (typically 4)

  • p_i is the modified precision for n-grams of order i
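
The formula above can be implemented directly with the Python standard library. The sketch below (the function names are our own, and the brevity penalty uses the closest reference length, as the standard BLEU definition does) computes the modified n-gram precisions, the brevity penalty, and the final score:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu_score(candidate, references, max_n=4):
    # modified n-gram precisions p_1 .. p_max_n
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        # clip each candidate n-gram count by its maximum count in any reference
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        overlap = sum(min(c, max_ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    # brevity penalty: r = closest reference length, c = candidate length
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = min(1.0, math.exp(1 - r / c))
    if min(precisions) == 0:
        return 0.0   # any zero precision collapses the geometric mean
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

With the inputs used in the examples below, this sketch reproduces the scores reported by the library.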

Algorithm

  • Step 1 - Import the load_metric function from the datasets library.

  • Step 2 - Call load_metric with "bleu" as the parameter.

  • Step 3 - Tokenize the machine-translated string into a list of words.

  • Step 4 - Repeat step 3 for each desired (reference) string.

  • Step 5 - Use bleu.compute to calculate the BLEU score.

Example 1

In this example, we will use the Hugging Face datasets library to calculate the BLEU score for a machine-translated sentence against two reference translations.

  • Source text - It’s raining today

  • Machine-translated text - It rain today

  • Reference texts - It is raining today / It was raining today

We can see that the translation was not done correctly. The BLEU score gives us a quantitative measure of the translation quality.

Example

#import the libraries
from datasets import load_metric
  
#use the load_metric function
bleu = load_metric("bleu")

# set up the predicted (machine-translated) tokens
predictions = [["it", "rain", "today"]]

# set up the reference tokens (two acceptable translations)
references = [
   [["it", "is", "raining", "today"], 
   ["it", "was", "raining", "today"]]
]

#print the values
print(bleu.compute(predictions=predictions, references=references))

Output

{'bleu': 0.0, 'precisions': [0.6666666666666666, 0.0, 0.0, 0.0], 'brevity_penalty': 0.7165313105737893, 'length_ratio': 0.75, 'translation_length': 3, 'reference_length': 4}

You can see that the translation is not very good, so the BLEU score is 0.
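
The score is exactly 0 here because no 2-, 3-, or 4-gram of the prediction appears in either reference, and a single zero precision drives the geometric mean in the BLEU formula to zero. This can be checked with the standard library using the components printed above (the epsilon smoothing shown is purely illustrative, not what datasets applies by default):

```python
import math

# components reported in the output above
precisions = [2/3, 0.0, 0.0, 0.0]
bp = math.exp(1 - 4/3)   # r = 4 reference tokens, c = 3 candidate tokens

# unsmoothed BLEU: one zero precision makes the whole score 0
score = 0.0 if min(precisions) == 0 else bp * math.exp(sum(map(math.log, precisions)) / 4)
print(score)  # 0.0

# illustrative epsilon smoothing: replace zeros with a small constant
# so the score stays nonzero and partial unigram credit is visible
smoothed = [p if p > 0 else 0.01 for p in precisions]
smoothed_score = bp * math.exp(sum(math.log(p) for p in smoothed) / 4)
print(smoothed_score)
```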

Example 2

In this example, we will calculate the BLEU score again for a second sentence pair; this time the machine translation is much closer to the references.

  • Source text - We are going on a trip

  • Machine-translated text - We going on a trip

  • Reference texts - We are going on a trip / We were going on a trip

You can see that this time the translated text is closer to the desired text. Let’s check its BLEU score.

Example

#import the libraries
from datasets import load_metric
  
#use the load_metric function
bleu = load_metric("bleu")

# set up the predicted (machine-translated) tokens
predictions = [["we", "going", "on", "a", "trip"]]

# set up the reference tokens (two acceptable translations)
references = [
   [["we", "are", "going", "on", "a", "trip"], 
   ["we", "were", "going", "on", "a", "trip"]]
]

#print the values
print(bleu.compute(predictions=predictions, references=references))

Output

{'bleu': 0.5789300674674098, 'precisions': [1.0, 0.75, 0.6666666666666666, 0.5], 'brevity_penalty': 0.8187307530779819, 'length_ratio': 0.8333333333333334, 'translation_length': 5, 'reference_length': 6}

You can see that the translation this time is very close to the desired output, so the BLEU score is higher than 0.5.
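
As a sanity check, the reported score can be reconstructed from the printed components using the BLEU formula from earlier (a quick standard-library calculation):

```python
import math

# components from the output above
precisions = [1.0, 0.75, 2/3, 0.5]
bp = math.exp(1 - 6/5)   # reference_length = 6, translation_length = 5

# BLEU = BP * geometric mean of the n-gram precisions
score = bp * math.exp(sum(math.log(p) for p in precisions) / 4)
print(score)  # ~0.57893, matching the reported 'bleu' value
```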

Conclusion

The BLEU score is a useful tool for checking the efficiency of your translation model so you can improve it further. Although BLEU gives a rough idea of a model's quality, it is limited to surface-level vocabulary matching and often ignores the nuances of language, which is why BLEU scores do not always agree with human judgment. Alternatives worth trying include the ROUGE score, the METEOR metric, and the CIDEr metric.


source:tutorialspoint.com