BLEU Score in Rust
Machine Translation (MT) Evaluation Scripts
Investigates the reproducibility of METEOR scores in scientific papers. Includes a systematic literature review and validation of METEOR implementations.
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
MAchine Translation Evaluation Online (MATEO)
Hugging Face Transformers, a popular Python library, offers pre-trained models and a powerful toolkit for a wide range of NLP tasks.
A C# class library for calculating the BLEU score, a metric for evaluating the quality of machine translations.
Evaluation tools for image captioning, including BLEU, ROUGE-L, CIDEr, METEOR, and SPICE scores.
Python implementation of lexical vector-embedding similarity scoring, zero-shot classification of images, and n-gram-based scoring to compare textual summaries.
A well-tested, multi-language evaluation framework for text summarization.
Implementation of the paper "BLEU: a Method for Automatic Evaluation of Machine Translation" (see the sketch after this list).
Image caption generation is a task that combines computer vision and natural language processing to recognize the context of an image and describe it in a natural language such as English.
A Python 3 library for evaluating captions with BLEU, METEOR, CIDEr, SPICE, ROUGE-L, and WMD scores. Forked from https://github.com/ruotianluo/coco-caption
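Several of the repositories above implement the metric from the BLEU paper: modified n-gram precision with clipping, a geometric mean over n-gram orders, and a brevity penalty. The sketch below is a minimal single-reference, sentence-level version in Rust; the function names and whitespace tokenization are illustrative assumptions, and real implementations typically work at corpus level with multiple references and smoothing.

```rust
use std::collections::HashMap;

// Count the n-grams of order `n` in a token sequence.
fn ngram_counts<'a>(tokens: &[&'a str], n: usize) -> HashMap<Vec<&'a str>, usize> {
    let mut counts = HashMap::new();
    for window in tokens.windows(n) {
        *counts.entry(window.to_vec()).or_insert(0) += 1;
    }
    counts
}

// Sentence-level BLEU with up to 4-gram precision, uniform weights,
// clipped counts, and the standard brevity penalty (single reference).
// This is a simplified sketch, not the paper's corpus-level formulation.
fn bleu(candidate: &str, reference: &str) -> f64 {
    let cand: Vec<&str> = candidate.split_whitespace().collect();
    let refr: Vec<&str> = reference.split_whitespace().collect();

    const MAX_N: usize = 4;
    let mut log_precision_sum = 0.0;
    for n in 1..=MAX_N {
        let cand_counts = ngram_counts(&cand, n);
        let ref_counts = ngram_counts(&refr, n);
        // Clip each candidate n-gram count at its count in the reference.
        let clipped: usize = cand_counts
            .iter()
            .map(|(ng, &c)| c.min(*ref_counts.get(ng).unwrap_or(&0)))
            .sum();
        let total: usize = cand_counts.values().sum();
        if clipped == 0 || total == 0 {
            return 0.0; // any zero n-gram precision zeroes the geometric mean
        }
        // Uniform weights: each n-gram order contributes 1/MAX_N in log space.
        log_precision_sum += (clipped as f64 / total as f64).ln() / MAX_N as f64;
    }

    // Brevity penalty: penalize candidates shorter than the reference.
    let (c, r) = (cand.len() as f64, refr.len() as f64);
    let bp = if c > r { 1.0 } else { (1.0 - r / c).exp() };

    bp * log_precision_sum.exp()
}

fn main() {
    let candidate = "the cat sat on the mat";
    let reference = "the cat is sitting on the mat";
    println!("BLEU = {:.4}", bleu(candidate, reference));
}
```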