This project provides a Python implementation to calculate ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores. It includes methods for computing ROUGE-1, ROUGE-2, and ROUGE-L metrics, which are commonly used to evaluate text similarity and summarization performance.
- Features
- Requirements and Dependencies
- Quick Start
- Dataset Structure
- Output
- Score Explanation
- License
- Contributors
## Features

- Reads input datasets from JSON files and outputs results as JSON.
- Outputs:
  - ROUGE-1 calculation: Measures unigram overlap.
  - ROUGE-2 calculation: Measures bigram overlap.
  - ROUGE-L calculation: Measures the longest common subsequence (LCS).
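To make these overlap counts concrete, here is a minimal ROUGE-N sketch based on clipped n-gram counts and plain whitespace tokenization. It is an illustration only, not the code used by this project's main.py.

```python
# Minimal ROUGE-N sketch (illustrative, not this project's implementation).
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams (tuples of n consecutive tokens) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(reference, candidate, n=1):
    """Compute ROUGE-N precision, recall, and F-measure from n-gram overlap."""
    ref = ngram_counts(reference.split(), n)
    cand = ngram_counts(candidate.split(), n)
    overlap = sum((ref & cand).values())  # clipped count of matching n-grams
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f-measure": f}

# ROUGE-1 uses n=1 (unigrams); ROUGE-2 uses n=2 (bigrams).
print(rouge_n("the cat sat on the mat", "the cat lay on the mat", n=1))
```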
## Requirements and Dependencies

This project was developed with Python 3.6, has been tested with Python 3.6 through 3.12, and should work with newer versions. No additional dependencies are required.
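If you want to confirm that your interpreter meets the minimum version before running the tool, a quick standard-library check (not part of this project) is enough:

```python
import sys

# The project targets Python 3.6 or newer; fail fast on older interpreters.
# (No f-strings here so the check itself still parses on very old versions.)
if sys.version_info < (3, 6):
    raise SystemExit("Python 3.6+ required, found " + sys.version.split()[0])
print("Python " + sys.version.split()[0] + " should be fine.")
```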
## Quick Start

- Clone the repository:

```bash
git clone https://github.com/RJTPP/ROUGE-Evaluation-Tool.git
cd ROUGE-Evaluation-Tool
```
- Prepare the Dataset:
- Create a JSON file named dataset.json inside the /data folder with the following structure (a programmatic way to generate the file is sketched after this list). See the Dataset Structure section for more details.
```json
[
  {
    "reference": "the reference text",
    "candidate": [
      "candidate text 1",
      "candidate text 2"
    ]
  }
]
```
- Run the script:

```bash
python main.py
```
- Check the Output:
  - The ROUGE scores will be saved in /data/scores.json.
  - For more details, see the Output and Score Explanation sections.
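As an alternative to writing data/dataset.json by hand in the Prepare the Dataset step above, you could generate it from Python. This is just a sketch: it assumes the data folder sits at the repository root and uses placeholder texts.

```python
import json
from pathlib import Path

# Build a placeholder dataset in code and write it where the tool expects it
# (data/dataset.json relative to the repository root).
dataset = [
    {
        "reference": "the reference text",
        "candidate": ["candidate text 1", "candidate text 2"],
    }
]

Path("data").mkdir(exist_ok=True)
with open("data/dataset.json", "w", encoding="utf-8") as f:
    json.dump(dataset, f, indent=2, ensure_ascii=False)
```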
## Dataset Structure

The input dataset must be placed in the /data folder as a JSON file named dataset.json. It should contain a list of evaluation instances, where each instance consists of:

- reference (String): The reference or ground truth text that serves as the standard for the comparison.
- candidate (Array of Strings): A list of candidate texts generated by a model or system to be evaluated against the reference text.
```json
[
  {
    "reference": "the ground truth or expected text",
    "candidate": [
      "candidate text 1",
      "candidate text 2",
      "... additional candidate texts ..."
    ]
  },
  "... additional evaluation instances ..."
]
```
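If you want to check a dataset file against this layout before running the tool, a small hypothetical validator (not part of this project) could look like the following; the default path matches the location described above.

```python
import json

def validate_dataset(path="data/dataset.json"):
    """Sanity-check that the file follows the layout described above."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    assert isinstance(data, list), "top level must be a list of evaluation instances"
    for i, instance in enumerate(data):
        assert isinstance(instance, dict), "instance %d: expected a JSON object" % i
        assert isinstance(instance.get("reference"), str), \
            "instance %d: 'reference' must be a string" % i
        candidates = instance.get("candidate")
        assert isinstance(candidates, list) and all(isinstance(c, str) for c in candidates), \
            "instance %d: 'candidate' must be a list of strings" % i
    print("%d instance(s) look structurally valid." % len(data))

if __name__ == "__main__":
    validate_dataset()
```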
## Output

The script generates a scores.json file in the /data folder with the following structure:
```json
[
  {
    "candidate": "candidate text 1",
    "reference": "the reference text",
    "ROUGE-1": {
      "precision": 0.75,
      "recall": 0.6,
      "f-measure": 0.67
    },
    "ROUGE-2": {
      "precision": 0.5,
      "recall": 0.4,
      "f-measure": 0.44
    },
    "ROUGE-L": {
      "precision": 0.7,
      "recall": 0.55,
      "f-measure": 0.62
    }
  }
]
```
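To post-process these scores, you could load the file and pick the candidate with the highest ROUGE-L F-measure. This is a sketch that assumes the file location and field names shown above:

```python
import json

# Load the generated scores and report the candidate with the highest
# ROUGE-L F-measure, using the field names shown in the structure above.
with open("data/scores.json", encoding="utf-8") as f:
    scores = json.load(f)

best = max(scores, key=lambda entry: entry["ROUGE-L"]["f-measure"])
print("Best candidate by ROUGE-L F-measure:")
print("  %s  (F = %.2f)" % (best["candidate"], best["ROUGE-L"]["f-measure"]))
```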
## Score Explanation

Each ROUGE score is represented by three key metrics:
- Precision:
  - Measures the fraction of elements in the candidate text that also appear in the reference text.
- Recall:
  - Measures the fraction of elements in the reference text that also appear in the candidate text.
- F-Measure:
  - The harmonic mean of precision and recall, giving a balanced score between the two.
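As a quick worked example with made-up counts (5 overlapping unigrams, a 6-unigram candidate, and a 10-unigram reference), the three metrics relate like this:

```python
# Hypothetical counts: 5 overlapping unigrams, 6 candidate unigrams, 10 reference unigrams.
overlap, candidate_total, reference_total = 5, 6, 10

precision = overlap / candidate_total                       # 5/6, about 0.83
recall = overlap / reference_total                          # 5/10 = 0.50
f_measure = 2 * precision * recall / (precision + recall)   # 0.625

print(f"precision={precision:.2f} recall={recall:.2f} f-measure={f_measure:.2f}")
```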
The ROUGE scores are calculated as:
- ROUGE-1: Based on unigram overlap (individual words).
- ROUGE-2: Based on bigram overlap (pairs of consecutive words).
- ROUGE-L: Based on the longest common subsequence (LCS) between the candidate and reference texts.
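For ROUGE-L, the key quantity is the LCS length. Below is a textbook dynamic-programming routine, shown purely for illustration and not necessarily the approach used in main.py; ROUGE-L precision and recall then divide the LCS length by the candidate and reference lengths.

```python
# Textbook LCS length via dynamic programming (illustrative only).
def lcs_length(ref_tokens, cand_tokens):
    m, n = len(ref_tokens), len(cand_tokens)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if ref_tokens[i - 1] == cand_tokens[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

ref = "the cat sat on the mat".split()
cand = "the cat lay on the mat".split()
lcs = lcs_length(ref, cand)                 # 5
precision = lcs / len(cand)                 # LCS-based precision
recall = lcs / len(ref)                     # LCS-based recall
f_measure = 2 * precision * recall / (precision + recall)
print(lcs, round(f_measure, 2))
```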
## License

This project is released under the MIT License. You are free to use, modify, and distribute this software under its terms; see the LICENSE file for detailed terms and conditions.
## Contributors

Rajata Thamcharoensatit (@RJTPP)