ROUGE Evaluation Tool

Python 3.6+ · License: MIT

This project provides a Python implementation to calculate ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores. It includes methods for computing ROUGE-1, ROUGE-2, and ROUGE-L metrics, which are commonly used to evaluate text similarity and summarization performance.

Table of Contents

  • Features
  • Requirements and Dependencies
  • Quick Start
  • Dataset Structure
  • Output
  • Score Explanation
  • License
  • Contributors

Features

  • Reads input datasets from JSON files and outputs results as JSON.
  • Computes the following metrics:
    • ROUGE-1: Measures unigram overlap.
    • ROUGE-2: Measures bigram overlap.
    • ROUGE-L: Measures the longest common subsequence (LCS).

Requirements and Dependencies

This project was developed with Python 3.6, has been tested with Python 3.6 through 3.12, and should work with newer versions. No additional dependencies are required.

Quick Start

  1. Clone the repository:
git clone https://github.com/RJTPP/ROUGE-Evaluation-Tool.git && cd ROUGE-Evaluation-Tool
  2. Prepare the Dataset:
  • Create a JSON file named dataset.json inside the /data folder with the following structure. See the Dataset Structure section for more details, or the optional validation sketch after this list.
[
  {
    "reference": "the reference text",
    "candidate": [
      "candidate text 1",
      "candidate text 2"
    ]
  }
]
  3. Run the script:
python main.py
  4. Check the Output:
  • The ROUGE scores will be saved in /data/scores.json.
  • For more details, see the Output and Score Explanation section.
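
Before running the script, it can help to sanity-check the dataset file. The following is an optional, minimal sketch (it is not part of main.py, and the data/dataset.json path is assumed from step 2) that verifies the expected structure:

import json

# Minimal structural check for data/dataset.json (path assumed from step 2).
with open("data/dataset.json", "r", encoding="utf-8") as f:
    dataset = json.load(f)

assert isinstance(dataset, list), "dataset.json must contain a list of instances"
for i, instance in enumerate(dataset):
    assert isinstance(instance.get("reference"), str), f"instance {i}: 'reference' must be a string"
    candidates = instance.get("candidate")
    assert isinstance(candidates, list) and all(isinstance(c, str) for c in candidates), \
        f"instance {i}: 'candidate' must be a list of strings"

print(f"dataset.json OK: {len(dataset)} evaluation instance(s)")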

Dataset Structure

The input dataset must be placed in the /data folder as a JSON file named dataset.json. It should contain a list of evaluation instances, where each instance consists of:

  1. reference (String): The reference or ground truth text that serves as the standard for the comparison.
  2. candidate (Array of Strings): A list of candidate texts generated by a model or system to be evaluated against the reference text.

Example

[
  {
    "reference": "the ground truth or expected text",
    "candidate": [
      "candidate text 1",
      "candidate text 2",
      "... additional candidate texts ..."
    ]
  },
  "... additional evaluation instances ..."
]
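
The dataset file can also be generated programmatically rather than written by hand. A minimal sketch, assuming the reference and candidate strings come from your own pipeline and that the file is written to data/dataset.json as described above:

import json

# Hypothetical reference/candidate pairs; substitute your own texts.
instances = [
    {
        "reference": "the ground truth or expected text",
        "candidate": ["candidate text 1", "candidate text 2"],
    },
]

# Write the dataset where main.py expects it (data/dataset.json).
with open("data/dataset.json", "w", encoding="utf-8") as f:
    json.dump(instances, f, ensure_ascii=False, indent=2)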

Output

The script generates a scores.json file in the /data folder with the following structure:

[
  {
    "candidate": "candidate text 1",
    "reference": "the reference text",
    "ROUGE-1": {
      "precision": 0.75,
      "recall": 0.6,
      "f-measure": 0.67
    },
    "ROUGE-2": {
      "precision": 0.5,
      "recall": 0.4,
      "f-measure": 0.44
    },
    "ROUGE-L": {
      "precision": 0.7,
      "recall": 0.55,
      "f-measure": 0.62
    }
  }
]
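
Because the output is plain JSON, it can be post-processed with a few lines of Python. The sketch below (assuming the data/scores.json path and the field names shown above) selects the candidate with the highest ROUGE-L f-measure:

import json

# Load the scores produced by main.py (path assumed to be data/scores.json).
with open("data/scores.json", "r", encoding="utf-8") as f:
    scores = json.load(f)

# Pick the entry with the highest ROUGE-L f-measure.
best = max(scores, key=lambda entry: entry["ROUGE-L"]["f-measure"])
print("Best candidate:", best["candidate"])
print("ROUGE-L f-measure:", best["ROUGE-L"]["f-measure"])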

Score Explanation

Each ROUGE score is represented by three key metrics:

  1. Precision:
    • Measures the fraction of elements in the candidate text that also appear in the reference text.
$$\text{Precision} = \frac{\text{Overlap Count}}{\text{Candidate Text Word Count}}$$
  2. Recall:
    • Measures the fraction of elements in the reference text that also appear in the candidate text.
$$\text{Recall} = \frac{\text{Overlap Count}}{\text{Reference Text Word Count}}$$
  3. F-Measure:
    • The harmonic mean of Precision and Recall, giving a balanced score between the two.
$$\text{F-Measure} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

The ROUGE scores are calculated as:

  • ROUGE-1: Based on unigram overlap (individual words).
  • ROUGE-2: Based on bigram overlap (pairs of consecutive words).
  • ROUGE-L: Based on the longest common subsequence (LCS) between the candidate and reference texts.
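
To make the formulas concrete, the following sketch computes ROUGE-1 precision, recall, and f-measure for a single candidate/reference pair. It is an illustrative re-implementation based on the definitions above (whitespace tokenization, clipped unigram counts), not necessarily the exact logic used in main.py:

from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """Illustrative ROUGE-1 using clipped unigram overlap counts."""
    cand_tokens = candidate.lower().split()
    ref_tokens = reference.lower().split()

    cand_counts = Counter(cand_tokens)
    ref_counts = Counter(ref_tokens)

    # Overlap: each unigram counted at most as often as it appears in both texts.
    overlap = sum(min(count, ref_counts[token]) for token, count in cand_counts.items())

    precision = overlap / len(cand_tokens) if cand_tokens else 0.0
    recall = overlap / len(ref_tokens) if ref_tokens else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f-measure": f_measure}

print(rouge_1("the cat sat on the mat", "the cat is on the mat"))
# Overlap = 5 and both texts have 6 tokens, so precision = recall = f-measure ≈ 0.83

ROUGE-2 follows the same pattern with bigrams, and ROUGE-L replaces the overlap count with the length of the longest common subsequence.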

License

This project is released under the MIT License.

You are free to use, modify, and distribute this software under the terms of the MIT License. See the LICENSE file for detailed terms and conditions.

Contributors

Rajata Thamcharoensatit (@RJTPP)

About

MikeLab 2024 subproject, developed for the computation of ROUGE scores.
