
Explainable Summarization

This is a GitHub Pages project showcasing explainable text summarization. The tool is introduced in the paper "Let's Agree to Disagree": Investigating the Disagreement Problem in Explainable AI for Text Summarization, available on arXiv: https://arxiv.org/abs/2410.18560. This repository also contains the code for the paper, located in the scripts folder. The notebooks global_disagreement_analysis.ipynb and rxai_analysis.ipynb implement the Global Disagreement and Local Disagreement (RXAI) analyses. The other scripts cover statistical testing, coherence, and semantic-similarity checks (global vs. RXAI).

View the live site to explore the summarization model: Explainable Summarization

Key Features

  • Provides interactive visualizations for summarization explanations.
  • Uses JavaScript for dynamic user interactions.
  • Fully responsive and hosted via GitHub Pages.

Requirements

  • The input format should contain:
    • Source sentences: enclosed in quotation marks (" ") and separated by commas.
    • Attribution scores: a comma-separated list of normalized values (e.g., between 0 and 1).
  • Attribution weights can be obtained with the Inseq library (Sarti et al., 2023).

Tool’s Interface

  • Title: states which XAI method was used to generate the attribution weights.
  • Model-generated summary: pasted exactly as produced by the model, without extra quotation marks or commas.
  • Source sentences: in quotation marks (" "), separated by commas.
  • Attribution scores: the corresponding normalized values, comma-separated.
  • Lets users explore how much each sentence influenced the generated summary.
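As a rough illustration of the expected format, here is a minimal Python sketch (a hypothetical helper, not part of the repository) that parses the two input fields described above:

```python
import re

def parse_inputs(sentences_field: str, weights_field: str):
    """Parse the tool's two text fields: quoted, comma-separated source
    sentences and comma-separated attribution scores."""
    # Pull out every double-quoted sentence.
    sentences = re.findall(r'"([^"]*)"', sentences_field)
    # Parse the comma-separated scores as floats.
    weights = [float(w) for w in weights_field.split(",") if w.strip()]
    if len(sentences) != len(weights):
        raise ValueError("Each sentence needs exactly one attribution score.")
    return list(zip(sentences, weights))
```

For example, `parse_inputs('"A.", "B."', "0.7, 0.5")` pairs each sentence with its score.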

Example Input and Weights

  • Title: Attribution analysis of summarization model using XAI method: LIME

  • Summary: XAI for text summarization helps understand the model's decision-making process. We can understand the influence of input features on the summary.

  • Sentence 1: "Explainable AI(XAI) helps understand the model's decision-making." attribution_weight_s1 = 0.7

  • Sentence 2: "XAI in text summarization helps in understanding the contribution of input features, such as input tokens or sentences, to the model-generated summary." attribution_weight_s2 = 0.5

  • Sentence 3: "Leveraging XAI for text summarization, we can get an idea about the most important sentences while the summary is generated." attribution_weight_s3 = 0.3

Note: To extract attribution weights, you can use the Inseq library.
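A minimal sketch of this step, assuming Inseq is installed; the model name and attribution method are illustrative, the Inseq calls are shown as comments, and the aggregation helper below is hypothetical (not part of the repository):

```python
# Extracting attributions with Inseq (assumes `pip install inseq`;
# model name and method are examples, not the paper's exact setup):
#
#   import inseq
#   model = inseq.load_model("facebook/bart-large-cnn", "lime")
#   out = model.attribute(input_texts=source_document)
#
# Inseq yields token-level scores. A hypothetical helper to aggregate
# them into one normalized weight per source sentence:
def sentence_weights(token_scores, sentence_spans):
    """Sum token scores per sentence span, then min-max normalize to [0, 1]."""
    sums = [sum(token_scores[start:end]) for start, end in sentence_spans]
    lo, hi = min(sums), max(sums)
    if hi == lo:  # all sentences equally attributed
        return [1.0 for _ in sums]
    return [(s - lo) / (hi - lo) for s in sums]
```

The sentence spans here are (start, end) token indices; how you segment sentences and align them with the model's tokens is up to your pipeline.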

To visualize the plot, the three sentences and their weights should be entered as:

  • Title: Attribution analysis of summarization model using XAI method: LIME

  • Summary: XAI for text summarization helps understand the model's decision-making process. We can understand the influence of input features on the summary.

  • Input Sentences: "Explainable AI(XAI) helps understand the model's decision-making.", "XAI in text summarization helps in understanding the contribution of input features, such as input tokens or sentences, to the model-generated summary.", "Leveraging XAI for text summarization, we can get an idea about the most important sentences while the summary is generated."

  • Weights: 0.7, 0.5, 0.3

After adding these inputs, users can click the generate text plot button to create a color-coded plot, with the scores further normalized to the range [0, 1].
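The color coding can be sketched as follows: a hypothetical Python analogue of the site's JavaScript, where the white-to-red ramp is an assumption, not necessarily the tool's actual palette:

```python
def score_to_color(score: float) -> str:
    """Map a normalized score in [0, 1] to a hex color on a
    white-to-red ramp: higher attribution, deeper shade."""
    score = min(max(score, 0.0), 1.0)  # clamp defensively
    # Interpolate from white (255, 255, 255) toward red (255, 0, 0)
    # by fading the green and blue channels.
    g = b = round(255 * (1.0 - score))
    return f"#ff{g:02x}{b:02x}"
```

For the example weights 0.7, 0.5, 0.3, the first sentence would render in the deepest shade and the third in the lightest.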
