Fix graph Laplacian docstring and semantic weight scaling in KernelLanguageEntropy #441

Open
Kangwei-g wants to merge 1 commit into IINemo:main from Kangwei-g:main

Conversation

@Kangwei-g

Hi team!

I was reviewing the implementation of Kernel Language Entropy (KLE) and noticed a couple of details regarding the semantic graph construction that deviate slightly from the formulas in the original paper. I've made the following corrections:

  • Fixed Laplacian formula in docstring: Corrected the comment from L = W - D to the standard definition L = D - W.
  • Corrected weight matrix scaling: According to the paper, the semantic graph weight is defined as $W_{ij} = w \cdot NLI'(S_i, S_j) + w \cdot NLI'(S_j, S_i)$. The previous implementation divided this sum by 2. I removed the / 2 to strictly align with the paper's formula.
  • Adjusted neutral matrix calculation: Since matrix_entail and matrix_contra are now the full sum (with a maximum possible value of 2 instead of 1), I updated the matrix_neutral calculation to use 2 * np.ones(...) to maintain the correct semantic proportions.
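The three changes above can be sketched together. This is a minimal NumPy illustration, not the repository's actual code: the array names and the toy NLI probabilities are assumptions, chosen to mirror the identifiers mentioned in the bullets (`matrix_entail`, `matrix_contra`, `matrix_neutral`).

```python
import numpy as np

# Hypothetical pairwise NLI probabilities for 3 responses:
# nli_entail[i, j] is the entailment probability NLI'(S_i, S_j),
# nli_contra[i, j] the contradiction probability. Illustrative values only.
nli_entail = np.array([[1.0, 0.8, 0.1],
                       [0.7, 1.0, 0.2],
                       [0.1, 0.3, 1.0]])
nli_contra = np.array([[0.0, 0.1, 0.8],
                       [0.2, 0.0, 0.7],
                       [0.8, 0.6, 0.0]])

# Symmetrized sums per the paper: W_ij = NLI'(S_i, S_j) + NLI'(S_j, S_i).
# No division by 2, so each entry now ranges over [0, 2].
matrix_entail = nli_entail + nli_entail.T
matrix_contra = nli_contra + nli_contra.T

# Neutral mass is the remainder out of the new maximum of 2,
# hence 2 * np.ones(...) rather than 1.
matrix_neutral = 2 * np.ones_like(matrix_entail) - matrix_entail - matrix_contra

# Graph Laplacian with the standard sign convention L = D - W
# (the docstring previously stated L = W - D).
W = matrix_entail
D = np.diag(W.sum(axis=1))
L = D - W
```

With this scaling, `matrix_entail + matrix_contra + matrix_neutral` sums to 2 in every entry, preserving the semantic proportions, and each row of `L` sums to zero as expected for a graph Laplacian.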

Let me know if you have any questions!

Correct the definition of the Laplacian matrix and adjust matrix calculations for entailment and contradiction.
