Why are the logits in the numerator of the loss function not masked to avoid comparing a sample with itself? #23

Open
xiaobingbuhuitou opened this issue Nov 15, 2023 · 1 comment

Comments

@xiaobingbuhuitou

Hi, I noticed that in PaCo and GPaCo the logits in the numerator of the loss function are not masked, while the denominator is masked, since `exp_logits = torch.exp(logits) * logits_mask`. Shouldn't the logits in the numerator also be masked so that a sample is not compared with itself? Also, is the learnable center used to predict the ground truth, turning this into a supervised problem? Thanks 😢
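For context, here is a minimal sketch of the SupCon-style masking that PaCo/GPaCo implementations build on (the function name `supcon_loss` and the exact shapes are illustrative assumptions, not the repository's code). Note that in this pattern the numerator terms only enter the loss through `mask`, which is itself multiplied by `logits_mask`, so the diagonal (self-comparison) is excluded there as well:

```python
import torch

def supcon_loss(features, labels, temperature=0.07):
    """Illustrative SupCon-style loss; not the repository's exact code."""
    # features: (N, D) L2-normalized embeddings; labels: (N,) class ids
    logits = features @ features.T / temperature           # (N, N) similarities
    mask = (labels[:, None] == labels[None, :]).float()    # positives, incl. self

    # logits_mask zeroes the diagonal so a sample never contrasts with itself
    logits_mask = 1.0 - torch.eye(len(labels))
    mask = mask * logits_mask                              # drop self from positives

    # denominator: self-similarity is excluded via logits_mask
    exp_logits = torch.exp(logits) * logits_mask
    log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True))

    # numerator: the raw `logits` are unmasked, but they contribute only
    # through `mask * log_prob`, and `mask` already excludes the diagonal
    mean_log_prob_pos = (mask * log_prob).sum(1) / mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```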

@xiaobingbuhuitou (Author)

Sorry, I also want to ask about Remark 2 in the paper: after applying parametric contrastive learning, why does the probability become α/(1 + α·K_y) and C become 1/(1 + α·K_y)? I don't know how to derive this. Thanks 😢
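For what it's worth, one way to arrive at those expressions, assuming the weighted form of the PaCo loss in which each of the K_y batch positives carries weight α and the center c_y carries weight 1 (a sketch of the optimality argument, not a quote from the paper):

```latex
% Minimize the weighted negative log-likelihood over a distribution p
% supported on the K_y batch positives and the center c_y:
%   min_p  -\log p_c - \alpha \sum_{k=1}^{K_y} \log p_k
%   s.t.   p_c + \sum_{k=1}^{K_y} p_k = 1.
% Setting the Lagrangian's gradient to zero gives p_i \propto w_i, i.e.
\[
  p_i = \frac{w_i}{\sum_j w_j}, \qquad \sum_j w_j = 1 + \alpha K_y,
\]
% so the center and each batch positive attain
\[
  p_c = \frac{1}{1 + \alpha K_y}, \qquad
  p_k = \frac{\alpha}{1 + \alpha K_y}, \quad k = 1, \dots, K_y.
\]
```

Under this reading, α/(1 + α·K_y) is the optimal probability for each batch positive and C = 1/(1 + α·K_y) the optimal probability for the center.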
