About "Pretext + Kmeans" #126
You're not applying KMeans correctly if you don't get 65%. You need to normalize the features, fit KMeans on the train set, and report the results on the validation set. You also need to average the results over multiple runs. The code for KMeans clustering is mostly the same as I provided in my other repositories, e.g., the one for semantic segmentation.
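The protocol described above (L2-normalize the features, fit KMeans on the train split only, evaluate on the validation split, average over runs) can be sketched as follows. This is a minimal illustration, not the repository's actual code: the feature arrays are synthetic placeholders, and the Hungarian-matching accuracy is the standard clustering-accuracy convention, assumed here rather than taken from the repo.

```python
# Hedged sketch of the "Pretext + KMeans" evaluation protocol:
# normalize features, fit on train, predict on val, average over runs.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def cluster_accuracy(y_true, y_pred, n_classes):
    # Hungarian matching between cluster ids and ground-truth labels (ACC).
    cost = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(cost.max() - cost)
    return cost[rows, cols].sum() / len(y_true)

def kmeans_eval(train_feats, val_feats, val_labels, n_classes, n_runs=5):
    # L2-normalize so KMeans operates on the unit hypersphere.
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    val = val_feats / np.linalg.norm(val_feats, axis=1, keepdims=True)
    accs, nmis, aris = [], [], []
    for seed in range(n_runs):
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
        km.fit(train)           # fit on the train set only
        pred = km.predict(val)  # report metrics on the validation set
        accs.append(cluster_accuracy(val_labels, pred, n_classes))
        nmis.append(normalized_mutual_info_score(val_labels, pred))
        aris.append(adjusted_rand_score(val_labels, pred))
    return np.mean(accs), np.mean(nmis), np.mean(aris)

# Tiny synthetic demo with well-separated feature blobs (3 classes).
rng = np.random.default_rng(0)
centers = np.eye(3) * 10
train_feats = np.vstack([rng.normal(c, 0.1, (50, 3)) for c in centers])
val_feats = np.vstack([rng.normal(c, 0.1, (20, 3)) for c in centers])
val_labels = np.repeat(np.arange(3), 20)
acc, nmi, ari = kmeans_eval(train_feats, val_feats, val_labels, n_classes=3)
```

Skipping any of these steps (especially fitting KMeans directly on unnormalized features, or reporting a single run) can easily account for a drop from ~65% to ~33% on CIFAR-10 pretext features.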
Dear author, I obtain a similar result to that in Table 3 of the paper (ACC=65.9, NMI=59.8, ARI=50.9).
However, the result I obtained comes from performing KMeans on the outputs of the clustering network. I believe performing KMeans on the features of the pretext network is important, since it often serves as a baseline, even if the clustering network you proposed achieves good performance.
You should indeed cluster the pretext features, not the class vectors. Yes, we use KMeans as a baseline in our paper.
I mean it is better to report the results of KMeans on … Maybe the word …
No, you are misunderstanding something. We cluster the features of Φθ and not Φη. The latter does not make sense. |
Edit: I was able to replicate @wvangansbeke's result. My code is not general enough to share (it is part of something convoluted). A few issues that you might be running into are:
Hi, I see that simply 'Pretext + Kmeans' achieves 65.2 on CIFAR-10 on average. I downloaded your model and tried it, but it only achieves 33.3%. Can you tell me your settings or anything special you used? (I didn't see it in your code.)