Fluctuate metrics with decreasing loss #44
That doesn't seem expected. Actually, I haven't run on the beauty dataset yet; could you paste some logs showing how the training loss and the inference metrics change?
Thanks for your reply. For the beauty dataset with 176,520 interaction records, 12,099 items, and 22,342 users:
Some hyperparameters: bs=256, lr=1e-3, dropout=0.2, maxlen=50, hidden_units=384
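For reference, the counts quoted above imply a very sparse interaction matrix. A quick back-of-the-envelope check (a sketch using only the numbers in this thread, not code from the repo):

```python
# Dataset statistics as reported above for the beauty dataset.
interactions = 176_520
num_items = 12_099
num_users = 22_342

# Fraction of the user-item matrix that is observed.
density = interactions / (num_users * num_items)

# Average number of interactions per user.
avg_per_user = interactions / num_users

print(f"density: {density:.6f}")            # well under 0.1%
print(f"avg interactions/user: {avg_per_user:.1f}")  # roughly 8
```

With fewer than 8 interactions per user on average, per-user ranking metrics are computed from very short histories, which partly explains noisy evaluation numbers.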
Thank you. The beauty dataset is very sparse, which may require extra hyperparameter tuning. Have you tried setting dropout=0.5, as suggested in the original paper https://cseweb.ucsd.edu/~jmcauley/pdfs/icdm18.pdf ?
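For anyone unfamiliar with what raising dropout to 0.5 means in practice, train-time inverted dropout can be sketched in plain Python (a generic illustration of the technique, not the repo's implementation):

```python
import random

def inverted_dropout(vec, p, rng):
    # Zero each unit with probability p, and scale survivors by 1/(1-p)
    # so the expected value of each activation is unchanged at train time.
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in vec]

rng = random.Random(0)
out = inverted_dropout([1.0] * 10, p=0.5, rng=rng)
# each entry is either 0.0 (dropped) or 2.0 (kept and rescaled)
```

Higher dropout discards more of each hidden representation during training, which acts as stronger regularization; the paper's finding is that sparser datasets benefit from it.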
Thanks for contributing the code. I have a question: when running the code with the beauty dataset, the loss keeps decreasing but the NDCG and HR fluctuate significantly. Is this normal?
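For context on how the fluctuating metrics are typically computed: with one held-out ground-truth item per user, HR@k and NDCG@k reduce to simple functions of that item's rank. A minimal sketch, assuming the standard leave-one-out evaluation (not necessarily this repo's exact code):

```python
import math

def hr_at_k(rank, k=10):
    # Hit rate: 1 if the held-out item appears in the top-k, else 0.
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank, k=10):
    # With a single relevant item, NDCG@k is 1/log2(rank + 2) when the
    # item lands inside the top-k (rank is 0-based), else 0.
    return 1.0 / math.log2(rank + 2) if rank < k else 0.0

# Example: a user whose target item is ranked 3rd (0-based rank = 2).
hr = hr_at_k(2)      # 1.0
ndcg = ndcg_at_k(2)  # 1 / log2(4) = 0.5
```

Because both metrics are step functions of a single rank per user, small shifts in scores can move an item across the top-k boundary and swing the averages noticeably, even while the loss decreases smoothly.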