Exploring Differences in Error Behavior Between Direct DP Training and Finetune: Impact of Force-Free Data #4460
Unanswered · JiangXiaoMingSan asked this question in Q&A

I noticed a difference in how the training error behaves between training a potential directly with DP and fine-tuning (finetune). For example, on a dataset containing an isolated atom labeled with only an energy and zero forces, the force error during finetune drops to a very small value (1e-17), while training the potential directly with DP on the same data does not show this behavior. Will such force-free data affect the finetune results?

The lcurve.out files for both runs are on GitHub: https://github.com/JiangXiaoMingSan/lcurve.out.git

[Figure: example of a very small force training error, caused by a training set with zero forces]
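As a rough illustration of how a zero-force frame can yield such a tiny error, here is a minimal sketch (not from the thread; it assumes, hypothetically, that the model's predicted forces on an isolated atom are only round-off-level noise):

```python
import numpy as np

# Sketch: the reference forces for an isolated atom are exactly zero,
# and (by assumption) the predicted forces are zero up to floating-point
# round-off, so the force RMSE lands at machine-noise magnitude.
reference_forces = np.zeros((1, 3))        # label: one atom, zero force
predicted_forces = np.full((1, 3), 1e-17)  # assumed round-off-level output
rmse = np.sqrt(np.mean((predicted_forces - reference_forces) ** 2))
print(rmse)  # ~1e-17, i.e. numerically indistinguishable from zero
```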
Replies: 1 comment
- 1e-17 is below the precision of FP64, so it might be floating-point error.
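A quick way to check this claim with NumPy (a sketch, not part of the original reply):

```python
import numpy as np

# FP64 machine epsilon is ~2.22e-16: the smallest relative difference
# double precision can resolve around values of order one. An error of
# 1e-17 is below that, so relative to typical force magnitudes it is
# effectively floating-point noise rather than a meaningful residual.
eps = np.finfo(np.float64).eps
print(eps)          # 2.220446049250313e-16
print(1e-17 < eps)  # True
```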