Pre-code thoughts, benchmark scope and goals #1
stephane-caron started this conversation in General
The benchmark reported in the README only targets a limited number of problems and does not provide a fair comparison between solvers. For instance, solvers with default settings that return less precise solutions faster are advantaged if we only look at out-of-the-box computation time.
We will improve on this in qpsolvers/qpsolvers#70 following the methodology proposed in the OSQP paper and reproduced in proxqp_benchmark.
This thread is here to discuss work items and exchange feedback. Share your thoughts so that we converge on a meaningful benchmark!
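As a toy illustration of the speed/accuracy tradeoff behind this concern (pure numpy, illustrative only, not one of the benchmarked solvers): a first-order method stopped at a looser tolerance "finishes" in far fewer iterations, which is exactly what flatters lax default settings in a raw out-of-the-box runtime comparison.

```python
import numpy as np

# Toy strongly convex QP: minimize 1/2 x^T P x + q^T x (unconstrained)
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
P = M @ M.T + 50.0 * np.eye(50)  # positive definite cost matrix
q = rng.standard_normal(50)

def solve_gd(tol):
    """Gradient descent stopped at ||P x + q||_inf <= tol.

    Returns the iterate and the number of iterations it took.
    """
    x = np.zeros(50)
    n_iter = 0
    while np.linalg.norm(P @ x + q, np.inf) > tol:
        x -= 1e-3 * (P @ x + q)  # step on the gradient of the cost
        n_iter += 1
    return x, n_iter

for tol in (1e-1, 1e-4, 1e-7):
    x, n_iter = solve_gd(tol)
    print(f"tol={tol:.0e}: {n_iter} iterations")
```

The looser the stopping tolerance, the fewer iterations (hence less runtime) the method reports, even though its solution is less precise: comparing solvers fairly requires pinning accuracy first, then measuring time.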
Scope
- `benchmark/results.md`, with summary reported to `README.md`
Metrics
Validation
Solutions can be validated by casting all inequalities to $l \leq C x \leq u$ and checking that, at the primal-dual solution $(x^*, y^*)$, the primal and dual residuals satisfy

$$
\max(C x^* - u, 0) \leq \epsilon_{abs} \mathbf{1}, \quad
\max(l - C x^*, 0) \leq \epsilon_{abs} \mathbf{1}, \quad
\| P x^* + q + C^\top y^* \|_\infty \leq \epsilon_{abs},
$$

where $P$ and $q$ are the cost matrix and vector of the QP and $\epsilon_{abs}$ is an absolute tolerance parameter. We could check three settings, such as lax ($\epsilon_{abs} = 10^{-3}$), medium ($\epsilon_{abs} = 10^{-5}$) and high accuracy ($\epsilon_{abs} = 10^{-7}$).
See also