QPLayer: efficient differentiation of convex quadratic optimization #264
Conversation
The GPLv3 license is viral, so if GPL-covered code is needed to compile the project, the whole project becomes GPL-licensed. To allow dual licensing, there should be some way, for instance a compilation option (similar to
Thanks for the remark. After discussion, we decided not to change the licensing at all. Everything stays under BSD 2-Clause as before.
Now all the tests are passing. I had to switch to
after discussing with @jcarpent, the
If that is fine, we are good to merge.
…havior of sum when using avx
- By default, do not rely on proxsuite with OpenMP support, but use single-threaded `solve` and `compute_backward`.
- Users who installed proxsuite with OpenMP support can set the option `omp_parallel = True` and enjoy multithreaded forward and backward passes (see the sketch below).
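A minimal sketch of what this opt-in could look like on the user side; the `omp_parallel` keyword comes from this commit, while the module path and the remaining constructor defaults are assumptions made for illustration:

```python
# Sketch only: assumes proxsuite was installed with OpenMP support and that
# QPFunction (torch/qplayer.py in this PR) accepts the omp_parallel flag added here.
from proxsuite.torch.qplayer import QPFunction

qp_single = QPFunction()                     # default: single-threaded solve / compute_backward
qp_parallel = QPFunction(omp_parallel=True)  # opt in to multithreaded forward and backward passes
```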
- The default parameters are without OpenMP and just one epoch, as the examples are part of the unit tests when calling `make test` (for local testing); the script exits if PyTorch is not available (CI).
- Code adapted from https://github.com/locuslab/optnet/blob/master/sudoku/train.py
- Data created with default settings and the script https://github.com/locuslab/optnet/blob/master/sudoku/create.py
TODO: consider passing a qplayer settings object
@Bambade, does this make sense to you? Should we also add the `mu_backward` to the init here?
- thanks to @quentinll for pointing out some issues with these functions
for more information, see https://pre-commit.ci
Together with @Bambade, we worked on this PR to implement QPLayer: efficient differentiation of convex quadratic optimization. The paper is publicly available on HAL (ref. 04133055). We leverage primal-dual augmented Lagrangian techniques for computing derivatives of both feasible and infeasible QPs.
QPLayer enables using a QP as a layer within standard learning architectures. More precisely, QPLayer differentiates over $\theta$ the primal and dual solutions of QPs of the form

$$\begin{aligned}
\min_{x} \quad & \frac{1}{2} x^\top H(\theta) x + g(\theta)^\top x \\
\text{s.t.} \quad & A(\theta) x = b(\theta), \\
& l(\theta) \leq C(\theta) x \leq u(\theta),
\end{aligned}$$

where $x \in \mathbb{R}^n$ is the optimization variable.
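To make the "QP as a layer" idea concrete, here is a minimal sketch of how the differentiable layer could be used from PyTorch. The module path follows the torch/qplayer.py file added in this PR, but the QPFunction constructor defaults, the argument order of the forward call, and the output order are assumptions made for illustration only; please refer to the documentation for the exact signature.

```python
# Sketch only: assumes QPFunction is exposed by the torch/qplayer.py module of
# this PR; its exact constructor options and call signature may differ.
import torch
from proxsuite.torch.qplayer import QPFunction

n, n_eq, n_in = 10, 3, 5

# QP data; here only the equality constraint matrix A is a learned parameter (theta).
H = torch.eye(n)                          # convex quadratic cost
g = torch.zeros(n)
A = torch.randn(n_eq, n, requires_grad=True)
b = torch.randn(n_eq)
C = torch.randn(n_in, n)
l = -torch.ones(n_in)
u = torch.ones(n_in)

qp_layer = QPFunction()                   # differentiable QP solve
# Assumed argument and output order: primal solution x, dual variables (y, z).
x, y, z = qp_layer(H, g, A, b, C, l, u)

loss = x.square().sum()                   # any downstream loss on the QP solution
loss.backward()                           # gradients flow back to A through the QP layer
print(A.grad.shape)
```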
We provide in the file qplayer_sudoku.py an example that enables training a QP layer in two different settings: (i) either we learn only the equality constraint matrix $A$, or (ii) we learn at the same time $A$ and $b$, such that $b$ is structurally in the range space of $A$ (see the sketch below). The procedure (i) is harder since, a priori, the fixed right-hand side does not ensure the QP is feasible. Yet, this learning procedure is more structured, and for some problems it can produce better predictions more quickly (i.e., in fewer epochs).
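One way to read setting (ii) is that $b$ is not a free parameter but is generated from $A$, so the equality system always admits a solution. A minimal, pure-PyTorch sketch of such a parameterization (the variable names are chosen here for illustration and are not taken from qplayer_sudoku.py):

```python
import torch

n, n_eq = 10, 3

# Learned parameters: the equality matrix A and a latent vector z.
A = torch.nn.Parameter(torch.randn(n_eq, n))
z = torch.nn.Parameter(torch.randn(n))

# b is built as A @ z, so it lies in the range space of A by construction,
# and the equality constraints A x = b are always satisfiable (x = z works).
b = A @ z
```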
In addition to the example showing the usage of the PyTorch implementation of QPLayer, we have added unit tests for the backward function, which is implemented in C++, and we have updated the README and documentation.
EDIT:
For now, we have added a GPLv3 license to this specific contribution (namely the compute_ECJ.hpp and torch/qplayer.py files).