
QPLayer: efficient differentiation of convex quadratic optimization #264

Merged: 69 commits from the qplayer branch into Simple-Robotics:devel on Nov 10, 2023

Conversation

@fabinsch (Collaborator) commented Oct 6, 2023

Together with @Bambade, we worked on this PR to implement QPLayer: efficient differentiation of convex quadratic optimization. The paper is publicly available in HAL (ref 04133055). We leverage primal-dual augmented Lagrangian
techniques for computing derivatives of both feasible and infeasible QPs.

QPLayer enables using a QP as a layer within standard learning architectures. More precisely, QPLayer differentiates over $\theta$ the primal and dual solutions of a QP of the form

$$ \begin{align} \min_{x} & ~\frac{1}{2}x^{T}H(\theta)x+g(\theta)^{T}x \\ \text{s.t.} & ~A(\theta) x = b(\theta) \\ & ~l(\theta) \leq C(\theta) x \leq u(\theta) \end{align} $$

where $x \in \mathbb{R}^n$ is the optimization variable.
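For illustration, here is a minimal sketch of how such a layer could be called from PyTorch. The `QPFunction` name, import path, argument order, and return values below are assumptions based on the torch/qplayer.py file added in this PR, not a verbatim excerpt of the final API; batching details are omitted.

```python
# Hypothetical usage sketch of a differentiable QP layer; the import path,
# call signature, and return values are assumptions, not the confirmed API.
import torch
from proxsuite.torch.qplayer import QPFunction  # module assumed from this PR

n, n_eq, n_in = 10, 3, 5

# QP data as differentiable tensors (in practice these would be produced by
# upstream layers parameterized by theta).
H = torch.eye(n, requires_grad=True)
g = torch.zeros(n, requires_grad=True)
A = torch.randn(n_eq, n, requires_grad=True)
b = torch.zeros(n_eq)
C = torch.randn(n_in, n)
l = -torch.ones(n_in)
u = torch.ones(n_in)

# Forward pass solves the QP; backward pass differentiates its solution.
x, y, z = QPFunction()(H, g, A, b, C, l, u)

loss = x.pow(2).sum()
loss.backward()  # gradients flow back into H, g, A through the QP layer
```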

We provide in the file qplayer_sudoku.py an example that enables training a QP layer in two different settings: (i) either we learn only the equality constraint matrix $A$, or (ii) we learn $A$ and $b$ at the same time, such that $b$ is structurally in the range space of $A$. Procedure (i) is harder since, a priori, the fixed right-hand side does not ensure that the QP is feasible. Yet, this learning procedure is more structured and, for some problems, can produce better predictions quicker (i.e., in fewer epochs).
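For setting (ii), one way to keep $b$ structurally in the range space of $A$ is to learn an auxiliary vector $x_0$ and define $b = A x_0$. A minimal sketch of this parameterization (variable names are illustrative, not taken from qplayer_sudoku.py):

```python
import torch

n, n_eq = 10, 3

# Learnable parameters: the equality matrix A and an auxiliary point x0.
A = torch.nn.Parameter(torch.randn(n_eq, n))
x0 = torch.nn.Parameter(torch.zeros(n))

# b is defined through A, so it always lies in the range space of A and the
# equality constraints A x = b stay feasible by construction (x = x0 satisfies them).
b = A @ x0
```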

In addition to the example showing the usage of the PyTorch implementation of QPLayer, we have added unit tests for the backward function, which is implemented in C++, and we have updated the README and documentation.

EDIT: for now, we have added a GPLv3 license to this specific contribution (namely compute_ECJ.hpp and torch/qplayer.py).

@fabinsch changed the title from "Qplayer" to "QPLayer: efficient differentiation of convex quadratic optimization" on Oct 6, 2023
@stephane-caron (Contributor)

The GPLv3 license is viral, so if GPL-covered code is needed to compile the project, the whole project becomes GPL-licensed. To allow dual licensing, there should be some way, for instance a compilation option (similar to EIGEN_MPL2_ONLY in Eigen), to (1) compile without compute_ECJ.hpp, with a BSD-licensed result, or (2) compile with compute_ECJ.hpp, with a GPL-licensed result.

@fabinsch (Collaborator, Author)

> The GPLv3 license is viral, so if GPL-covered code is needed to compile the project, the whole project becomes GPL-licensed. To allow dual licensing, there should be some way, for instance a compilation option (similar to EIGEN_MPL2_ONLY in Eigen), to (1) compile without compute_ECJ.hpp, with a BSD-licensed result, or (2) compile with compute_ECJ.hpp, with a GPL-licensed result.

Thanks for the remark. After discussion, we decided not to change the licensing at all. Everything stays under the BSD 2-Clause license as before.

@fabinsch force-pushed the qplayer branch 6 times, most recently from cc6f84c to 8055cc5, on October 16, 2023 at 11:52
@fabinsch (Collaborator, Author)

Now all the tests are passing. I had to switch to the ClangCl toolset for windows-latest to avoid some build errors on Windows.

@fabinsch force-pushed the qplayer branch 4 times, most recently from b40e5f0 to dbbefaf, on October 30, 2023 at 18:03
@fabinsch (Collaborator, Author)

After discussing with @jcarpent, the ci-linux-osx-wind-conda workflow now runs the following setups successfully for Windows:

  • windows-2019 | Release | c++17 | ClangCl

  • windows-2019 | Release | c++20 | ClangCl

  • windows-2019 | Debug | c++17 | ClangCl

  • windows-2019 | Debug | c++20 | ClangCl

  • windows-latest | Release | c++17 | v143

  • windows-latest | Release | c++20 | ClangCl

  • windows-latest | Debug | c++17 | v143

  • windows-latest | Debug | c++20 | ClangCl

If that is fine, we are good to merge.

@CLAassistant commented Oct 31, 2023

CLA assistant check
All committers have signed the CLA.

@fabinsch force-pushed the qplayer branch 2 times, most recently from 89c3b63 to 6449af7, on November 3, 2023 at 16:33
fabinsch and others added 27 commits on November 10, 2023 at 10:07
- by default, do not rely on proxsuite with OpenMP support, but use single-threaded solve and compute_backward
- users who installed proxsuite with OpenMP can set the option omp_parallel = True and enjoy multithreaded forward and backward passes (a usage sketch follows below)
- the default parameters are without OpenMP and just one epoch, since the examples are part of the unit tests when calling make test (for local testing); the script exits if PyTorch is not available (CI)
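
As a usage illustration of the option mentioned above (assuming omp_parallel is exposed as a keyword argument of the PyTorch layer; its exact placement is an assumption):

```python
# Hypothetical sketch: enabling multithreaded forward and backward passes.
# Assumes proxsuite was installed with OpenMP support and that omp_parallel
# is a keyword argument of the QPFunction-style layer added in this PR.
from proxsuite.torch.qplayer import QPFunction

qp_layer = QPFunction(omp_parallel=True)  # multithreaded solve and compute_backward
# qp_layer = QPFunction()                 # default: single-threaded
```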

- code adapted from https://github.com/locuslab/optnet/blob/master/sudoku/train.py
- data created with default settings and script https://github.com/locuslab/optnet/blob/master/sudoku/create.py
ping @Bambade, @jcarpent: this change here is necessary to pass the unit tests defined in dense_backward.cpp, where we compare the derivatives with finite differences up to a precision of 1e-5 (a generic sketch of such a check is given after this commit list)
TODO: consider passing a qplayer settings object
@Bambade, does this make sense to you? Should we also add the mu_backward to the init here?
- thanks to @quentinll for pointing out some issues with these functions
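
One of the commits above compares the QPLayer derivatives against finite differences up to a precision of 1e-5 (see dense_backward.cpp). As a generic illustration of this kind of check, here is a minimal Python sketch (not the actual C++ test):

```python
import numpy as np

def finite_difference_grad(f, theta, eps=1e-6):
    """Central finite-difference gradient of a scalar function f at theta."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (f(theta + e) - f(theta - e)) / (2.0 * eps)
    return grad

# Toy example: check an analytical gradient against finite differences.
f = lambda t: 0.5 * t @ t           # scalar objective
theta = np.array([1.0, -2.0, 0.5])
analytical = theta                  # gradient of 0.5 * t^T t is t
numerical = finite_difference_grad(f, theta)
assert np.allclose(analytical, numerical, atol=1e-5)  # same 1e-5 tolerance as in the tests
```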
@jcarpent merged commit a974571 into Simple-Robotics:devel on Nov 10, 2023.
71 checks passed