
FullSubnetLaplace.sample() doesn't use generator #213

Closed
elcorto opened this issue Jul 31, 2024 · 2 comments · Fixed by #216

elcorto (Contributor) commented Jul 31, 2024

Hi

I was doing some reproducibility tests for pred_type="nn", i.e. testing SomeLaplaceClass._nn_predictive_samples() -> SomeLaplaceClass.sample(), using the current main (508843d).

I found that FullSubnetLaplace.sample() doesn't use the passed generator (ruff check --select ARG laplace/subnetlaplace.py also catches this). The reason is probably that torch.distributions.MultivariateNormal, which is used there, doesn't offer an API for passing in a generator.

I guess it would not be too difficult to rewrite this the way the other sample() methods do it: generate a set of standard normal samples with the generator, reuse them (because of #91), and go from there.
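For illustration, a minimal sketch of what such a rewrite could look like, using the reparameterization x = mean + z @ L^T instead of torch.distributions.MultivariateNormal (the function name and signature here are hypothetical, not the library's actual API):

```python
import torch

def sample_mvn(mean, cov, n_samples, generator=None):
    """Hypothetical sketch: draw MVN samples reproducibly via a generator.

    torch.randn accepts a generator, so we draw standard normal samples
    first and then transform them with the Cholesky factor of cov.
    """
    # Standard normal samples, controlled by the (optional) generator.
    z = torch.randn(n_samples, mean.shape[0], generator=generator)
    # cov = L @ L.T, so mean + z @ L.T has the desired covariance.
    L = torch.linalg.cholesky(cov)
    return mean + z @ L.T

# Same seed, same samples -- which is exactly what the generator is for.
mean, cov = torch.zeros(3), torch.eye(3)
s1 = sample_mvn(mean, cov, 5, generator=torch.Generator().manual_seed(0))
s2 = sample_mvn(mean, cov, 5, generator=torch.Generator().manual_seed(0))
assert torch.allclose(s1, s2)
```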

Thanks.

wiseodd (Collaborator) commented Aug 2, 2024

Hi Steve, thanks for catching this.

I agree, the fix is super simple, as you mentioned. It could probably even be as simple as reusing super().sample().

I have to admit that we don't pay much attention to the SubnetLaplace implementation anymore, since the same can be done more intuitively by switching off the unwanted parameters' grads. The benefit of the latter is that the Jacobian for the linearized predictive is only computed over the selected subset, whereas the former computes the full Jacobian first and then slices it using the subnet indices.
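The grad-switching approach described above could look roughly like this generic sketch (a plain PyTorch illustration, not the laplace library's actual API; the model and the choice of subnetwork are made up):

```python
import torch
import torch.nn as nn

# Hypothetical model; treat only the last layer as the "subnetwork".
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Freeze everything, then re-enable grads only for the chosen subset.
for p in model.parameters():
    p.requires_grad_(False)
for p in model[-1].parameters():
    p.requires_grad_(True)

# Downstream Jacobians / curvature now only involve the selected subset,
# instead of computing over all parameters and slicing afterwards.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)
```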

In any case, happy to accept your PR, as always!

elcorto (Contributor, Author) commented Aug 5, 2024

> Hi Steve, thanks for catching this.
>
> I agree, the fix is super simple as you mentioned. Probably could also be as simple as reusing super().sample().

I put together a fix, please see #216.

> I have to admit that we don't pay too much attention to the SubnetLaplace implementation anymore since the same can be done more intuitively by switching off the unwanted subset's grads. The benefit of the latter is that the Jacobian for the linearized predictive will only be computed over the selected subset. Meanwhile, the former computes the full Jacobian first and then slices it using the subnet indices.

I have some questions about this approach, but that is off-topic for this issue, so I'll open a new one.
