Online Learning #136
Comments
Hi there @caxelrud 👋🏽! Since both the prior and posterior are Gaussian, I don't see why this couldn't work. Apparently it's been done before, but I'm not familiar with the details: https://proceedings.neurips.cc/paper_files/paper/2018/file/f31b20466ae89669f9741e047487eb37-Paper.pdf
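For context, the scheme in the linked paper (my rough paraphrase, not anything LaplaceRedux implements) treats each Gaussian posterior as the prior for the next fit, so precisions accumulate and the previous mean enters as a quadratic penalty:

$$
p(\theta \mid \mathcal{D}_{1:t}) \approx \mathcal{N}\big(\theta;\, \mu_t,\, \Lambda_t^{-1}\big), \qquad
\Lambda_t = \Lambda_{t-1} + H_t,
$$

$$
\mu_t = \arg\min_\theta \; \ell_t(\theta) + \tfrac{1}{2}\,(\theta - \mu_{t-1})^\top \Lambda_{t-1}\, (\theta - \mu_{t-1}),
$$

where $\ell_t$ is the loss on the new data and $H_t$ is its (e.g. generalized Gauss-Newton) Hessian approximation at $\mu_t$.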
Hi!
Hi,
The LaplaceRedux.Prior type defines prior_mean as a scalar, so I can't use the posterior_mean vector as the new prior mean.
You can still use the posterior mean as a prior, of course, by using it as a regularizer when training on new data: a Gaussian posterior acting as your Gaussian prior is equivalent to training with weight decay (see Daxberger (2021) and also here and here). The standard Ridge penalty in Flux corresponds to a zero-mean prior, but that should be straightforward to adjust in your code. This way the posterior mean will act as a prior affecting your MAP estimate when training on new data.

As for using the posterior precision matrix as your new prior, it's worth noting that …

Please be aware of some limitations here: our package was never designed to be a training framework. It merely ships the functionality for fitting the LA to neural networks trained in Flux in a post-hoc fashion. I should also flag that my own research is in a different field, so I'm by no means an expert on the LA; I'm just brainstorming my thoughts on your problem setup here.
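To make the weight-decay point concrete, here is a minimal sketch, not the LaplaceRedux API: the names `μ₀` and `P₀` and the toy data are stand-ins I'm introducing for illustration. It retrains a Flux model with the penalty ½ (θ − μ₀)ᵀ P₀ (θ − μ₀), which reduces to ordinary weight decay when μ₀ = 0 and P₀ = λI:

```julia
# Minimal sketch: retrain on new data with a Gaussian prior centred at
# the old posterior mean. μ₀ and P₀ are placeholders for the mean and
# precision you would extract from a previous Laplace fit.
using Flux, LinearAlgebra
using Flux: destructure

model = Chain(Dense(2 => 16, relu), Dense(16 => 1))
θ, restructure = destructure(model)            # flat parameter vector + rebuilder

μ₀ = copy(θ)                                   # stand-in: old posterior mean
P₀ = Matrix{Float32}(I, length(θ), length(θ))  # stand-in: old posterior precision

# Penalised loss: data fit plus ½ (θ − μ₀)ᵀ P₀ (θ − μ₀).
function penalised_loss(θ, x, y)
    m = restructure(θ)                         # rebuild the model from the flat vector
    δ = θ .- μ₀
    return Flux.mse(m(x), y) + 0.5f0 * dot(δ, P₀ * δ)
end

# Toy "new" data, just to make the sketch runnable.
X = randn(Float32, 2, 64)
Y = randn(Float32, 1, 64)

opt = Flux.setup(Adam(1f-3), θ)
for epoch in 1:100
    g = Flux.gradient(θ -> penalised_loss(θ, X, Y), θ)[1]
    Flux.update!(opt, θ, g)
end
```

Swapping the identity stand-in for the posterior precision of the previous fit is what would turn this plain regularized MAP training into the online-Laplace-style update sketched above.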
Is it possible to do online learning with LaplaceRedux?
In other words, is it possible to use a previously calculated posterior as the prior for a new evaluation?