Add (naive) FFBS algo #20
base: fred/auxiliary-particle-filter
Conversation
This is really nice. Thank you for sharing! I love how the forward filtering code is reused in a completely general way. Once the workshop submission deadline has passed, I'll have a look at how this approach extends to the Kalman smoother and the Rao-Blackwellised FFBS case.
A bit tangential, but this raises the question for me of whether you need to use multinomial resampling on the backward pass, or whether any unbiased resampling method is sufficient (and could lead to lower variance). I'm not aware of that being mentioned in any papers, but I might have just skimmed over it or forgotten.
There's the rejection sampling version of the backward pass if you want to run closer to linear time, but I haven't seen anything on using something else for the exact backward pass. |
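For reference, the exact O(N²) backward step being discussed draws each smoothed state from a categorical (i.e. multinomial-style) distribution over the forward particles. A minimal sketch, where `backward_sample` and `logpdf_dyn` are hypothetical stand-ins for the FFBS backward draw and the model's transition log-density:

```julia
using Random, StatsBase

# One backward draw of the naive FFBS pass: reweight the forward
# particles by the transition density into the already-sampled next
# state, then take a categorical sample.
function backward_sample(rng::AbstractRNG, particles, logw, next_state, logpdf_dyn)
    # backward log-weights: forward log-weight + transition log-density
    lw = logw .+ map(x -> logpdf_dyn(x, next_state), particles)
    w = exp.(lw .- maximum(lw))  # stabilise before exponentiating
    return particles[sample(rng, Weights(w))]
end
```

Any unbiased resampling scheme would replace the `sample(rng, Weights(w))` call, which is where the question above comes in.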
Is it a good idea to have the number of particles be a type parameter? I'm not really sure how specialisation and dispatch work for bits-type parameters, but does this lead to new method definitions for every possible number of particles? Would it be cleaner to define a getter instead?
I don't mind the particle number in the type signature, but I definitely wonder whether this makes a difference at dispatch. Regardless, I really like the addition of
The main issue I have with it is if we ever need a variable number of particles. We can also have both: num_particles(::AbstractParticleFilter{N}) where {N} = N
I must be missing something here. How does having a bit type parameter help with this? |
I mean that the bit type parameter prevents us from having a variable number of particles. I don't think it impacts dispatch.
Ah, I get you now. So is the main purpose of this change just to ensure that all |
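A hypothetical illustration of the two options above, combined as suggested: keep N as a type parameter, but expose it only through a `num_particles` getter so call sites never depend on the parameter directly (`ToyFilter` is an invented name for the example):

```julia
# N lives in the type, so it is known at compile time for specialisation.
abstract type AbstractParticleFilter{N} end

struct ToyFilter{N} <: AbstractParticleFilter{N} end

# The getter recovers N from the type parameter.
num_particles(::AbstractParticleFilter{N}) where {N} = N
```

If a variable particle count is ever needed, only the getter changes: a filter storing the count in a field can overload `num_particles` instead, and call sites are unaffected.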
To build off of the methods @FredericWantiez has been working on, I added a guided filter and an … If we could get this in working order, it would be really great for testing autodiff for uses like VSMC (Naesseth, 2018). I have a few really interesting algorithms in mind, with some very elegant uses of Functors.jl to tune the proposals.
function SSMProblems.distribution(
    model::AbstractStateSpaceModel,
    prop::AbstractProposal,
I don't think we should change the SSMProblems interface. The proposal should be part of the filter interface, maybe something along the lines of:
abstract type AbstractProposal end
abstract type AbstractParticleFilter{N, P<:AbstractProposal} end
struct ParticleFilter{N,RS,P} <: AbstractParticleFilter{N,P}
    resampler::RS
    proposal::P
end
# Default to latent dynamics
struct LatentProposal <: AbstractProposal end
const BootstrapFilter{N,RS} = ParticleFilter{N,RS,LatentProposal}
const BF = BootstrapFilter
function propose(
    rng::AbstractRNG,
    prop::LatentProposal,
    model::AbstractStateSpaceModel,
    particles::ParticleContainer,
    step,
    state,
    obs;
    kwargs...,
)
    return SSMProblems.simulate(rng, model.dyn, step, state; kwargs...)
end
function logdensity(prop::AbstractProposal, ...)
    return SSMProblems.logdensity(...)
end
And we should probably update the filter/predict functions:
function predict(
    rng::AbstractRNG,
    model::StateSpaceModel,
    filter::BootstrapFilter,
    step::Integer,
    states::ParticleContainer{T};
    ref_state::Union{Nothing,AbstractVector{T}}=nothing,
    kwargs...,
) where {T}
    states.proposed, states.ancestors = resample(
        rng, filter.resampler, states.filtered, filter
    )
    states.proposed.particles = map(states.proposed) do state
        propose(rng, filter.proposal, model.dyn, step, state; kwargs...)
    end
    return update_ref!(states, ref_state, filter, step)
end
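To check that the alias idea above holds together, here is a self-contained, hypothetical reduction of the sketch (with `SystematicResampler` as an invented placeholder resampler type): a bootstrap filter is just a ParticleFilter whose proposal is the latent dynamics, so no separate type is needed.

```julia
abstract type AbstractProposal end
struct LatentProposal <: AbstractProposal end

abstract type AbstractParticleFilter{N,P<:AbstractProposal} end

struct ParticleFilter{N,RS,P} <: AbstractParticleFilter{N,P}
    resampler::RS
    proposal::P
end

# A bootstrap filter is a type alias, not a new struct.
const BootstrapFilter{N,RS} = ParticleFilter{N,RS,LatentProposal}

struct SystematicResampler end  # placeholder resampler for the example

bf = ParticleFilter{256,SystematicResampler,LatentProposal}(
    SystematicResampler(), LatentProposal()
)
```

With this layout, methods written against `BootstrapFilter` dispatch on `bf` without any bootstrap-specific struct existing.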
I 100% agree with the BF integration; I was intentionally working my way up to that, but I didn't want to drastically change the interface in the first commit.
And you're totally right about the SSMProblems integration. But it was convenient to recycle the structures.
    φ::Vector{T}
end

# a lot of computations done at each step
Perhaps more pressing, these computations are performed twice: once for the predict step and then again for update. It's not completely clear to me how to get around that, though.
We could potentially compute the proposal distribution before running the predict/update step and pass this in to each step.
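A toy sketch of that suggestion, under the assumption that the shared work is building the proposal distribution once per time step (all names here are invented stand-ins, not the package's API):

```julia
using Random

struct ToyProposalDist
    mean::Float64
end

# The expensive shared computation, done once per time step.
build_proposal(states) = ToyProposalDist(sum(states) / length(states))

# Predict: move particles using the precomputed proposal.
predict_step(rng, states, d::ToyProposalDist) =
    states .+ d.mean .* randn(rng, length(states))

# Update: stand-in log-weights; a real version would also reuse `d`.
update_weights(states, obs, d::ToyProposalDist) = -((states .- obs) .^ 2)

function filter_step(rng, states, obs)
    d = build_proposal(states)               # shared work happens once...
    proposed = predict_step(rng, states, d)  # ...and is reused here
    logw = update_weights(proposed, obs, d)  # ...and here
    return proposed, logw
end
```

The point is only the shape: `build_proposal` runs once and both halves take the result as an argument, instead of each half rebuilding it.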