
Add (naive) FFBS algo #20

Draft
wants to merge 13 commits into base: fred/auxiliary-particle-filter
Conversation

FredericWantiez
Member

No description provided.

@FredericWantiez force-pushed the fred/auxiliary-particle-filter branch from 0fadd2b to c791095 on October 27, 2024 at 12:33
@THargreaves
Collaborator

This is really nice. Thank you for sharing!

I love how the forward filtering code is reused in a completely general way.

Once the workshop submission deadline has passed, I'll have a look at how this approach extends to the Kalman smoother and Rao-Blackwellised FFBS case.

@THargreaves
Collaborator

A bit tangential, but this has raised the question for me whether you need to use multinomial resampling on the backward pass or whether any unbiased resampling method is sufficient (and could lead to lower variance). I'm not aware of that being mentioned in any papers but I might have just skimmed over it/forgotten.

@FredericWantiez
Member Author

There's the rejection sampling version of the backward pass if you want to run closer to linear time, but I haven't seen anything on using something else for the exact backward pass.
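For concreteness, the exact backward pass under discussion can be sketched as follows. This is a hedged illustration, not code from this PR: `particles[t]`, `logws[t]`, and `logpdf_dyn` are assumed placeholder names for the filtered particles, their log-weights, and the transition log-density.

```julia
using Random

# Multinomial draw from normalized weights via inverse-CDF (placeholder helper)
function sample_index(rng::AbstractRNG, w::AbstractVector)
    u, c = rand(rng), 0.0
    for (i, wi) in enumerate(w)
        c += wi
        u <= c && return i
    end
    return length(w)
end

# Normalize log-weights stably
normalize_logweights(lw) = (w = exp.(lw .- maximum(lw)); w ./ sum(w))

# Naive FFBS backward pass: sample one smoothing trajectory given the
# forward-filtering output (O(N) per time step per trajectory)
function backward_sample(rng::AbstractRNG, particles, logws, logpdf_dyn)
    T = length(particles)
    traj = similar(particles[1], T)
    # Sample the endpoint from the time-T filtering distribution
    traj[T] = particles[T][sample_index(rng, normalize_logweights(logws[T]))]
    for t in (T - 1):-1:1
        # Reweight time-t particles by the transition density to the sampled successor
        lw = [logws[t][i] + logpdf_dyn(t, particles[t][i], traj[t + 1])
              for i in eachindex(particles[t])]
        traj[t] = particles[t][sample_index(rng, normalize_logweights(lw))]
    end
    return traj
end
```

The per-step cost of the exact backward weights is what motivates the rejection-sampling variant mentioned above.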

@THargreaves
Collaborator

Is it a good idea to have the number of particles be a type parameter?

I'm not really sure how specialisation and dispatch work for bits types, but does this lead to new method definitions for every possible number of particles?

Would it be cleaner to define a getter num_particles(filter::AbstractParticleFilter) that all subtypes must implement?

@charlesknipp
Collaborator

I'm not really sure how specialisation and dispatch work for bits types, but does this lead to new method definitions for every possible number of particles?

I don't mind the particle num in the type signature, but I definitely wonder whether this makes a difference at dispatch. Regardless I really like the addition of AbstractParticleFilter.

@FredericWantiez
Member Author

FredericWantiez commented Nov 6, 2024

The main issue I have with it is if we ever need a variable number of particles.

We can also have both:

 num_particles(::AbstractParticleFilter{N}) where {N} = N
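For illustration (`ToyFilter` is a made-up subtype, not part of this PR), the getter recovers `N` from the type parameter with no stored field, so both approaches can coexist:

```julia
abstract type AbstractParticleFilter{N} end

# Generic getter: N is recovered from the type parameter, so subtypes
# need not implement anything beyond carrying the parameter
num_particles(::AbstractParticleFilter{N}) where {N} = N

# Hypothetical subtype used only to demonstrate the getter
struct ToyFilter{N} <: AbstractParticleFilter{N} end

num_particles(ToyFilter{128}())  # 128
```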

@THargreaves
Collaborator

The main issue I have with it is if we ever need a variable number of particles.

I must be missing something here. How does having a bit type parameter help with this?

@FredericWantiez
Member Author

FredericWantiez commented Nov 7, 2024

I mean the bits type parameter prevents us from having a variable number of particles. I don't think it impacts dispatch.

@THargreaves
Collaborator

Ah, I get you now. So is the main purpose of this change just to ensure that all AbstractParticleFilter subtypes have this same parameter (rather than relying on an interface definition)?

@charlesknipp
Collaborator

To build off of the methods @FredericWantiez has been working on, I added a guided filter and an AbstractProposal interface. The unit test is not passing since I haven't settled on a good test case. Feel free to make changes if you spot any mistakes or revert this commit if it causes issues.

If we could get this in working order, it would be really great for testing auto diff for uses like VSMC (Naesseth, 2018). I have a few really interesting algorithms in mind, with some very elegant uses of Functors.jl to tune the proposals.


function SSMProblems.distribution(
    model::AbstractStateSpaceModel,
    prop::AbstractProposal,
Member Author

I don't think we should change the SSMProblems interface. The proposal should be part of the filter interface, maybe something along the lines of:

abstract type AbstractProposal end

abstract type AbstractParticleFilter{N, P<:AbstractProposal} end 

struct ParticleFilter{N,RS,P} <: AbstractParticleFilter{N,P}
    resampler::RS
    proposal::P
end

# Default to latent dynamics
struct LatentProposal <: AbstractProposal end

const BootstrapFilter{N,RS} = ParticleFilter{N,RS,LatentProposal}
const BF = BootstrapFilter

function propose(
    rng::AbstractRNG, 
    prop::LatentProposal, 
    model::AbstractStateSpaceModel, 
    particles::ParticleContainer, 
    step, 
    state, 
    obs; 
    kwargs...
)
    return SSMProblems.simulate(rng, model.dyn, step, state; kwargs...)
end

function logdensity(prop::AbstractProposal, ...)
   return SSMProblems.logdensity(...)
end

And we should probably update the filter/predict functions:

function predict(
    rng::AbstractRNG,
    model::StateSpaceModel,
    filter::BootstrapFilter,
    step::Integer,
    states::ParticleContainer{T};
    ref_state::Union{Nothing,AbstractVector{T}}=nothing,
    kwargs...,
) where {T}
    states.proposed, states.ancestors = resample(
        rng, filter.resampler, states.filtered, filter
    )
    states.proposed.particles = map(states.proposed) do state
        propose(rng, filter.proposal, model.dyn, step, state; kwargs...)
    end

    return update_ref!(states, ref_state, filter, step)
end

Collaborator

I 100% agree with the BF integration, I was intentionally working my way up to that, but didn't want to drastically change the interface upon the first commit.

And you're totally right about the SSMProblems integration. But it was convenient to recycle the structures.

φ::Vector{T}
end

# a lot of computations done at each step
Collaborator

Perhaps more pressing: these computations are performed twice, once for the predict step and then again for the update. It's not completely clear to me how to get around that, though.

We could potentially compute the proposal distribution before running the predict/update step and pass this in to each step.
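A minimal sketch of that idea, with toy stand-ins (`ToyProposal`, `proposal_params`, and both stage functions are hypothetical, not from this PR): the shared quantity is computed once per step and passed to both stages instead of being recomputed in each.

```julia
# Hypothetical proposal carrying tuning parameters φ
struct ToyProposal
    φ::Vector{Float64}
end

# Stand-in for the expensive shared computation, done once per step
proposal_params(prop::ToyProposal, t) = (sum(prop.φ) / length(prop.φ), 1.0)

# Both stages receive the precomputed parameters rather than rebuilding them
predict_stage(params, state) = state + params[1]
update_stage(params, state, obs) = -0.5 * (obs - state)^2 / params[2]

function step(prop, t, state, obs)
    params = proposal_params(prop, t)      # computed once
    proposed = predict_stage(params, state)
    logw = update_stage(params, proposed, obs)
    return proposed, logw
end
```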

3 participants