Describe the requested improvement
In their paper on L-TAE (https://arxiv.org/pdf/2007.00586.pdf), the authors describe a method called the "attention mask", which aims at locating the discriminant sources of data for each class.
Associated sits API function
Is there a way to implement this approach in sits?
The main difference between the TAE and LightTAE methods proposed by Garnot et al. is the use of an averaged attention mask to reduce the number of parameters in the model. Both TAE and LightTAE are available in sits through the sits_tae() and sits_lighttae() functions.
To implement sits_lighttae(), we used the papers by Vivien Garnot and the reference code at https://github.com/VSainteuf/lightweight-temporal-attention-pytorch.
We also used the code made available by Maja Schneider at https://github.com/maja601/RC2020-psetae.
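For reference, here is a minimal sketch of how the two functions are typically called. It is not part of the original exchange; it assumes the samples_modis_ndvi dataset bundled with recent sits releases, default hyperparameters, and a working torch backend.

```r
# Load the sits package (with torch support installed)
library(sits)

# samples_modis_ndvi is a labelled time-series tibble shipped with sits;
# replace it with your own training samples
samples <- samples_modis_ndvi

# Train a classifier with the Temporal Attention Encoder (TAE)
tae_model <- sits_train(samples, ml_method = sits_tae())

# Train a classifier with the Lightweight TAE (reduced parameter count)
ltae_model <- sits_train(samples, ml_method = sits_lighttae())

# Inspect the training history of the Light-TAE model
plot(ltae_model)
```

The per-class attention masks discussed in the paper are internal to the trained torch model and are not exposed through a dedicated sits function, which is what this issue asks about.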