Releases: open-spaced-repetition/fsrs-rs
v0.2.0
What's Changed
- Feat/option for pretrain only by @L-M-Sherlock in #153
Full Changelog: v0.1.0...v0.2.0
v0.1.0
What's Changed
- Feat/weight clipper by @Luc-Mcgrady in #9
- Feat/rewrite FSRS item by @L-M-Sherlock in #8
- Add a check that the code is formatted by @dae in #10
- Run cargo fmt by @dae in #11
- Add clippy to CI checks by @dae in #12
- simplify by @asukaminato0721 in #15
- rewrite test for convertor by @L-M-Sherlock in #14
- Add BCELoss by @L-M-Sherlock in #18
- Simplify Revlog->FSRSItem conversion by @dae in #13
- Sequence padding by @asukaminato0721 in #16
- Fix/feature dimension by @L-M-Sherlock in #26
- add explanation for the sql query by @asukaminato0721 in #23
- Feat/cosine_annealing_lr by @L-M-Sherlock in #25
- Update to the latest Burn and pin dependencies by @dae in #33
- use path join by @asukaminato0721 in #30
- Feat/sort FSRSItem by length to speed up training by @L-M-Sherlock in #32
- Feat/clip weights during training by @L-M-Sherlock in #35
- Freezes weights by @GraesonB in #29
- Expose a public API for training by @dae in #36
- Fix/remove revlogs after manual reset & use const by @L-M-Sherlock in #38
- add test into ci by @asukaminato0721 in #31
- remove reshape() calls by @L-M-Sherlock in #39
- Feat/pre-training for initial stability by @L-M-Sherlock in #42
- Create LICENSE by @L-M-Sherlock in #44
- Feat/insert default point to avoid overfitting with small dataset by @L-M-Sherlock in #46
- Error handling and refactoring by @dae in #47
- Feat/find optimal retention by @L-M-Sherlock in #48
- Use an enum instead of a hashmap for table lookups by @dae in #49
- more izip by @asukaminato0721 in #50
- Feat/evaluation by @L-M-Sherlock in #52
- Update burn and add progress handling/cancellation by @dae in #53
- Feat/tune hype parameters by @L-M-Sherlock in #55
- Feat/support scheduler by @L-M-Sherlock in #58
- Benchmark and API tweaks by @dae in #59
- A couple of minor fixes by @dae in #60
- Some more minor tweaks to API by @dae in #62
- Feat/answer buttons ratio & cost by @L-M-Sherlock in #65
- use tryFrom and some path by @asukaminato0721 in #66
- add more tests for convertor tests by @L-M-Sherlock in #64
- Drop constraint on FloatElem by @dae in #67
- add badge, add tests by @asukaminato0721 in #68
- Update burn by @dae in #69
- Avoid updating memory state when no days elapsed by @dae in #70
- Update burn by @dae in #71
- Feat/extract optimal retention parameters from revlog by @L-M-Sherlock in #73
- Clip user-provided weights by @dae in #74
- remove one mut by @asukaminato0721 in #75
- remove feature dimension by @asukaminato0721 in #76
- polyfill pow by @asukaminato0721 in #77
- memory_state_from_sm2() and next_interval() by @dae in #82
- Feat/outlier filter by @L-M-Sherlock in #80
- Fix/correct lr_scheduler & increase batch_size & lr by @L-M-Sherlock in #83
- Use u32 for inputs by @dae in #85
- Feat/loss_aversion by @L-M-Sherlock in #86
- update burn by @AlexErrant in #90
- Run retention calculation in parallel by @dae in #91
- reduce some dup code by @asukaminato0721 in #92
- Support overriding initial S/D when revlog incomplete by @dae in #93
- Laplace smoothing by @asukaminato0721 in #94
- Fix/BatchShuffledDataLoader by @L-M-Sherlock in #96
- Feat/stratified k fold by @L-M-Sherlock in #95
- Update burn for tracel-ai/burn#839 by @dae in #97
- Update burn for tracel-ai/burn#843 by @dae in #98
- bump toolchain version by @asukaminato0721 in #100
- Factor sm2 retention into memory state calculation by @dae in #101
- Fix panic when training interrupted by @dae in #102
- Correctly track progress with n_splits > 1 by @dae in #104
- Feat/update initial values of S0 by @L-M-Sherlock in #105
- Fix/Use power forgetting curve in memory_state_from_sm2 by @user1823 in #103
- Update burn-rs by @dae in #108
- Feat/early stop by @L-M-Sherlock in #109
- We can now use burn from crates.io by @dae in #110
- Feat/calc SInc based on last d by @L-M-Sherlock in #111
- Clippy nursery by @asukaminato0721 in #112
- Fix/clamp the upper limit of w16 by @L-M-Sherlock in #115
- let-else by @asukaminato0721 in #116
- Doc/rewrite README by @L-M-Sherlock in #120
- Feat/filter outlier in trainset by @L-M-Sherlock in #119
- Feat/speed up finding optimal retention via brent's method by @L-M-Sherlock in #122
- Fix progress monitoring sometimes not terminating by @dae in #123
- Rework progress code for optimal retention by @dae in #124
- Add sanity check for NaN values by @dae in #125
- Dependency updates by @dae in #127
- Feat/update default weights by @L-M-Sherlock in #128
- Increase the sample size.rs by @Expertium in #129
- Use .is_finite() instead of .is_normal() by @dae in #131
- Fix/pretrain default S0 should be from DEFAULT_WEIGHTS by @L-M-Sherlock in #132
- Feat/remove items when retention is 100% & use mode parameters by @L-M-Sherlock in #135
- Feat/flat power forgetting curve by @L-M-Sherlock in #134
- Feat/add an env var to disable outlier filter by @L-M-Sherlock in #137
- Fix/sort init_s0 on generating from r_s0_default by @L-M-Sherlock in #139
- Use array ins...