MLDataDevices-v1.5.2
Merged pull requests:
- Rewrite (#7) (@avik-pal)
- Rename to Lux (#11) (@avik-pal)
- Initial Documentation (#14) (@avik-pal)
- Minor Updates (#15) (@avik-pal)
- Better CUDNN Dispatches (#16) (@avik-pal)
- Tutorials (#21) (@avik-pal)
- Proper dispatch for types not supported by CUDNN (#23) (@avik-pal)
- [WIP] Recurrent Neural Networks (#24) (@avik-pal)
- Fix math display in docs (#27) (@gdalle)
- Initial ViT Implementation & Pretrained ImageNet Models (#29) (@avik-pal)
- CompatHelper: bump compat for Setfield to 1, (keep existing compat) (#30) (@github-actions[bot])
- Code Formatting -- SciMLStyle (#31) (@avik-pal)
- Cleanup generated function style (#33) (@avik-pal)
- Update README.md (#37) (@zsz00)
- Fix doc for `PairwiseFusion` (#39) (@theabhirath)
- Extending `Scale` to allow for multiple dimension inputs (#40) (@theabhirath)
- Fix Zygote error caused due to `fill!` (#41) (@theabhirath)
- CompatHelper: bump compat for ComponentArrays to 0.12, (keep existing compat) (#43) (@github-actions[bot])
- Update JET tests to allow julia v1.6 (#47) (@avik-pal)
- Formatting updates and relax parameter type (#48) (@avik-pal)
- Enable doctests in CI (#51) (@avik-pal)
- fix quickstart example (#52) (@visr)
- Test on 1.8 (#54) (@avik-pal)
- Separate out testing unreleased julia versions (#55) (@avik-pal)
- Cleaner and Better Documentation (#56) (@avik-pal)
- Bump Pkg Compats (#66) (@avik-pal)
- CompatHelper: bump compat for MLDatasets to 0.7 for package examples, (keep existing compat) (#67) (@github-actions[bot])
- Manual to translate Flux to Lux (#69) (@avik-pal)
- Try codecov for doctests (#70) (@avik-pal)
- Add tests for utility functions (#74) (@avik-pal)
- Add tip to install packages (#76) (@Karthik-d-k)
- More Testing + Deprecate Nonsensical Functions + Better Naming for Kwargs (#80) (@avik-pal)
- CompatHelper: add new compat entry for Optimisers at version 0.2, (keep existing compat) (#82) (@github-actions[bot])
- Update rrules so that we can support Yota (#85) (@avik-pal)
- CompatHelper: bump compat for FluxMPI to 0.6 for package examples, (keep existing compat) (#86) (@github-actions[bot])
- Update comparison section in overview.md (#88) (@ToucheSir)
- Fix typos (#89) (@claforte)
- Fix minor typos in the docs (#93) (@gabrevaya)
- making x Float32 in migrate from Flux example (#97) (@gabrevaya)
- add init_hidden_state function (#101) (@gabrevaya)
- JLArray is now registered (#103) (@YichengDWu)
- [LuxTraining] Wrappers for less clunky training loops (#104) (@avik-pal)
- Use OneHotArrays (#105) (@YichengDWu)
- Fixes WeightNorm with zero Parameter bug (#106) (@avik-pal)
- fix state update in NeuralODE example (#107) (@gabrevaya)
- Deprecate `elementwise_*` and `applyactivation` (#113) (@avik-pal)
- Go through the dense bias deprecation (#114) (@avik-pal)
- Fix Scale's paramlength (#116) (@lungd)
- Trainable hidden states (#117) (@lungd)
- Rnn bias deprecation (#120) (@lungd)
- Add use_bias kwarg to LSTMCell and GRUCell (#121) (@lungd)
- Update docs for dense layer (#124) (@avik-pal)
- Upper bound ComponentArrays (#125) (@avik-pal)
- Relax ComponentArrays compat (#126) (@avik-pal)
- Layer Normalization Implementation (#127) (@avik-pal)
- LSTM docs: don't go over first element in sequence twice (#132) (@visr)
- fix PairwiseFusion docs (#133) (@YichengDWu)
- Generic recurrent cells (#136) (@jumerckx)
- relu tests with finite diff is too unreliable (#137) (@avik-pal)
- Add kaiming initialization (#138) (@YichengDWu)
- Remove Val in typeinfo of WeightNorm (#140) (@avik-pal)
- Named Layers inside Generic Containers (#143) (@avik-pal)
- Allow fmapping over the model (#144) (@avik-pal)
- Update Imagenet example (#147) (@avik-pal)
- Make normalization more AD friendly (Diffractor) (#148) (@avik-pal)
- Fix CuArray -> Array rrule (#149) (@avik-pal)
- Allow indexing into Chains (#150) (@avik-pal)
- API for freezing layers (#151) (@avik-pal)
- Allow controlling fast activation transformation (#153) (@avik-pal)
- Introducing LuxLib.jl: Effectively pullout some of the custom layer implementations from Lux.jl (#154) (@avik-pal)
- Try relaxing JET version (#155) (@avik-pal)
- Update to use LuxLib (#156) (@avik-pal)
- Allow dispatch using `Lux.apply` (#158) (@avik-pal)
- Mark non differentiable code paths (#160) (@avik-pal)
- Fix generic GN dispatch for non 4D arrays (#161) (@avik-pal)
- Add dispatch for subarray (#162) (@avik-pal)
- Add More Layers (#163) (@avik-pal)
- Fix type stability in normalization implementation (#164) (@avik-pal)
- Codecov for lib directories Take 2 (#165) (@avik-pal)
- Add freeze tests to runtests (#166) (@avik-pal)
- Precompile common workflows + check invalidations (#167) (@avik-pal)
- Make normalization typestable (#168) (@avik-pal)
- Add a manual page on precompilation (#169) (@avik-pal)
- Deprecate Lux.transform in favor of Flux2Lux.jl (#170) (@avik-pal)
- Remove dead code and improve var for Tracker.jl support (#171) (@avik-pal)
- Hyper Network Example (#172) (@avik-pal)
- Modify mkdocs settings (#173) (@avik-pal)
- Make ViT work on GPUs (#174) (@avik-pal)
- Add sensible recurrent layer wrappers (#175) (@avik-pal)
- `setup` only on AbstractRules (#176) (@avik-pal)
- Start using Flux2Lux (#177) (@avik-pal)
- Fix some displays (#178) (@avik-pal)
- Relax dropout types (#179) (@avik-pal)
- Add instancenorm and alpha_dropout implementations (#180) (@avik-pal)
- Add InstanceNorm and AlphaDropout (#181) (@avik-pal)
- CompatHelper: bump compat for MLUtils to 0.3 for package examples, (keep existing compat) (#184) (@github-actions[bot])
- remove convert rrule (#185) (@ArnoStrouwen)
- CompatHelper: bump compat for OneHotArrays to 0.2 for package examples, (keep existing compat) (#186) (@github-actions[bot])
- CompatHelper: bump compat for Turing to 0.22 for package examples, (keep existing compat) (#188) (@github-actions[bot])
- Fix layer_map for custom layers (#189) (@avik-pal)
- add example of DDIM implementation (#190) (@yng87)
- LuxCore.jl: Extremely light dependency for Lux Compatibility (#191) (@avik-pal)
- Revert github workflows for merged LuxCore.jl (#193) (@avik-pal)
- CompatHelper: bump compat for MLUtils to 0.3 for package ImageNet, (keep existing compat) (#194) (@github-actions[bot])
- CompatHelper: bump compat for Setfield to 1 for package ImageNet, (keep existing compat) (#195) (@github-actions[bot])
- CompatHelper: bump compat for OneHotArrays to 0.2 for package ImageNet, (keep existing compat) (#196) (@github-actions[bot])
- ADAM -> Adam (#197) (@cossio)
- CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (#199) (@github-actions[bot])
- CompatHelper: bump compat for Functors to 0.4 for package examples, (keep existing compat) (#200) (@github-actions[bot])
- CompatHelper: bump compat for Functors to 0.4 for package ImageNet, (keep existing compat) (#201) (@github-actions[bot])
- Add easy tied weights/parameter sharing support (#202) (@avik-pal)
- CompatHelper: bump compat for Functors to 0.4 for package LuxCore, (keep existing compat) (#203) (@github-actions[bot])
- CompatHelper: add new compat entry for Zygote at version 0.6 for package DDIM, (keep existing compat) (#218) (@github-actions[bot])
- Update DDIM compat requirements (#219) (@avik-pal)
- Update examples (#221) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.23 for package examples, (keep existing compat) (#222) (@github-actions[bot])
- Fix docs (#223) (@avik-pal)
- CompatHelper: bump compat for MLUtils to 0.4 for package examples, (keep existing compat) (#226) (@github-actions[bot])
- CompatHelper: bump compat for MLUtils to 0.4 for package ImageNet, (keep existing compat) (#227) (@github-actions[bot])
- CompatHelper: bump compat for MLUtils to 0.4 for package DDIM, (keep existing compat) (#228) (@github-actions[bot])
- Functor ambiguity fix (#229) (@avik-pal)
- Add all compats together (#238) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.24 for package examples, (keep existing compat) (#241) (@github-actions[bot])
- CompatHelper: bump compat for JET to 0.7 for package test, (keep existing compat) (#251) (@github-actions[bot])
- [WIP] Use Extensions for Flux2Lux (#261) (@avik-pal)
- Cleaner test workflow (#262) (@avik-pal)
- Add a patch for #243 (#263) (@avik-pal)
- Update LuxLib dependencies (#265) (@avik-pal)
- Dropping Julia 1.6 support for Lux (#266) (@avik-pal)
- Purge unnecessary dependencies into weak dependencies (#267) (@avik-pal)
- Add ForwardDiff Extension: Dropout (#269) (@avik-pal)
- Add Tracker as an Extension (#272) (@avik-pal)
- CompatHelper: bump compat for AbstractDifferentiation to 0.5 for package examples, (keep existing compat) (#273) (@github-actions[bot])
- Some Improvements (#274) (@avik-pal)
- Tracker has some of the rules (#275) (@avik-pal)
- Temporary CA + Tracker Patches (#276) (@avik-pal)
- Add CUDA and AMDGPU trigger packages (#277) (@avik-pal)
- ReverseDiff Extension (#280) (@avik-pal)
- Bump peter-evans/create-pull-request from 3 to 4 (#283) (@dependabot[bot])
- Bump actions/cache from 1 to 3 (#284) (@dependabot[bot])
- Bump actions/checkout from 1 to 3 (#285) (@dependabot[bot])
- Return the history for Recurrence (#287) (@avik-pal)
- Truncate tuples and namedtuples (#290) (@avik-pal)
- [WIP] Remove projects from `lib` to `LuxDL` (#291) (@avik-pal)
- Patch freeze (#292) (@avik-pal)
- Add dispatch for no activation (#293) (@avik-pal)
- Remove weakdeps from deps (#295) (@avik-pal)
- Try restoring lts support (#296) (@avik-pal)
- Testing using LuxTestUtils.jl (#297) (@avik-pal)
- CompatHelper: bump compat for Boltz to 0.2 for package ImageNet, (kee… (#298) (@avik-pal)
- Bump peter-evans/create-pull-request from 4 to 5 (#299) (@dependabot[bot])
- remove Dataloaders (#300) (@avik-pal)
- Update docs (#301) (@avik-pal)
- Fix bug in recurrence ordering (#303) (@avik-pal)
- Update LuxComponentArraysExt.jl (#304) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.25 for package examples, (keep existing compat) (#306) (@github-actions[bot])
- propertynames of CA from type (#307) (@avik-pal)
- Fix GRUCell docstring (#309) (@andreuvall)
- Fix enzyme doc to reflect custom rules (#310) (@wsmoses)
- Fixed link to sciml book in NeuralODE example (#311) (@MartinuzziFrancesco)
- Move documentation build to buildkite (#314) (@avik-pal)
- Fixed Boltz.jl link in docs (#316) (@MartinuzziFrancesco)
- Allow container layers to have custom names (#317) (@avik-pal)
- Small grammar and style fixes (#318) (@MartinuzziFrancesco)
- Added `__apply_activation` to `RNNCell`s (#319) (@MartinuzziFrancesco)
- Added `AbstractRecurrentCell` (#322) (@MartinuzziFrancesco)
- Towards v0.5 [Take II] (#323) (@avik-pal)
- Fix errors in applying bilinear layer to ND arrays (#333) (@vpuri3)
- Use WeightInitializers.jl (#334) (@avik-pal)
- Use PackageExtensionCompat (#335) (@avik-pal)
- CompatHelper: add new compat entry for LuxCUDA at version 0.1 for package ImageNet, (keep existing compat) (#337) (@github-actions[bot])
- CompatHelper: add new compat entry for LuxAMDGPU at version 0.1 for package ImageNet, (keep existing compat) (#338) (@github-actions[bot])
- Basic 2nd order support (#339) (@avik-pal)
- Use LuxLib 0.3 (#340) (@avik-pal)
- Workaround cjdoris/PackageExtensionCompat.jl#9 (#344) (@avik-pal)
- Merge pull request #344 from LuxDL/ap/lux0.4 (#346) (@avik-pal)
- Fixes for compat (#350) (@avik-pal)
- Fix ext docs (#351) (@avik-pal)
- Allow modifying ordering of data for recurrence (#353) (@avik-pal)
- CompatHelper: bump compat for ComponentArrays to 0.14 for package examples, (keep existing compat) (#355) (@github-actions[bot])
- Fix AMDGPU tests and versions (#356) (@avik-pal)
- Clean up the codebase (#357) (@avik-pal)
- Add example on how to save the models (#358) (@avik-pal)
- DOCFIX: LayerNorm's affine default value was incorrectly noted as 'false' in doc. (#359) (@srikumarks)
- CompatHelper: bump compat for Lux to 0.5 for package ImageNet, (keep existing compat) (#362) (@github-actions[bot])
- CompatHelper: bump compat for Lux to 0.5 for package DDIM, (keep existing compat) (#363) (@github-actions[bot])
- CompatHelper: bump compat for Images to 0.26 for package ImageNet, (keep existing compat) (#365) (@github-actions[bot])
- CompatHelper: bump compat for Images to 0.26 for package DDIM, (keep existing compat) (#366) (@github-actions[bot])
- Fix url link to Deep learning with Flux tutorial (#367) (@pnavaro)
- CompatHelper: bump compat for Turing to 0.27 for package examples, (keep existing compat) (#368) (@github-actions[bot])
- CompatHelper: bump compat for Turing to 0.28 for package examples, (keep existing compat) (#372) (@github-actions[bot])
- Boltz Link was not working, updated (#373) (@ashwani-rathee)
- Formatting fix (#379) (@avik-pal)
- CompatHelper: bump compat for ADTypes to 0.2, (keep existing compat) (#380) (@github-actions[bot])
- Move experimental code to Experimental (#381) (@avik-pal)
- CompatHelper: bump compat for Boltz to 0.3 for package ImageNet, (keep existing compat) (#382) (@github-actions[bot])
- Migrate Docs to using Vitepress (#383) (@avik-pal)
- Add Potential CUDA Grouped Conv segfault test (#388) (@avik-pal)
- Add Tutorial on modeling gravitational waveforms (#389) (@avik-pal)
- CompatHelper: bump compat for Optimisers to 0.3, (keep existing compat) (#390) (@github-actions[bot])
- CompatHelper: add new compat entry for CSV at version 0.10 for package examples, (keep existing compat) (#391) (@github-actions[bot])
- CompatHelper: add new compat entry for Optimization at version 3 for package examples, (keep existing compat) (#392) (@github-actions[bot])
- CompatHelper: bump compat for Optimisers to 0.3 for package examples, (keep existing compat) (#393) (@github-actions[bot])
- CompatHelper: add new compat entry for LineSearches at version 7 for package examples, (keep existing compat) (#394) (@github-actions[bot])
- CompatHelper: add new compat entry for OptimizationOptimJL at version 0.1 for package examples, (keep existing compat) (#395) (@github-actions[bot])
- CompatHelper: bump compat for Optimisers to 0.3 for package ImageNet, (keep existing compat) (#396) (@github-actions[bot])
- CompatHelper: bump compat for Optimisers to 0.3 for package DDIM, (keep existing compat) (#397) (@github-actions[bot])
- Restructure for autosidebar (#398) (@avik-pal)
- Use separate Project and Manifest files (#399) (@avik-pal)
- Use separate processes to generate the tutorials (#400) (@avik-pal)
- Add f16, f32, f64 functions for easy parameter eltype conversions (#401) (@avik-pal)
- Add a `@debug_mode` for debugging NaNs and Errors (#402) (@avik-pal)
- Add a stateful layer which prevents boxing in SciML Layers (#404) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.29 for package BayesianNN, (keep existing compat) (#405) (@github-actions[bot])
- CompatHelper: bump compat for ComponentArrays to 0.15 for package Basics, (keep existing compat) (#408) (@github-actions[bot])
- CompatHelper: bump compat for ComponentArrays to 0.15 for package GravitationalWaveForm, (keep existing compat) (#409) (@github-actions[bot])
- CompatHelper: bump compat for ComponentArrays to 0.15 for package HyperNet, (keep existing compat) (#410) (@github-actions[bot])
- CompatHelper: bump compat for ComponentArrays to 0.15 for package NeuralODE, (keep existing compat) (#411) (@github-actions[bot])
- Bump actions/checkout from 3 to 4 (#412) (@dependabot[bot])
- Change Mean to Max Pooling layer in docstring [skip ci] (#413) (@roflmaostc)
- Upstream CA patches for AD Packages (#414) (@avik-pal)
- docs: fix the ecosystem link (#419) (@sathvikbhagavan)
- GPU Downstream testing (#421) (@avik-pal)
- Neural PDE downstream (#422) (@avik-pal)
- Minor Fixes (#425) (@avik-pal)
- Ensure ReverseDiff and Gauss Adjoint is also tested (#431) (@avik-pal)
- CompatHelper: bump compat for LuxAMDGPU to 0.2 for package DDIM, (keep existing compat) (#433) (@github-actions[bot])
- CompatHelper: bump compat for LuxAMDGPU to 0.2 for package GravitationalWaveForm, (keep existing compat) (#434) (@github-actions[bot])
- CompatHelper: bump compat for LuxAMDGPU to 0.2 for package HyperNet, (keep existing compat) (#435) (@github-actions[bot])
- CompatHelper: bump compat for LuxAMDGPU to 0.2 for package ImageNet, (keep existing compat) (#436) (@github-actions[bot])
- CompatHelper: bump compat for LuxAMDGPU to 0.2 for package NeuralODE, (keep existing compat) (#437) (@github-actions[bot])
- CompatHelper: bump compat for LuxAMDGPU to 0.2 for package PolynomialFitting, (keep existing compat) (#438) (@github-actions[bot])
- CompatHelper: bump compat for LuxAMDGPU to 0.2 for package SimpleRNN, (keep existing compat) (#439) (@github-actions[bot])
- Update Project.toml (#440) (@avik-pal)
- Emergency patch the ChainRules bug for Vector of CuArrays (#442) (@avik-pal)
- CompatHelper: add new compat entry for Statistics at version 1, (keep existing compat) (#443) (@github-actions[bot])
- CompatHelper: add new compat entry for Statistics at version 1 for package DDIM, (keep existing compat) (#444) (@github-actions[bot])
- CompatHelper: add new compat entry for Statistics at version 1 for package HyperNet, (keep existing compat) (#445) (@github-actions[bot])
- CompatHelper: add new compat entry for Statistics at version 1 for package ImageNet, (keep existing compat) (#446) (@github-actions[bot])
- CompatHelper: add new compat entry for Statistics at version 1 for package NeuralODE, (keep existing compat) (#447) (@github-actions[bot])
- CompatHelper: add new compat entry for Statistics at version 1 for package PolynomialFitting, (keep existing compat) (#448) (@github-actions[bot])
- CompatHelper: add new compat entry for Statistics at version 1 for package SimpleRNN, (keep existing compat) (#449) (@github-actions[bot])
- Add periodic padding to documentation (#452) (@maximilian-gelbrecht)
- Fix link to documentation in README.md (#454) (@pierre-haessig)
- Add CA test for Nested AutoDiff (#458) (@avik-pal)
- CompatHelper: bump compat for CairoMakie to 0.11 for package BayesianNN, (keep existing compat) (#459) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.11 for package GravitationalWaveForm, (keep existing compat) (#460) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.11 for package PolynomialFitting, (keep existing compat) (#461) (@github-actions[bot])
- Update WeightInitializers documentation (#465) (@avik-pal)
- Allow dispatch on compact layers and use let blocks for faster closures (#466) (@avik-pal)
- Add a RepeatedLayer (#467) (@avik-pal)
- Fix check (#469) (@avik-pal)
- CompatHelper: bump compat for Adapt to 4, (keep existing compat) (#470) (@github-actions[bot])
- Patch Metal Recurrent Neural Networks (#474) (@avik-pal)
- Bump actions/cache from 3 to 4 (#479) (@dependabot[bot])
- Bump codecov/codecov-action from 3 to 4 (#484) (@dependabot[bot])
- Bump peter-evans/create-pull-request from 5 to 6 (#485) (@dependabot[bot])
- Drop 1.6 support + Patches to Fix Tests (#487) (@avik-pal)
- Remove extensions in favor of GPUArraysCore (#488) (@avik-pal)
- Parallel Testing + Distributed Docs build (#490) (@avik-pal)
- Add output lengths for layers (#491) (@SebastianM-C)
- Format code (#493) (@avik-pal)
- Try using DocumenterVitepress.jl (#496) (@avik-pal)
- Move Stateful lux layer out of experimental (#497) (@avik-pal)
- Inbuilt-Distributed Setup (#500) (@avik-pal)
- Remove ComponentArrays type-piracies (#501) (@avik-pal)
- Add `outputsize` for `Chain` (#503) (@SebastianM-C)
- fixes ImageNet, SimpleRNN examples (#504) (@avik-pal)
- Documentation Fixes (#505) (@avik-pal)
- Fix tutorial numbering (#509) (@avik-pal)
- CompatHelper: add new compat entry for LuxAMDGPU at version 0.2 for package Basics, (keep existing compat) (#510) (@github-actions[bot])
- CompatHelper: add new compat entry for Metalhead at version 0.9 for package ImageNet, (keep existing compat) (#511) (@github-actions[bot])
- CompatHelper: add new compat entry for Flux at version 0.14 for package ImageNet, (keep existing compat) (#512) (@github-actions[bot])
- Patches (#519) (@avik-pal)
- Docs Again (#520) (@avik-pal)
- General Quality of Life Enhancements (#521) (@avik-pal)
- CompatHelper: add new compat entry for Literate at version 2 for package Basics, (keep existing compat) (#522) (@github-actions[bot])
- CompatHelper: add new compat entry for Literate at version 2 for package BayesianNN, (keep existing compat) (#523) (@github-actions[bot])
- CompatHelper: add new compat entry for Literate at version 2 for package GravitationalWaveForm, (keep existing compat) (#524) (@github-actions[bot])
- CompatHelper: add new compat entry for Literate at version 2 for package HyperNet, (keep existing compat) (#525) (@github-actions[bot])
- CompatHelper: add new compat entry for Literate at version 2 for package NeuralODE, (keep existing compat) (#526) (@github-actions[bot])
- CompatHelper: add new compat entry for Literate at version 2 for package PolynomialFitting, (keep existing compat) (#527) (@github-actions[bot])
- CompatHelper: add new compat entry for Literate at version 2 for package SimpleRNN, (keep existing compat) (#528) (@github-actions[bot])
- New Interface to switch between frameworks (#529) (@avik-pal)
- CompatHelper: add new compat entry for MLUtils at version 0.4 for package SimpleChains, (keep existing compat) (#530) (@github-actions[bot])
- Move replicate to LuxCore (#532) (@MartinuzziFrancesco)
- Test for implicit imports (#533) (@avik-pal)
- Fix #534 (#535) (@avik-pal)
- Fix `Dense` documentation (#539) (@Sleort)
- Fix typo: l to layer (#546) (@prbzrg)
- Minor fixes (#547) (@avik-pal)
- QoL improvements for tracing based AD (#548) (@avik-pal)
- Fix SimpleChains for single dims (#552) (@avik-pal)
- Standardize the handling of states (#553) (@avik-pal)
- CompatHelper: add new compat entry for ADTypes at version 0.2 for package HyperNet, (keep existing compat) (#555) (@github-actions[bot])
- CompatHelper: add new compat entry for ADTypes at version 0.2 for package PolynomialFitting, (keep existing compat) (#556) (@github-actions[bot])
- CompatHelper: add new compat entry for ADTypes at version 0.2 for package SimpleChains, (keep existing compat) (#557) (@github-actions[bot])
- LuxSimpleChainsExt: specify rng when initializing (#559) (@pao)
- Update SimpleRNN docs (#561) (@avik-pal)
- Remove TruncatedStacktraces (#562) (@avik-pal)
- Use @closure to make closures type-stable (#563) (@avik-pal)
- Add `set_device!` to docs (#569) (@avik-pal)
- Fuse the activation and bias (#570) (@avik-pal)
- Try fixing the hydration error (#571) (@avik-pal)
- Test continuous benchmarking (#572) (@avik-pal)
- Add more benchmarks (#574) (@avik-pal)
- More Continuous Benchmarks (#575) (@avik-pal)
- Make the AD benchmarks type stable (#576) (@avik-pal)
- Bump julia-actions/setup-julia from 1 to 2 (#577) (@dependabot[bot])
- Fix numbering in the docs (#578) (@avik-pal)
- Add a gallery component (#579) (@avik-pal)
- AD Housekeeping (#580) (@avik-pal)
- Update style.css to disable 'calt' feature for monospace (#581) (@cormullion)
- Improvement to the `@compact` API (#584) (@avik-pal)
- Add dynamic expressions extension (#585) (@avik-pal)
- Convert examples to doctests (#586) (@avik-pal)
- Bump crate-ci/typos from 1.18.0 to 1.20.8 (#587) (@dependabot[bot])
- CompatHelper: add new compat entry for Lux at version 0.5 for package SymbolicOptimalControl, (keep existing compat) (#589) (@github-actions[bot])
- Allow @set! for Stateful Layers (#590) (@avik-pal)
- Used New Fused Ops from LuxLib (#591) (@avik-pal)
- CompatHelper: bump compat for ADTypes to 1, (keep existing compat) (#592) (@github-actions[bot])
- CompatHelper: bump compat for ADTypes to 1 for package HyperNet, (keep existing compat) (#593) (@github-actions[bot])
- CompatHelper: bump compat for ADTypes to 1 for package PolynomialFitting, (keep existing compat) (#594) (@github-actions[bot])
- CompatHelper: bump compat for ADTypes to 1 for package SimpleChains, (keep existing compat) (#595) (@github-actions[bot])
- CompatHelper: bump compat for ADTypes to 1 for package SimpleRNN, (keep existing compat) (#596) (@github-actions[bot])
- Bump crate-ci/typos from 1.20.8 to 1.20.9 (#597) (@dependabot[bot])
- Native Nested AD support for Lux Models (#598) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.31 for package BayesianNN, (keep existing compat) (#599) (@github-actions[bot])
- Faster testing (#601) (@avik-pal)
- Unstructure structured inputs for reasonable broadcasting (#603) (@avik-pal)
- Bump crate-ci/typos from 1.20.9 to 1.20.10 (#607) (@dependabot[bot])
- Add 3rd party tutorial (#609) (@agdestein)
- CompatHelper: bump compat for DynamicExpressions to 0.17 for package SymbolicOptimalControl, (keep existing compat) (#611) (@github-actions[bot])
- Improvements to Nested AD (#612) (@avik-pal)
- Add missing table of contents entry (#613) (@agdestein)
- Attempt to build the tutorials in parallel (#616) (@avik-pal)
- Add field access syntax to Chain (#619) (@Sleort)
- Add `vector_jacobian_product` and `jacobian_vector_product` functions (#623) (@avik-pal)
- Bump crate-ci/typos from 1.20.10 to 1.21.0 (#624) (@dependabot[bot])
- Bring in `batched_jacobian` (#625) (@avik-pal)
- Added layer for periodic inputs (#626) (@nicholaskl97)
- Cleanup (#629) (@avik-pal)
- CompatHelper: bump compat for CairoMakie to 0.12 for package BayesianNN, (keep existing compat) (#631) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.12 for package GravitationalWaveForm, (keep existing compat) (#632) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.12 for package PolynomialFitting, (keep existing compat) (#633) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.12 for package SymbolicOptimalControl, (keep existing compat) (#634) (@github-actions[bot])
- Fixes to type stability of Zygote (#635) (@avik-pal)
- Reduce max chunksize (#637) (@avik-pal)
- missing keyword in docstring (#638) (@RoyCCWang)
- Adding Enzyme Tests (#639) (@avik-pal)
- Enzyme Testing + Caching in `compute_gradients` (#640) (@avik-pal)
- Add Enzyme to benchmark infra (#641) (@wsmoses)
- Add Enzyme to benchmark infra (#643) (@avik-pal)
- Add a warning on using Tracker with SimpleChains (#645) (@avik-pal)
- Improvements to Batched Jacobian (#646) (@avik-pal)
- Patch a compact bug (#648) (@avik-pal)
- update makie (#649) (@avik-pal)
- Test on multiple os (#650) (@avik-pal)
- Fix DocumenterVitepress compat (#651) (@avik-pal)
- Prevent infinite loop in Tracker (#652) (@avik-pal)
- Test ComponentArrays with Enzyme (#653) (@avik-pal)
- Update DocumenterVitepress compat in docs (#654) (@asinghvi17)
- Use ArgCheck.jl for helpful error messages (#655) (@avik-pal)
- CompatHelper: bump compat for OptimizationOptimJL to 0.3 for package GravitationalWaveForm, (keep existing compat) (#656) (@github-actions[bot])
- CompatHelper: bump compat for OptimizationOptimJL to 0.3 for package SymbolicOptimalControl, (keep existing compat) (#657) (@github-actions[bot])
- CompatHelper: bump compat for Turing to 0.32 for package BayesianNN, (keep existing compat) (#658) (@github-actions[bot])
- Restore the rrule for merge (#659) (@avik-pal)
- Bump julia-actions/julia-format from 2 to 3 (#660) (@dependabot[bot])
- Update & Rewrite the DDIM example (#661) (@avik-pal)
- Quality of Life Improvements (#666) (@avik-pal)
- CompatHelper: bump compat for SymbolicUtils to 2 for package SymbolicOptimalControl, (keep existing compat) (#669) (@github-actions[bot])
- Add Cartesian Embedding methods (#670) (@ldeso)
- More principled rewrite of layer_map (#671) (@avik-pal)
- Clean up the code for debug mode (#674) (@avik-pal)
- CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#676) (@github-actions[bot])
- CompatHelper: add new compat entry for CairoMakie at version 0.12 for package DDIM, (keep existing compat) (#677) (@github-actions[bot])
- Remove rrule for merge (#679) (@avik-pal)
- Minor optimizations (#681) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.33 for package BayesianNN, (keep existing compat) (#688) (@github-actions[bot])
- Newer public functions (#690) (@avik-pal)
- Update Boltz API Docs (#691) (@avik-pal)
- Bump crate-ci/typos from 1.21.0 to 1.22.3 (#693) (@dependabot[bot])
- More API updates (#696) (@avik-pal)
- Add ReverseSequence (#698) (@NeroBlackstone)
- Training ConvMixer on CIFAR10 in 10mins (#700) (@avik-pal)
- Add activation functions doc reference (Rebase #694) (#702) (@avik-pal)
- Clean up the CI scripts (#703) (@avik-pal)
- Loss functions module (#704) (@avik-pal)
- Add test guide documentation (#705) (@NeroBlackstone)
- Add `ReverseSequence()` docs (#706) (@NeroBlackstone)
- Bidirectional RNN (#708) (@NeroBlackstone)
- Run doctests in the test CI + Lazy install test dependencies (#710) (@avik-pal)
- Bump crate-ci/typos from 1.22.3 to 1.22.7 (#711) (@dependabot[bot])
- Mark unexported symbols as public (#712) (@avik-pal)
- Install packages before loading (#713) (@avik-pal)
- Extend training API and update examples (#714) (@avik-pal)
- Try fixing AMDGPU test stalling (#716) (@avik-pal)
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 0.9, (keep existing compat) (#717) (@github-actions[bot])
- Try to improve coverage (#718) (@avik-pal)
- Try wider docs (#721) (@avik-pal)
- Compiled ReverseDiff for training on CPU (#722) (@avik-pal)
- Makes `name` concrete types (#723) (@avik-pal)
- CompatHelper: add new compat entry for StaticArrays at version 1 for package docs, (keep existing compat) (#724) (@github-actions[bot])
- CompatHelper: add new compat entry for KernelAbstractions at version 0.9 for package docs, (keep existing compat) (#725) (@github-actions[bot])
- Bump crate-ci/typos from 1.22.7 to 1.22.9 (#726) (@dependabot[bot])
- Performance Pitfalls and How to Catch them (#727) (@avik-pal)
- CompatHelper: bump compat for DynamicExpressions in [weakdeps] to 0.18, (keep existing compat) (#728) (@github-actions[bot])
- CompatHelper: bump compat for DynamicExpressions to 0.18 for package SymbolicOptimalControl, (keep existing compat) (#729) (@github-actions[bot])
- Store the optimizer in TrainState (#731) (@avik-pal)
- Simplify `show` implementations and make them round-trippable (#732) (@avik-pal)
- Try removing the type assert with this (#734) (@avik-pal)
- Add enzyme support for loss functions from LossFunctions.jl (#736) (@avik-pal)
- Mark cartesian index tests on cuda broken for now (#737) (@avik-pal)
- Run CI on pre (#739) (@avik-pal)
- Revert bee2de7-1188db7 (#740) (@avik-pal)
- Use shorthand syntax of @concrete (#741) (@avik-pal)
- Check status of broken tests (#742) (@avik-pal)
- Aggregate changes for v1 (#744) (@avik-pal)
- fix: nested ad when using direct eval in function (#745) (@avik-pal)
- CompatHelper: add new compat entry for GPUArraysCore at version 0.1 for package docs, (keep existing compat) (#746) (@github-actions[bot])
- Bump crate-ci/typos from 1.22.9 to 1.23.1 (#748) (@dependabot[bot])
- chore: bump simplechains version (#749) (@avik-pal)
- CompatHelper: bump compat for SciMLSensitivity to 7 for package NeuralODE, (keep existing compat) (#750) (@github-actions[bot])
- docs: restructure the manual entries a bit (#751) (@avik-pal)
- refactor: bring Optimisers.jl into main deps (#754) (@avik-pal)
- refactor: drop the `AMDGPU` extension (#755) (@avik-pal)
- rearrange code in extensions (#756) (@avik-pal)
- fix: use proper qualified accesses for modules (#757) (@avik-pal)
- docs: remove redundant old preferences (#759) (@avik-pal)
- feat: allow multiple `@return` (#760) (@avik-pal)
- Making all eltypes Float32 in Fitting a Polynomial using MLP (#761) (@Sleort)
- docs: fix inline math rendering (#762) (@avik-pal)
- refactor: use the faster `get_device_type` (#763) (@avik-pal)
- refactor: move ForwardDiff.jl into main deps (#764) (@avik-pal)
- test: set st to training (#765) (@avik-pal)
- chore(deps): bump crate-ci/typos from 1.23.1 to 1.23.2 (#766) (@dependabot[bot])
- Update docstring dropout (#770) (@dmetivie)
- chore: recommend GH Discussions for Q/A (#774) (@avik-pal)
- Allow 2d input if RNN order is BatchLastIndex (#778) (@NeroBlackstone)
- test: remove `@test_nowarn` testing (#781) (@avik-pal)
- fix: don't reuse pullback for safety (#782) (@avik-pal)
- improvements to compact macro (#783) (@avik-pal)
- test: wrap `@inferred` with `@test` (#784) (@avik-pal)
- chore: add NNlib as a direct dep (#785) (@avik-pal)
- fix: update to latest LuxLib API + deprecations (#786) (@avik-pal)
- perf: fix enzyme benchmarks (#787) (@avik-pal)
- test: trigger enzyme tests (#788) (@avik-pal)
- docs: fix typo in "JVP & VJP Wrappers" (#789) (@ldeso)
- docs: update docs from downstream changes (#790) (@avik-pal)
- CompatHelper: bump compat for WeightInitializers to 1, (keep existing compat) (#791) (@github-actions[bot])
- CompatHelper: bump compat for WeightInitializers to 1 for package docs, (keep existing compat) (#792) (@github-actions[bot])
- test: improved testing (#793) (@avik-pal)
- feat: improvements to the Training API (#794) (@avik-pal)
- feat: easy mechanism to set preferences (#798) (@avik-pal)
- CompatHelper: bump compat for SymbolicUtils to 3 for package SymbolicOptimalControl, (keep existing compat) (#799) (@github-actions[bot])
- test: update to the newer LuxTestUtils (#800) (@avik-pal)
- chore(deps): bump crate-ci/typos from 1.23.2 to 1.23.5 (#804) (@dependabot[bot])
- refactor: move TrackerExt in a directory (#806) (@avik-pal)
- feat: `NilArray` for fast size propagation (#811) (@avik-pal)
- docs: add new function to docs (#813) (@avik-pal)
- fix: update Dynamic Expressions to 0.19 (#814) (@avik-pal)
- docs: add documentation for `MLDataDevices` (#815) (@avik-pal)
- CompatHelper: add new compat entry for MLDataDevices at version 1 for package docs, (keep existing compat) (#818) (@github-actions[bot])
- test: try separating the test Project files (#819) (@avik-pal)
- feat: use faster version of batched matmul (#820) (@avik-pal)
- ci: setup benchmarking CI (#821) (@avik-pal)
- ci: add CI to benchmark load times (#822) (@avik-pal)
- chore(deps): bump actions/checkout from 2 to 4 (#823) (@dependabot[bot])
- chore(deps): bump peter-evans/create-or-update-comment from 3 to 4 (#824) (@dependabot[bot])
- chore(deps): bump julia-actions/setup-julia from 1 to 2 (#825) (@dependabot[bot])
- chore(deps): bump peter-evans/find-comment from 2 to 3 (#826) (@dependabot[bot])
- chore(deps): bump julia-actions/cache from 1 to 2 (#827) (@dependabot[bot])
- fix: mark objective function as `Const` (#835) (@avik-pal)
- ci: separate testing for groups in buildkite (#836) (@avik-pal)
- chore: update all AMDGPU compats (#837) (@avik-pal)
- test: remove Flux as a direct test dep (#838) (@avik-pal)
- test: remove some of the unnecessary Flux tests (#839) (@avik-pal)
- refactor: cleanup of internals (#840) (@avik-pal)
- fix: remove type pirated functions from Lux (#843) (@avik-pal)
- chore(deps): bump actions/upload-artifact from 2 to 4 (#844) (@dependabot[bot])
- chore(deps): bump crate-ci/typos from 1.23.5 to 1.23.6 (#845) (@dependabot[bot])
- CompatHelper: add new compat entry for Static at version 1 for package test, (keep existing compat) (#846) (@github-actions[bot])
- feat: improve batched jacobian (#848) (@avik-pal)
- chore: bump minimum LuxTestUtils version (#850) (@avik-pal)
- docs: minor documentation changes (#855) (@avik-pal)
- chore: marking layers as deprecated (#856) (@avik-pal)
- chore(deps): bump crate-ci/typos from 1.23.6 to 1.24.1 (#857) (@dependabot[bot])
- docs: more details in performance pitfalls (#859) (@avik-pal)
- fix: remove hacky usage of module getproperty rrules (#865) (@avik-pal)
- feat: expand `trainmode`, `testmode`, `update_state` to support Stateful Layers (#866) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.34 for package BayesianNN, (keep existing compat) (#870) (@github-actions[bot])
- chore(deps): bump crate-ci/typos from 1.24.1 to 1.24.3 (#871) (@dependabot[bot])
- test: don't run doctests on pre-releases (#873) (@avik-pal)
- test: run with DD error mode (#874) (@avik-pal)
- refactor: static fields in layers (#875) (@avik-pal)
- CompatHelper: bump compat for DataAugmentation to 0.3 for package ConvMixer, (keep existing compat) (#876) (@github-actions[bot])
- CompatHelper: bump compat for DataAugmentation to 0.3 for package DDIM, (keep existing compat) (#877) (@github-actions[bot])
- ci(buildkite): run some of the tutorials on CPU runners (#879) (@avik-pal)
- CompatHelper: add new compat entry for StableRNGs at version 1 for package docs, (keep existing compat) (#881) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.5 for package DDIM, (keep existing compat) (#885) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.5 for package ImageNet, (keep existing compat) (#886) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.5 for package SimpleRNN, (keep existing compat) (#887) (@github-actions[bot])
- chore(deps): bump peter-evans/create-pull-request from 6 to 7 (#888) (@dependabot[bot])
- chore(deps): bump crate-ci/typos from 1.24.3 to 1.24.5 (#889) (@dependabot[bot])
- Fixed updating_to_v1 link in README.md (#890) (@MartinuzziFrancesco)
- fix: pretty printing of MaxPool Layer (#891) (@avik-pal)
- docs: add a PINN tutorial with nested AD (#894) (@avik-pal)
- fix: remove UnrolledUtilities dep (#895) (@avik-pal)
- refactor: cleanup Training and preserve type-stability in Enzyme (#896) (@avik-pal)
- docs: add an Optimization.jl tutorial showcasing lazy data movement (#897) (@avik-pal)
- CompatHelper: add new compat entry for Literate at version 2 for package PINN2DPDE, (keep existing compat) (#899) (@github-actions[bot])
- feat: update imagenet training script (#909) (@avik-pal)
- docs: simplify getting started docs (#930) (@avik-pal)
- fix: force_inline inside generated functions to avoid recursion issues (#931) (@avik-pal)
- fix: update to use test_gradients macro (#932) (@avik-pal)
- test: froggie tests are broken on gpu (#933) (@avik-pal)
- fix: static vector input to dense (#936) (@avik-pal)
- ci(buildkite): debugging CUDA segfaults on CI (#937) (@avik-pal)
- docs: try using the new documenter vitepress (#943) (@avik-pal)
- docs: collapse docstrings by default (#949) (@avik-pal)
- feat: update minimum version of Enzyme (#950) (@avik-pal)
- docs: fix version picker path (#951) (@avik-pal)
- fix: update Optimization compats (#952) (@avik-pal)
- fix: update GravitationalWaveform tutorial (#953) (@avik-pal)
- chore(deps): bump crate-ci/typos from 1.24.5 to 1.24.6 (#955) (@dependabot[bot])
- docs: update README example (#956) (@avik-pal)
- fix: patch optimization tutorial (#959) (@avik-pal)
- Added to Nested AD example how to use `batched_jacobian` (#964) (@facusapienza21)
- Remove line about "not saving the model" (#965) (@asinghvi17)
- fix: optimization integration for gravitational waveform (#966) (@avik-pal)
- docs: add compilation example using Reactant (#967) (@avik-pal)
- docs: add the new `xla_device` (#968) (@avik-pal)
- feat: compile training loop automatically using reactant (#969) (@avik-pal)
- chore(deps): bump crate-ci/typos from 1.24.6 to 1.25.0 (#971) (@dependabot[bot])
- ci: run tests only on `1.10` for now (#975) (@avik-pal)
- refactor: make `LossFunctions` an optional dep (#976) (@avik-pal)
- chore(deps): bump crate-ci/typos from 1.25.0 to 1.26.0 (#978) (@dependabot[bot])
- CompatHelper: bump compat for GPUArraysCore to 0.2, (keep existing compat) (#984) (@github-actions[bot])
- CompatHelper: bump compat for GPUArraysCore to 0.2 for package docs, (keep existing compat) (#985) (@github-actions[bot])
- fix: `LV`/`Octavian` moved to optional deps (#986) (@avik-pal)
- docs(reactant): simplify the enzyme call (#987) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.35 for package BayesianNN, (keep existing compat) (#989) (@github-actions[bot])
- chore(deps): bump crate-ci/typos from 1.26.0 to 1.26.8 (#992) (@dependabot[bot])
- perf: load `LoopVectorization` and `Octavian` for benchmarks (#994) (@avik-pal)
- refactor: use Lux primitives for AD (#995) (@avik-pal)
- Move code blocks inside bullet list (#996) (@abhro)
- Fix images.jl link (#997) (@NeroBlackstone)
- Fix broken link in Recurrence docs (#1001) (@MartinuzziFrancesco)
- refactor: move all subpackages into a mono-repo (#1002) (@avik-pal)
- feat: support passing in device and client to XLA (#1020) (@avik-pal)
- fix: avoid tracing through Lux models (#1021) (@avik-pal)
- chore: bump crate-ci/typos from 1.26.8 to 1.27.0 (#1022) (@dependabot[bot])
- ci: combine workflows (#1023) (@avik-pal)
- fix for Zygote and ChainRules OneElement (#1038) (@CarloLucibello)
- Link to quickstart explaining calling models in interface (#1040) (@oxinabox)
- fix: make enzyme testing opt-in for now (#1041) (@avik-pal)
- fix: missing zero leads to NaNs (#1044) (@avik-pal)
- chore: bump all `Optimisers` version (#1058) (@avik-pal)
- CompatHelper: bump compat for Optimisers to 0.4 for package DDIM, (keep existing compat) (#1059) (@github-actions[bot])
- fix: gracefully handle `OneHotArrays` (#1064) (@avik-pal)
- chore: bump crate-ci/typos from 1.27.0 to 1.27.3 (#1065) (@dependabot[bot])
Closed issues:
- TagBot trigger issue (#6)
- Suboptimal GroupNorm Implementation on GPUs (#10)
- Recurrent Neural Networks (#12)
- Flux Feature Parity (#13)
- Front page example broken (#17)
- Distributed Data Parallel Training on examples/ImageNet error (#18)
- `] add Lux` doesn't work (#19)
- Support for non-CUDNN data types (#22)
- Hope to add more examples (#25)
- Train examples/NeuralODE error (#26)
- Thoughts on docs & tutorials (#28)
- Available architectures (#34)
- Register (#36)
- `PairwiseFusion` takes more inputs than documented (#38)
- Remove Requires.jl (#45)
- Performance regressions with ComponentArrays (#49)
- How do I extend `Chain` to have multiple inputs (#53)
- Nested Lists broken with the current Documentation (#68)
- Remove `ActivationFunction`? (#71)
- Quickstart Example: `using Optimisers, Zygote` does not work unless we explicitly add those to the current environment. (#75)
- Remove `track_stats` from GroupNorm (#78)
- Named Layers for Container Types (#79)
- Tracking support for Enzyme.jl (#81)
- Lighter syntax for stateless networks? (#83)
- Improve `Julia & Lux for the uninitiated` (#90)
- Remaining Deprecations (#91)
- Scalar indexing problem for the NeuralODE example (#92)
- Basic example from Migrating from Flux to Lux is broken || normalization issue (#94)
- WeightNorm causes NaN for Conv layer gradients (#95)
- [Feature request] Another type of `Chain` that sequentially passes `x` and `st` (#96)
- Generalize `normalization` to work for unconstrained types (#98)
- RNN and LSTM break when using GPU (#100)
- Can one compose lux layers with graph neural network (#102)
- optimising parameters with Optimization.jl (#108)
- add OrdinaryDiffEq downstream test (#110)
- Make it easier to pass empty state `st = (;)` (#118)
- is there transposed convolution (#122)
- Support for multidimensional data? (#123)
- Inconsistent description of `PairwiseFusion` (#130)
- `getindex` for `Chain` (#131)
- `No method matching` with argument `IRTools.Inner.Undefined` in gradient computation. (#134)
- checkpointing for backpropagation (#139)
- CUDNNError during backpropagation in simple CNN (#141)
- Proposal of Lux + Enzyme + CUDA differential programming example (#145)
- concat input and output of a layer (#146)
- How to avoid the activation function conversion (#152)
- Allow dispatch on custom array types (#157)
- Nondeterministic method error for some gradients... (#159)
- Tied Weights (#182)
- Frozen Weights (#183)
- layer_map fails on custom containers (#187)
- Remove LuxCore manual installation in workflows (#192)
- Custom layers (#220)
- Lux.setup not found (#224)
- Support for CuArray{Float64} (#237)
- How to create a chain of LSTMcells in Lux.jl? (#239)
- Constrain the output layer! (#242)
- On using ComponentArray for L2 regularization (#243)
- Shared Lux Testing Package (#270)
- Automatic Differentiation Backends (#271)
- Get the full run of a recurrent cell using Lux (#282)
- Nested AD doesn't work with ComponentArrays (#286)
- Remove weak dependencies (#294)
- Lux Recurrence history is not in the correct order (I think) (#302)
- tanh activation function in GRUCell docstring (#308)
- WARNING: Wrapping `Vararg` directly in UnionAll is deprecated (wrap the tuple instead). (#312)
- Adding `AbstractRecurrentCell` (#320)
- Splitting weights initializers in own package (#321)
- Include documentation on how to save models with Lux (#329)
- network with multiple inputs (#330)
- Working with NamedTuples (#331)
- `bilinear` doesn't work for `AbstractArray{T,3}` (#332)
- Use ADTypes (#354)
- Add ability to load weights into `Dense` (#361)
- Initialize weights of network from csv file (#369)
- BatchNorm(; affine = false) in a Chain missing _getproperty(::SubArray... when ps = ComponentArray(ps) (#371)
- Slightly broken example Polynomial Fitting (#374)
- Fixing the testing on buildkite (#375)
- Implementation of custom layer in Lux (#376)
- deploy versions (#384)
- DocumenterVitepress module into package (#385)
- Segfault when using Lux.Conv with CUDA (#386)
- Documentation Enhancement Suggestions (#387)
- @save not defined? (#403)
- The MNIST Neural ODE example does not work with `ReverseDiffAdjoint` (#407)
- Update Documentation to mention loading AD Packages for Training (#415)
- `ComponentArrays` makes coupling layers type-unstable unexpectedly (#416)
- ComponentArrays makes Custom Layers containing Chains type-unstable (#417)
- Custom Layer, Differential Equation as Activation Function. (#418)
- Gradients of shared parameters do not behave as expected (#420)
- inconsistent LSTM results in time series forecast between Flux.jl and Lux.jl (#424)
- Broadcast Layer (#426)
- Can't use freeze with ComponentArray. (#427)
- `Lux.testmode` resorts to scalar indexing with frozen params (#432)
- Custom Model for Neural ODE (#441)
- Periodic Padding (#451)
- Bug in `ConvTranspose`? (#455)
- Generating Parameters with CUDA (#456)
- Zygote gradient fails for Custom Layer (#457)
- Adaptors should not change the dtype (#462)
- Any equivalency to torch.nn.Parameter? (#464)
- Support for MultiRNNCell (#472)
- GPU evaluation of `Recurrence()` broken on Metal (#473)
- Recurrent Layers don't take Vectors as Input (#478)
- How to choose a specific GPU device (#480)
- Training in batches and building gradient as mean of individual gradients (#481)
- ComponentArrays type piracy (#482)
- No Gradients with respect to parameters using Custom Layers (#483)
- Where is the API doc for activatations (#486)
- Distributed Training (#494)
- AMDGPU CI takes a lot of time (#495)
- SimpleRNN example is broken on AMDGPU (#498)
- Support for multi-core CPUs? (#502)
- Bayesian NN example throws Pkg Extension load errors (#507)
- 404 many Tutorial links are invalid (#508)
- uninitiated tutorial replicate part shows different numbers but should show the same (#513)
- uninitiated tutorial - Code Font confusing for pipe |> (#514)
- Documentation Request: Standardize the handling of the state `st` (#515)
- Let @compact return the updated state (#516)
- Documentation Request: Have a section about Loss Functions (#517)
- Documentation Request: Also list GeometricML.jl and SciML.ai under Ecosystem (#518)
- Should `replicate` be part of LuxCore? (#531)
- pad=SamePad() does not work as intended in ConvTranspose. (#534)
- Array of Structs to Struct of Array transformation for some AD backends (#538)
- Documentation on main is broken (#541)
- Lux.AMDGPU: type cast throws error (#542)
- `l` should be clarified. Maybe a typo? (#543)
- Bug when converting model with single layer to SimpleChains (#545)
- Improve broadcasting via `FastBroadcast.jl` (#549)
- FYI: Comment and question (#550)
- TypeError using SimpleChains integration (#551)
- SimpleChains-backed models do not setup consistently with fixed RNG seeding (#554)
- Stable docs missing (#566)
- Tutorial links too small (#567)
- Constraint on weights and bias (#568)
- Continuous Benchmarking (#573)
- Allow "const" arrays as inputs to
@compact
(#588) - Pullback over jacobian (with CUDA) (#602)
- Zygote nested AD failure (#604)
- Meta-Issue for improvements to `@compact` (#606)
- Nested AD for Parameter Gradient/Jacobian (#610)
- Rewrite `@layer_map` to use KeyPath from Functors (#615)
- Extracting part of a model, with the corresponding parameters and states (#617)
- Differentiating `Zygote.pullback` (#621)
- Batched Jacobian Functions (#622)
- Error for JVP by Enzyme (#628)
- [Nested AD] Incorrect gradient when taking a gradient over a gradient using StatefulLuxLayer (#630)
- batched_jacobian + CUDA => InvalidIRError (#636)
- Add a compiled tape version for ReverseDiff (#642)
- Simple MLP requires Enzyme runtimeActivity (#647)
- Using `swish` as `Conv` activation function errors on the GPU (#662)
- Fast activation error (#663)
- Definition and implementation of 'Loss' in Linear Regression Tutorial "Julia & Lux for the Uninitiated" (#664)
- Add improper qualified accesses checks (#667)
- `rrule` for `Base.merge` defined in `ChainRulesCore` (#678)
- Different activation functions in one layer (#680)
- Remove Auto-Flattening of Chains (#682)
- Add type-stability checks via `DispatchDoctor.jl` (#683)
- Support for inactive arguments in DifferentiationInterface (#685)
- Feature request: Bidirectional for RNN layer. (#687)
- Predefined loss functions (#689)
- Static Type Parameters not accessible inside `@compact` (#692)
- Auto detect and warn against performance pitfalls (#699)
- Add documentation about how to partial tests. (#701)
- Feature request: 1D CNN, i.e. keras.layer.Conv1d (#709)
- AMDGPU CI stalls (#715)
- Inference using `NN :: Chain` inside a GPU kernel (#720)
- custom `show` is often not valid julia syntax to reconstruct (#730)
- Roadmap to v1 (#735)
- Error in `compute_gradients` when loss already has a `Zygote.gradient` (#743)
- NCCL Complex wrapper (#747)
- Drop `Tracker.jl` support for SimpleChains (#753)
- Feature request: TimeDistributed Layer (#758)
- Feature Request: Allow recurrent layers with 2D input (features * seq_length), even if the order is BatchLastIndex (#767)
- Missing statistics tracking in normalization layers (#780)
- unexpected parameter type for AbstractExplicitContainer with single trainable field (#795)
- Test with DispatchDoctor error mode (#797)
- Change defaults for Layers to match Pytorch (#808)
- Gradient checkpointing/ rematerialization (#816)
- how to use Lux.jl utility 'BinaryCrossEntropy' (#841)
- Mixed-Precision Matrix Multiply Performance Regression (#847)
- Lux.testmode not updating state for BatchNorm layers for nested models? (#849)
- Add Float128 support (#851)
- Add multiple cpu cores and multiple Julia computers support (#852)
- `Enzyme.Forward` hits Octavian dispatch in Dense (#853)
- Move uncommon layers to Boltz.jl (#854)
- Update the ImageNet example (#878)
- MethodError: no method matching applychain (#884)
- Question: how can one use TrainState.cache? (#892)
- Problem with Enzyme AD and SArray parameters (#935)
- Is `AbstractLuxContainerLayer` abandoned in Lux 1.0.4? (#942)
- Docs build is broken (#957)
- Encoder-Decoder RNNs (#961)
- Efficient way to compute Jacobian in nested AD (#963)
- The returned values loss and train_state of single_train_step! are not compatible (#979)
- Segfault for simple Zygote pullback (#980)
- Question on initialization after breaking changes (#988)
- Documentation: Using MLFlow with Lux.jl (#990)
- Documentation of Layer Freezing might need small update (#991)
- scalar indexing of gpu array in Zygote gradient (#1016)
- Getting NaNs in the pullback of ReverseSequence (#1043)