
4-bit serialization and bug fixes

@TimDettmers TimDettmers released this 08 Jan 01:19
· 246 commits to main since this release

This release adds 4-bit serialization, implemented by @poedator, to bitsandbytes. With this, you can call model.save() and model.load() on models that contain 4-bit bitsandbytes layers, meaning you can save and load 4-bit models. All of this is integrated with the Hugging Face transformers stack. The 0.42.0 release also comes with many bug fixes. See below for detailed change logs.
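Through the transformers integration, this works with the usual save_pretrained/from_pretrained workflow. A minimal sketch (the model name and output directory are illustrative; assumes transformers with bitsandbytes support and a CUDA device):

```python
# Sketch: saving and reloading a 4-bit quantized model via the
# Hugging Face transformers integration of bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Configure 4-bit quantization (NF4 data type, bf16 compute).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load a model with 4-bit bitsandbytes layers (illustrative checkpoint).
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", quantization_config=quant_config
)

# The quantized weights can now be serialized and reloaded directly.
model.save_pretrained("opt-350m-4bit")
model = AutoModelForCausalLM.from_pretrained("opt-350m-4bit")
```

Before this release, 4-bit layers had to be re-quantized from full-precision weights on every load; serialization makes the quantized checkpoint itself portable.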

0.42.0

Features:

  • 4-bit serialization is now supported. This enables 4-bit load/store. Thank you @poedator #753
  • The bitsandbytes library now has a version attribute: bitsandbytes.__version__ @rasbt #710

Bug fixes:

  • Fixed bugs in dynamic exponent data type creation. Thank you @RossM, @KohakuBlueleaf, @ArrowM #659 #227 #262 #152
  • Fixed an issue where 4-bit serialization would fail for layers without double quantization #868. Thank you, @poedator
  • Fixed an issue where calling .to() or .cuda() on a 4-bit layer twice would result in an error #867. Thank you, @jph00
  • Fixed a bug where missing access permissions on a path searched for CUDA would lead to an error @osma #677
  • Fixed a bug where the GOOGLE_VM_CONFIG_LOCK_FILE variable could cause errors in Colab environments @akrentsel @xaptronic #715 #883 #622
  • Fixed a bug where kgetColRowStats (LLM.int8()) would fail for certain dimensions @LucQueen #905
  • Fixed a bug where the adjusted regular Embedding layer was not available via bnb.nn.Embedding @neel04 #563
  • Added the missing scipy requirement @dulalbert #525