
Releases: deepjavalibrary/djl

DJL v0.31.1 Release

18 Nov 23:14

Key Changes

  • Engine Updates:
    • PyTorch 2.5.1 #3517
    • HuggingFace Tokenizers 0.20.3 #3514
  • Added Android support for HuggingFace Tokenizers by @naveen521kk in #3531 (see the sketch after this list)
  • Fixed issue with cross-platform archive extraction #3544
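
Since the tokenizers artifact now ships Android binaries, the same Java API can be used from an Android app. A minimal sketch of the HuggingFaceTokenizer API, assuming network access to download the tokenizer; the model id "bert-base-uncased" is only illustrative:

import ai.djl.huggingface.tokenizers.Encoding;
import ai.djl.huggingface.tokenizers.HuggingFaceTokenizer;

public class TokenizeExample {
    public static void main(String[] args) throws Exception {
        // Loads the tokenizer for the given model id; usage is identical on Android and desktop
        try (HuggingFaceTokenizer tokenizer = HuggingFaceTokenizer.newInstance("bert-base-uncased")) {
            Encoding encoding = tokenizer.encode("Hello from DJL on Android");
            long[] ids = encoding.getIds();          // token ids
            String[] tokens = encoding.getTokens();  // token strings
            System.out.println(tokens.length + " tokens, first id: " + ids[0]);
        }
    }
}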

Enhancements

Bug Fixes

Documentation

CI/CD

New Contributors

Full Changelog: v0.30.0...v0.31.1

DJL v0.30.0 Release

13 Sep 19:52

Key Changes

  • Engine Updates:
    • OnnxRuntime 1.19.0 #3446
    • Huggingface Tokenizers 0.20.0 #3452
  • Added mask generation task for SAM2 model #3450
  • Text Embedding Inference:
    • Added Mistral, Qwen2, GTE, Camembert embedding model support (see the sketch after this list)
    • Added reranker model support
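
As an illustration of the text embedding path, the sketch below loads a sentence-transformers style model through the Criteria API; the model URL and the float[] output type are assumptions for this sketch, not a prescription from this release:

import ai.djl.Application;
import ai.djl.inference.Predictor;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

public class EmbeddingExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical model zoo entry; substitute any supported embedding model
        Criteria<String, float[]> criteria = Criteria.builder()
                .setTypes(String.class, float[].class)
                .optApplication(Application.NLP.TEXT_EMBEDDING)
                .optModelUrls("djl://ai.djl.huggingface.pytorch/sentence-transformers/all-MiniLM-L6-v2")
                .build();
        try (ZooModel<String, float[]> model = criteria.loadModel();
             Predictor<String, float[]> predictor = model.newPredictor()) {
            float[] embedding = predictor.predict("What is Deep Java Library?");
            System.out.println("embedding dimension: " + embedding.length);
        }
    }
}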

Enhancement

Bug Fixes

Documentation

CI/CD

Full Changelog: v0.29.0...v0.30.0

DJL v0.28.0 Release

16 May 03:46

Key Changes

  • Upgrades for engines
  • Enhancements for engines and API
    • Adds experimental Rust engine #3078

Enhancement

Bug Fixes

Documentation

CI/CD

New Contributors

Full Changelog: v0.27.0...v0.28.0

DJL v0.29.0 Release

19 Jul 01:45

Key Changes

  • Upgrades for engines

    • Upgrades PyTorch engine to 2.3.1
    • Upgrades TensorFlow engine to 2.16.1
    • Introduces Rust engine CUDA support
    • Upgrades OnnxRuntime to 1.18.0 and adds CUDA 12.4 support
    • Upgrades javacpp version to 1.5.10
    • Upgrades HuggingFace tokenizer to 0.19.1
    • Fixes several issues for LightGBM engine
    • Deprecates the llamacpp engine
  • Enhancements for engines and API

    • Adds Yolov8 segmentation and pose detection support
    • Adds metric type to Metric class
    • Improves drawJoints and drawMask behavior for CV model
    • Improves HuggingFace model importing and conversion tool
    • Improves HuggingFace NLP model batch inference performance
    • Adds built-in ONNX extension support
    • Adds several NDArray operators in PyTorch engine
    • Adds fp16 and bf16 support for OnnxRuntime engine
    • Adds CrossEncoder support for NLP models

Enhancements

Bug Fixes

Documentation


DJL v0.24.0 Release

16 Oct 20:40

Key Features

Enhancement

Bug fixes

Documentation and Examples

CI

New Contributors

Full Changelog: v0.23.0...v0.24.0

DJL v0.9.0 release note

18 Dec 22:06

DJL 0.9.0 brings MXNet inference optimizations, abundant new PyTorch feature support, TensorFlow Windows GPU support, and an experimental DLR engine that supports TVM models.

Key Features

  • Add experimental DLR engine support. Now you can run TVM models with DJL

MXNet

  • Improve the MXNet JNA layer by reusing String, String[], and PointerArray objects with an object pool, which reduces GC time significantly

PyTorch

  • You can easily create a COO sparse tensor with the following code snippet:
long[][] indices = {{0, 1, 1}, {2, 0, 2}};
float[] values = {3, 4, 5};
FloatBuffer buf = FloatBuffer.wrap(values);
NDArray cooTensor = manager.createCoo(buf, indices, new Shape(2, 4));
  • If the input of your TorchScript model needs a List or Dict type, we now add simple one-dimensional support for you.
// assume your TorchScript model takes model({'input': input_tensor})
// you tell us this kind of information by setting the name
NDArray array = manager.ones(new Shape(2, 2));
array.setName("input1.input");
  • We support loading an ExtraFilesMap:
// saving ExtraFilesMap
Criteria<Image, Classifications> criteria = Criteria.builder()
  ...
  .optOption("extraFiles.dataOpts", "your value")  // <- pass in here 
  ... 

TensorFlow

  • Windows GPU is now supported

Several engine upgrades

  • PyTorch 1.7.0
  • TensorFlow 2.3.1
  • fastText 0.9.2

Enhancement

  • Add docker file for serving
  • Add Deconvolution support for MXNet engine
  • Support PyTorch COO Sparse tensor
  • Add CSVDataset, you can find a sample usage here
  • Upgrade TensorFlow to 2.3.1
  • Upgrade PyTorch to 1.7.0
  • Add randomInteger operator support for MXNet and PyTorch engine
  • Add PyTorch Profiler
  • Add TensorFlow Windows GPU support
  • Support loading the model from a jar file (see the sketch after this list)
  • Support 1-D list and dict input for TorchScript
  • Remove the Pointer class being used for JNI to relieve Garbage Collector pressure
  • Combine several BertVocabulary into one Vocabulary
  • Add loading the model from Path class
  • Support ExtraFilesMap for PyTorch model inference
  • Allow both int32 & int64 for prediction & labels in TopKAccuracy
  • Refactor MXNet JNA binding to reduce GC time
  • Improve PtNDArray set method to use ByteBuffer directly and avoid copy during tensor creation
  • Support experimental MXNet optimizeFor method for accelerator plugin.
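
For the jar-based model loading mentioned above, a rough sketch; the resource path and the input/output types are illustrative, not taken from this release:

import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;

public class JarModelExample {
    public static void main(String[] args) throws Exception {
        // "jar:///model/mlp.zip" is a hypothetical model archive bundled inside the application jar
        Criteria<Image, Classifications> criteria = Criteria.builder()
                .setTypes(Image.class, Classifications.class)
                .optModelUrls("jar:///model/mlp.zip")
                .build();
        try (ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria);
             Predictor<Image, Classifications> predictor = model.newPredictor()) {
            // call predictor.predict(image) with your input here
        }
    }
}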

Documentation and examples

  • Add Amazon Review Ranking Classification
  • Add Scala Spark example code on Jupyter Notebook
  • Add Amazon SageMaker Notebook and EMR 6.2.0 examples
  • Add DJL benchmark instruction

Bug Fixes

  • Fix PyTorch Android NDIndex issue
  • Fix Apache NiFi issue when loading multiple native libraries in the same Java process
  • Fix TrainTicTacToe not training issue
  • Fix Sentiment Analysis training example and FixedBucketSampler
  • Fix NDArray from DataIterable not being attached to NDManager properly
  • Fix WordPieceTokenizer infinite loop
  • Fix randomSplit dataset bug
  • Fix convolution and deconvolution output shape calculations

Contributors

Thank you to the following community members for contributing to this release:

Frank Liu(@frankfliu)
Lanking(@lanking520)
Kimi MA(@kimim)
Lai Wei(@roywei)
Jake Lee(@stu1130)
Zach Kimberg(@zachgk)
0xflotus(@0xflotus)
Joshua(@euromutt)
mpskowron(@mpskowron)
Thomas(@thhart)
DocRozza(@docrozza)
Wai Wang(@waicool20)
Trijeet Modak(@uniquetrij)

DJL v0.7.0 release notes

04 Sep 01:14
Pre-release

DJL 0.7.0 brings SentencePiece for tokenization, GraalVM support for the PyTorch engine, a new set of neural network operators, a BOM module, a Reinforcement Learning interface, and an experimental DJL Serving module.

Key Features

  • Now you can leverage the powerful SentencePiece library for text processing, including tokenization, de-tokenization, encoding, and decoding. You can find more details in extension/sentencepiece.
  • Engine upgrade:
    • MXNet engine: 1.7.0-backport
    • PyTorch engine: 1.6.0
    • TensorFlow: 2.3.0
  • MXNet multi-GPU training is now boosted by MXNet KVStore by default, which saves significant overhead from GPU memory copies.
  • GraalVM is fully supported for both regular execution and native image with the PyTorch engine. You can find more details in the GraalVM example.
  • Add a new set of neural network operators that offers full control over parameters for the CV domain, similar to PyTorch's nn.functional module. You can find the operator methods in the corresponding Block class:
Conv2d.conv2d(NDArray input, NDArray weight, NDArray bias, Shape stride, Shape padding, Shape dilation, int groups);
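A rough usage sketch of this operator with illustrative shapes (the NDList return type is assumed here):

import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.convolutional.Conv2d;

public class FunctionalConvExample {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            NDArray input = manager.ones(new Shape(1, 3, 32, 32));  // NCHW batch of one image
            NDArray weight = manager.ones(new Shape(8, 3, 3, 3));   // 8 filters, 3x3 kernel
            // stride 1x1, no padding, dilation 1x1, a single group; bias omitted
            NDList output = Conv2d.conv2d(input, weight, null, new Shape(1, 1),
                    new Shape(0, 0), new Shape(1, 1), 1);
            System.out.println(output.singletonOrThrow().getShape()); // expect (1, 8, 30, 30)
        }
    }
}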
  • Bill of Materials (BOM) is introduced to manage dependency versions for you. In DJL, the engine you use is usually tied to a specific version of the native package. By adding the BOM dependency as shown below, you no longer need to worry about versions.
<!-- Maven (inside dependencyManagement) -->
<dependency>
    <groupId>ai.djl</groupId>
    <artifactId>bom</artifactId>
    <version>0.7.0</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>

// Gradle
implementation platform("ai.djl:bom:0.7.0")
  • JDK 14 is now supported
  • New Reinforcement Learning interface including RlAgent, RlEnv, etc. You can see a comprehensive TicTacToe example.
  • Add DJL Serving module. With only a single command, you can now deploy your model without writing any server code or configuration such as a server proxy:
cd serving && ./gradlew run --args="-m https://djl-ai.s3.amazonaws.com/resources/test-models/mlp.tar.gz"

Documentation and examples

  • We wrote the D2L book from chapter 1 to chapter 7 with DJL. You can learn basic deep learning concepts and classic CV model architectures with DJL. Repo
  • We launched a new doc website that hosts abundant documents and tutorials for quick search and copy-paste.
  • New Online Sentiment Analysis with Apache Flink.
  • New CTR prediction using Apache Beam and Deep Java Library (DJL).
  • New DJL logging configuration document, which covers how to enable slf4j, switch to other logging libraries, and adjust the log level to debug DJL.
  • New Dependency Management document that lists DJL internal and external dependencies along with their versions.
  • New CV Utilities document as a tutorial for Image API.
  • New Cache Management document updated with more detail on the different cache categories.
  • Update Model Loading document to describe loading models from various sources like S3 and HDFS.

Enhancement

  • Add archive file support to SimpleRepository
  • ImageFolder supports nested folders
  • Add singleton method for LambdaBlock to avoid redundant function reference
  • Add Constant Initializer
  • Add RMSProp, Adagrad, Adadelta Optimizer for MXNet engine
  • Add new tabular dataset: Airfoil Dataset
  • Add new basic datasets: CookingExchange, BananaDetection
  • Add new NumPy-like operators: full, sign
  • Make prepare() method in Dataset optional
  • Add new image augmentation APIs that you can add to a Pipeline to enrich your image dataset (see the sketch after this list)
  • Add a handy fromNDArray method to the Image API for quickly converting an NDArray to an Image object
  • Add interpolation option for Image Resize operator
  • Support archive file for s3 repository
  • Import new SSD model from TensorFlow Hub into DJL model zoo
  • Import new Sentiment Analysis model from HuggingFace into DJL model zoo
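
For the image augmentation APIs mentioned above, a minimal sketch of composing transforms into a Pipeline; the specific transforms chosen here are just examples:

import ai.djl.modality.cv.transform.RandomFlipLeftRight;
import ai.djl.modality.cv.transform.Resize;
import ai.djl.modality.cv.transform.ToTensor;
import ai.djl.translate.Pipeline;

public class AugmentationExample {
    public static void main(String[] args) {
        // Chain transforms into a Pipeline and attach it to a dataset builder
        // (e.g. optPipeline(pipeline)) so every image is resized, randomly flipped,
        // and converted to a tensor
        Pipeline pipeline = new Pipeline()
                .add(new Resize(224, 224))
                .add(new RandomFlipLeftRight())
                .add(new ToTensor());
    }
}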

Breaking changes

  • Drop CUDA 9.2 support for all platforms, including Linux and Windows
  • The arguments of several blocks have changed to align with the signatures of other widely used deep learning frameworks; please refer to our Javadoc site
  • FastText is no longer a full Engine; it becomes part of the NLP utilities in favor of FastTextWordEmbedding
  • Move the warm-up out of the existing Tracker and introduce a new WarmUpTracker
  • MxPredictor no longer copies parameters by default; please make sure to use NaiveEngine when you run inference in a multi-threaded environment

Bug Fixes

  • Fix Validation Epoch Result bug
  • Fix multiple process downloading the same model bug
  • Fix potential concurrent write bug while downloading metadata.json
  • Fix URI parsing error on Windows
  • Fix multi-GPU training crash when the batch size is smaller than the number of devices
  • Fix not setting the number of inter-op threads for the PyTorch engine

Contributors

Thank you to the following community members for contributing to this release:

Christoph Henkelmann, Frank Liu, Jake Cheng-Che Lee, Jake Lee, Keerthan Vasist, Lai Wei, Qing Lan, Victor Zhu, Zach Kimberg, aksrajvanshi, gstu1130, 蔡舒起

DJL v0.27.0 Release

28 Mar 21:19

Key Changes

  • Upgrades for engines
    • OnnxRuntime 1.17.1 #3019
  • Enhancements for engines and API
    • Supports PyTorch imperative model loading from a stream #2981
    • Supports encoding/decoding String tensors #3034

Enhancement

Bug Fixes

Documentation

CI/CD

New Contributors

Full Changelog: v0.26.0...v0.27.0

DJL v0.26.0 Release

16 Jan 19:09

Key Changes

  • LlamaCPP Support. You can use DJL to run supported LLMs using the LlamaCPP engine. See the Chatbot example here to learn more.
  • Manual Engine Initialization. You can configure DJL to not load any engines at startup and query/register engines programmatically at runtime (see the sketch after this list).
  • Engine Updates:
    • PyTorch 2.1.1
    • Huggingface Tokenizers 0.15.0
    • OnnxRuntime 1.16.3
    • XGBoost 2.0.3
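
For manual engine initialization, a hedged sketch: it assumes a runtime registration entry point that accepts an EngineProvider such as PtEngineProvider; check the engine documentation for the exact call and for the startup option that disables automatic loading:

import ai.djl.engine.Engine;
import ai.djl.pytorch.engine.PtEngineProvider;

public class ManualEngineExample {
    public static void main(String[] args) {
        // With automatic engine loading disabled at startup, register only the engines you need
        // (PtEngineProvider is the PyTorch provider; registerEngine usage is an assumption)
        Engine.registerEngine(new PtEngineProvider());
        System.out.println("Available engines: " + Engine.getAllEngines());
        Engine engine = Engine.getEngine("PyTorch");
        System.out.println("Default device: " + engine.defaultDevice());
    }
}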

Enhancement

Bug Fixes

Documentation

CI/CD

New Contributors

Full Changelog: v0.25.0...v0.26.0

DJL v0.25.0 Release

09 Dec 00:13

Key Changes

  • Engine Upgrades
    • [XGB] support for .xgb file extension #2810
    • [Tokenizers] Upgrade tokenizers to 0.14.1 #2818
    • [XGB] Updates XGBoost to 2.0.1 #2833
  • Early Stopping support for Training by @jagodevreede #2806

Enhancement

Bug fixes

Documentation and Examples

CI

New Contributors

Full Changelog: v0.24.0...v0.25.0