ncnn is a high-performance neural network inference framework optimized for the mobile platform
Updated Dec 26, 2024 - C++
SHARK Studio -- Web UI for SHARK+IREE High Performance Machine Learning Distribution
Concrete: a TFHE compiler that converts Python programs into their FHE equivalent
BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.
MegCC is a deep learning model compiler with an ultra-lightweight runtime that is efficient and easy to port
VAST is an experimental compiler pipeline designed for program analysis of C and C++. It provides a tower of IRs as MLIR dialects to choose the best fit representations for a program analysis or further program abstraction.
Highly optimized inference engine for Binarized Neural Networks
More versed in engineering deployment than algorithm researchers, and more versed in algorithm models than engineers.