Merge pull request #3407 from dfinity/ielashi/ai_inference
docs: Add page on decentralized AI inference
Showing 3 changed files with 79 additions and 3 deletions.

---
keywords: [intermediate, concept, AI, ai, deAI, deai]
---

import { MarkdownChipRow } from "/src/components/Chip/MarkdownChipRow";

# Decentralized AI inference

<MarkdownChipRow labels={["Intermediate", "Concept", "DeAI" ]} />

## Overview

Inference in the context of decentralized AI refers to using a trained model to draw conclusions about new data. Canister smart contracts can run inference in a number of ways, depending on the decentralization and performance requirements.

Canisters can run inference on-chain, on-device, or through HTTPS outcalls.

## Inference on-chain

Currently, ICP supports on-chain inference of small models using AI libraries such as [Sonos Tract](https://github.com/sonos/tract) that compile to WebAssembly. Check out the [image classification example](/docs/current/developer-docs/ai/ai-on-chain) to learn how it works.
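
Below is a minimal sketch of what on-chain inference with Tract can look like in a Rust canister, assuming an ONNX model small enough to embed in the Wasm binary. The model file name, input shape, and method name are illustrative assumptions, not part of the linked example.

```rust
// Hypothetical sketch of on-chain inference with Sonos Tract.
// "model.onnx", the 1xN input shape, and the method name are assumptions.
use tract_onnx::prelude::*;

// Model weights compiled into the canister's Wasm binary at build time.
const MODEL_BYTES: &[u8] = include_bytes!("model.onnx");

#[ic_cdk::query]
fn predict(input: Vec<f32>) -> Vec<f32> {
    // Parse and optimize the model. A real canister would do this once
    // (e.g., in init/post_upgrade) and cache the runnable model.
    let model = tract_onnx::onnx()
        .model_for_read(&mut std::io::Cursor::new(MODEL_BYTES))
        .expect("failed to parse model")
        .into_optimized()
        .expect("failed to optimize model")
        .into_runnable()
        .expect("failed to make model runnable");

    // Treat the input as a single 1xN tensor for illustration.
    let len = input.len();
    let tensor = tract_ndarray::Array2::from_shape_vec((1, len), input)
        .expect("bad input shape")
        .into_tensor();

    // Run inference and return the first output as a flat vector.
    let outputs = model.run(tvec!(tensor.into())).expect("inference failed");
    outputs[0]
        .to_array_view::<f32>()
        .expect("unexpected output type")
        .iter()
        .cloned()
        .collect()
}
```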

### Examples

- [GPT2](https://github.com/modclub-app/rust-connect-py-ai-to-ic/tree/main/internet_computer/examples/gpt2): An example of GPT2 running on-chain using Rust.
- [ELNA AI](https://github.com/elna-ai): A fully on-chain AI agent platform and marketplace. Supports both on-chain and off-chain LLMs. [Try it here](https://dapp.elna.ai/).
- [Tensorflow on ICP](https://github.com/carlosarturoceron/decentAI): An Azle example that uses TypeScript and a pre-trained model for making predictions.
- [ICGPT](https://github.com/icppWorld/icgpt): A React frontend that uses a C/C++ backend running an LLM fully on-chain. [Try it here](https://icgpt.icpp.world/).
- [ArcMind AI](https://github.com/arcmindai/arcmindai): An autonomous agent written in Rust that uses chain of thought for reasoning and actions. [Try it here](https://arcmindai.app).

### On-chain inference frameworks

- [Sonos Tract](https://github.com/sonos/tract): An open-source AI inference engine written in Rust that supports ONNX, TensorFlow, and PyTorch models and compiles to WebAssembly.
  [The image classification example](https://github.com/dfinity/examples/tree/master/rust/image-classification) explains how to integrate it into a canister to run on ICP.
- [MotokoLearn](https://github.com/ildefons/motokolearn): A Motoko package that enables on-chain machine learning.
- [Rust-Connect-Py-AI-to-IC](https://github.com/jeshli/rust-connect-py-ai-to-ic): An open-source tool for deploying and running Python AI models on-chain using Sonos Tract.
- [Burn](https://github.com/tracel-ai/burn): An open-source deep learning framework written in Rust that supports ONNX and PyTorch models and compiles to WebAssembly.
  [The MNIST example](https://github.com/smallstepman/ic-mnist) explains how to integrate it into a canister to run on ICP. [Try it here](https://jsi2g-jyaaa-aaaam-abnia-cai.icp0.io/).
- [Candle](https://github.com/huggingface/candle): A minimalist ML framework for Rust that compiles to WebAssembly.
  [An AI chatbot example](https://github.com/ldclabs/ic-panda/tree/main/src/ic_panda_ai) shows how to run a Qwen 0.5B model in a canister on ICP.

## Inference on-device

An alternative to running the model on-chain is to download the model from a canister and run inference on the user's local device. If users trust their own devices, they can trust that the inference ran correctly.

A disadvantage of this workflow is that the model needs to be downloaded to the user's device, which reduces the confidentiality of the model and degrades the user experience due to increased latency.

ICP supports this workflow for most existing models because a smart contract on ICP can store models up to 400GiB in size.
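
As a rough sketch of the serving side of this workflow, a canister can expose the stored model in fixed-size chunks that a client fetches and reassembles before running inference locally. The chunk size and method names below are illustrative assumptions.

```rust
// Hypothetical sketch: serving a stored model to clients chunk by chunk.
// CHUNK_SIZE and the method names are assumptions for illustration.
use std::cell::RefCell;

const CHUNK_SIZE: usize = 2 * 1024 * 1024; // keep each response comfortably small

thread_local! {
    // Model bytes held in canister memory, e.g., uploaded via an update method.
    static MODEL: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

#[ic_cdk::query]
fn model_size() -> u64 {
    MODEL.with(|m| m.borrow().len() as u64)
}

#[ic_cdk::query]
fn model_chunk(index: u64) -> Vec<u8> {
    // Clients call this with index = 0, 1, 2, ... and concatenate the
    // returned chunks to reconstruct the model file on their device.
    MODEL.with(|m| {
        let m = m.borrow();
        let start = (index as usize).saturating_mul(CHUNK_SIZE);
        let end = start.saturating_add(CHUNK_SIZE).min(m.len());
        m.get(start..end).map(|s| s.to_vec()).unwrap_or_default()
    })
}
```

The client can then load the reassembled model with whatever runtime fits the device, as the in-browser example below does.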

### Examples

- [DeVinci](https://github.com/patnorris/DecentralizedAIonIC): An in-browser AI chatbot that uses an open-source LLM served from ICP. [Try it here](https://x6occ-biaaa-aaaai-acqzq-cai.icp0.io/).

## Inference with HTTP calls

Smart contracts running on ICP can make [HTTP requests through HTTP outcalls](/docs/current/developer-docs/smart-contracts/advanced-features/https-outcalls/https-outcalls-overview) to Web2 services, including OpenAI and Claude.
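
A minimal sketch of this pattern with ic-cdk's HTTPS outcall API is shown below. The endpoint URL, request body, and cycles amount are placeholder assumptions, and a production canister would also attach a transform function so that all replicas agree on the response.

```rust
// Hypothetical sketch: querying an external AI API via an HTTPS outcall.
// The URL, payload format, and cycles amount are placeholders.
use ic_cdk::api::management_canister::http_request::{
    http_request, CanisterHttpRequestArgument, HttpHeader, HttpMethod,
};

#[ic_cdk::update]
async fn ask(prompt: String) -> String {
    let request = CanisterHttpRequestArgument {
        url: "https://api.example.com/v1/completions".to_string(), // placeholder
        method: HttpMethod::POST,
        headers: vec![HttpHeader {
            name: "Content-Type".to_string(),
            value: "application/json".to_string(),
        }],
        body: Some(format!(r#"{{"prompt":{:?}}}"#, prompt).into_bytes()),
        max_response_bytes: Some(2_000_000),
        transform: None, // a real canister should normalize the response here
    };

    // Cycles attached to pay for the outcall; the required amount depends on
    // request/response size and subnet size, so treat this as a placeholder.
    match http_request(request, 30_000_000_000).await {
        Ok((response,)) => String::from_utf8_lossy(&response.body).to_string(),
        Err((code, msg)) => format!("HTTPS outcall failed: {:?} {}", code, msg),
    }
}
```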

### Examples

- [Juno + OpenAI](https://github.com/peterpeterparker/juno-openai): An example using Juno and OpenAI to generate images from prompts. [Try it here](https://pycrs-xiaaa-aaaal-ab6la-cai.icp0.io/).