This project implements an API server for SentenceTransformers embedding models that aims to be a drop-in replacement for llama-cpp-python's webserver embeddings.
If you are using the llama-cpp-python webserver and you are experiencing poor embedding performance, you can now try another embedder without modifying your client code!
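To give an idea of what "without modifying your client code" means, here is a minimal client sketch. It assumes the server answers the same OpenAI-style `/v1/embeddings` route that llama-cpp-python's webserver exposes and listens on `localhost:8000`; the host, port and payload fields are assumptions, so adjust them to your actual deployment.

```python
# Minimal client sketch -- the endpoint path, port and field names are assumptions
# based on llama-cpp-python's OpenAI-compatible /v1/embeddings route.
import requests

response = requests.post(
    "http://localhost:8000/v1/embeddings",  # hypothetical host/port
    json={"input": ["Hello world", "Another sentence to embed"]},
    timeout=30,
)
response.raise_for_status()

for item in response.json()["data"]:
    # print the embedding dimension and the first few values of each vector
    print(len(item["embedding"]), item["embedding"][:5])
```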
Clone the project:

```bash
git clone https://github.com/AlessandroSpallina/SentenceTransformersServer.git
```
Choose the model you want to use for embeddings; the SentenceTransformers documentation lists all the supported models.
Keep in mind that the right model to pick mostly depends on your use case and your data, so try a few similarity searches to understand which model best fits your needs. For example, you can start by comparing all-MiniLM-L6-v2 and all-mpnet-base-v2 (see the sketch below). You might want to check this leaderboard too.
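A rough way to compare candidates is to embed a few of your own sentences with each model and look at the pairwise cosine similarities. The sketch below does this locally with the `sentence-transformers` library; the sample sentences are placeholders for your own data.

```python
# Quick local comparison of two candidate models on your own sentences.
# Replace the sample sentences with data representative of your use case.
from sentence_transformers import SentenceTransformer, util

sentences = [
    "How do I reset my password?",
    "I forgot my login credentials.",
    "What is the weather like today?",
]

for model_name in ("all-MiniLM-L6-v2", "all-mpnet-base-v2"):
    model = SentenceTransformer(model_name)
    embeddings = model.encode(sentences, convert_to_tensor=True)
    similarities = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarity matrix
    print(model_name)
    print(similarities)
```

Sentences that should be "close" for your application should get noticeably higher scores than unrelated ones; pick the model where that separation is clearest on your data.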
When you are done, rename .env.example to .env and set the model name accordingly. If you are behind a corporate proxy, remember to uncomment the relevant section in the docker-compose.yml file.
Then lay your hands on the keyboard, close your eyes, and run:

```bash
docker compose up
```