VectorVerse is an exploratory platform that serves as a hub for exploring the output of various Vector Databases. It also lets you compare the output generated by multiple Large Language Models (LLMs), including private models, so you can gain insights and make informed decisions based on a side-by-side analysis of different data sources and models.
- Multiple Vector Databases: VectorVerse lets you explore multiple Vector Databases and compare/observe their results.
- LLM Models: VectorVerse lets you explore the output of multiple LLM models such as GPT3, GPT4, GPT4All, etc.
- Chat history is maintained using SQLite.
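To illustrate the chat-history feature, here is a minimal sketch of how messages could be persisted in SQLite. The table name and columns (`chat_history`, `session_id`, `role`, `content`) are hypothetical placeholders, not VectorVerse's actual schema.

```python
import sqlite3

# Hypothetical schema -- VectorVerse's actual table layout may differ.
conn = sqlite3.connect("chat_history.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS chat_history (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        session_id TEXT NOT NULL,
        role TEXT NOT NULL,          -- "user" or "assistant"
        content TEXT NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
    """
)

def save_message(session_id: str, role: str, content: str) -> None:
    """Append one chat message to the history."""
    conn.execute(
        "INSERT INTO chat_history (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )
    conn.commit()

def load_history(session_id: str) -> list[tuple[str, str]]:
    """Return (role, content) pairs for a session, oldest first."""
    rows = conn.execute(
        "SELECT role, content FROM chat_history WHERE session_id = ? ORDER BY id",
        (session_id,),
    )
    return rows.fetchall()
```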
Currently Supported LLM Models
- GPT3
- GPT4
- GPT4All
- LLaMA
Create a `.env` file (a template is provided as `example.env`) and update the following:
Then, download the LLM model and place it in a directory of your choice:
- LLM: defaults to ggml-gpt4all-j-v1.3-groovy.bin. If you prefer a different GPT4All-J compatible model, just download it and reference it in your `.env` file.
OPENAI_API_KEY=*****
OPENAI_API_BASE=*****
OPENAI_API_TYPE=azure
OPENAI_API_VERSION=2023-03-15-preview
MODEL_TYPE=GPT4All                         # LlamaCpp or GPT4All
LLAMA_EMBEDDINGS_MODEL=/path/to/ggml-model-q4_0.bin
MODEL_PATH=/path/to/ggml-gpt4all-j-v1.3-groovy.bin
db_persistent_path=/path/to/vectorstore    # folder where the vector store is persisted
collection_name=examples
pdf_uploadpath=/path/to/uploads            # optional
Note: because of the way langchain loads the SentenceTransformers embeddings, the first time you run the script it will require an internet connection to download the embeddings model itself.
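As a rough sketch of how these settings might be consumed, the snippet below loads the `.env` file with python-dotenv and picks a local LLM backend with the `GPT4All`/`LlamaCpp` wrappers as named in the 2023-era `langchain` package. It mirrors the variable names above but is not VectorVerse's actual loading code.

```python
import os

from dotenv import load_dotenv
from langchain.llms import GPT4All, LlamaCpp

# Read the variables defined in .env into the process environment.
load_dotenv()

model_type = os.environ["MODEL_TYPE"]    # "GPT4All" or "LlamaCpp"
model_path = os.environ["MODEL_PATH"]

# Pick the local LLM backend based on MODEL_TYPE.
if model_type == "GPT4All":
    llm = GPT4All(model=model_path, verbose=False)
elif model_type == "LlamaCpp":
    llm = LlamaCpp(model_path=model_path, verbose=False)
else:
    raise ValueError(f"Unsupported MODEL_TYPE: {model_type}")

print(llm("What is a vector database?"))
```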
Run `docker-compose up` and browse to http://localhost:8501
- Git clone the project
- Navigate to the directory where the repository was downloaded: `cd vectorverse`
- Install the required dependencies: `pip install -r requirements.txt`
- Run `run_es.sh`, `run_pg.sh`, and `run_redis.sh`, or set up your own databases
- Run the project with `python -m verse` and access the URL http://localhost:8501
Configure OpenAI Key
* If using an OpenAI key, simply `export OPENAI_API_KEY=*****`
* If you want to use a config file, rename `example.env` to `.env` inside the `vectorverse` dir and update either the Azure or the OpenAI config
Once these steps are complete, the API keys for your project are properly configured.
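For illustration, the sketch below shows how the Azure-related variables from `.env` could be mapped onto the module-level settings of the `openai` 0.x Python client; VectorVerse's own configuration code may differ.

```python
import os

import openai

# Standard OpenAI: only the API key is required.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Azure OpenAI: the remaining variables from .env are needed as well.
if os.getenv("OPENAI_API_TYPE") == "azure":
    openai.api_type = "azure"
    openai.api_base = os.environ["OPENAI_API_BASE"]
    openai.api_version = os.environ["OPENAI_API_VERSION"]
```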
- Redis Stack Server
- ElasticSearch
Or use the `docker-compose.yml` provided with the code
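Before launching, you can optionally confirm that Redis and Elasticsearch are reachable on their default local ports. This is a convenience check using the `redis` and `elasticsearch` Python clients, not part of the VectorVerse setup itself:

```python
import redis
from elasticsearch import Elasticsearch

# Ping Redis Stack on its default port.
r = redis.Redis(host="localhost", port=6379)
print("Redis reachable:", r.ping())

# Ping Elasticsearch on its default port.
es = Elasticsearch("http://localhost:9200")
print("Elasticsearch reachable:", es.ping())
```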
- Check out the project and go to the project root dir `VectorVerse`
- If Redis/ES are not preinstalled, run `docker compose up`
- Launch the app: `python -m verse`
- Powered by Langchain
- Uploader Inspired by Quivr