Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs.
This repository provides a simple setup for running Open WebUI with Docker Compose.
Before you start, ensure that you have the following installed on your machine:
- Docker
- Docker Compose
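To confirm both are installed, you can check their versions from a terminal:

```bash
docker --version
docker-compose --version
```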
- Clone the Repository:

```bash
git clone https://github.com/duyl97/open-webui.git
cd open-webui
```
- Copy the Environment File: before starting the services, create a .env file by copying the provided .env.example:

```bash
cp .env.example .env
```
Make sure to update the .env file with any necessary environment-specific configurations.
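As a rough illustration, the .env file might hold values like the ones below; the variable names here are examples only, and .env.example in the repository is the authoritative list:

```bash
# Illustrative values; consult .env.example for the actual variable names.
OLLAMA_BASE_URL=http://host.docker.internal:11434  # where Open WebUI reaches Ollama
OPENAI_API_KEY=sk-...                              # only if using an OpenAI-compatible API
WEBUI_SECRET_KEY=change-me                         # secret used for session signing
```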
- Run Docker Compose: to start Open WebUI, run:

```bash
docker-compose up -d
```
This command will build and start all the services defined in the docker-compose.yml file.
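For orientation, a minimal sketch of what such a service definition typically looks like is shown below; the image tag, port mapping, and volume name are assumptions, and the docker-compose.yml in this repository is authoritative:

```yaml
# Minimal sketch of an Open WebUI service; values are illustrative.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main  # upstream image (assumed tag)
    ports:
      - "3000:8080"                 # host port 3000 -> container port 8080
    volumes:
      - open-webui:/app/backend/data  # persists accounts and chat history
    env_file:
      - .env                        # loads the configuration copied above
    restart: unless-stopped

volumes:
  open-webui:
```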
- Access the WebUI: once the services are up and running, open your browser and navigate to:

```
http://localhost:3000
```
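To confirm the container is serving before opening a browser, a quick check from the shell works:

```bash
# Expect an HTTP 200 (or a redirect) once the service is ready.
curl -I http://localhost:3000
```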
If you need to customize the setup (e.g., environment variables, port numbers), modify the .env file or the docker-compose.yml file.
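For example, to expose the UI on a different host port, you could adjust the port mapping in docker-compose.yml; the 8080 container port here is an assumption based on the upstream image:

```yaml
services:
  open-webui:
    ports:
      - "8081:8080"  # serve the UI at http://localhost:8081 instead of 3000
```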
To connect Open WebUI to Azure OpenAI, use the shared Azure OpenAI function.
An alternative to that function is LiteLLM Proxy, which lets you call many LLM APIs using the OpenAI format (Bedrock, Hugging Face, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.).
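If you go the LiteLLM Proxy route, its configuration maps provider models onto OpenAI-style names. The sketch below assumes a LiteLLM config.yaml and an Azure deployment; treat the resource and deployment names as placeholders:

```yaml
# Illustrative LiteLLM proxy config; names and endpoints are placeholders.
model_list:
  - model_name: gpt-4o                    # name clients will request
    litellm_params:
      model: azure/my-gpt4o-deployment    # hypothetical Azure deployment name
      api_base: https://my-resource.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY   # read from the environment
```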
To stop the running services, use:

```bash
docker-compose down -v
```

This will stop and remove the containers. Note that the -v flag also removes the associated volumes, which deletes any stored data such as user accounts and chat history; omit -v if you want to keep that data.
If you encounter any issues, ensure that Docker and Docker Compose are installed correctly and running. If problems persist, check the logs:

```bash
docker-compose logs
```
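To follow the output of a single service as it starts (the service name open-webui is assumed from the compose file), you can tail its logs:

```bash
docker-compose logs -f open-webui
```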
This project is licensed under the MIT License - see the LICENSE file for details.