This Streamlit application allows users to search for legal opinions in a Milvus vector database and generate summaries using the Llama-3 language model and DSPy.
Before running the Streamlit application, ensure that the following prerequisites are met:
- Milvus standalone is running.
- The Llama-3 language model is running via Ollama.
- The vector database is initialized with the legal opinion documents.
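Before launching the app, it can help to verify that both services are reachable. The following is a small stdlib-only sketch that probes a TCP port (19530 is Milvus's default gRPC port and 11434 is Ollama's default HTTP port; the script itself is not part of this repository):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Default ports: Milvus standalone gRPC (19530), Ollama HTTP API (11434)
    for name, port in [("Milvus", 19530), ("Ollama", 11434)]:
        status = "up" if port_open("127.0.0.1", port) else "DOWN"
        print(f"{name} (port {port}): {status}")
```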
- Clone the repository:

  ```bash
  git clone https://github.com/dope-projects/llm-law-hackathon.git
  cd llm-law-hackathon
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Start the Milvus standalone server:

  ```bash
  wget https://raw.githubusercontent.com/milvus-io/milvus/master/scripts/standalone_embed.sh
  bash standalone_embed.sh start
  ```
- Start the Llama-3 language model on the Ollama server.
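Once the model is available in Ollama, the application talks to it over Ollama's local HTTP API. Below is a minimal sketch of a non-streaming call to the `/api/generate` endpoint, assuming the default port 11434 and the `llama3` model tag (the exact tag depends on which model you pulled):

```python
import json
from urllib import request

# Ollama's default local endpoint for text generation (assumption: default port)
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_generate_payload(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON body for a non-streaming Ollama generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_generate_payload(prompt)).encode("utf-8")
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("Summarize this legal opinion: ...")  # requires a running Ollama server
```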
- Initialize the vector database with the legal opinion documents:

  ```python
  from langchain_community.vectorstores import Milvus
  from langchain_openai import OpenAIEmbeddings

  embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
  vector_db = Milvus(
      embeddings,
      connection_args={"host": "127.0.0.1", "port": "19530"},
      collection_name="LangChainCollection",
  )

  # Add your legal opinion documents to the vector database
  documents = [...]  # List of legal opinion documents
  vector_db.add_documents(documents)
  ```
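Long opinions usually need to be split into smaller chunks before `add_documents` so that each embedding stays within the embedding model's input limit. Here is a stdlib-only sketch of overlapping character-based chunking (the 1000/200 sizes are illustrative assumptions, not values taken from this project):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most chunk_size characters,
    where each chunk overlaps the previous one by `overlap` characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

# Example: a 2,000-character opinion yields three overlapping chunks
opinion = "x" * 2000
chunks = chunk_text(opinion)
```

At query time, the app can then retrieve the most relevant chunks with LangChain's `vector_db.similarity_search(query, k=4)` before passing them to the summarizer.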
- Run the Streamlit application:

  ```bash
  streamlit run app.py
  ```
- The Milvus connection settings can be modified in the `vector_db` initialization code block.
- The Llama-3 language model settings can be adjusted in the `ollama_model` initialization code block.
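If you prefer not to edit the code blocks directly, the same settings can be read from environment variables with the hard-coded values as fallbacks. This is a hedged sketch; the variable names `MILVUS_HOST`, `MILVUS_PORT`, and `OLLAMA_MODEL` are suggestions, not part of this repository:

```python
import os

def milvus_connection_args() -> dict:
    """Read the Milvus host/port from the environment, falling back to the
    defaults used in the vector_db initialization block."""
    return {
        "host": os.environ.get("MILVUS_HOST", "127.0.0.1"),
        "port": os.environ.get("MILVUS_PORT", "19530"),
    }

def ollama_model_name() -> str:
    """Read the Ollama model tag from the environment (default: llama3)."""
    return os.environ.get("OLLAMA_MODEL", "llama3")
```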
This project is licensed under the MIT License.