[UPDATE] Add Linux Support for Text-Summarizer
Added Linux support for the Text-Summarizer-Browser-Plugin.
The Text Summarizer browser plugin is now compatible with both
Windows and Linux platforms.
AlekhyaVemuri authored Jan 2, 2025
1 parent f848941 commit 956eb77
Showing 4 changed files with 77 additions and 40 deletions.
30 changes: 26 additions & 4 deletions Text-Summarizer-Browser-Plugin/README.md
@@ -1,4 +1,4 @@
# Text summarizer browser Plugin
# Text summarizer browser Plugin Sample

A plug-and-play Chrome extension that seamlessly integrates with Flask and leverages an OpenVINO backend for fast, efficient summarization of webpages (via URL) and PDFs (via upload). Powered by LangChain tools, it handles advanced tasks like text splitting and vectorstore management to deliver accurate, meaningful summaries.

@@ -14,10 +14,32 @@ The directory contains:

## Prerequisites

| Optimized for | Description |
| :------------ | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| OS | Windows 11 64-bit (22H2, 23H2) and newer or Ubuntu* 22.04 64-bit (with Linux kernel 6.6+) and newer |
| Hardware | Intel® Core™ Ultra Processors |
| Software | 1. [Intel® GPU drivers from Intel® Arc™ & Iris® Xe Graphics for Windows](https://www.intel.com/content/www/us/en/download/785597/intel-arc-iris-xe-graphics-windows.html) or [Linux GPU drivers](https://dgpu-docs.intel.com/driver/client/overview.html) <br> 2. NPU (optional): [Intel® NPU Driver for Windows](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html) or [Linux NPU Driver](https://github.com/intel/linux-npu-driver/releases) |
| Browsers | [Google Chrome](https://www.google.com/chrome/dr/download/?brand=MRUS&ds_kid=43700079286123654&gad_source=1&gclid=EAIaIQobChMI0J3fybvSigMV5dXCBB1TDARCEAAYASAAEgL36_D_BwE&gclsrc=aw.ds) & [Microsoft Edge](https://www.microsoft.com/en-us/edge/download?form=MA13FJ) |


1. **Install the necessary tools/packages:**
- [Git on Windows](https://git-scm.com/downloads)
- [Miniforge](https://conda-forge.org/download/)
- [Google Chrome for Windows](https://www.google.com/chrome/?brand=OZZY&ds_kid=43700080794581137&gad_source=1&gclid=Cj0KCQiAoae5BhCNARIsADVLzZdwNNB5nIyjZ8OyCzg6h_cCig1eoaYquUSEd7BAigJhTzps1Kxuop8aArE6EALw_wcB&gclsrc=aw.ds)
- Git
- [Git for Windows](https://git-scm.com/downloads)
- Git for Linux
```bash
sudo apt update && sudo apt install git
```
- Miniforge
- [Miniforge for Windows](https://conda-forge.org/download/)
- Miniforge for Linux
Download and install Miniforge using the commands below.
```bash
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh
cd </move/to/miniforge3/bin/folder>
./conda init
```
Replace `</move/to/miniforge3/bin/folder>` with the actual path to your Miniforge `bin` folder and run the `cd` command to move there. Then initialize conda with `./conda init` and restart the terminal.
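The `$(uname)` and `$(uname -m)` substitutions in the download URL above select the installer matching your OS and CPU architecture. A quick way to preview which file will be fetched (a sketch; the output varies by machine):

```shell
# Show which Miniforge installer filename the URL substitutions resolve to
installer="Miniforge3-$(uname)-$(uname -m).sh"
echo "$installer"
```

On a typical x86-64 Ubuntu machine this prints `Miniforge3-Linux-x86_64.sh`.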


2. **Create a Conda Environment:**
83 changes: 49 additions & 34 deletions Text-Summarizer-Browser-Plugin/TextSummarizerPlugin.ipynb
@@ -61,8 +61,45 @@
"metadata": {},
"source": [
"Before converting the models & running the plugin, make sure you have followed all the [steps to prepare the environment](./README.md/#prerequisites) listed below:\n",
"- Cloning the Text-Summarizer Plugin Repository\n",
"- Creating conda environment & Installing necessary packages"
"- Set up the conda environment\n",
" ```bash\n",
" conda create -n summarizer_plugin python=3.11 libuv\n",
" ```\n",
" ```bash\n",
" conda activate summarizer_plugin\n",
" ```\n",
" ```bash\n",
" python -m pip install ipykernel tqdm ipywidgets\n",
" ```\n",
" ```bash\n",
" python -m ipykernel install --user --name=summarizer_plugin\n",
" ```\n",
" Now choose the `summarizer_plugin` kernel in the notebook.\n",
"- Installing necessary packages"
]
},
{
"cell_type": "markdown",
"id": "787b4825-ad2d-4b1f-9ae2-d03da418ac29",
"metadata": {},
"source": [
"#### Installing necessary packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25425663-cca9-47d7-a93f-5ec6239de66b",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"import subprocess\n",
"import warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"\n",
"os.system(f\"{sys.executable} -m pip install -r requirements.txt\")"
]
},
{
@@ -101,31 +138,17 @@
{
"cell_type": "code",
"execution_count": null,
"id": "7b4a2914-f5ab-4464-9e1f-b3ee852eac38",
"id": "29a146b7-2aad-43be-8a8a-68bb1d885a02",
"metadata": {},
"outputs": [],
"source": [
"! mkdir models && cd models "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7c15312-3c8b-42d0-bc1c-ca4987124708",
"metadata": {},
"outputs": [],
"source": [
"! optimum-cli export openvino --model Qwen/Qwen2-7B-Instruct --weight-format int4 ov_qwen7b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dbb72875-157c-4fab-9ecf-4ba1e15f5327",
"metadata": {},
"outputs": [],
"source": [
"! optimum-cli export openvino --model meta-llama/Llama-2-7b-chat-hf --weight-format int4 ov_llama_2"
"from pathlib import Path\n",
"import os\n",
"ROOT_DIR = Path.cwd()\n",
"MODEL_DIR = ROOT_DIR / 'models'\n",
"MODEL_DIR.mkdir(parents=True, exist_ok=True)\n",
"os.system(f\"optimum-cli export openvino --model Qwen/Qwen2-7B-Instruct --weight-format int4 {MODEL_DIR}/ov_qwen7b\")\n",
"os.system(f\"optimum-cli export openvino --model meta-llama/Llama-2-7b-chat-hf --weight-format int4 {MODEL_DIR}/ov_llama_2\")"
]
},
{
@@ -322,9 +345,9 @@
" if model_id:\n",
" try:\n",
" if model_id==\"Meta LLama 2\":\n",
" model_path=r\"models\\ov_llama_2\"\n",
" model_path='models/ov_llama_2'\n",
" elif model_id==\"Qwen 7B Instruct\":\n",
" model_path=r\"models\\ov_qwen7b\"\n",
" model_path='models/ov_qwen7b'\n",
" model = OVModelForCausalLM.from_pretrained(model_path , device='GPU')\n",
" tokenizer = AutoTokenizer.from_pretrained(model_path)\n",
" pipe=pipeline(\n",
@@ -823,14 +846,6 @@
"source": [
"app.run(port=5000)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef84aa0b-6e95-460c-a55b-f6d4938992ae",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
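The install cell added to the notebook runs pip through `sys.executable`. A minimal sketch of why that matters — `subprocess` is used here for illustration and is not part of the commit:

```python
import subprocess
import sys

# Invoking the interpreter via sys.executable guarantees that packages
# (or, here, a child process) use the exact Python backing the active
# kernel, not whatever "python" or "pip" is first on PATH.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.version_info.major)"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # prints 3 on any Python 3 interpreter
```

The same reasoning applies to `{sys.executable} -m pip install -r requirements.txt` in the notebook: the requirements land in the conda environment that owns the kernel.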
4 changes: 2 additions & 2 deletions Text-Summarizer-Browser-Plugin/backend/code.py
@@ -54,9 +54,9 @@ def load_llm(model_id):
if model_id:
try:
if model_id == "Meta LLama 2":
model_path = r"..\models\ov_llama_2"
model_path = '../models/ov_llama_2'
elif model_id == "Qwen 7B Instruct":
model_path = r"..\models\ov_qwen7b"
model_path = '../models/ov_qwen7b'
model = OVModelForCausalLM.from_pretrained(
model_path, device='GPU')
tokenizer = AutoTokenizer.from_pretrained(model_path)
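The backslash-to-forward-slash path changes in this file (and in the notebook) are what make the model paths work on Linux. A minimal sketch of the same idea using `pathlib`, which normalizes separators for the host OS — illustrative, not part of the commit:

```python
from pathlib import Path

# Joining with "/" works on both Windows and Linux; pathlib renders
# the separator appropriate to the host OS, and as_posix() always
# yields forward slashes.
model_path = Path("..") / "models" / "ov_llama_2"
print(model_path.as_posix())  # ../models/ov_llama_2
```

Raw strings with backslashes such as `r"..\models\ov_llama_2"` only resolve on Windows, which is why the commit replaces them.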
Binary file not shown.
