Add finetune docs & minor improvements (#41)
FuzzyReason authored May 10, 2024
1 parent 4ff1bb7 commit 59c62aa
Showing 16 changed files with 113 additions and 89 deletions.
1 change: 1 addition & 0 deletions astro.config.mjs
@@ -76,6 +76,7 @@ export default defineConfig({
{ label: 'AI Toolbox', link: '/features/ai-toolbox/' },
{ label: 'Code Completion', link: '/features/code-completion/' },
{ label: 'Context', link: '/features/context/' },
{ label: 'Fine-tuning', link: '/features/finetuning/' },
]
},
],
Binary file added src/assets/ft_create.png
Binary file added src/assets/ft_data.png
Binary file added src/assets/ft_process.png
Binary file added src/assets/launch_ft.png
Binary file added src/assets/multi_gpu.png
Binary file added src/assets/project.png
Binary file added src/assets/select_lora.png
Binary file added src/assets/team_preferences.png
Binary file added src/assets/upload_files.png
6 changes: 6 additions & 0 deletions src/content/docs/features/ai-chat.md
@@ -5,6 +5,12 @@ description: A reference page for AI Chat.

You can ask questions about your code in the integrated AI chat, and it can answer them or generate new code for you based on the context of your current file.

### **Context Length**
Refact analyzes the code up to a certain length to provide suggestions.
Context length depends on the plan you have chosen for your account:
- **Free**: 4096 characters
- **Pro**: 16384 characters

## @-commands

This section outlines the commands that can be used in the AI chat. Below you can find information about the functionality and usage of each command.
4 changes: 3 additions & 1 deletion src/content/docs/features/ai-toolbox.md
@@ -56,4 +56,6 @@ Once you finish, save the file, and your custom toolbox command will be available
When entering the `/help` command, you will see your custom command in the list of available commands.


![Refact Toolbox](../../../assets/custom_command.png)

All of the commands in the Toolbox are available in the `~/.cache/refact/customization.yaml` file. If you want to reset the Toolbox to the default, you can delete this file.
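For instance, a minimal sketch of a reset, assuming only that the file lives at the path named above:

```python
from pathlib import Path

# Path taken from the docs above; deleting it resets the Toolbox to defaults.
cfg = Path.home() / ".cache" / "refact" / "customization.yaml"
if cfg.exists():
    cfg.unlink()
    print(f"Removed {cfg}; default Toolbox commands will be restored.")
```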
4 changes: 2 additions & 2 deletions src/content/docs/features/code-completion.md
@@ -20,8 +20,8 @@ Refact utilizes a technique called **Fill-in-the-middle** (FIM), where the context
### **Context Length**
Refact analyzes the code up to a certain length to provide suggestions.
Context length depends on the plan you have chosen for your account:
- **Free**: 4096 characters
- **Pro**: 16384 characters
- **Free**: 2048 characters
- **Pro**: 4096 characters
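To make those budgets concrete, here is a small, hypothetical sketch of how a fill-in-the-middle prompt could be assembled under a fixed character budget. The `<PRE>`/`<SUF>`/`<MID>` sentinels and the 2:1 prefix/suffix split are illustrative assumptions, not Refact's actual wire format:

```python
def build_fim_prompt(prefix: str, suffix: str, budget_chars: int = 2048) -> str:
    """Trim code around the cursor to fit a plan's character budget."""
    overhead = len("<PRE><SUF><MID>")          # sentinel tokens are assumptions
    usable = budget_chars - overhead
    prefix_keep = (2 * usable) // 3            # keep more of the prefix...
    suffix_keep = usable - prefix_keep         # ...than of the suffix
    prefix = prefix[-prefix_keep:]             # keep the text nearest the cursor
    suffix = suffix[:suffix_keep]
    return f"<PRE>{prefix}<SUF>{suffix}<MID>"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```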

### **Cache Mechanism**
To enhance performance, Refact caches previous computations and suggestions.
2 changes: 2 additions & 0 deletions src/content/docs/features/context.md
@@ -26,6 +26,8 @@ In order to enable RAG, you need to follow the instructions depending on the version
![RAG Settings](../../../assets/ast_vecdb.png)

:::note
RAG is most useful with a context size of **more than 2048 tokens**, which is available to **Pro users**.

Be aware that RAG indexing is a **resource-intensive process**, so you will see increased consumption of **GPU memory, RAM, and CPU**.
:::
### Refact Enterprise
98 changes: 98 additions & 0 deletions src/content/docs/features/finetuning.md
@@ -0,0 +1,98 @@
---
title: Fine-tuning
description: A reference page about fine-tuning.
---

Fine-tuning is the process of further training a pretrained base model on your code to improve the quality of generated code.

Fine-tuning is supported in the following versions of Refact.ai:
- [Self-hosted Refact](https://docs.refact.ai/guides/version-specific/self-hosted/)
- [Enterprise Refact](https://docs.refact.ai/guides/version-specific/enterprise/)

## Use-cases

Fine-tuning can be particularly useful for:
- Adapting the model to a specific programming language.
- Customizing the model for a particular technology stack.
- Aligning the model outputs with a predefined style guide.

## Creating a Fine-Tuned Model

### Create a Project
1. Navigate to the `Projects` dropdown.
2. Click the `New Project` button or choose an existing project if applicable.
3. In the pop-up window, enter the project name and click `Create`.

![Create Project](../../../assets/project.png)

### Add Fine-Tuning Data
You can add data for fine-tuning through the following methods:
- **Add Git Repository**:
- For public repositories, use an HTTPS link: `https://github.com/my_company/my_repo`.
- For private repositories, ensure an SSH Key is added, then use an SSH link: `git@github.com:my_company/my_repo.git`.
- Optionally, specify the branch to pull data from.
- **Upload Files**:
- **By Link**: Enter the URL of the file (e.g., `https://yourserver.com/file.zip`). Make sure to use a direct URL.
- **From Local Storage**: Click `Choose file` and select the file to upload from your local device.

![Add data](../../../assets/upload_files.png)
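If you go the upload route, a plain archive of your checkout works (archives such as `.zip` are accepted). Below is a hypothetical helper that packs a local repository into a `.zip` for the upload form; skipping `.git` internals is our own assumption about what you would want to ship, since Refact filters the rest itself:

```python
import zipfile
from pathlib import Path

def zip_repo(repo_dir: str, out_path: str) -> None:
    """Pack a local checkout into a .zip suitable for uploading."""
    root = Path(repo_dir)
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in root.rglob("*"):
            if ".git" in path.parts or not path.is_file():
                continue  # skip VCS internals and directories
            zf.write(path, path.relative_to(root))

# zip_repo("path/to/my_repo", "my_repo.zip")
```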

### Scan and Filter Files
1. Click `Run 'git pull', scan files`.
2. After scanning, the file types and counts are displayed in the **File Type Filter** section.
3. Use the checkboxes to select the file types you want to fine-tune on. Details on accepted and rejected files are available by clicking the **Full List** button.
:::note
For rejected files, reasons for rejection are provided next to each file name. To include rejected files, specify directory paths or paths to specific files in the **Include** section.
:::

![File type filter](../../../assets/ft_data.png)

### Start Fine-Tuning
1. Click `Proceed to Fine-tuning` or navigate to the **Finetune** page.
2. On the **Start New Finetune** page, select the project created in the previous steps.
3. Select the model you want to fine-tune from the **Select Model** dropdown.
![New Finetune](../../../assets/ft_create.png)
4. On the fine-tuning page, click `Start Fine-tuning`.
5. In the pop-up, name your fine-tuning session and select:
- **Train embeddings** for large code bases.
- **Keep it smaller** for smaller code bases.
- **GPUs** - select the GPUs to use for fine-tuning (GPU indices start from 0; see the sketch after this list). If you have multiple GPUs, you can select more than one.
![Finetune with Multiple GPUs](../../../assets/multi_gpu.png)
- Alternatively, manually adjust fine-tuning settings like model capacity or training schedule.

![Advanced settings](../../../assets/launch_ft.png)
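GPU numbering here follows the usual CUDA convention. Purely as an illustration (the server UI handles the selection for you), exposing the first two GPUs to a process looks like this; `CUDA_VISIBLE_DEVICES` is the standard CUDA environment variable, not a Refact-specific setting:

```python
import os

# GPU indices start at 0; "0,1" exposes the first two GPUs to this process.
# Must be set before the ML framework initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
```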

### Monitor Fine-Tuning
Once fine-tuning has started, you can monitor the progress on the **Finetune** page.

On the right side, the following information is displayed:
- **Chart** - shows the results of the fine-tuning
- **ETA bar** - shows the estimated time remaining
- **Information** - provides the following details:
  - **Logs** - the logs of the fine-tuning run
  - **Checkpoints** - the checkpoints created during fine-tuning
  - **Parameters** - the parameters used during fine-tuning
  - **Files** - the files used during fine-tuning

![Monitor fine-tuning](../../../assets/ft_process.png)

### Select the Base Model
Once the fine-tuning is completed, navigate to the **Model Hosting** page and click `Add Model` to choose the base model.

:::note
Refact.ai offers a variety of base models. Ensure you select the same model used during fine-tuning.
:::

After selecting the model, select the newly created LoRA **(the result of fine-tuning, which acts as a patch on top of the base model; see the sketch below)** in the **Finetune** row.

![Select base model](../../../assets/select_lora.png)
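Conceptually, a LoRA is a pair of small low-rank matrices whose product is added on top of a frozen base weight matrix, which is why it behaves like a patch. A minimal numeric sketch of the idea (the shapes and the `alpha / r` scaling follow the standard LoRA formulation and are not Refact-specific):

```python
import numpy as np

d, r, alpha = 8, 2, 16            # hidden size, LoRA rank, LoRA alpha
W = np.random.randn(d, d)         # frozen base-model weight
A = np.random.randn(r, d) * 0.01  # trained low-rank factor
B = np.zeros((d, r))              # starts at zero, so the patch begins as a no-op

W_patched = W + (alpha / r) * (B @ A)  # base model + LoRA "patch"
```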

### Enabling Fine-Tuning for Teams

With **Refact.ai Enterprise**, you can enable different fine-tuning options for different teams.

Navigate to the **Users** page. You will see a list of all users and the **Team Preferences** section.

Here you can specify the completion model. If you have multiple projects and fine-tuned models, you can specify the completion model for each project.

![Team Preferences](../../../assets/team_preferences.png)
87 changes: 1 addition & 86 deletions src/content/docs/guides/version-specific/self-hosted.md
@@ -9,7 +9,6 @@ Self-hosted version is designed for developers who want to have a full control over

## Prerequisites
- Docker with GPU support
- `docker-compose 1.29.2` or higher

## Installation

@@ -43,7 +42,7 @@ Check out or delete a docker volume: `docker volume inspect VVV`, `docker volume rm VVV`

### Sharding

You can choose to deploy a model to several GPUs with sharding. Select the number of GPUs that you would like to run your model on by choosing 1, 2, or 3 in the sharding menu.
You can choose to deploy a model to several GPUs with sharding. Select the number of GPUs that you would like to run your model on by choosing 1, 2, or 4 in the sharding menu.

### Shared GPU

@@ -57,90 +56,6 @@ If you have an OpenAI API key, you can connect it to Refact and use GPT-series models
With this integration, you will send your data to a third-party provider (OpenAI). To enable the OpenAI integration, go to settings (top right), then set and save your API key for server usage.
:::

## Deploy a LLM
![Deploy](../../../../assets/enterprise-deploy.png)

### Add one of the supported models

Each model supports different functions (chat / completion / toolbox / fine-tuning). The list of supported models and their functions can be found [here](https://docs.refact.ai/supported-models/supported-models/).

### Preparing a Dataset for Fine-tuning

Refact fine-tuning doesn't require you to prepare your dataset in any way - it happens automatically. In the sources tab, add links to your git repos (public and private are supported), or alternatively give it an archive (.zip, .tar.gz or .tar.bz2). You can also upload individual files; that's especially useful if you want to use specific held-out files as a test set for fine-tuning.

After you upload your dataset, Refact will filter the data automatically.
There are two stages of data filtering:
- The first stage uses an adapted version of Git Linguist to filter out binary files, known types of generated files, files with a lot of digits in them, and other files unsuitable for training. Duplicates are also removed at this stage.
- The second stage uses a language model and needs to run on a GPU. It filters out files whose loss is too high according to the language model. A high loss means the model cannot predict the text in the file very well, which happens with random data or text that is not code at all.

You can verify what this automatic process is doing by clicking the "Accepted" and "Rejected" links. These logs will give you the reason why any specific file was rejected. The second stage will also give you the loss values for each file.
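As a toy illustration of the kind of checks the first stage performs (the thresholds and the exact checks here are invented; the real Git Linguist-based filter is far more thorough):

```python
import hashlib

def stage_one_filter(files: dict[str, bytes]) -> dict[str, str]:
    """Return a {path: verdict} map mimicking first-stage filtering."""
    verdicts, seen = {}, set()
    for path, raw in files.items():
        if b"\x00" in raw:                        # crude binary-file check
            verdicts[path] = "rejected: not text"
            continue
        text = raw.decode("utf-8", errors="replace")
        digits = sum(c.isdigit() for c in text) / max(len(text), 1)
        if digits > 0.4:                          # invented threshold
            verdicts[path] = "rejected: lots of digits"
            continue
        digest = hashlib.sha256(raw).hexdigest()  # exact-duplicate detection
        if digest in seen:
            verdicts[path] = "rejected: duplicates"
            continue
        seen.add(digest)
        verdicts[path] = "accepted"
    return verdicts

print(stage_one_filter({"a.py": b"print(1)\n", "b.py": b"print(1)\n"}))
```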

### Data Scanning

During scanning, files uploaded to Refact as a fine-tuning dataset are validated to determine their suitability.

Files rejected during the validation process are dismissed and won't be incorporated into the dataset used in the fine-tuning stage.

The potential rejection reasons are listed below:

1. **Linguist error** - Indicates that Refact couldn't open the file or the file might be corrupted.
2. **Not text** - This reason applies to binary files, which are not suitable for fine-tuning.
3. **File is too large** - Files larger than 512 KB are rejected from the dataset.
4. **Excluded by mask** - Refers to files that are manually excluded.
5. **Duplicates** - Duplicated files are rejected from the dataset.
6. **Lots of digits** - If the percentage of digits in a file exceeds a certain threshold, the file is rejected from the filtered dataset.
7. **Filter empty** - This reason applies when perplexity (a metric assessing the model's predictive probability) cannot be calculated.

:::note
Files that didn't pass the linguist scanning cannot be included manually after the filtering process.
:::
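For reference, perplexity is the exponential of the average negative log-likelihood the model assigns to a file's tokens, which is also why it cannot be calculated for an empty token sequence (the "Filter empty" case above). A minimal sketch, assuming per-token log-probabilities are already available from a language model:

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """exp of the mean negative log-likelihood; undefined for empty input."""
    if not token_logprobs:
        raise ValueError("filter empty: perplexity cannot be calculated")
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

print(perplexity([-0.1, -2.3, -0.7]))  # higher = harder for the model to predict
```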

### Start Fine-Tuning

After your dataset has been filtered, you're ready to start the fine-tuning process.
First, select one of the pre-trained models for fine-tuning.
For a list of the models that currently support fine-tuning, see [here](https://docs.refact.ai/supported-models/supported-models/).

Once you start fine-tuning, the training time will be automatically determined by the dataset size and complexity.

The training process involves optimizing the model's weights and parameters to minimize the loss function and improve its performance.

#### Advanced Settings
You can specify custom parameters for fine-tuning in the "Advanced settings" tab.
For example, you can increase the model's capacity or change the learning schedule, i.e., make the training longer or shorter.

![Advanced settings](../../../../assets/enterprise-advanced.png)

- LoRA R / LoRA Alpha - hyperparameters related to the number of trainable low-rank parameters in the fine-tuned model and how strongly they are scaled.

- LoRA Init Scale - a hyperparameter used during the initialization of trainable weights in the fine-tuned model.

- LoRA Dropout - the probability at which dropout (a regularization technique) is applied to the LoRA trainable parameters.

- Learning Rate - a hyperparameter that determines the step size at each iteration while moving towards a minimum of the loss function. A higher learning rate can lead to faster convergence but might overshoot the optimal solution, while a smaller learning rate might converge slowly or get stuck in local minima.

- Batch Size - the number of training examples utilized in one iteration. For instance, if you have 1,000 training examples and your batch size is 100, it will take 10 iterations to complete one epoch.

- Warmup Num Steps - the initial phase of training, during which the learning rate gradually increases from a very small value to its originally set value. This warmup helps stabilize training at the beginning. For example, if warmup_num_steps is 1000, then for the first 1000 steps the learning rate increases linearly from nearly 0 to its set value.

- Weight Decay - a regularization technique used to prevent overfitting. It adds a penalty to the loss function, typically in the form of L2 regularization. During training, a fraction of the weights (defined by the weight decay rate) is subtracted, pushing the weights towards zero and preventing them from growing too large.

- Train Steps - the total number of steps (or iterations) for which the model will be trained. If you have a dataset of 1,000 examples and use a batch size of 100, then completing 10 steps means you have processed the entire dataset once (one epoch).

- Learning Rate Decay Steps - how often the learning rate should be decreased; reducing the learning rate over time often helps the model converge to a better solution. For example, if the learning rate decay steps are set to 5000, then every 5000 training steps the learning rate is decreased (multiplied) by a set factor (e.g., 0.9 or 0.95). A sketch combining warmup and decay follows this list.

- Low GPU Memory Mode - used when you have a GPU with a low amount of memory. It almost doubles the computation time but saves a significant amount of memory.
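Putting those pieces together, here is a minimal sketch of the resulting learning-rate schedule, using the example numbers from the list above (the base learning rate is an invented placeholder, and the exact schedule Refact uses may differ):

```python
def lr_at_step(step: int, base_lr: float = 1e-4, warmup_num_steps: int = 1000,
               decay_steps: int = 5000, decay_factor: float = 0.9) -> float:
    """Linear warmup to base_lr, then multiply by decay_factor every decay_steps."""
    if step < warmup_num_steps:
        return base_lr * step / warmup_num_steps  # ramps from ~0 up to base_lr
    return base_lr * decay_factor ** ((step - warmup_num_steps) // decay_steps)

# E.g., with 1,000 examples and batch size 100, one epoch is 10 train steps.
for s in (0, 500, 1000, 6000, 11000):
    print(s, lr_at_step(s))  # warmup for 1000 steps, then decay every 5000
```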

### Analyzing Fine-tuned Model

We automatically split the filtered files into train and test sets, and the plot shows two loss curves: train and test. Checkpoints with minimal test loss are considered the best.
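In other words, the recommended checkpoint is simply an argmin over test loss. A tiny sketch with hypothetical checkpoint records:

```python
checkpoints = [
    {"name": "ckpt-100", "test_loss": 1.42},
    {"name": "ckpt-200", "test_loss": 1.31},
    {"name": "ckpt-300", "test_loss": 1.35},  # test loss rising again: overfitting
]
best = min(checkpoints, key=lambda c: c["test_loss"])
print(best["name"])  # -> ckpt-200, the checkpoint with minimal test loss
```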

### Using a Fine-tuned Model

- Select which checkpoint from the latest fine-tune run you want to use: the best one from the latest run, or specify a custom checkpoint.
- If you want to use the base model without fine-tuning, switch the toggle to “off”.
- Once you select which fine-tuned model to use, the suggestions from it will appear automatically in your IDE.

## Custom Inference setup

Go to the plugin settings and set a custom inference URL: `http://127.0.0.1:8008`.
