Valorant-Object-Detection

Object detection for Valorant with YOLO models

Built using YOLOv8n and YOLOv8x. Simply choose a file in the application window and it will return the image or video with its predictions shown.

Table of Contents

  1. Installation & Requirements
  2. Application Setup
  3. Usage
  4. Settings
  5. Training, Tests, and Comparison

Snippet from Inference Test

(demo GIF: model-in-action)

Installation & Requirements

Note

Python 3.8 was used for this project.
An Nvidia RTX 3070 GPU was used for training.

Running with TensorRT

TensorRT models can be up to 6x faster than PyTorch models on Nvidia GPUs.

You'll need to have CUDA and cuDNN installed.
See CUDA docs and cuDNN docs for help.

If you are not able to use TensorRT, you can use the torch models with a CPU.

Important

Everything must be compatible with your Python version and hardware.
Check the Nvidia GPU Compatibility list.
You need to install PyTorch whether you're using TensorRT or not.
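
If you'd rather build the engine files yourself, Ultralytics can export the downloaded .pt weights to TensorRT. A minimal sketch, assuming the ultralytics and tensorrt packages are installed and a compatible GPU is present (the filename follows the Model Downloads section below):

```python
from ultralytics import YOLO

# Load the PyTorch weights (path follows the Model Downloads layout below)
model = YOLO("bestv8n.pt")

# Export to a TensorRT engine; requires CUDA, cuDNN, and the tensorrt package.
# This writes bestv8n.engine next to the .pt file.
model.export(format="engine")
```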

Clone the Repo

git clone https://github.com/alcheeee/Valorant-Object-Detection

Go to the project directory

cd Valorant-Object-Detection

Install requirements.txt

pip install -r requirements.txt

After cloning the repository and installing the requirements, go to the application/ directory.

Application

For the application GUI, Eel was used with Brave Portable, but you can use Chrome or Edge.

Brave-Portable

Note

Alternatively, you can just use Chrome or Edge.

Install Brave Portable in this directory. The application/ folder should look something like this:

application/
├── brave/
│   └── brave-portable.exe
├── gui/
├── model/
├── predictions/
├── upload/
├── utils/
├── __init__.py
├── config.py
└── main.py

Now navigate to browsers.py inside your Python environment: <your-env>\Lib\site-packages\eel\browsers.py.

Add the following at line 58, just before the elif mode == 'custom': branch. It should look like this:

      pass
    elif mode == 'brave':
        # Register portable Brave and open each start URL in app mode
        for url in start_urls:
            brave_path = "brave/brave-portable.exe"
            wbr.register('brave-portable', None, wbr.BackgroundBrowser(brave_path))
            wbr.get('brave-portable').open('--app=' + url)
    elif mode == 'custom':

Chrome or Edge

If using Brave is too much work, you can use Chrome or Edge to access the GUI.

In main.py, change:

  if __name__ == '__main__':
      eel.start('main.html',
                size=(566, 639),
                mode='brave',  # change this to 'chrome' or 'edge'
                cmdline_args=['--app'])

Model Downloads

Due to the file size of the models, they are hosted on Google Drive.

Skip this if you are not using TensorRT; go to Using the PyTorch Models instead.

The TensorRT models need to be placed in the application/model/ directory.

application/
└── model/
    ├── bestv8n.engine
    ├── bestv8x.engine
    └── bytetrack.yaml

If you're interested in the notebooks, the Torch models should be put in the Training & Testing/models directory.

Using the PyTorch Models

If you can't use the TensorRT models, you'll need to change the following lines in settings.py.
On lines 29, 39, and 42, change the file extension:

From:

self.model_path = 'model/bestv8n.engine'

To:

self.model_path = 'model/bestv8n.pt'

Instead of putting the .engine TensorRT models here, you'll simply put the .pt models.

application/
└── model/
    ├── bestv8n.pt
    ├── bestv8x.pt
    └── bytetrack.yaml
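
As a quick sanity check that the fallback weights load, a minimal sketch assuming the ultralytics package and the layout above:

```python
from ultralytics import YOLO

# Load the PyTorch weights on CPU and list the classes they detect
model = YOLO("model/bestv8n.pt")
print(model.names)
```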

Usage

Now to launch the app, type:

python main.py

The GUI should show up!


Using the Application

To give the model an image or video, first check that the file format is compatible. YOLO supports a wide range of formats (listed under Supported File Extensions below, with a small validation sketch after the lists), so this shouldn't be an issue.

After that, simply click 'Click Here' to choose a file.

Important

Files you want the model to process must be placed in the application/upload folder!
For the best results, use 640x640 under Settings. These YOLO models were trained on a 416x416 dataset, but the pre-trained YOLOv8 weights they build on were trained at 640x640, so they perform better at that resolution.

Once the model has completed its process, see the results by clicking Results Folder.

Supported File Extensions

Image Extensions:

  • BMP (.bmp)
  • DNG (.dng)
  • JPEG (.jpeg)
  • JPG (.jpg)
  • MPO (.mpo)
  • PNG (.png)
  • TIFF (.tif, .tiff)
  • WEBP (.webp)
  • PFM (.pfm)

Video Extensions:

  • ASF (.asf)
  • AVI (.avi)
  • GIF (.gif)
  • M4V (.m4v)
  • MKV (.mkv)
  • MOV (.mov)
  • MP4 (.mp4)
  • MPEG (.mpeg)
  • MPG (.mpg)
  • TS (.ts)
  • WMV (.wmv)
  • WebM (.webm)
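
If you want to validate a file before dropping it into application/upload, a small sketch of such a check; the is_supported helper is illustrative, not part of the app:

```python
from pathlib import Path

# Extension sets taken from the lists above
IMAGE_EXTS = {".bmp", ".dng", ".jpeg", ".jpg", ".mpo",
              ".png", ".tif", ".tiff", ".webp", ".pfm"}
VIDEO_EXTS = {".asf", ".avi", ".gif", ".m4v", ".mkv", ".mov",
              ".mp4", ".mpeg", ".mpg", ".ts", ".wmv", ".webm"}

def is_supported(path: str) -> bool:
    """Return True if the file extension is one YOLO can read."""
    return Path(path).suffix.lower() in IMAGE_EXTS | VIDEO_EXTS

print(is_supported("upload/clip.mp4"))   # True
print(is_supported("upload/notes.txt"))  # False
```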

Settings

There are a few settings I've added that you can play around with. By default, the (Width, Height) and Frame-rate fields use whatever your image or video is set to. A sketch after this list shows roughly how these settings map onto the inference call.

The model will take any resolution that's a multiple of 32; luckily, YOLO handles this for us.

  • Width & Height:
    The suggested Width and Height is 640x640, this will give you the highest accuracy.
    But you're free to use any resolution, though high resolutions will be computationally expensive.

  • Frame-rate:
    Raising the frame-rate has a performance impact, while lowering it will put your video in slow-motion.
    I suggest leaving this field at Default.

  • Confidence:
    This is the value to play around with; depending on the resolution you use, the model will produce different results.
    By default it's set to 0.4 (40%); I don't suggest going lower than that. Valid values range from 0.01 to 1.0.
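
As noted above, these settings correspond roughly to arguments of the Ultralytics predict call. A minimal sketch, assuming the .pt model and a hypothetical file in the upload folder:

```python
from ultralytics import YOLO

model = YOLO("model/bestv8n.pt")

# imgsz mirrors the Width & Height setting (multiples of 32),
# conf mirrors the Confidence setting (0.01 to 1.0)
results = model.predict(
    source="upload/example.png",  # hypothetical file in the upload folder
    imgsz=640,
    conf=0.4,
    save=True,  # write the annotated output, as the Results Folder does
)
```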

ByteTrack

For video inference, the model uses ByteTrack from the YOLO repo.
The config I've set can be found here; I haven't adjusted it much, but feel free to try different values.
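
For reference, a minimal sketch of invoking the tracker directly with the Ultralytics API, assuming the TensorRT engine and a hypothetical video file in the upload folder:

```python
from ultralytics import YOLO

model = YOLO("model/bestv8n.engine")

# model.track runs detection plus ByteTrack association across frames
results = model.track(
    source="upload/clip.mp4",        # hypothetical video file
    tracker="model/bytetrack.yaml",  # the config shipped with the app
    conf=0.4,
    save=True,
)
```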

Training, Tests, and Comparison

I used this HuggingFace Dataset to train the model.
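
For context, fine-tuning YOLOv8 on a dataset like that looks roughly as follows. This is a sketch, not the exact command used; the data.yaml path and epoch count are assumptions:

```python
from ultralytics import YOLO

# Start from the pretrained nano weights and fine-tune on the Valorant dataset
model = YOLO("yolov8n.pt")
model.train(
    data="valorant/data.yaml",  # hypothetical dataset config path
    imgsz=416,                  # the dataset's native resolution
    epochs=100,                 # assumption; the actual value isn't stated
)
```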
