Object detection for Valorant with YOLO models
Built using YOLOv8n and YOLOv8x. Simply choose a file in the application window and it will return the image or video with its predictions shown.
Snippet from Inference Test
Note
Python 3.8 was used for this project.
An Nvidia RTX 3070 GPU was used for training.
Running with TensorRT
TensorRT models can be up to 6x faster than PyTorch models on Nvidia GPUs.
You'll need to have CUDA and cuDNN installed.
See CUDA docs and cuDNN docs for help.
If you are not able to use TensorRT, you can run the PyTorch models on a CPU.
Important
Your CUDA, cuDNN, and TensorRT versions must be compatible with your Python version and hardware.
Check the Nvidia GPU Compatibility list.
You need to install PyTorch whether you're using TensorRT or not.
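A quick way to confirm your PyTorch install actually sees the GPU (plain PyTorch API, nothing specific to this repo):

import torch

# True means this PyTorch build has CUDA support and can see a GPU
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))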
Clone the project
git clone https://github.com/alexromain/Valorant-Object-Detection
Go to the project directory
cd Valorant-Object-Detection
Install the requirements
pip install -r requirements.txt
After cloning the repository and installing the requirements, go to the application/ directory.
For the application GUI, Eel was used with Brave Portable.
Note
Alternatively, you can just use Chrome or Edge (see below).
Install Brave Portable in this directory.
The application/ folder should look something like this:
application/
├── brave/
│   └── brave-portable.exe
├── gui/
├── model/
├── predictions/
├── upload/
├── utils/
├── __init__.py
├── config.py
└── main.py
Now navigate to your python environment\Lib\site-packages\eel\browsers.py.
Add the following at line 58, just before the elif mode == 'custom': branch. It should look like this:
    pass
elif mode == 'brave':
    # Register Brave Portable as a browser the webbrowser module can launch
    brave_path = "brave/brave-portable.exe"
    wbr.register('brave-portable', None, wbr.BackgroundBrowser(brave_path))
    for url in start_urls:
        wbr.get('brave-portable').open('--app=' + url)
elif mode == 'custom':
If using Brave is too much work, you can use Chrome or Edge to access the GUI.
In main.py, change:
if __name__ == '__main__':
    eel.start('main.html',
              size=(566, 639),
              mode='brave',  # change this line to 'chrome' or 'edge'
              cmdline_args=['--app'])
Due to the file size of the models, they are hosted on Google Drive.
Skip this if you are not using TensorRT and go to the PyTorch (.pt) model setup below instead.
The TensorRT models need to be placed in the application/model/ directory.
application/
└── model/
    ├── bestv8n.engine
    ├── bestv8x.engine
    └── bytetrack.yaml
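Note that TensorRT engines are built for a specific GPU and TensorRT version, so the provided .engine files may not run on different hardware. A minimal sketch of rebuilding one from the corresponding .pt checkpoint with the ultralytics export API (file paths are assumptions):

from ultralytics import YOLO

# Load the trained PyTorch checkpoint
model = YOLO('bestv8n.pt')

# Export a TensorRT engine; requires CUDA, cuDNN and TensorRT installed.
# half=True exports in FP16, which is where most of the speedup comes from.
model.export(format='engine', half=True)  # writes bestv8n.engine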
If you're interested in the notebooks, the Torch models should be put in the Training & Testing/models directory.
If you can't use the TensorRT models, you'll need to change the following lines in settings.py.
On lines 29, 39, and 42, change the file extension:
From:
self.model_path = 'model/bestv8n.engine'
To:
self.model_path = 'model/bestv8n.pt'
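If you'd rather not edit three lines every time you switch backends, one option is to derive the extension from a single flag. This is a hypothetical sketch, since settings.py's real structure may differ:

class Settings:
    USE_TENSORRT = False  # set to True if you have the .engine files

    def __init__(self):
        # One flag controls the extension for all model paths
        ext = 'engine' if self.USE_TENSORRT else 'pt'
        self.model_path = f'model/bestv8n.{ext}'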
Instead of the .engine TensorRT models, you'll simply put the .pt models here.
application/
└── model/
    ├── bestv8n.pt
    ├── bestv8x.pt
    └── bytetrack.yaml
Now to launch the app, type:
python main.py
The GUI should show up!
To give the model an image or video, first check that the file format is compatible with the model; YOLO supports a wide range of formats (listed below), so that shouldn't be an issue.
After that, simply click 'Click Here' to choose a file.
Important
Files you want the model to take in have to be placed in the application/upload folder!
For the best results, use 640x640 under Settings. These YOLO models were trained on a dataset that was 416x416,
but the pre-trained YOLOv8 models are trained on 640x640, so they perform better at that resolution.
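For reference, this is roughly how that resolution maps to an ultralytics predict call (a sketch; the app's internals may differ, and the file path is hypothetical):

from ultralytics import YOLO

model = YOLO('model/bestv8n.pt')

# imgsz=640 matches the pre-trained YOLOv8 input size; conf=0.4 is the app's default
results = model.predict('upload/screenshot.png', imgsz=640, conf=0.4, save=True)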
Once the model has completed its process, see the results by clicking Results Folder.
Supported formats:

Images:
- BMP (.bmp)
- DNG (.dng)
- JPEG (.jpeg)
- JPG (.jpg)
- MPO (.mpo)
- PNG (.png)
- TIFF (.tif, .tiff)
- WEBP (.webp)
- PFM (.pfm)
Videos:
- ASF (.asf)
- AVI (.avi)
- GIF (.gif)
- M4V (.m4v)
- MKV (.mkv)
- MOV (.mov)
- MP4 (.mp4)
- MPEG (.mpeg)
- MPG (.mpg)
- TS (.ts)
- WMV (.wmv)
- WebM (.webm)
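If you want to check a file before dropping it in application/upload, a small helper (hypothetical, not part of the app) can test the extension against the lists above:

from pathlib import Path

# Extensions from the image and video lists above
IMAGE_EXTS = {'.bmp', '.dng', '.jpeg', '.jpg', '.mpo', '.png', '.tif', '.tiff', '.webp', '.pfm'}
VIDEO_EXTS = {'.asf', '.avi', '.gif', '.m4v', '.mkv', '.mov', '.mp4', '.mpeg', '.mpg', '.ts', '.wmv', '.webm'}

def is_supported(path: str) -> bool:
    """Return True if the file extension is one YOLO can read."""
    return Path(path).suffix.lower() in IMAGE_EXTS | VIDEO_EXTS

print(is_supported('clip.mp4'))   # True
print(is_supported('notes.txt'))  # False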
There are a few settings I've added that you can play around with. By default, the Width/Height and Frame-rate will use whatever your image or video is set to.
The model will take any resolution that's a multiple of 32; luckily, YOLO handles this for us (see the sketch after the list below).
- Width & Height: The suggested width and height is 640x640; this will give you the highest accuracy. You're free to use any resolution, though high resolutions will be computationally expensive.
- Frame-rate: Raising the frame-rate will have a performance impact, while lowering it will put your video in slow-motion. I suggest leaving this field at Default.
- Confidence: This is the value you should play around with; depending on the resolution you use, the model will have differing results. By default it's set at 0.4 (40%), and I don't suggest going lower than that. Values range from 0.01 to 1.0.
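As a sketch of what "multiple of 32" means in practice, this is roughly how an arbitrary size gets snapped to the model's stride (YOLO does this internally, so you don't have to):

def round_to_stride(x: int, stride: int = 32) -> int:
    # Round up to the nearest multiple of the model stride
    return ((x + stride - 1) // stride) * stride

print(round_to_stride(416))   # 416 (already a multiple of 32)
print(round_to_stride(1080))  # 1088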
For video inference, the model uses ByteTrack from the YOLO repo.
The config I've set can be found here; I haven't adjusted it too much, but feel free to try different values.
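For context, ultralytics exposes ByteTrack through model.track(); a minimal sketch of a video run with this repo's config (the input path and saving behaviour are illustrative):

from ultralytics import YOLO

model = YOLO('model/bestv8n.engine')  # or the .pt model

# Track detections across frames with the repo's ByteTrack config
results = model.track(
    source='upload/clip.mp4',        # hypothetical input file
    tracker='model/bytetrack.yaml',  # the config mentioned above
    conf=0.4,                        # the app's default confidence
    save=True,                       # write the annotated video out
)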