This repository provides a simplified YOLOv5 object detection workflow for georeferenced images, from training to GIS mapping of the results. YOLOv5 is developed by Ultralytics and has been modified here to work with georeferenced images and GIS.
The abstract has been accepted and published at the Lunar and Planetary Science Conference 2021:
https://www.hou.usra.edu/meetings/lpsc2021/pdf/1316.pdf
This study was carried out within Europlanet 2024 RI, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 871149. Mars images were obtained from the NASA PDS.
This repository includes:
- An interactive notebook for training PyTorch YOLOv5 models (developed by Ultralytics)
- An interactive notebook for running inference on georeferenced images and creating a shapefile with the results
- General utilities supporting both notebooks
- A script to convert detections to shapefiles using georeferenced images, or plain images plus world files
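The image-plus-world-file conversion can be sketched in plain Python. The helper names below are hypothetical (not the repository's actual functions); the six-line world-file layout (A, D, B, E, C, F) is the standard ESRI convention:

```python
# Sketch: map a pixel-space detection to map coordinates using a world file.
# World-file lines, in order: A (x pixel size), D (y rotation),
# B (x rotation), E (negative y pixel size), C and F (map coordinates
# of the upper-left pixel). Helper names here are illustrative only.

def read_world_file(path):
    """Read the six affine parameters from a world file (.tfw, .jgw, ...)."""
    with open(path) as fh:
        values = [float(line) for line in fh][:6]
    a, d, b, e, c, f = values
    return a, d, b, e, c, f

def pixel_to_map(col, row, params):
    """Apply the world-file affine transform to a pixel (col, row):
    x = A*col + B*row + C,  y = D*col + E*row + F."""
    a, d, b, e, c, f = params
    x = a * col + b * row + c
    y = d * col + e * row + f
    return x, y
```

A detection's bounding-box corners can be passed through `pixel_to_map` to obtain map-space polygon vertices for the shapefile.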
The dataset must include a configuration file in YAML format with the structure below. If you are using a dataset prepared with Roboflow, skip this part; otherwise, organize the dataset folder as follows:
- datasetfolder/train/images
- datasetfolder/valid/images
Then create a dataset.yaml file containing:
- train: path to datasetfolder/train/images
- valid: path to datasetfolder/valid/images
- nc: number of classes
- names: ['label1','label2','...']
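A minimal dataset.yaml following this structure might look like the sketch below; the paths and class labels are placeholders, not values from this repository:

```yaml
train: datasetfolder/train/images
valid: datasetfolder/valid/images
nc: 2
names: ['label1', 'label2']
```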
TRAINING
Prepare the dataset, then run the training notebook as is; it will ask for:
- source folder
- model size (small, medium, large, extra-large)
Edit the training parameters dictionary according to your needs.
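As a hypothetical sketch of what such a parameters dictionary could hold, the keys below mirror common YOLOv5 train.py flags (`--img`, `--batch`, `--epochs`, `--data`, `--weights`); the names and values are illustrative, not the notebook's actual dictionary:

```python
# Illustrative training parameters; keys mirror common YOLOv5 train.py flags.
train_params = {
    "img_size": 640,         # input image size in pixels
    "batch_size": 16,        # images per batch
    "epochs": 100,           # number of training epochs
    "data": "dataset.yaml",  # dataset configuration file
    "weights": "yolov5s.pt", # pretrained checkpoint (small model)
}

# A notebook might assemble these into a YOLOv5 training command:
command = (
    f"python train.py --img {train_params['img_size']} "
    f"--batch {train_params['batch_size']} --epochs {train_params['epochs']} "
    f"--data {train_params['data']} --weights {train_params['weights']}"
)
```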
INFERENCE
Run the inference notebook as is; it will ask for:
- weights folder
- source folder
- destination folder