This repo is an implementation of PointNet and PointNet++ with novel data augmentation methods and RandAugment for point cloud data, targeting the 3D classification task.
The baseline code for PointNet2 is mostly borrowed from erikwijmans/Pointnet2_PyTorch.
This repo is a 3D point cloud version of *RandAugment: Practical automated data augmentation with a reduced search space*.
- Supports Multi-GPU via [`nn.DataParallel`](https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel) (a minimal wrapping sketch follows this list).
- Supports PyTorch version >= 1.0.0. Use [v1.0](https://github.com/erikwijmans/Pointnet2_PyTorch/releases/tag/v1.0) for support of older versions of PyTorch.
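As a minimal sketch of the multi-GPU wrapping (the `nn.Sequential` model below is a placeholder, not this repo's PointNet++ network):

```python
import torch
import torch.nn as nn

# Placeholder model; any nn.Module (e.g., a PointNet++ classifier) is wrapped the same way.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 40))

# nn.DataParallel splits each input batch across all visible GPUs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()
```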
- Install Python -- this repo is tested with Python 2.7, 3.5, and 3.6.
- Install dependencies: `pip install -r requirements.txt`
- Install with: `pip install -e .`
If you run into the error message `RuntimeError: Ninja is required to load C++ extension`, please refer to zhanghang1989/PyTorch-Encoding#167 for troubleshooting.
Two training examples are provided: `pointnet2/train/train_sem_seg.py` and `pointnet2/train/train_cls.py`. The datasets for both will be downloaded automatically by default.
They can be run via

```bash
python pointnet2/train.py task=cls
# Or with model=msg for multi-scale grouping
python pointnet2/train.py task=cls model=msg
```
Both scripts print training progress to the command line after every epoch. Use the `--visdom` flag to enable logging to visdom and more detailed logging of training progress.
- Multiple augmentation methods are deployed. (The image below shows some examples of the augmentation methods.)

```bash
python ./data/pointnet2/ModelNet40Loader
```
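For illustration, below is a minimal sketch of two common point cloud augmentations (random rotation about the up axis and clipped Gaussian jitter). The function names and parameter values are assumptions for the sketch, not the exact transforms implemented in this repo:

```python
import numpy as np


def rotate_point_cloud_z(points, max_angle=np.pi):
    """Rotate an (N, 3) point cloud by a random angle about the z-axis."""
    angle = np.random.uniform(-max_angle, max_angle)
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    rotation = np.array([[cos_a, -sin_a, 0.0],
                         [sin_a,  cos_a, 0.0],
                         [0.0,    0.0,   1.0]])
    return points @ rotation.T


def jitter_point_cloud(points, sigma=0.01, clip=0.05):
    """Add clipped Gaussian noise to every point independently."""
    noise = np.clip(sigma * np.random.randn(*points.shape), -clip, clip)
    return points + noise


# Example usage on a random cloud of 1024 points.
cloud = np.random.rand(1024, 3).astype(np.float32)
augmented = jitter_point_cloud(rotate_point_cloud_z(cloud))
```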
| Classification | Accuracy (%) |
| --- | --- |
| PointNet++ (Official, w/o normal) | 90.7 |
| PointNet++ (Official, with normal) | 91.9 |
| Ours | 92.8 |
Testing
```bibtex
@article{pytorchpointnet++,
  author  = {Seungjun Lee},
  title   = {Pointnet++ RandAugment},
  journal = {https://github.com/seungjunlee96/PointNet2_RandAugment},
  year    = {2020}
}

@inproceedings{qi2017pointnet++,
  title     = {Pointnet++: Deep hierarchical feature learning on point sets in a metric space},
  author    = {Qi, Charles Ruizhongtai and Yi, Li and Su, Hao and Guibas, Leonidas J},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {5099--5108},
  year      = {2017}
}
```
The primary goal of RandAugment is to remove the need for a separate search phase on a proxy task.
There are only two parameters to tune in RandAugment:

- N, the number of augmentation transformations to apply sequentially.
- M, the magnitude for all of the transformations.

The code below shows how RandAugment is applied (code from the original RandAugment paper).
```python
import numpy as np

transforms = [
    'Identity', 'AutoContrast', 'Equalize',
    'Rotate', 'Solarize', 'Color', 'Posterize',
    'Contrast', 'Brightness', 'Sharpness',
    'ShearX', 'ShearY', 'TranslateX', 'TranslateY']


def randaugment(N, M):
    """Generate a set of distortions.

    Args:
        N: Number of augmentation transformations to
            apply sequentially.
        M: Magnitude for all the transformations.
    """
    sampled_ops = np.random.choice(transforms, N)
    return [(op, M) for op in sampled_ops]
```
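For point clouds, the same two-parameter scheme carries over by replacing the image operations with geometric ones. The sketch below is an illustrative adaptation; the transform list and the way magnitude `M` scales each operation are assumptions, not this repo's exact implementation:

```python
import numpy as np


def rotate_z(points, m):
    """Rotate about the z-axis by an angle that grows with magnitude m (0-10)."""
    angle = np.random.uniform(-1, 1) * (m / 10.0) * np.pi
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T


def jitter(points, m):
    """Add Gaussian noise whose standard deviation scales with m."""
    return points + 0.01 * (m / 10.0) * np.random.randn(*points.shape)


def scale(points, m):
    """Apply isotropic scaling within a range that widens with m."""
    low, high = 1.0 - 0.2 * (m / 10.0), 1.0 + 0.2 * (m / 10.0)
    return points * np.random.uniform(low, high)


def translate(points, m):
    """Shift the whole cloud by a random offset that grows with m."""
    return points + np.random.uniform(-0.1, 0.1, size=(1, 3)) * (m / 10.0)


POINT_TRANSFORMS = [rotate_z, jitter, scale, translate]


def randaugment_pointcloud(points, N, M):
    """Apply N randomly sampled point cloud transforms, all at magnitude M."""
    for idx in np.random.choice(len(POINT_TRANSFORMS), N):
        points = POINT_TRANSFORMS[idx](points, M)
    return points


# Example: apply 2 random transforms at magnitude 5 to a 1024-point cloud.
cloud = np.random.rand(1024, 3).astype(np.float32)
augmented = randaugment_pointcloud(cloud, N=2, M=5)
```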
- Training and evaluation on segmentation
- SSG vs. MSG
- More data augmentation methods
- yanx27/Pointnet_Pointnet2_pytorch
- erikwijmans/Pointnet2_PyTorch
- ildoonet's RandAugment
- halimacc/pointnet3
- fxia22/pointnet.pytorch
- charlesq34/PointNet
- charlesq34/PointNet++