This repository demonstrates the emulation of our GhostImage attacks [PDF]:
```bibtex
@inproceedings{man2020ghostimage,
  title={GhostImage: Remote Perception Domain Attacks against Camera-based
         Image Classification Systems},
  author={Man, Yanmao and Li, Ming and Gerdes, Ryan},
  booktitle={Proceedings of the 23rd International Symposium on Research in
             Attacks, Intrusions and Defenses (USENIX RAID 2020)},
  year={2020}
}
```
The main structure of our implementation follows Nicolas Carlini's `nn_robust_attacks`.

All scripts are written in Python 3, with the following dependencies:

```
pillow == 7.1.2
numpy == 1.18.5
tensorflow == 2.1.0
tensorflow-probability == 0.10.0
```
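One way to pin these versions is a `requirements.txt` file (the file name and the `cat`/heredoc setup below are our own sketch, not part of the repository):

```shell
# Write the pinned dependency versions to a requirements file
cat > requirements.txt <<'EOF'
pillow==7.1.2
numpy==1.18.5
tensorflow==2.1.0
tensorflow-probability==0.10.0
EOF
# Then install them all at once:
# pip install -r requirements.txt
cat requirements.txt
```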
Our core module (mainly `rgb_attack.py`) was originally written in TensorFlow 1.x, so it should also be able to run with TF 1.x: change `import tensorflow.compat.v1 as tf` to `import tensorflow as tf` at the beginning of each script, and remove the next line, `tf.disable_eager_execution()`.
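That edit can be scripted. A sketch using `sed` on a stand-in file (`demo_tf2.py` is our own example name; the real targets would be scripts such as `rgb_attack.py`):

```shell
# Stand-in for a script header written for TF 2.x compat mode
cat > demo_tf2.py <<'EOF'
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
EOF
# Swap the TF 2.x compat import for a plain TF 1.x import
sed -i 's/import tensorflow.compat.v1 as tf/import tensorflow as tf/' demo_tf2.py
# Drop the eager-execution toggle, which TF 1.x does not need
sed -i '/tf.disable_eager_execution()/d' demo_tf2.py
cat demo_tf2.py
```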
First of all, create a directory named `models`; all the following pre-trained models will go there.

Download a tar file of the Inception V3 pre-trained model here, and then `tar xvf` it.
Download a pre-trained CIFAR-10 classifier here.
Download a pre-trained LISA classifier here.
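The extraction step looks roughly like the following sketch. The archive and checkpoint names (`model.tar`, `inception_v3.ckpt`) are placeholders we made up for the demo; substitute whatever the download link actually provides:

```shell
# Create the directory the scripts expect the pre-trained models to live in
mkdir -p models
# Stand-in for the downloaded archive (the real URL is behind the "here" link)
mkdir -p staging && touch staging/inception_v3.ckpt
tar cf model.tar -C staging inception_v3.ckpt
# Unpack the archive into models/
tar xvf model.tar -C models/
ls models/
```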
Script `test_attack.py` presents an example function, `single_image`, that tries to perturb a given image. The function shows how to use our attack core module, which is defined in `rgb_attack.py`.
In the "main function" of `test_attack.py`, there are four attack examples:
```python
single_image(img_path='./benign_images/ILSVRC2012_val_00019992.JPEG',
             target_label=555, objective='alteration', dataset='imagenet',
             dig=.7, ana=.1, num_rows=20, num_columns=20)
```
Left: a benign image from ImageNet (ILSVRC2012_val_00019992.JPEG); right: an adversarial image generated by our algorithm.
```python
single_image(img_path='./benign_images/ship1.png', target_label=6,
             objective='alteration', dataset='cifar', dig=.7, ana=.1,
             num_rows=2, num_columns=2)
```
Left: a benign image of a ship from CIFAR-10; right: an adversarial image generated by our algorithm, recognized as a frog.
```python
single_image(img_path='./benign_images/5.jpg', target_label=1,
             objective='alteration', dataset='lisa', dig=.7, ana=.1,
             num_rows=2, num_columns=2)
```
Left: a benign image of a STOP sign; right: an adversarial image generated by our algorithm, recognized as a MERGE sign.
```python
single_image(target_label=5, objective='creation', dataset='lisa',
             dig=.7, ana=.1, num_rows=2, num_columns=2)
```
A generated 2x2 grid, recognized as a STOP sign.