Description
Researchers have demonstrated that deep neural networks are vulnerable to
adversarial examples: maliciously perturbed inputs that are indistinguishable
from benign data yet mislead models at test time [1, 2].
In this work, we propose two novel adversarial defense augmentation methods based
on pixel reordering to improve model robustness. The proposed methods use
Hilbert curve-based block-wise reordering [3] and image-wide random pixel
shuffling. Experiments are carried out on three datasets with both adaptive
and non-adaptive maximum-norm-bounded white-box attacks. Results show increased
accuracy against several adversaries, specifically FGSM [4] and PGD [5]. The
proposed methods can be employed as low-overhead augmentations against potential
attacks, and can further be investigated as pre-processing techniques against
adversarial inputs or in combination with other defenses.
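
As a rough illustration, the sketch below gives one plausible reading of the two
augmentations in Python/NumPy: a per-block Hilbert-order pixel scan and an
image-wide random permutation. The function names (hilbert_block_reorder,
random_shuffle), the default block size, and the assumption that image
dimensions divide evenly by the block size are illustrative assumptions, not
the authors' implementation, which follows [3].

    import numpy as np

    def hilbert_d2xy(n, d):
        """Map a 1-D Hilbert index d to (x, y) on an n x n grid (n a power of two)."""
        x = y = 0
        s = 1
        while s < n:
            rx = 1 & (d // 2)
            ry = 1 & (d ^ rx)
            if ry == 0:                          # rotate the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            d //= 4
            s *= 2
        return x, y

    def hilbert_block_reorder(img, block=8):
        """Within each block, read pixels in Hilbert order and write them back
        in raster order (a fixed per-block permutation).
        Assumes img has shape (H, W, C) with H and W divisible by `block`."""
        out = img.copy()
        scan = [hilbert_d2xy(block, d) for d in range(block * block)]
        for by in range(0, img.shape[0], block):
            for bx in range(0, img.shape[1], block):
                patch = img[by:by + block, bx:bx + block]
                flat = np.stack([patch[y, x] for x, y in scan])  # Hilbert scan
                out[by:by + block, bx:bx + block] = flat.reshape(block, block, -1)
        return out

    def random_shuffle(img, rng=None):
        """Permute all pixel positions image-wide with one random permutation."""
        if rng is None:
            rng = np.random.default_rng()
        h, w, c = img.shape
        flat = img.reshape(-1, c)
        return flat[rng.permutation(h * w)].reshape(h, w, c)

Either transform can be applied to training batches like any other augmentation:
the Hilbert variant is a deterministic permutation, while the shuffle draws a
fresh permutation on every call.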