The discipline of anomaly detection deals with distinguishing anomalies from normal data. It is of particular interest for intrusion detection systems used in IT security. The main challenge is the limited knowledge about anomalies: the training data available to anomaly detection systems often contains only a few anomalies and might not represent all types of anomalous data.
In this work, we tackle these challenges with generative deep learning models, i.e. neural networks that create data samples during training. We propose and evaluate two methods that are new to the anomaly detection setting.
Both methods rely on creating additional samples by finding adversarial examples. These are samples generated by slightly altering the input data with the goal of causing a misclassification by the underlying anomaly detection network. Training on adversarial examples has been studied before, but not in the special case of anomaly detection. We theorise that, due to the high class imbalance and incompleteness of anomaly detection training datasets, this method may benefit performance on unaltered data and not only adversarial robustness.
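The core idea can be illustrated with a minimal, single-step (FGSM-style) sketch; the detector interface, the binary cross-entropy loss, and the perturbation budget epsilon are illustrative assumptions and do not correspond to the exact configuration used in this work.

    import torch
    import torch.nn.functional as F

    def adversarial_example(model, x, y, epsilon=0.1):
        """Perturb input x so that the detector's loss on label y increases."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.binary_cross_entropy(model(x_adv), y)
        loss.backward()
        # Move each feature a small step in the direction that increases the
        # loss, i.e. towards a misclassification by the detector.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()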
The first method we discuss uses adversarial examples created with projected gradient descent (PGD) to retrain two different anomaly detection models. The second method uses a modified version of the AdvGAN network, which was originally designed to create adversarial examples from input data. It consists of two components, a generator and a discriminator, that are trained in an alternating, adversarial fashion in order to improve the performance of the discriminator, which acts as the anomaly detector.
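As a rough sketch of the attack underlying the first method, the following PGD routine iteratively perturbs a sample within an L-infinity ball around the original input; the step size, iteration count, and detector interface are assumed for illustration and are not the hyperparameters used in this work.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, epsilon=0.3, alpha=0.01, steps=40):
        """Iteratively perturb x within an L-infinity ball of radius epsilon."""
        x_orig = x.clone().detach()
        x_adv = x_orig.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.binary_cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Gradient ascent on the detector's loss, followed by projection
            # back into the epsilon-ball around the original sample and the
            # valid input range.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.max(torch.min(x_adv, x_orig + epsilon), x_orig - epsilon)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        return x_adv

The resulting samples can then be mixed with the original training data when retraining the detectors; the second method replaces this hand-crafted attack with a learned generator that is trained jointly with the discriminator.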
We evaluate the generated adversarial examples and their application in the anomaly detection context on two distinct datasets: the MNIST dataset of handwritten digits and the NSL-KDD dataset, which contains both normal and malicious network traffic. Our results suggest that, under certain circumstances, adversarial examples can improve the performance of anomaly detection models.