Description
The protection of a network is one of the main challenges for IT security professionals in an interconnected computer environment. How quickly an attack is detected and responded to can be decisive for both information leakage and availability. For this reason, Network Intrusion Detection Systems (NIDSs) are commonly deployed to detect and classify malicious behaviour. These systems are often based on static rules defined by security professionals. However, attackers and their actions grow in number, sophistication, and variety, making it harder for a team to grasp all the possibilities and capture them in a rule-based policy, so the need for a more adaptive solution arises. The recent interest in machine learning algorithms is founded on good results in areas where input is hard to classify with static rules and algorithms. Artificial Neural Networks (ANNs) learn without such restrictions, based only on input data and labels. This property can also be used in a NIDS to build a learning system that classifies behaviour in a computer network. The complexity merely shifts, however: like other ANN-based classifiers, such systems are vulnerable to adversarial examples, which pose a security threat because they can degrade a NIDS's performance. This raises the need for protection against such attack vectors, and with it the need for a framework to test such behaviour.

In this thesis, Network Anomaly Detection for Industrial Control Systems (NADICS) was extended with new functionality to train Deep Neural Networks (DNNs), attack them with multiple adversarial attacks, and apply measures that harden the networks against them. For this purpose, multiple datasets from conceptually different sources and with different classification scopes are used.
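As a concrete illustration of the adversarial examples mentioned above, the sketch below crafts one with the Fast Gradient Sign Method (FGSM), one of the most widely used attacks. This is only a minimal sketch: the use of PyTorch, the toy two-class flow classifier, and the feature count are assumptions made for illustration, and whether FGSM is among the attacks implemented in NADICS is not stated here.

    # Illustrative FGSM sketch. The model is a hypothetical, untrained
    # placeholder for a flow-feature NIDS classifier, not part of NADICS.
    import torch
    import torch.nn as nn

    # Toy classifier: 20 flow features, 2 classes (benign / malicious).
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    model.eval()

    loss_fn = nn.CrossEntropyLoss()

    def fgsm(x, label, eps=0.05):
        """Perturb x by eps per feature in the direction that
        maximises the classification loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), label)
        loss.backward()
        # Step each feature by eps in the sign of its gradient.
        return (x_adv + eps * x_adv.grad.sign()).detach()

    x = torch.rand(1, 20)      # one synthetic flow record
    y = torch.tensor([1])      # ground-truth label: malicious
    x_adv = fgsm(x, y)
    # Compare predictions on the clean and the perturbed input; with a
    # trained model, a successful attack flips the predicted class.
    print(model(x).argmax(1), model(x_adv).argmax(1))

The key point the sketch demonstrates is that the perturbation is computed from the model's own gradients, which is why gradient-based defences and robustness testing, as provided by the extended NADICS, are needed.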