Description
With the rapid growth of the number of Internet of Things (IoT) devices and connectivity options, methods for encrypted data transmission are becoming increasingly important. Although the abstract mathematical designs of state-of-the-art cryptographic algorithms such as the Advanced Encryption Standard (AES) are considered secure, their implementations can leak secrets through physical side channels. Side-Channel Analysis (SCA) is a well-established field of research that studies the electrical power consumption and other physical effects of cryptographic implementations, aiming to extract secret keys. Recently, Deep Learning (DL)-based techniques have gained popularity in SCA, but most of the research proposes new network architectures or parameters to enhance performance. From the perspective of a security evaluator, however, not only the ability to recover a secret key is important, but also the location of the physical leakage, so that appropriate countermeasures can be implemented. As common DL architectures do not reveal which leakage locations they used for their predictions, explainability techniques have been introduced to make models partially interpretable by assigning relevance scores to input data points.

This thesis further investigates explainability methods, implementing occlusion techniques and saliency maps. The focus is on occlusion, which systematically alters parts of the input and analyzes the resulting changes in the model's output. A new methodology, random occlusion, has been proposed, and experiments suggest its superiority in identifying Points of Interest (POIs) in our particular context. For the first time, occlusion has also been applied during the training phase of a Deep Neural Network (DNN). The outcomes indicate that the network is able to detect new features under such circumstances and suggest that a combination of DL models trained on occluded data could potentially enhance prediction accuracy.
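To make the occlusion idea concrete, the sketch below shows a minimal sliding-window variant for POI detection on a power trace. It is an illustration under stated assumptions, not the thesis's exact procedure: the Keras-style `model.predict` interface, the window size, and the zero baseline are all assumptions introduced here.

```python
# Minimal sketch of occlusion-based Point-of-Interest (POI) detection for
# side-channel traces. The `model` object, window size, and zero baseline
# are illustrative assumptions, not the thesis's exact configuration.
import numpy as np

def occlusion_relevance(model, trace, true_label, window=20, baseline=0.0):
    """Slide a window over the trace, replace it with a baseline value,
    and record how much the model's confidence in the true label drops.
    Larger drops suggest the occluded samples carry exploitable leakage."""
    trace = np.asarray(trace, dtype=np.float32)
    # Confidence of the correct class on the unmodified trace.
    reference = model.predict(trace[None, :])[0, true_label]
    relevance = np.zeros_like(trace)
    for start in range(0, len(trace) - window + 1):
        occluded = trace.copy()
        occluded[start:start + window] = baseline          # mask one window
        score = model.predict(occluded[None, :])[0, true_label]
        relevance[start:start + window] += reference - score  # confidence drop
    return relevance  # peaks mark candidate POIs
```

In this setting, the samples whose occlusion causes the largest confidence drops are the most plausible leakage locations, which is precisely the information an evaluator needs in order to place countermeasures.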