Description
In this thesis, we propose a novel Deepfake detection approach inspired by state-of-the-art anomaly detection based on the activation analysis of neural networks. Recent developments in generative adversarial networks (GANs) have significantly improved the creation of convincing fake videos. One such technique, referred to as Deepfake, replaces a person's face while preserving the original facial expression and the original scene. Deliberate manipulation of videos to exploit a person's identity invites misuse and can cause serious harm: Deepfakes can be used to harass and intimidate people, or even help autocratic governments misinform and oppress their citizens. We introduce a detection method that is independent of the technology with which the Deepfakes were created. We empirically show that manipulated faces yield different activation patterns than non-manipulated ones. First, we use this observation to detect highly realistic Deepfakes in single images. By using sequential deep neural networks, we then extend the approach to expose synthetic videos. Finally, we achieve promising results, with detection performance competitive with state-of-the-art methods.
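The central idea, that manipulated inputs leave a distinguishable trace in a network's hidden activations, can be illustrated with a minimal toy sketch. Everything here is illustrative: the random one-layer "backbone", the synthetic data, and the distance-to-mean anomaly score are stand-ins, not the thesis's actual network, dataset, or detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep backbone: a single hidden layer with fixed
# random weights. (Hypothetical; the thesis analyzes activations of a
# real deep network, not this toy.)
W = rng.normal(size=(32, 16))

def activations(x):
    """Hidden-layer activations (ReLU) for a batch of input vectors."""
    return np.maximum(x @ W, 0.0)

# Synthetic "real" vs "manipulated" samples: the manipulated ones are
# drawn from a shifted distribution, mimicking the idea that forged
# faces induce different activation statistics.
real = rng.normal(loc=0.0, size=(200, 32))
fake = rng.normal(loc=0.7, size=(200, 32))

a_real = activations(real)
a_fake = activations(fake)

# A simple anomaly score: distance of a sample's activation vector
# from the mean activation pattern estimated on real data only.
mu = a_real.mean(axis=0)

def score(a):
    return np.linalg.norm(a - mu, axis=1)

# Manipulated samples should, on average, deviate more from the
# "normal" activation pattern than genuine ones.
print(score(a_real).mean(), score(a_fake).mean())
```

In this sketch the detector never sees manipulated data during "training"; it only models normal activations, which is the hallmark of the anomaly-detection framing the abstract refers to.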