Harder or Different? Understanding Generalization of Audio Deepfake Detection
Recent research has highlighted a key issue in speech deepfake detection: models trained on one set of deepfakes perform poorly on others. The question arises: is this due to the continuously improving quality of Text-to-Speech (TTS) models, i.e., are newer deepfakes just 'harder' to detect? Or is it because deepfakes generated with one model are fundamentally different from those generated using another model? We answer this question by decomposing the performance gap between in-domain and out-of-domain test data into 'hardness' and 'difference' components. Experiments performed using ASVspoof databases indicate that the hardness component is practically negligible, with the performance gap being attributed primarily to the difference component. This has direct implications for real-world deepfake detection, highlighting that merely increasing model capacity, the currently dominant research trend, may not effectively address the generalization challenge.
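The abstract's decomposition can be illustrated with a minimal sketch. This is not the paper's actual procedure, just one plausible way to split an out-of-domain performance gap into 'hardness' and 'difference' terms using equal error rates (EERs); the function name and all numeric values below are illustrative assumptions.

```python
def decompose_gap(eer_in_domain, eer_out_of_domain, eer_matched_ood):
    """Hypothetical decomposition of the out-of-domain performance gap.

    eer_in_domain:     model trained and evaluated on deepfake set A
    eer_out_of_domain: model trained on A, evaluated on deepfake set B
    eer_matched_ood:   model trained and evaluated on B (how 'hard' B is
                       when it is not out-of-domain)
    """
    total_gap = eer_out_of_domain - eer_in_domain
    # 'Hardness': is B intrinsically harder, even under matched training?
    hardness = eer_matched_ood - eer_in_domain
    # 'Difference': the remainder, attributable to distribution shift.
    difference = total_gap - hardness
    return total_gap, hardness, difference


# Illustrative (invented) numbers mirroring the abstract's finding:
# near-zero hardness, with the gap dominated by the difference term.
total, hard, diff = decompose_gap(0.02, 0.20, 0.025)
```

Under this reading, a large `difference` component suggests that scaling up a detector trained on A would not close the gap on B, since B is not merely harder but differently distributed.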
Subjects: | Artificial Intelligence (cs.AI); Audio and Speech Processing (eess.AS) |
Authors: | Nicolas M. Mueller, Nicholas Evans, Hemlata Tak, Philip Sperl, and Konstantin Boettinger |
Year/month: | 2024/7 |
Fulltext: | https://doi.org/10.48550/arXiv.2406.03512 |
Bibtex:
@inproceedings{mueller2024harder,
  author    = {Nicolas M. Mueller and Nicholas Evans and Hemlata Tak and Philip Sperl and Konstantin Boettinger},
  title     = {Harder or Different? Understanding Generalization of Audio Deepfake Detection},
  year      = {2024},
  month     = {July},
  booktitle = {Artificial Intelligence (cs.AI); Audio and Speech Processing (eess.AS)},
  url       = {https://doi.org/10.48550/arXiv.2406.03512},
}