A New Approach to Voice Authenticity

Artificial Intelligence (cs.AI); Audio and Speech Processing (eess.AS)

Authors: Nicolas M. Mueller, Piotr Kawa, Shen Hu, Matthias Neu, Jennifer Williams, Philip Sperl, and Konstantin Boettinger
Year/month: 2024/2
Venue: arXiv preprint (arXiv:2402.06304)
Fulltext: https://doi.org/10.48550/arXiv.2402.06304

Abstract

Voice faking, driven primarily by recent advances in text-to-speech (TTS) synthesis technology, poses significant societal challenges. Currently, the prevailing assumption is that unaltered human speech can be considered genuine, while fake speech comes from TTS synthesis. We argue that this binary distinction is oversimplified. For instance, altered playback speeds can be used for malicious purposes, as in the 'Drunken Nancy Pelosi' incident. Similarly, editing of audio clips can be done ethically, e.g., for brevity or summarization in news reporting or podcasts, but editing can also create misleading narratives. In this paper, we propose a conceptual shift away from the binary paradigm of audio being either 'fake' or 'real'. Instead, our focus is on pinpointing 'voice edits', which encompass traditional modifications like filters and cuts as well as TTS synthesis and voice conversion (VC) systems. We delineate six categories of voice edits and curate a new challenge dataset rooted in the M-AILABS corpus, for which we present baseline detection systems. Most importantly, we argue that merely categorizing audio as fake or real is a dangerous oversimplification that will fail to move the field of speech technology forward.
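For illustration, below is a minimal sketch (Python/PyTorch) of the multi-class framing the abstract describes: instead of a single real/fake decision, each clip is assigned one of several edit categories. The category names and the toy model are hypothetical placeholders, not the paper's actual taxonomy or baseline systems.

# Minimal sketch of a multi-class voice-edit classifier.
# The six category names below are illustrative placeholders; the
# paper's exact taxonomy is not enumerated on this page.
import torch
import torch.nn as nn

EDIT_CATEGORIES = [
    "unmodified",        # hypothetical labels
    "filter",
    "cut_splice",
    "speed_change",
    "tts_synthesis",
    "voice_conversion",
]

class EditClassifier(nn.Module):
    """Toy classifier mapping a log-mel spectrogram to an edit category."""

    def __init__(self, n_mels: int = 80, n_classes: int = len(EDIT_CATEGORIES)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),  # pool over frequency and time
            nn.Flatten(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, frames) -> (batch, n_classes) logits
        return self.head(self.encoder(spec))

if __name__ == "__main__":
    model = EditClassifier()
    dummy = torch.randn(2, 1, 80, 300)  # two random "spectrograms"
    probs = model(dummy).softmax(dim=-1)
    for row in probs:
        print(EDIT_CATEGORIES[row.argmax().item()])

The point of the sketch is the label space: replacing the binary real/fake output with one prediction per edit category, as the paper advocates.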

Bibtex:

@misc{mueller2024newapproach,
  author        = {Nicolas M. Mueller and Piotr Kawa and Shen Hu and Matthias Neu and Jennifer Williams and Philip Sperl and Konstantin Boettinger},
  title         = {A New Approach to Voice Authenticity},
  year          = {2024},
  month         = {February},
  eprint        = {2402.06304},
  archivePrefix = {arXiv},
  primaryClass  = {cs.AI},
  url           = {https://doi.org/10.48550/arXiv.2402.06304},
}