Protecting Publicly Available Data With Machine Learning Shortcuts

BMVC 2023

Authors: Nicolas M. Mueller, Maximilian Burgert, Pascal Debus, Jennifer Williams, Philip Sperl, and Konstantin Boettinger
Year/month: 2023/10
Booktitle: BMVC 2023
Fulltext: https://doi.org/10.48550/arXiv.2310.19381

Abstract

Machine-learning (ML) shortcuts, or spurious correlations, are artifacts in datasets that lead to very good training and test performance but severely limit the model's generalization capability. Such shortcuts are insidious because they go unnoticed due to good in-domain test performance. In this paper, we explore the influence of different shortcuts and show that even simple shortcuts are difficult to detect with explainable AI methods. We then exploit this fact and design an approach to defend online databases against crawlers: providers such as dating platforms, clothing manufacturers, or used car dealers have to deal with a professionalized crawling industry that grabs and resells data points on a large scale. We show that a deterrent can be created by deliberately adding ML shortcuts. Such augmented datasets are then unusable for ML use cases, which deters crawlers and the unauthorized use of data from the internet. Using real-world data from three use cases, we show that the proposed approach renders the collected data unusable, while the shortcut remains difficult for humans to notice. Thus, our proposed approach can serve as a proactive protection against illegitimate data crawling.
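The core idea of a deliberately injected shortcut can be illustrated with a toy sketch. The paper's actual perturbation is not reproduced here; the function below is a hypothetical example that embeds a faint, label-dependent pixel patch into images, so a classifier trained on the augmented data can score well by reading the patch alone and consequently fails on clean, unaugmented data:

```python
import numpy as np

def add_shortcut(images: np.ndarray, labels: np.ndarray,
                 strength: float = 0.05) -> np.ndarray:
    """Embed a barely visible, label-dependent marker into each image.

    `images` is a batch of grayscale images in [0, 1] with shape (N, H, W).
    The corner in which a small 2x2 patch is brightened encodes the label,
    creating a spurious feature that is trivial for a model to learn.
    """
    out = images.astype(np.float32).copy()
    for i, y in enumerate(labels):
        # Corner position encodes the label (4 classes -> 4 corners).
        r = 0 if y % 2 == 0 else out.shape[1] - 2
        c = 0 if y < 2 else out.shape[2] - 2
        out[i, r:r + 2, c:c + 2] += strength  # tiny offset: hard to see
    return np.clip(out, 0.0, 1.0)

# Toy batch: 4 grayscale 8x8 images, one per class label 0..3.
rng = np.random.default_rng(0)
imgs = rng.random((4, 8, 8)) * 0.5
poisoned = add_shortcut(imgs, np.array([0, 1, 2, 3]))
# The perturbation never exceeds `strength`, so it stays inconspicuous.
print(float(np.abs(poisoned - imgs).max()))
```

A model trained on such data latches onto the patch as its decision rule, which is exactly why in-domain test accuracy stays high while generalization to clean data collapses.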

Bibtex:

@inproceedings{mueller2023protecting,
  author    = {Nicolas M. Mueller and Maximilian Burgert and Pascal Debus and Jennifer Williams and Philip Sperl and Konstantin Boettinger},
  title     = {Protecting Publicly Available Data With Machine Learning Shortcuts},
  year      = {2023},
  month     = {October},
  booktitle = {BMVC 2023},
  url       = {https://doi.org/10.48550/arXiv.2310.19381},
}