Description
Quantum machine learning (QML) -- one of the main pillars of quantum
computing -- continues to be an area of active interest in research
and industry. While QML models have been shown to be vulnerable to
adversarial attacks much in the same manner as classical machine
learning models, it is still largely unknown how to compare adversarial
attacks on quantum versus classical models. In this paper, we show how
to systematically investigate the similarities and differences in
adversarial robustness of classical and quantum models using transfer
attacks, perturbation patterns and Lipschitz bounds. More specifically,
we focus on classification tasks on a handcrafted dataset that allows
quantitative analysis of feature attribution. This enables us to gain
insight, both theoretically and experimentally, into the robustness of
classification networks. We start by comparing typical quantum
architectures such as Amplitude and ReUpload encoding circuits with
variational parameters to a classical ConvNet architecture. Next, we
introduce a classical approximation of QML circuits (originally obtained
with Random Fourier Features (RFF) sampling but adapted in this work to
fit a trainable encoding) and evaluate this model, denoted Fourier
network, in comparison to other architectures. Our findings show that
this Fourier network can be seen as a ``middle ground'' on the
quantum-classical boundary. While adversarial attacks successfully
transfer across this boundary in both directions, we also show that
regularization helps quantum networks become more robust, which has a
direct impact on Lipschitz bounds and transfer attacks.
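
As an illustration of the quantum architectures mentioned above, the following is a minimal, hypothetical sketch of a data re-uploading classifier with a trainable encoding scale, written in PennyLane. The layer count, gate choices, and parameter shapes are illustrative assumptions and not the circuits used in the paper:

    import pennylane as qml
    from pennylane import numpy as np

    n_qubits, n_layers = 4, 3
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def reupload_classifier(x, scales, weights):
        # Interleave data encoding and variational blocks ("data re-uploading").
        for layer in range(n_layers):
            for w in range(n_qubits):
                # Trainable per-qubit scale on the encoding angle.
                qml.RY(scales[layer, w] * x[w % len(x)], wires=w)
            # Trainable entangling block for this layer.
            qml.StronglyEntanglingLayers(weights[layer:layer + 1], wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    scales = np.ones((n_layers, n_qubits))
    weights = 0.1 * np.random.randn(n_layers, n_qubits, 3)
    x = np.array([0.3, -0.7, 1.2, 0.5])
    print(reupload_classifier(x, scales, weights))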
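
Similarly, a Fourier network in the spirit of the RFF approximation can be sketched as random Fourier features followed by a linear classifier. The frequency distribution, feature dimension, and ridge classifier below are placeholder choices; the variant described in the paper additionally makes the encoding frequencies trainable:

    import numpy as np
    from sklearn.linear_model import RidgeClassifier

    rng = np.random.default_rng(0)
    n_features, n_frequencies = 4, 200

    # Sample random frequencies; in the quantum-inspired variant this spectrum
    # would be constrained by the frequencies accessible to the QML circuit.
    Omega = rng.normal(scale=1.0, size=(n_frequencies, n_features))

    def fourier_features(X):
        # Map inputs to cosine/sine features of the sampled frequencies.
        Z = X @ Omega.T
        return np.concatenate([np.cos(Z), np.sin(Z)], axis=1) / np.sqrt(n_frequencies)

    # Toy data standing in for the handcrafted dataset.
    X_train = rng.normal(size=(256, n_features))
    y_train = (X_train[:, 0] * X_train[:, 1] > 0).astype(int)

    clf = RidgeClassifier().fit(fourier_features(X_train), y_train)
    print(clf.score(fourier_features(X_train), y_train))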
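
Finally, a transfer attack can be illustrated by crafting FGSM perturbations on one (surrogate) model and measuring how often they change the predictions of another (target) model. The small MLPs below are generic stand-ins for the classical, quantum, and Fourier networks compared in the paper:

    import torch
    import torch.nn as nn

    def make_model(in_dim=4, hidden=32, classes=2):
        return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, classes))

    surrogate, target = make_model(), make_model()
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(model, x, y, eps=0.1):
        # One-step attack in the direction of the sign of the loss gradient.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    x = torch.randn(16, 4)
    y = torch.randint(0, 2, (16,))
    x_adv = fgsm(surrogate, x, y)

    # Transfer effect: how often the target's prediction flips on surrogate-crafted inputs.
    with torch.no_grad():
        flipped = (target(x_adv).argmax(1) != target(x).argmax(1)).float().mean()
    print(f"prediction flip rate on target: {flipped:.2f}")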