
Uncertainty Quantification for Quantum Machine Learning

Supervisor(s): Kilian Tscharke, Pascal Debus
Status: finished
Topic: Machine Learning Methods
Author: Maximilian Wendlinger
Submission: 2024-11-01
Type of Thesis: Master's Thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

One of the main challenges in classical machine learning is the decrease in model
transparency that results from increasingly complex and non-linear model functions.
This complexity and lack of transparency, in turn, have severe implications such as
overfitting (losing the ability to generalize) or overconfidence in the model predictions,
opening the door to adversarial attacks and posing a threat to the security of the
entire system.
As a result, multiple lines of research investigating the transparency of machine
learning models have emerged. One of these directions is the idea of quantifying model
uncertainty, i.e., equipping a model with the ability not only to predict a class-label
score or regression value but also to give insight into its prediction confidence.
With the recent emergence of quantum machine learning, which offers interesting potential
advances in computational power and latent-space complexity, we observe the same
opaque behavior. However, despite the extensive research efforts in the classical setting,
hardly any work has progressed toward overcoming the black-box nature of quantum
machine learning models.
Consequently, we address this gap in this thesis, building upon existing work in
classical uncertainty quantification and on first steps in quantum Bayesian modeling
to develop techniques for mapping classical uncertainty quantification methods to the
quantum machine learning domain. We propose several such mappings across the
classical-quantum boundary before comparatively evaluating all resulting quantum
machine learning models. More specifically, we focus on Bayesian quantum machine
learning, Monte-Carlo dropout, quantum ensembles, and quantum Gaussian processes,
which we evaluate first visually, via predicted uncertainty intervals, and then
quantitatively, via rigorous calibration metrics.
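
To make the classical-to-quantum mapping concrete, the sketch below applies Monte-Carlo
dropout-style sampling to a small variational quantum classifier, assuming PennyLane; the
circuit layout, the way parameters are "dropped", the dropout rate, and the input data are
illustrative assumptions rather than the models developed in the thesis.

```python
# Minimal illustrative sketch: Monte-Carlo dropout-style uncertainty estimation
# for a variational quantum model (assumes PennyLane; not the thesis implementation).
import pennylane as qml
import numpy as np

n_qubits, n_layers = 2, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(x, weights, mask):
    # Encode the classical input as rotation angles.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Variational layers; the binary mask zeroes out ("drops") rotation parameters.
    qml.StronglyEntanglingLayers(weights * mask, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

rng = np.random.default_rng(0)
weights = rng.normal(size=qml.StronglyEntanglingLayers.shape(n_layers, n_qubits))
x = np.array([0.3, -1.2])  # toy input

# Repeated stochastic forward passes with independent dropout masks.
T, p_drop = 50, 0.1
samples = np.array([
    circuit(x, weights, rng.binomial(1, 1 - p_drop, size=weights.shape))
    for _ in range(T)
])

# Mean prediction plus spread as a heuristic uncertainty estimate.
print(f"prediction: {samples.mean():+.3f}, uncertainty (std): {samples.std():.3f}")
```

The spread of the sampled expectation values plays the role of a predicted uncertainty
interval; the mask distribution and the number of samples T are design choices that
affect both accuracy and calibration.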
Our findings underscore the importance of leveraging classical insights into
uncertainty quantification to create more uncertainty-aware quantum machine learning
models. The evaluation results further highlight the necessity of considering the specific
use case to construct well-calibrated models that are suitable for a given task.
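
Since calibration is the central evaluation criterion above, the following sketch shows one
common calibration metric, the expected calibration error (ECE); the bin count and toy data
are illustrative assumptions and not results from the thesis.

```python
# Minimal sketch of the expected calibration error (ECE) for binary predictions.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - confidence| over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()        # empirical accuracy in the bin
            conf = confidences[in_bin].mean()   # mean predicted confidence
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# Toy usage: predicted confidences and whether each prediction was correct.
print(expected_calibration_error([0.95, 0.7, 0.6, 0.85, 0.55], [1, 1, 0, 1, 0]))
```

A well-calibrated model has an ECE close to zero, i.e., its stated confidence matches its
empirical accuracy.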