Quantum Computing
How secure are quantum machine learning systems?

Machine learning models are exposed to malicious external attacks that can drastically falsify results and assessments, putting people at risk in safety-critical situations. Defense strategies already exist. But what happens when quantum computing comes into play? Together with partners, Fraunhofer IKS has taken a close look at the first approaches to averting these dangers.

Cybersecurity concerns are on the rise in our increasingly digitalized world, and the advent of artificial intelligence (AI) systems brings additional challenges that extend beyond typical security concerns. While the security aspects of classical machine learning (ML) have been studied extensively, the emergence of new technologies such as quantum computing (QC) in the AI domain is reshaping the safety and security landscape of these algorithms. Gaining an early understanding of the potential threats and inherent defenses associated with this technology brings us closer to building the safe intelligent systems of tomorrow.

Security vulnerabilities and defenses of machine learning models

In the context of ML, a cyberattack can have different goals. For one, the perpetrator can target a model's behavior by manipulating its inputs. If an input is carefully modified to intentionally cause the model to make an incorrect prediction, this is known as an adversarial (evasion) attack. This type of attack exploits vulnerabilities in the model's decision-making process and can lead to severe consequences in critical applications such as autonomous driving or medical diagnosis. If the data is corrupted more broadly, during inference and especially during training, the model's overall performance can degrade; this is known as a data poisoning attack. Such attacks can significantly undermine the reliability of machine learning systems and are particularly damaging in scenarios where data integrity is crucial.
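
As a rough illustration of the mechanics behind an evasion attack, the sketch below applies the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks, to a toy PyTorch classifier. The model, input, and attack budget are illustrative placeholders and are not taken from the report.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a real model (illustrative only)
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # clean input
y = torch.tensor([1])                       # its true label
epsilon = 0.1                               # attack budget

# Gradient of the loss with respect to the *input*, not the weights
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: nudge every feature in the direction that increases the loss
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The perturbation is small and often imperceptible, and stronger iterative variants of the same idea are what make evasion attacks so effective in practice.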

Another type of attack targets sensitive or proprietary information about the model or its training dataset. For example, in a recent paper [1], researchers from Google DeepMind showed that it is possible to extract partial information about the training dataset from ChatGPT, posing a significant security concern. This type of attack raises alarms regarding the confidentiality of the data used to train AI models and highlights the need for robust security measures to protect sensitive information.

To mitigate these threats, a variety of ways to safeguard ML models have been proposed in the literature and widely implemented in industry. One strategy is to modify the training dataset to enhance robustness against adversarial attacks, for example through adversarial training, which exposes the model to adversarial examples during the training process. Additionally, introducing noise into computations can help ensure data privacy, a method known as differential privacy. Furthermore, formal verification methods can provide guarantees about the model's behavior under various conditions; this topic has been discussed extensively in one of our previous posts [2]. All of this underlines the importance of ongoing research and development in securing machine learning systems against evolving cyber threats.
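
The noise-adding idea behind differential privacy can be sketched in a few lines. The snippet below, in the spirit of DP-SGD, clips each per-example gradient and adds calibrated Gaussian noise before a parameter update; the gradient values and parameters here are placeholders, not taken from any real training run.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-example gradients: 32 examples, 10 parameters
per_example_grads = rng.normal(size=(32, 10))
clip_norm = 1.0          # maximum L2 norm any single example may contribute
noise_multiplier = 1.1   # more noise = stronger privacy, lower utility

# 1. Clip each example's gradient so no single record dominates the update
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads * np.minimum(1.0, clip_norm / norms)

# 2. Average and add Gaussian noise scaled to the clipping norm
noisy_grad = clipped.mean(axis=0) + rng.normal(
    scale=noise_multiplier * clip_norm / len(per_example_grads), size=10
)
print("noisy gradient used for the parameter update:", np.round(noisy_grad, 3))
```

The clipping bounds the influence of any individual training record, and the noise hides what little influence remains, which is the property that protects against training data extraction attacks such as the one described above.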

Adding quantum computing into the mix

Machine learning is considered one of the most promising areas for early industrial use of quantum computing. The integration of quantum computing (QC) with machine learning (ML) could revolutionize various industries by enabling faster data processing, more efficient algorithms, and the ability to tackle problems that are currently intractable for classical computers. However, a solid understanding of the security aspects that emerge at the intersection of QC and ML has yet to be developed.

Commissioned by the German Federal Office for Information Security (BSI) and together with Adesso and Quantagonia, the Fraunhofer Institute for Cognitive Systems IKS took on the ambitious goal of creating a comprehensive state-of-the-art review of the potential vulnerabilities and defenses of quantum machine learning (QML). This work resulted in an extensive report, “Security Aspects of Quantum Machine Learning (SecQML)”, as well as a scientific overview paper [3] presented at the IEEE Quantum Week 2024 conference in Montreal.

The results of this work reveal a compelling picture. Attack vectors previously examined in classical ML have shown varied success in the context of quantum computing. Our experiments confirmed earlier findings that adversarial attacks remain effective beyond the classical case. Classical defense strategies, such as adversarial training, have shown some capacity to counteract these attacks. However, the scaling properties of the quantum Hilbert space in which QC computations are performed suggest that as a quantum ML model grows larger, it becomes increasingly vulnerable to adversarial attacks. Ways to mitigate this challenge are actively being investigated by the research community.

It is not all bad news, however. Some properties of quantum computers have been found to enhance a model’s resilience against classical attacks. For instance, various studies indicate that controlling the noise of a quantum device during training can lead to higher levels of differential privacy and, in some cases, bolster adversarial robustness.
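
As a minimal numerical illustration of this intuition, the snippet below applies a depolarizing channel to a single-qubit state: the noise pulls the state toward the maximally mixed state and flattens the readout distribution, which limits how strongly the measured output can depend on any single input. The state and noise level are arbitrary choices for illustration.

```python
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)
rho = np.outer(ket0, ket0.conj())          # pure state |0><0|

p = 0.2                                    # depolarizing probability (illustrative)
rho_noisy = (1 - p) * rho + p * np.eye(2) / 2

probs_clean = np.real(np.diag(rho))        # measurement probabilities without noise
probs_noisy = np.real(np.diag(rho_noisy))  # ... and with depolarizing noise
print("clean:", probs_clean, "noisy:", probs_noisy)
```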

In addition to examining whether vulnerabilities from the classical ML domain persist, it is crucial to investigate the new vulnerabilities and attacks introduced by the use of quantum computing, which has been the focus of the experiments published in the above-mentioned report. We have identified several key areas:

  1. Robustness of Encodings: Evaluating how commonly used encodings of classical data in quantum ML withstand various attacks. The choice of encoding is crucial for model performance, and our results indicate that some embedding choices inherently exhibit greater robustness.
  2. Weaponizing Quantum Noise: Investigating the potential for adversaries to exploit inherent hardware noise in quantum devices to conduct noise injection, particularly in co-tenancy scenarios. Our results indicate that fault injection through crosstalk noise is highly effective on older generations of chips; modern devices, however, are no longer vulnerable to these attacks.
  3. Transpilation Process Vulnerabilities: Exploring backdoor attack scenarios during the transpilation of quantum algorithms (converting algorithmic formulations into hardware-specific implementations). This process introduces new vulnerabilities, especially if it is managed by third parties. Our results indicate that gate injection through a backdoor is a powerful attack across different model types. However, some commonly used error detection methods, such as parity checking, have proven effective against these attacks (see the sketch after this list).
  4. Readout Attacks: Examining adversarial tampering with the quantum measurement process, which leads to inaccurate or misleading results. This emerging risk can compromise quantum ML model performance through manipulation of measurement settings or disruption of the measurement process. Our results indicate that this attack vector can cause considerable degradation of performance.
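
As a simplified illustration of the parity checking mentioned in point 3, the following NumPy sketch simulates a three-qubit state vector: the parity of two data qubits is recorded on an ancilla, an adversary injects an extra X gate during “transpilation”, and re-computing the parity flags the flip. The circuit and helper functions are illustrative assumptions, not the setups evaluated in the report.

```python
import numpy as np

# Single-qubit gates; qubit 0 is the most significant bit of the basis index
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply(gates, state):
    """Apply one single-qubit gate per qubit (their tensor product) to a state vector."""
    full = gates[0]
    for g in gates[1:]:
        full = np.kron(full, g)
    return full @ state

def cnot(n, control, target):
    """Build an n-qubit CNOT as a permutation matrix over the computational basis."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for basis in range(dim):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - i) for i, b in enumerate(bits)), basis] = 1
    return U

n = 3                                    # two data qubits plus one parity ancilla
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                           # start in |000>

# Stand-in for an encoded input on the data qubits (qubits 0 and 1)
state = apply([H, I2, I2], state)

# Record the parity of the data qubits on the ancilla (qubit 2)
state = cnot(n, 1, 2) @ (cnot(n, 0, 2) @ state)

# Backdoor: an extra X gate smuggled in during transpilation flips data qubit 1
state = apply([I2, X, I2], state)

# Re-compute the parity; without the injected gate the ancilla would return to |0>
state = cnot(n, 1, 2) @ (cnot(n, 0, 2) @ state)

# Odd basis indices correspond to the ancilla reading 1, i.e. the flip being detected
probs = np.abs(state) ** 2
print(f"Probability that the parity check flags the injected gate: {probs[1::2].sum():.2f}")
```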

The future of ML security hinges on an in-depth understanding of both the vulnerabilities and the effectiveness of defense methods. QC has significant potential to enhance our AI capabilities in the near future and may even strengthen the robustness of ML models against classical attacks. However, it also introduces previously unknown vulnerabilities. With this project, Fraunhofer IKS and its partners have made a substantial contribution to understanding this new security landscape and have brought the goal of building secure and reliable systems one step closer.


References:

[1] M. Nasr, N. Carlini, J. Hayase, M. Jagielski, A. F. Cooper, D. Ippolito, C. A. Choquette-Choo, E. Wallace, F. Tramèr and K. Lee, "Scalable Extraction of Training Data from (Production) Language Models," arXiv:2311.17035, 2023.

[2] N. Franco, "Formal Verification of Neural Networks with Quantum Computers," Dec. 2024. [Online]. Available: https://safe-intelligence.frau....

[3] N. Franco, A. Sakhnenko, L. Stolpmann, D. Thuerck, F. Petsch, A. Rüll and J. M. Lorenz, "Predominant Aspects on Security for Quantum Machine Learning," IEEE International Conference on Quantum Computing and Engineering (QCE), 2024.


The report was commissioned by the German Federal Office for Information Security (BSI) and produced in collaboration with Adesso and Quantagonia.
