Neural networks can be unreliable under basic transformations, such as image scaling and brightness changes, and they are highly unreliable under domain-specific manipulations, such as stickers placed on traffic signs. To work reliably, a model must therefore classify samples correctly under the different lighting conditions, camera angles, etc. that occur naturally in the real world.
This is not a new problem: formal verification of neural networks has attracted tremendous attention from communities such as machine learning, formal methods, and security. To understand the problem, consider for each image x a set of allowed perturbations S, which formalizes the manipulative power of the adversary (Goodfellow, Shlens, and Szegedy 2014). In image classification, S captures the perceptual similarity between images; for instance, S can be modeled as a convex shape around x. With such a set, we can formally specify environmental conditions and systematically test neural networks against them.
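To make the perturbation set concrete, a sketch of one common choice: the L-infinity ball around x, in which every pixel may move by at most some budget eps. The tiny three-pixel "image" and the value of eps below are illustrative, not taken from the post.

```python
import numpy as np

def in_perturbation_set(x, x_adv, eps):
    """Check membership in S = {x' : ||x' - x||_inf <= eps},
    i.e. every pixel of x_adv stays within eps of the original."""
    return bool(np.max(np.abs(x_adv - x)) <= eps)

x = np.array([0.2, 0.5, 0.8])              # a toy 3-pixel "image"
x_adv = x + np.array([0.01, -0.02, 0.0])   # a small perturbation
print(in_perturbation_set(x, x_adv, eps=0.03))  # True: within the ball
print(in_perturbation_set(x, x_adv, eps=0.01))  # False: one pixel moved 0.02
```

Enumerating this set is hopeless (it is a continuum), which is exactly why the certification methods below reason about the whole set at once.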
In this context, S represents the set of possible images induced by the attack and is also called an adversarial region. Note that it is practically impossible to enumerate all images in that region and simply run the network on each one to verify that it is classified correctly. Instead, two methods exist to certify the robustness of neural networks:
- Convex approximation
- Exact verification
The first method computes a convex shape S around the input x and propagates it through the entire network to obtain an output shape. Robustness is then proven if every concrete output within the output shape is classified to the correct class.
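A minimal sketch of this propagation with the simplest convex relaxation, intervals: for an affine layer y = Wx + b with the input constrained to an elementwise box [l, u], the output bounds split W into its positive and negative parts, and the monotone ReLU is applied to both bounds. The weights below are illustrative, not from any real network.

```python
import numpy as np

def interval_affine(W, b, l, u):
    """Propagate the box [l, u] through y = W @ x + b.
    W+ takes the lower bound where weights are positive,
    W- takes the upper bound where weights are negative."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def interval_relu(l, u):
    """ReLU is monotone, so it maps bounds to bounds."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

W = np.array([[1.0, -2.0],
              [0.5,  1.0]])
b = np.array([0.0, -1.0])
l, u = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # input box

l1, u1 = interval_affine(W, b, l, u)   # pre-activation bounds
l2, u2 = interval_relu(l1, u1)         # post-ReLU bounds
print(l2, u2)                          # [0. 0.] [1.  0.5]
```

Stacking such steps layer by layer yields an output box; if every point in that box is assigned the correct class, robustness is certified. The looseness accumulated along the way is exactly why this analysis can fail to verify a network that is in fact robust.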
The second method formulates the ReLU (Rectified Linear Unit) with binary variables, i.e., active or inactive. The verification problem can then be expressed as a mixed-integer program (MIP). For smaller networks, an input shape can be propagated exactly using MIP. To scale to larger networks, classical approaches follow the first method and over-approximate the shape using various convex relaxations (e.g., intervals, zonotopes, and polytopes). Although the approximate analysis is usually faster and more scalable than the exact one, there are cases where it fails to verify robustness. In contrast, the exact analysis usually takes exponential time and scales poorly, but it guarantees completeness of the result.
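The binary encoding mentioned above is, in the standard formulation from the literature, a big-M-style construction: for a single unit y = max(z, 0) with known pre-activation bounds l ≤ z ≤ u (with l < 0 < u, both assumed given), a binary variable a selects the active or inactive phase:

```latex
y \ge z, \qquad y \ge 0, \qquad
y \le u\,a, \qquad y \le z - l\,(1 - a), \qquad a \in \{0, 1\}
```

When a = 1 the constraints force y = z (active phase); when a = 0 they force y = 0 (inactive phase). One binary variable per unstable neuron is what makes the exact analysis combinatorial, and hence exponential in the worst case.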
Where quantum computers come into play…
The development of quantum computing (QC) hardware and software has seen significant progress over recent years. However, practical applications of QC remain an active area of research as current quantum computers belong to the category of Noisy Intermediate Scale Quantum (NISQ) devices. They are characterized by their relatively small number (around 100) of qubits, noise, and by the relatively small connectivity between the qubits. Therefore, it is not feasible to perform large and complex calculations solely on a quantum computer. Instead, current algorithms are hybrid in nature and require interaction between classical and quantum computers.
Motivated by exploiting the superposition over the exponentially large state space of qubits, combinatorial optimization has received a lot of attention both in gate-based QC and in quantum annealing. Building on this, we are the first to leverage the combinatorial power of QC to verify the robustness of neural networks with a hybrid quantum-classical algorithm.
In our work (Franco et al. 2022), we formalize the verification problem to suit quantum optimization algorithms. The problem is decomposed into a quadratic unconstrained binary optimization (QUBO) problem, the framework used by current quantum heuristics such as the quantum approximate optimization algorithm (QAOA), and a linear program, which is solved with classical solvers. The approach iterates between the classical and the quantum system: the QUBO is solved by a quantum computer or a quantum annealer, and the parameters are updated by an optimizer running on classical hardware. Given the heuristic nature of these solvers, more iterations generally lead to better results; in our approach, we reduced the number of iterations needed and placed a bound on the maximum number of qubits required.
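For readers unfamiliar with the QUBO framework: it asks for a binary vector x minimizing x^T Q x for a given matrix Q. A quantum device tackles this heuristically (via QAOA or annealing); the brute-force loop below is only a classical sketch to make the problem shape concrete, with an illustrative 2x2 matrix Q rather than one derived from an actual verification instance.

```python
import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    """Minimize x^T Q x over binary vectors x by exhaustive search.
    Feasible only for tiny n; the search space has 2^n points,
    which is exactly the exponential structure QC aims to exploit."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = float(x @ Q @ x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

Q = np.array([[-1.0, 2.0],
              [ 0.0, -1.0]])   # off-diagonal entry couples the variables
x_opt, val = solve_qubo_bruteforce(Q)
print(x_opt, val)              # [0 1] -1.0
```

The positive coupling penalizes setting both bits, so the optimum activates only one of them. In the hybrid loop described above, an instance of this form is handed to the quantum solver at each iteration.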
In summary, the challenges posed by current NISQ devices and quantum heuristic algorithms are well researched, and finding suitable applications and practical use cases is the key to unlocking a quantum advantage. Given the combinatorial nature of formal neural network verification, this problem appears to hold great promise for demonstrating such an advantage.
Franco, Nicola, Tom Wollschläger, Nicholas Gao, Jeanette Miriam Lorenz, and Stephan Günnemann. 2022. “Quantum Robustness Verification: A Hybrid Quantum-Classical Neural Network Certification Algorithm.” arXiv preprint.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” arXiv preprint arXiv:1412.6572.