Hierarchical structures can lead us to safe and efficient AI systems
Deep learning has boosted interest in deploying artificial intelligence (AI) in more systems. Reinforcement learning (RL) has also been through a revival, with deep RL models designed to play Atari games and even beat the world's best players at the game of Go. Despite these impressive recent achievements, however, RL still struggles to gain acceptance and to be deployed successfully in many applications, especially safety-critical tasks such as robotics, autonomous driving and industrial control.
Safety is perhaps the main issue preventing AI from being accepted as an alternative to traditional engineered solutions for safety-critical applications. One of the root causes is that neural networks still lack formal guarantees and therefore cannot be considered reliable in most cases.
A notorious problem faced by neural networks is adversarial attacks: small perturbations of the sensor inputs, whether caused by ordinary noise or crafted by a malicious attacker, can be enough to alter the decisions of the network [1]. Improving robustness and developing proper testing procedures are therefore urgent matters for deep learning.
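To make this concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way such perturbations are constructed (not specific to reference [1]); `model`, `x` and `y` are placeholders for any differentiable PyTorch classifier, a batch of inputs and their true labels.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus a small perturbation chosen to increase the model's loss."""
    # `model`, `x` and `y` are illustrative placeholders, not a specific setup.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

Even with a tiny epsilon, the resulting input can look unchanged to a human while flipping the network's decision, which is exactly what makes such attacks dangerous in safety-critical settings.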
Neural networks must also become more explainable if AI is to be safer, as they are usually treated as black boxes. In that sense, formalizing the requirements that specify the intended behavior of such systems is not a trivial task: AI is used in the first place precisely because the desired behavior cannot easily be described with explicit, explainable logic [2]. Being able to explain the decisions taken by a model is a necessary step in the validation and verification of such systems.
Sample efficiency
Most state-of-the-art deep learning models are data-inefficient, and collecting the massive amounts of data needed to train them is not always feasible.
Think about autonomous driving: the approach taken by the leading players (e.g., Tesla, Waymo) consists of training their models on thousands of hours of recorded driving data. AlphaGo and OpenAI Five, celebrated AI models able to play the game of Go and Dota 2 respectively, also needed extremely large amounts of collected data to achieve their impressive results.
The root of this problem is a well-known issue in RL: the curse of dimensionality, the fact that the number of states grows exponentially with task complexity, quickly making the problem computationally intractable. For instance, an agent whose state is described by ten sensor readings, each discretized into ten levels, already faces 10^10 distinct states. Improving sample efficiency is therefore paramount for deploying RL in complex scenarios.
Sample efficiency refers to the amount of data the agent needs to experience before reaching a chosen target level of performance [3]. The fewer interactions with the environment the agent requires to learn a good control strategy, the more efficient the learning method. Sample efficiency can be improved by structuring the model so that it exploits the collected data more effectively and by using better strategies for interacting with the environment.
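As a concrete, if simplified, way to think about this definition, the sketch below counts how many environment steps an agent needs before a single episode reaches a target return; the Gym-style `env` and the `agent.act`/`agent.learn` interface are illustrative assumptions, not a specific library.

```python
def steps_to_target(env, agent, target_return, max_steps=1_000_000):
    """Count environment steps until one episode achieves target_return."""
    # `env` and `agent` follow an assumed Gym-style interface.
    obs = env.reset()
    episode_return, steps = 0.0, 0
    while steps < max_steps:
        action = agent.act(obs)
        obs, reward, done, _ = env.step(action)
        agent.learn(obs, reward, done)
        episode_return += reward
        steps += 1
        if done:
            if episode_return >= target_return:
                return steps  # fewer steps means a more sample-efficient method
            episode_return = 0.0
            obs = env.reset()
    return None  # the target level of performance was never reached
```

Comparing this count across learning methods on the same task is one simple way to quantify the sample-efficiency gains discussed below.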
Hierarchical Reinforcement Learning
Hierarchical RL (HRL) tackles problems like sample inefficiency, scalability and generalization by breaking the problem down into modules at different levels of abstraction. It stands in contrast to end-to-end learning, in which a single model is optimized to process the raw sensor input and directly output the decision sent to the actuators.
Multiple studies in neuroscience and behavioral psychology suggest that our brain is structured in a hierarchical manner. For instance, toddlers already use temporal abstraction to generate subgoals when solving their tasks [4]. Guiding our behavior in accordance with goals, plans and broader contextual knowledge is what sets human beings apart and allows us to solve highly complex problems [5].
The core idea of HRL, inspired by such biological evidence, is to learn how to solve a task by learning specific skills (also called abstract actions) that are combined to accomplish higher-level goals. A large gain in sample efficiency comes from the fact that the set of learned skills can be reused to solve variations of the task, or even completely novel ones.
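A minimal sketch of this idea, in the spirit of the options framework, is shown below: a high-level policy selects a skill, and that skill controls the agent until its own termination condition fires. All class and method names here are illustrative assumptions rather than a particular HRL library.

```python
class Skill:
    """A low-level skill (abstract action) that acts until its subgoal is reached."""

    def act(self, obs):
        """Return a primitive action for the current observation."""
        raise NotImplementedError

    def done(self, obs):
        """Return True once this skill's subgoal (termination condition) is met."""
        raise NotImplementedError


def run_hierarchical_episode(env, high_level_policy, skills):
    """Run one episode: the high-level policy picks skills, skills pick actions."""
    # `env`, `high_level_policy` and `skills` follow assumed, Gym-style interfaces.
    obs = env.reset()
    total_reward = 0.0
    while True:
        # Temporal abstraction: one high-level decision spans many time steps.
        skill = skills[high_level_policy.select(obs)]
        while not skill.done(obs):
            obs, reward, done, _ = env.step(skill.act(obs))
            total_reward += reward
            if done:
                return total_reward
```

Because each skill encapsulates one subgoal, a new task can often be tackled by retraining only the high-level policy over the same skill set, which is where much of the sample-efficiency gain comes from.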
Industries such as automotive and avionics have traditionally designed their safety-critical systems in a modular fashion. Modularity facilitates maintainability, the implementation of redundant modules, and tracing a detected failure back to its root cause in both hardware and software. This can serve as motivation for designing AI models that solve complex tasks by breaking the problem down into subproblems that can be understood and verified far more easily. HRL is therefore a viable approach to achieving safer AI-based systems.
Even though research on AI has come a long way and produced impressive results, much remains to be done before learning-based models can be deployed in real, complex, safety-critical applications. Hierarchical reinforcement learning is a promising approach that can help achieve this ambitious goal.
[1] Lütjens, Björn, Michael Everett, and Jonathan P. How. “Certified adversarial robustness for deep reinforcement learning.” Conference on Robot Learning. PMLR, 2020.
[2] Alves, Erin E., et al. “Considerations in assuring safety of increasingly autonomous systems.” NASA/CR-2018-220080, 2018.
[3] Botvinick, Matthew, et al. “Reinforcement learning, fast and slow.” Trends in cognitive sciences 23.5 (2019): 408-422.
[4] Ribas-Fernandes, José J. F., et al. “A neural signature of hierarchical reinforcement learning.” Neuron 71.2 (2011): 370-379.
[5] Badre, David, et al. “Hierarchical cognitive control deficits following damage to the human frontal lobe.” Nature neuroscience 12.4 (2009): 515-522.
This work was funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Institute for Cognitive Systems.