Artificial intelligence
Playing it safe with machine learning

Machine learning, and especially deep learning, enables many highly complex applications, such as autonomous driving. However, new challenges have to be overcome before such systems can be considered safe. An overview.

Image: Mountain road

Autonomous vehicles are equipped with many different sensors, from cameras to radar and lidar systems to ultrasonic sensors. These continuously supply large quantities of data about the vehicle's environment. However, this data cannot easily be used in its unprocessed form to steer the vehicle. This is where perception, the interpretation of the sensor data, comes into play. In the context of autonomous vehicles, this could mean identifying objects in camera images, for example. Precise and reliable perception of the surroundings is essential for autonomous vehicles: it is critical in determining driving direction and speed, for example.

Perception as the basis for autonomous systems

Reliable perception is no easy task, especially when the systems need to operate in many different situations. The sensors can be impaired by factors such as the weather, and situations or objects that have never been observed before may arise. Conventional, hand-crafted algorithms cannot capture this complexity. For this reason, machine learning, and especially deep neural networks (DNNs), is being used in this area. Of course, these networks also need to be made safe, and the data-driven nature of the algorithms poses new challenges here. Three of these perception-related challenges that we are dealing with at Fraunhofer IKS are presented below.

Certain ≠ safe

Modern DNNs are often too certain in their predictions. One reason is that ever larger network architectures gather ever more evidence for each possible prediction. In a final step, this evidence is converted into a probability distribution over the possible predictions using what is known as the softmax function, which often assigns disproportionately high probabilities to the predictions with the most evidence. The reason lies in the exponential nature of the softmax function.
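
How quickly the softmax turns a modest lead in evidence into near-certainty can be seen in a minimal sketch (plain Python with NumPy; the logit values are invented for illustration):

import numpy as np

def softmax(logits):
    # Exponentiate the evidence (shifted for numerical stability) and normalize.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Hypothetical evidence (logits) a network might gather for three classes.
logits = np.array([4.0, 2.0, 1.0])
print(softmax(logits))      # approx. [0.84, 0.11, 0.04]
print(softmax(2 * logits))  # approx. [0.98, 0.02, 0.00] -- scaling up the evidence
                            # pushes the top prediction towards certainty

Because of the exponential, the class with the most evidence dominates long before the evidence is actually conclusive.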

Image: Autonomous car

For autonomous vehicles, precise and reliable perception of the environment is crucial.

In safety-critical areas in particular, however, neural networks need to be able to express their uncertainty reliably. This makes it possible to take precautions and avoid potentially critical situations. We have already presented some approaches to solving this problem in the two articles “Uncertainties we can rely on” and “How to teach an AI to question itself.”

Old from new

Another problem arises when a neural network is presented with a concept, such as an object, that it did not see in the training phase. Instead of giving no answer at all, the network attempts to find patterns in the unknown input that match concepts it has already encountered. This becomes clear in the example of a classifier trained to categorize images of cats and dogs: when presented with an elephant, it still assigns probabilities to the two categories dog and cat, and those probabilities add up to 100%.

This is because the network finds evidence in the elephant picture in the form of patterns it has learned to use to distinguish cats from dogs. Although fewer matching patterns are found overall, this is ultimately masked by the fact that a probability distribution over the known categories is created regardless. Identifying this type of unknown input, also called out-of-distribution detection or novelty detection, is necessary because the system could otherwise perceive its environment incorrectly. Any subsequent decisions would then be based on false assumptions, which could trigger an incorrect response from the system as a whole.
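
This effect can be reproduced in a few lines (Python/PyTorch; the logit values are invented for illustration): even if the network finds only weak evidence for either class, the softmax output still distributes the full probability mass over cat and dog.

import torch

# Hypothetical logits a cat-vs-dog classifier produces for an elephant image.
logits = torch.tensor([1.3, 0.9])        # weak evidence for "dog" and "cat"
probs = torch.softmax(logits, dim=0)
print(probs)        # approx. [0.60, 0.40]
print(probs.sum())  # always 1.0 -- the unknown input is forced into the known categories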

The obvious approach of adding a separate category for unknown concepts is difficult to implement, because it would require the patterns of all unknown concepts to be learned. More promising are approaches that observe at runtime which neurons are active. This activation pattern is compared to the patterns observed during training; if they differ too much, the input may show a concept that has not been learned.
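
A greatly simplified sketch of this idea (Python/NumPy; it assumes the activations of one network layer can be read out, and the Gaussian model with a percentile-based threshold is an illustrative choice, not a specific Fraunhofer IKS method):

import numpy as np

class ActivationMonitor:
    """Compares runtime activations of one layer with the statistics seen during training."""

    def fit(self, train_activations):
        # train_activations: array of shape (n_samples, n_features) from the training set.
        self.mean = train_activations.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(train_activations, rowvar=False))

    def distance(self, activation):
        # Mahalanobis distance of one activation vector to the training distribution.
        d = activation - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d))

    def is_novel(self, activation, threshold):
        # If the activation pattern lies too far from everything seen in training,
        # the input may show a concept the network has not learned.
        return self.distance(activation) > threshold

# Illustration with random data standing in for real layer activations.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 64))
monitor = ActivationMonitor()
monitor.fit(train_feats)
threshold = np.percentile([monitor.distance(a) for a in train_feats], 99)
print(monitor.is_novel(rng.normal(loc=5.0, size=64), threshold))  # strongly shifted input -> True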

Another option is to train a network that creates a highly simplified representation of the known concepts and reconstructs the original images from this representation. For known concepts, the difference between the input and output images is small, whereas novel concepts produce a much greater reconstruction error.
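
As a rough sketch (PyTorch; the flattened 32x32 input, the layer sizes and the bottleneck width are arbitrary choices for illustration), such a reconstruction-based detector could look like this:

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compresses an input image into a small representation and reconstructs it."""

    def __init__(self, image_dim=32 * 32, bottleneck=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(),
                                     nn.Linear(128, image_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, image):
    # Mean squared difference between input and reconstruction; after training on
    # known concepts only, large values indicate a concept the model cannot represent.
    with torch.no_grad():
        return torch.mean((model(image) - image) ** 2).item()

# After training on known concepts (training loop omitted), a threshold on the
# reconstruction error separates known inputs from novel ones:
# is_novel = reconstruction_error(model, flattened_image) > threshold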

Learning to understand neural networks

The last challenge to mention here is that the decisions of modern neural networks cannot be adequately explained.

In the area of perception, these networks usually have millions of parameters, known as weights. This makes it impossible for a human to understand what the network has based its decision on; in this context, a DNN is referred to as a black box. To build an argument for the safety of a system that uses a neural network, it would be extremely helpful to make the prediction process more transparent. This could ensure that a network bases its prediction on the right characteristics and does not, for example, distinguish polar bears from grizzlies by how much snow is in the picture.

The field of explainable AI offers qualitative and quantitative approaches to this problem. The qualitative approaches include, for example, visualizing the effect that each individual pixel of the input image has on the result; such visualizations are known as heatmaps or saliency maps. Another method is to visualize the learned patterns and how they interact across the different layers of the network. Qualitative methods prove inadequate for a complete safety argument, but they do produce findings that help when developing neural networks and when analyzing errors that can never be completely ruled out.
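
A simple gradient-based saliency map, for example, can be sketched in a few lines (PyTorch; model and image stand in for an actual image classifier and an input of shape channels x height x width):

import torch

def saliency_map(model, image, target_class):
    # How strongly does each input pixel influence the score of the target class?
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    # The gradient magnitude per pixel serves as a simple heatmap;
    # the colour channels are collapsed by taking the maximum.
    return image.grad.abs().max(dim=0).values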

In contrast, quantitative approaches include learning concepts and patterns that can also be interpreted by humans. For example, a neural network could be trained to recognize vehicles using specific elements such as tires or windshields. These elements can in turn also be broken down into specific components.

Conventional neural networks also learn hierarchies of patterns and concepts. The difference in these explainable learning approaches is that the components to be learned are defined explicitly, or that patterns which have already been learned are grouped into specific, interpretable concepts.
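
A greatly simplified sketch of such a concept-based classifier (PyTorch; the concept list, the feature dimension and the two-stage structure are illustrative assumptions, and in practice the concept labels have to come from annotated data):

import torch
import torch.nn as nn

# Human-interpretable vehicle parts, chosen purely for illustration.
CONCEPTS = ["tire", "windshield", "headlight"]

class ConceptBottleneckClassifier(nn.Module):
    """First predicts interpretable concepts, then derives the class from them alone."""

    def __init__(self, feature_dim=256, num_classes=2):
        super().__init__()
        # Stage 1: map image features to a score for each named concept.
        self.concept_head = nn.Linear(feature_dim, len(CONCEPTS))
        # Stage 2: the final decision sees only the concept scores,
        # so it can be inspected concept by concept.
        self.classifier = nn.Linear(len(CONCEPTS), num_classes)

    def forward(self, features):
        concepts = torch.sigmoid(self.concept_head(features))
        return self.classifier(concepts), concepts  # return both for inspection

During training, both the concept scores and the final prediction would be supervised, which forces the network to base its decision on the explicitly defined components.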

Safety requires a holistic view

The approaches presented here for addressing the problems of neural networks are intended to make the networks more robust and to provide more information for assessing their state.

However, additional safe processes and architectures surrounding the neural networks are needed, such as the four-plus-one safety architecture.

Only with the help of a holistic safety concept will it be possible to guarantee safety even for the most complex tasks requiring the use of machine learning components.

Do you also want to play it safe with deep learning?

On our website you will find our services in the areas of Dependable Artificial Intelligence and Safety Architectures for AI-based systems. Or contact us directly at bd@iks.fraunhofer.de.

Read next

Autonomous driving: Where driverless cars still have some catching up to do (Reinhard Stolle, Fraunhofer IKS)