Autonomous driving
Reliable detection of pedestrians in road traffic

Autonomous vehicles must be able to detect pedestrians reliably. Deep learning approaches are the main method used for this task. Unlike classic software, however, their results are hard to check and verify, which requires a range of complex technical measures. This article provides an overview of these measures.


An important function in the context of autonomous driving is object detection: the system must detect which objects are around or in front of the vehicle and where they are. This information is then used for decisions such as initiating a braking maneuver or changing the driving trajectory. Pedestrians are particularly critical here, which is why pedestrian detection is such an essential function.

This raises the main challenge: How can we ensure that the system detects pedestrians reliably? And how can its safety be demonstrated in a logically sound manner?


Safe artificial intelligence for autonomous driving

Proving the safety of AI systems must be possible. That’s why the “KI-Absicherung” project for AI assurance, with its 28 project partners, has set itself the goal of making the safety of in-car AI systems verifiable.

To the project “KI-Absicherung”

There are various classic approaches to computer vision. Today, however, the focus in 2D object detection is on deep neural networks (DNNs); for example, within the “KI-Absicherung” consortium project for AI assurance. For each detected object, these DNNs produce a two- or three-dimensional rectangular box that indicates the object’s position in the image, together with the corresponding category (pedestrian, car, truck, etc.) and a confidence. The latter is a measure of how likely it is that the assigned category is correct.
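
To make this output format concrete, the following minimal Python sketch shows what a single detection might look like; the class and field names are illustrative and not taken from any particular framework or from the project.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One 2D object detection as typically produced by a DNN detector."""
    x_min: float       # left edge of the bounding box, in pixels
    y_min: float       # top edge of the bounding box, in pixels
    x_max: float       # right edge of the bounding box, in pixels
    y_max: float       # bottom edge of the bounding box, in pixels
    category: str      # e.g., "pedestrian", "car", "truck"
    confidence: float  # in [0, 1]: how likely the category is correct

# Example output for one camera image:
detections = [
    Detection(412.0, 188.5, 450.2, 301.0, "pedestrian", 0.87),
    Detection(60.3, 140.0, 230.8, 260.4, "car", 0.95),
]
```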

However, current DNNs and the associated evaluation methods do not fulfill the usual safety expectations; for example, a pedestrian detection rate of over 99 percent. Furthermore, due to the black-box properties of the DNNs, correct functionality can never be 100 percent guaranteed.

And there is another factor to consider: It is not enough to measure performance with a single averaged metric, as is commonly done in many benchmarks. A single metric averaged over the entire test data set allows few conclusions (if any) to be drawn about the weak points of the model; the short sketch after the following list illustrates this. The following problems can therefore be identified when developing and verifying the safety of (object detection) DNNs:

  • Inadequate performance
  • Inadequate evaluation metrics
  • Lack of logical argumentation structure with regard to safety
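
To illustrate the second problem: in the minimal sketch below, a hypothetical detector reaches a seemingly reassuring overall recall while its recall on distant pedestrians, a safety-relevant slice of the data, is poor. All numbers are invented for illustration.

```python
def recall(results):
    """Fraction of ground-truth pedestrians that were detected."""
    return sum(1 for r in results if r["detected"]) / len(results)

# Hypothetical per-object evaluation results: was each ground-truth
# pedestrian detected, and how far away was it?
results = (
    [{"detected": True,  "distance_m": 10.0}] * 90   # near: all found
    + [{"detected": False, "distance_m": 45.0}] * 8  # far: mostly missed
    + [{"detected": True,  "distance_m": 45.0}] * 2
)

print(f"overall recall: {recall(results):.2f}")      # 0.92 -- looks fine
far = [r for r in results if r["distance_m"] > 30.0]
print(f"recall beyond 30 m: {recall(far):.2f}")      # 0.20 -- hidden weak point
```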

Understanding neural networks

In order to tackle these problems, it is important to understand what these networks are capable of and, in particular, where the individual weak points lie. This information can then be used to set out assumptions and restrictions within which the performance is good enough, thus allowing the networks to be used in specific areas.

When it comes to pedestrian detection, for example, these assumptions could include restricting the application to specific pedestrian heights (between 50 centimeters, e.g., for children, and 250 centimeters for adults), distances from the vehicle (between three and fifty meters) or a maximum occlusion of the person (e.g., under 50 percent). Further restrictions are also possible; however, investigations are required in order to specify them precisely.
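
Such assumptions can be written down as a simple operating-range check. The numeric thresholds below are the ones quoted above; the function and parameter names are illustrative.

```python
def within_pedestrian_odd(height_cm: float, distance_m: float,
                          occlusion: float) -> bool:
    """Check whether a pedestrian lies inside the assumed operating range.

    Outside this range, the detector's performance claims do not apply,
    so other safety measures would have to take over.
    """
    return (
        50.0 <= height_cm <= 250.0      # child up to tall adult
        and 3.0 <= distance_m <= 50.0   # relevant distance band
        and occlusion < 0.5             # less than 50 percent occluded
    )

# A heavily occluded pedestrian is outside the assumed operating range:
print(within_pedestrian_odd(height_cm=170, distance_m=20, occlusion=0.7))  # False
```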

When it comes to evaluating performance, the options include combining several metrics and correlating them with each other, or developing special-purpose metrics. Many metrics are well suited to comparing different networks but do not reflect the necessary safety properties. The Fraunhofer Institute for Cognitive Systems IKS is therefore working on gradually adapting classic metrics and on defining new metrics specifically for evaluating safety.
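
As one purely hypothetical example of such an adaptation (not one of the metrics Fraunhofer IKS is developing), a classic miss rate could be weighted by how critical each missed pedestrian is, for instance by their proximity to the vehicle:

```python
def safety_weighted_miss_rate(results, max_distance_m=50.0):
    """Weight each missed pedestrian by proximity: a miss at 5 m counts
    far more than a miss at 45 m. Returns a value in [0, 1], where 0
    means no safety-relevant misses. The weighting is illustrative.
    """
    weights = [max(0.0, 1.0 - r["distance_m"] / max_distance_m) for r in results]
    missed = sum(w for w, r in zip(weights, results) if not r["detected"])
    total = sum(weights)
    return missed / total if total > 0 else 0.0

results = [
    {"detected": False, "distance_m": 5.0},   # critical miss, weight 0.9
    {"detected": True,  "distance_m": 10.0},  # hit, weight 0.8
    {"detected": False, "distance_m": 45.0},  # minor miss, weight 0.1
]
print(f"{safety_weighted_miss_rate(results):.2f}")  # 0.56, dominated by the close miss
```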

Putting the training and test data set to the test

In addition, both the network and the training and test data sets must undergo a detailed investigation in order to identify their shortcomings. Ideally, the training data set would cover the entire operating range of the function completely, which, in the case of road traffic in particular, is impossible. Therefore, as mentioned earlier, the application area is restricted in order to ensure adequate coverage in the training set.

Furthermore, synthetically generated data is used in order to complete the training set step by step. The test data set, meanwhile, should contain particularly rare and difficult cases — known as corner cases — in order to check whether the DNN can cope with them.
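
One possible sketch of how such coverage gaps could be located: bin every annotated pedestrian by attributes of the operating range and report sparsely populated combinations, which then become candidates for targeted synthetic data generation. The bin edges and the minimum count are illustrative assumptions.

```python
from collections import Counter
from itertools import product

def coverage_gaps(annotations, min_count=50):
    """Return under-represented (distance, occlusion) combinations."""
    def dist_bin(d):
        return "near" if d < 15 else "mid" if d < 30 else "far"
    def occl_bin(o):
        return "low" if o < 0.25 else "high"

    counts = Counter((dist_bin(a["distance_m"]), occl_bin(a["occlusion"]))
                     for a in annotations)
    cells = product(("near", "mid", "far"), ("low", "high"))
    return [cell for cell in cells if counts[cell] < min_count]

# A data set containing only near, barely occluded pedestrians:
annotations = [{"distance_m": 10, "occlusion": 0.1}] * 200
print(coverage_gaps(annotations))
# [('near', 'high'), ('mid', 'low'), ('mid', 'high'), ('far', 'low'), ('far', 'high')]
```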

It is important to understand the effect that the choice of model architecture and parameters has on the predictions and performance. As part of the evaluation, it must also be defined what degree of confidence is required before an object counts as correctly identified, and what level of accuracy is required when localizing objects. Both depend on the aim and structure of the overall system: If it is sufficient to simply detect a pedestrian, and their exact position is not crucial, a lower degree of localization accuracy may be enough. If, on the other hand, it is extremely important for the function in question that the whole body is localized as correctly as possible as it moves, the mapping must be as precise as possible in order to eliminate the risk of a collision with the pedestrian.
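
Localization accuracy is commonly quantified as the intersection over union (IoU) of the predicted and the ground-truth box. The sketch below shows the standard computation; the two thresholds stand in for the two situations just described and are illustrative, not values from the project.

```python
def iou(a, b):
    """Intersection over union of two boxes (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

predicted    = (100.0, 50.0, 140.0, 170.0)
ground_truth = (108.0, 55.0, 150.0, 175.0)

score = iou(predicted, ground_truth)            # about 0.60
print("detection only:  ", score >= 0.5)        # True: coarse localization suffices
print("precise tracking:", score >= 0.8)        # False: whole body must be matched
```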

Getting to the bottom of incorrect results

During the evaluation, particular attention must be paid to the data and objects that the model cannot detect correctly. Are there specific reasons why certain objects are not identified correctly? Are there signs of systematic errors? Fraunhofer IKS is currently working on methods to track down such systematic errors in the model.
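
A minimal sketch of one way to hunt for such systematic errors (not the specific method under development at Fraunhofer IKS): compare the miss rate per attribute value with the overall miss rate and flag conspicuous groups. The lighting attribute and all numbers are invented.

```python
from collections import defaultdict

def flag_systematic_misses(results, attribute, factor=2.0):
    """Flag attribute values whose miss rate is far above average,
    which hints at a systematic rather than a random error."""
    overall = sum(not r["detected"] for r in results) / len(results)
    groups = defaultdict(list)
    for r in results:
        groups[r[attribute]].append(not r["detected"])
    return {value: sum(m) / len(m) for value, m in groups.items()
            if sum(m) / len(m) > factor * overall}

results = (
    [{"detected": True,  "lighting": "day"}] * 95
    + [{"detected": False, "lighting": "day"}] * 5
    + [{"detected": True,  "lighting": "night"}] * 6
    + [{"detected": False, "lighting": "night"}] * 4
)
print(flag_systematic_misses(results, "lighting"))  # {'night': 0.4}
```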

When considering the overall system with the aim of developing a reliable detection function, it is important to look not just at the DNN and the data sets used. Measures during the development process and at runtime must also be included in order to guarantee the safety of the function, and the effectiveness of these measures must be evaluated.

Learn more!

Would you like to learn more about Fraunhofer IKS’s safeguarding methods for safety-relevant AI systems? Then please contact our Business Development team for a personal discussion.

Write an e-mail

As mentioned earlier, the data sets must be created carefully during development, and a suitable DNN architecture and parameters must be selected. At runtime, it can be particularly helpful to develop and couple additional modules within the overall system architecture, for example a runtime monitoring system that checks the plausibility of inputs and outputs. The same applies to out-of-distribution detection, which detects input data that the model cannot process reliably.
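
A minimal sketch of what such a runtime plausibility check could look like, assuming the detector reports a bounding box and a confidence as described above. The thresholds and the shape heuristic are illustrative assumptions, not the Fraunhofer IKS implementation.

```python
def plausible(detection, confidence_floor=0.6):
    """Runtime monitor: accept a pedestrian detection only if its
    confidence and geometry are plausible; otherwise the system should
    fall back to a more conservative driving strategy."""
    x_min, y_min, x_max, y_max = detection["box"]
    width, height = x_max - x_min, y_max - y_min
    if detection["confidence"] < confidence_floor:
        return False  # too uncertain to act on
    if width <= 0 or height <= 0:
        return False  # geometrically invalid output
    if not 1.0 <= height / width <= 6.0:
        return False  # implausible aspect ratio for a standing person
    return True

detection = {"box": (412.0, 188.5, 450.2, 301.0), "confidence": 0.87}
print(plausible(detection))  # True: confident enough and plausibly shaped
```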

Fraunhofer IKS is working on methods and tools which will ultimately make it possible to establish a valid safety argument. This involves looking at the results from the investigations of the model, the data sets and other system components in combination with each other, and includes the following approaches:

  • Tooling for safeguarding safety-relevant AI systems
  • Developing new metrics with a particular focus on the safety aspect
  • Formal argumentation logic for proof of safety

This work was funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Institute for Cognitive Systems.
