Autonomous driving
Complex systems are a challenge for safety

When ensuring the safety of automated driving systems, it is important to take into account technological as well as societal and legal aspects.

Image: An intersection seen from above

Human error is by far the most frequent cause of traffic accidents; vehicle defects play a comparatively minor role. Automated driving systems therefore have the potential to make roads much safer by optimizing the flow of traffic and by recognizing and responding to hazards on the road. What’s more, they can limit the damage caused by distracted and unreliable human drivers.

Breakthroughs in the field of artificial intelligence (AI), and in particular the use of machine learning to perceive the vehicle’s surroundings and guide its response to them, are seen as the key to making automated driving a reality. The first steps towards introducing this technology have already been taken. Despite the initial hype and enormous investments, however, progress has been much slower than originally expected. In addition, a number of serious accidents involving self-driving cars, some causing severe injuries and deaths, have undermined trust in the technology and made it clear that new approaches to the safety of automated driving systems are needed.

The boundaries of the system are often unclear

Ensuring the safety of automated driving, and of autonomous driving in particular, is nonetheless a complex undertaking. Not only is the task technically difficult and resource-intensive, but autonomous vehicles and their wider sociotechnical context have the characteristics of complex systems in the strict sense of the term. For example, the boundaries of the system to be considered are often unclear.

When focusing on the functional performance of an automated vehicle, the system could be seen as a set of electronic components that perceive their surroundings, make driving decisions and implement them by way of actuators. However, when a mobility service is viewed as a whole, the system also includes other road users, emergency services and the city or highway infrastructure. The interplay of all these components can have significant effects on safety.

Another challenge for the development of automated driving systems is the divide between societal and legal expectations of how the systems should behave on the one hand, and the ability of industry and research to precisely specify, define and test the technical functions on the other. This is mainly due to one thing: the environment in which the vehicles operate is itself complex and continuously evolving. It therefore cannot be fully specified during the development of the vehicle. Take one example: defining the appearance and behavior of every possible pedestrian precisely enough for an autonomous vehicle to recognize them reliably is enormously difficult. The number of possible scenarios in which a vehicle must act safely is simply too large for them to be specified in full, let alone tested.
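To get a feel for this combinatorial explosion, consider a deliberately simplified sketch in Python. The scenario dimensions and their values below are invented for illustration; they are not drawn from any real requirements catalog.

```python
from math import prod

# Illustrative scenario dimensions for a single pedestrian encounter.
# All dimensions and values are invented for this sketch; real
# operational design domains have far more of both.
dimensions = {
    "weather": ["clear", "rain", "fog", "snow"],
    "lighting": ["day", "dusk", "night"],
    "pedestrian_clothing": ["bright", "dark", "high-vis", "costume"],
    "pedestrian_motion": ["standing", "walking", "running", "stumbling"],
    "occlusion": ["none", "partial", "heavy"],
    "road_layout": ["straight", "curve", "intersection", "crosswalk"],
}

# The scenario space grows multiplicatively with every added dimension.
total = prod(len(values) for values in dimensions.values())
print(f"{total} combinations over {len(dimensions)} coarse dimensions")
# -> 2304 scenarios, before continuous parameters such as speed,
#    distance or sensor noise multiply the space again (or make it
#    effectively infinite).
```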

Machine learning often delivers dubious results

For this reason, machine learning is seen as the key to automated driving: algorithms are trained to recognize pedestrians in video signals with the help of large quantities of representative example data. However, this introduces a new problem. The decision-making process of such models is opaque, imprecise and hard to predict, and small differences in the input can lead to different and, at worst, incorrect results.
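This sensitivity to small input changes can be illustrated with a toy model. The “perception model” below is just a fixed linear classifier with made-up weights standing in for a deep network; the perturbation follows the same sign-of-the-gradient idea as the well-known FGSM attack.

```python
import numpy as np

# A deliberately tiny stand-in for a perception model: a fixed linear
# classifier that maps a 4-pixel "image" to a pedestrian score.
# Weights and inputs are invented for illustration only.
w = np.array([2.0, -1.0, 0.5, 1.5])
b = -1.0

def pedestrian_score(x):
    """Sigmoid score; > 0.5 means 'pedestrian detected'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([0.6, 0.4, 0.2, 0.1])      # original input
print(pedestrian_score(x))               # ~0.51 -> pedestrian detected

# Nudge each pixel by at most 0.05 in the direction that lowers the
# score (the sign of the gradient, which for a linear model is sign(w)).
x_adv = x - 0.05 * np.sign(w)
print(np.abs(x_adv - x).max())           # 0.05: a barely visible change
print(pedestrian_score(x_adv))           # ~0.45 -> no longer detected
```

A change of five percent per pixel is enough to flip the decision here; deep networks exhibit the same failure mode in far higher dimensions, which is what makes it so hard to bound their behavior.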

Finally, there are consequences to taking the responsibility for decisions away from the driver and transferring it to the system. The system must, often in ambiguous situations, make critical decisions that would otherwise require ethical judgment and the interpretation of legal constraints and societal norms.

Taken together, all these points significantly restrict our ability to apply traditional safety measures during both the development and the operation of such systems. They lead to gaps in safety assurance, in governance, in liability and in moral responsibility.

Standards governing the technical process of developing safe automated driving systems are currently being drawn up. In addition, there are plans to draft EU regulations that support the introduction of partially automated vehicles and take into account the ethical requirements for the use of AI. These efforts are a crucial step toward the reliable use of autonomous and AI-based systems for the benefit of humanity as a whole.

How safe is “safe enough”?

Many questions remain unanswered, however, and a significant interdisciplinary effort is needed before progress can be made. There is a conflict between the requirement to achieve a level of safety at least as good as that of a human driver (the so-called positive risk balance) and society’s general tendency to reject avoidable risks, particularly systematic ones. In other words: even if automated driving led to a net reduction in the number of deaths on the road, a significantly increased risk of accidents in certain situations would still not be tolerated. This makes it non-trivial to define when such systems are “safe enough” and raises the question of whether statistics alone can answer it satisfactorily.
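How demanding a purely statistical argument would be can be sketched with the classic “rule of three” for zero observed events. The human baseline rate used below is a rough, commonly cited order of magnitude, not an official figure.

```python
import math

# Rough human baseline: on the order of 1 fatality per 100 million
# miles (an often-cited order of magnitude, used here for illustration).
human_fatality_rate = 1e-8   # fatalities per mile

# To claim with 95% confidence that a fleet's fatality rate is below
# the human baseline after observing ZERO fatalities, the binomial
# bound requires n such that (1 - p)^n <= 0.05, i.e.
#   n >= ln(0.05) / ln(1 - p)   (approximately 3/p, the "rule of three")
confidence = 0.95
miles_needed = math.log(1 - confidence) / math.log(1 - human_fatality_rate)
print(f"{miles_needed:.2e} failure-free miles needed")   # ~3.0e+08

# Roughly 300 million miles of flawless driving -- and a single fatal
# accident would reset the argument. This is why statistics alone are
# unlikely to settle the "safe enough" question.
```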

To answer these open questions, we need stronger systems thinking, not just at the level of technical engineering but also in the operation and regulation of these systems, and we need the groups involved to work much more closely together. This requires an ethically grounded approach to development, in which explicitly formulated ethical standards, such as freedom from discrimination, are translated into concrete technical properties of the system. Also needed is a discussion, informed by engineering and science, that clarifies the ethical and legal requirements for such systems on the basis of a deeper understanding of the possibilities and limits of the technology.


This piece first appeared as an article in the conference magazine of the “Artificial Intelligence Innovation Symposium” organized by the “Behörden Spiegel” in June 2021.


Hans-Thomas Hengl
Safety engineering / Fraunhofer IKS