Interview with Reinhard Stolle
“Bringing the best technology safely into the vehicle”
Ensuring the safety of AI functions in vehicles remains a challenge that must be mastered step by step. However, visible progress is being made on the road to autonomous driving, says Dr. Reinhard Stolle, deputy director of the Fraunhofer IKS. And the success of ChatGPT & Co. is likely to be leveraged for highly automated vehicles as well.



H. T. Hengl:
On their way onto European roads, autonomous vehicles appear to have become stuck in traffic shortly after starting. Why?
Dr. Reinhard Stolle:
That sounds too negative to me, even though there is some truth to it, of course. Looking at the positive side, there has been a lot of progress in autonomous driving. This is particularly evident in the U.S., where driverless taxis are already on the roads in some cities. But there have also been major successes in Europe ...
H. T. Hengl:
Which ones? Can you give some examples?
Dr. Reinhard Stolle:
Mercedes and BMW already offer vehicles that are approved for Level 3 autonomous driving (see box). And there are a number of large and small companies working on Level 4 solutions that already have advanced prototypes to show for their efforts. Applications range from robot taxis in cities to autonomous transport vehicles in ports, airports, logistics centers, and on factory premises.
The levels of automated driving
The Society of Automotive Engineers (SAE) provides a clear overview of the levels of automated driving, from Level 0 to Level 5.
H. T. Hengl:
And how does the whole thing look from the negative side?
Dr. Reinhard Stolle:
In fact, it is taking longer than many thought for driverless cars to become a common sight on our roads. One important reason for this is the use of machine learning (ML) models in vehicles. Autonomous driving has only come within “product reach” thanks to the spectacular advances in machine learning research over the past decade. At the same time, however, this new technology poses a challenge for the safety of autonomous vehicles, because the safety methods established for conventional vehicle software apply only to a very limited extent to AI software.
H. T. Hengl:
Could you please explain that in more detail?
Dr. Reinhard Stolle:
These ML models are used, for example, in perception, i.e., for perceiving and understanding the vehicle's surroundings, including the safe detection of people. This is done using images. Until around 2012, automatically evaluating what can be seen in an image was only possible with limited reliability. In 2012, AlexNet brought a leap forward: it uses neural networks, a form of machine learning model, for image processing. Today, everyone can experience this technology through the photo recognition functions in their phones and web browsers.
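The core idea behind such image classifiers can be illustrated with a minimal, purely didactic sketch: a neural network's final layer emits one raw score per class, and a softmax turns those scores into probabilities. The class names and scores below are invented for illustration; this is not a real perception stack.

```python
import math

# Hypothetical class labels a perception model might distinguish.
CLASSES = ["pedestrian", "cyclist", "car", "background"]

def softmax(logits):
    """Convert raw per-class scores ("logits") into probabilities summing to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the most probable class and the model's confidence in it."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]

# Invented scores a detector might emit for one image region.
label, confidence = classify([4.1, 1.2, 0.3, -0.5])
print(label, round(confidence, 3))
```

The probability attached to each class is exactly the “confidence” that later safety mechanisms can inspect.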
H. T. Hengl:
And where is the problem?
Dr. Reinhard Stolle:
ML models are black boxes, meaning it is not immediately apparent how the AI arrives at its decisions. This creates the paradox that the AI that makes autonomous driving possible in the first place is initially seen as an additional factor of uncertainty.
H. T. Hengl:
How can you get this uncertainty under control?
Dr. Reinhard Stolle:
Fraunhofer IKS researches and develops methods for using such ML models in safety-relevant areas (such as autonomous driving and advanced driver assistance systems) despite their inherent uncertainty. We do this in collaboration with partners from academia and industry.
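One widely discussed pattern for making ML uncertainty manageable in a safety-relevant system is a runtime monitor: the ML output is accepted only above a confidence threshold, and otherwise the system degrades to a conservative fallback. The sketch below illustrates that general pattern only; the names, threshold, and behaviors are assumptions, not Fraunhofer IKS methods or code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian" (hypothetical label)
    confidence: float  # model's probability for that label, 0..1

def plan_action(detection: Detection, threshold: float = 0.8) -> str:
    """Accept the ML output only above a confidence threshold;
    otherwise fall back to a safe behavior instead of trusting the black box."""
    if detection.confidence >= threshold:
        return f"track_{detection.label}"  # normal driving function proceeds
    return "slow_down_and_reassess"        # conservative fallback (assumed)

print(plan_action(Detection("pedestrian", 0.95)))
print(plan_action(Detection("pedestrian", 0.40)))
```

The design choice is that the black-box model never acts alone: a simple, analyzable rule decides whether its output may influence the vehicle at all.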
H. T. Hengl:
Are there any examples of this?
Dr. Reinhard Stolle:
Certainly, two funding projects exemplify this research work: AutoDevSafeOps and Safe AI Engineering. The AutoDevSafeOps project focuses on ensuring that AI-based products are safe not only at the time of delivery, but throughout their entire product life cycle, even if the context of use changes. For example, autonomous vehicles must be able to recognize scooter riders, even if scooters were not yet part of the usual street scene at the time the vehicle was developed. To this end, a unique holistic DevOps approach with integrated safety methods has been developed. It is designed to enable modular updates of safety-related driving functions, including the associated safety processes and procedures, across the system boundary between the vehicle and the backend – and to do so in a dynamically changing environment.
Safe AI Engineering's research closes the gap between concept and safety verification through verification and validation (V&V) as well as monitoring of AI. To this end, existing standards such as ISO/PAS 8800, ISO 21448 (SOTIF), and ISO 26262 are integrated, which together set the international framework for safeguarding AI functions.
H. T. Hengl:
How will the results of this research find their way into individual cars in the next step? What needs to happen for AI-supported systems to be used as standard in vehicles?
Dr. Reinhard Stolle:
The goal must be to bring the best technology safely into the vehicle, i.e., to the customer. The systems are designed to use machine learning models, but in such a way that the uncertainty inherent in black box models becomes manageable in the overall system. The results obtained in the funded projects can then be incorporated into product development in industry as well as into the standardization of verification methods.
H. T. Hengl:
Could the development and use of large language models serve as a model here? Could large language models (LLMs) also be used in automated and, later, autonomous driving?
Dr. Reinhard Stolle:
Research and industry are in the midst of a very exciting development. There are various ideas about how LLMs can improve vehicles and their safety: First, they could support natural language communication between drivers or passengers and the vehicle. Second, an LLM co-pilot tailored to safety engineering could help manage the vast amounts of documentation that this work involves, both formal and semi-formal as well as in natural language. Fraunhofer IKS has produced research results and solutions in this area (https://safe-intelligence.frau...). Third, the world knowledge available in LLMs can be used for autonomous driving. If you ask an LLM the question “How fast can you drive past parked cars?”, you will see that the LLM knows the answer. So, you don't need to learn this “from scratch” from lots of training drives, but can incorporate this world knowledge directly into the driving strategy in a suitable way. How exactly this can be done, for example through the appropriate use of multimodal foundation models, is a very exciting and very promising research task.
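The third idea, feeding LLM world knowledge into a driving strategy, can be sketched in a purely hypothetical form. Here a stub stands in for the LLM call (no real model or API is used), and the invented rule of thumb simply caps the planned speed; as Stolle notes, how to do this properly is still open research.

```python
# Purely hypothetical: a stub standing in for an LLM query. A real system
# would call a language model and validate its answer far more carefully.
def ask_llm_stub(question: str) -> str:
    canned = {"max speed past parked cars (km/h)": "30"}  # invented answer
    return canned.get(question, "unknown")

def speed_limit_near_parked_cars(current_limit_kmh: float) -> float:
    """Cap the planned speed with the LLM-provided rule of thumb,
    instead of learning that rule from scratch in training drives."""
    answer = ask_llm_stub("max speed past parked cars (km/h)")
    if answer.isdigit():
        return min(current_limit_kmh, float(answer))
    return current_limit_kmh  # no usable answer: keep the existing limit

print(speed_limit_near_parked_cars(50.0))
print(speed_limit_near_parked_cars(20.0))
```

The point of the sketch is the direction of information flow: world knowledge enters the driving strategy as an explicit constraint, not as opaque learned behavior.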