Interview with Mario Trapp and Florian Geissler
»We combine the strengths of GPT with those of established safety tools«
The new Fraunhofer IKS Safety Companion is an AI agent designed to support human developers with safety-critical tasks. It uses large language models (LLMs) to understand tasks and to identify and implement the appropriate safety concepts. Executive Director Mario Trapp and Florian Geissler, Senior Scientist at Fraunhofer IKS, introduce the Safety Companion in an interview.
H. T. Hengl:
Mario, you and your team are developing the new Fraunhofer IKS Safety Companion. Can you briefly describe what this is all about or, to put it another way, who is accompanying whom and why?
Prof. Dr.-Ing. Mario Trapp:
In simple terms, safety means that the risk posed to a person by a system must be low. Ultimately, a person who is responsible for the safety of the system must always check whether this is the case. Due to rapidly increasing system complexity paired with ever-shorter development cycles, this can hardly be done manually without automated support. However, generative artificial intelligence (AI) is opening up new possibilities: it allows us to provide the human safety engineer with an intelligent, AI-based safety companion that carries out and adapts safety verifications for complex systems in very short cycles. The human remains in control of the entire process, but the Safety Companion supports them in this task as an AI-based assistant, allowing them to work more efficiently.
H. T. Hengl:
What approach are you taking to achieve this?
Prof. Dr.-Ing. Mario Trapp:
Anyone who has ever used GPT realizes that the answers are not always consistent and can even be completely wrong. You never know whether the answers are based on facts or are made up, and it is often not easy to check. This is why the use of generative AI quickly becomes a source of error, especially when supporting safety verifications, and creates a lot of additional work instead of reducing effort. Like GPT, the Fraunhofer IKS Safety Companion can be asked questions and given tasks. In the background, however, the Companion combines existing information from a company's documents with the execution of classic safety analyses, development models, analysis tools, simulations and so on. It therefore combines the intuitiveness familiar from GPT with the efficiency and reliability of quality-checked information and established professional tools.
H. T. Hengl:
Florian, you are very close to the implementation of the Fraunhofer IKS Safety Companion. How exactly does it work?
Dr. Florian Geissler:
Our approach is to develop agents that have large language model (LLM) queries at their core, alongside non-AI-based functions. The agent decomposes complex problems and decides which subtasks are forwarded to the LLM and which are not. For safety-critical tasks, this means that the agent follows certain guidelines to minimize uncertainty: critical subtasks are solved with external, verified functions, for example, while others are best solved with LLMs.
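The routing idea Florian Geissler describes can be sketched in a few lines. This is a minimal illustration, not the actual Safety Companion implementation; all names (`Subtask`, `run_agent`, the tool registry) are hypothetical, and the guideline shown is deliberately simple: critical subtasks must go to a deterministic, quality-checked tool, never to the LLM alone.

```python
# Hypothetical sketch of a safety-aware agent that routes subtasks either
# to a deterministic external function or to an LLM, following a simple
# guideline to minimize uncertainty for critical work.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Subtask:
    name: str
    critical: bool  # critical subtasks must not rely on the LLM alone


def llm_query(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., via an API client).
    return f"[LLM draft answer for: {prompt}]"


# Deterministic, quality-checked tools reserved for critical subtasks.
EXTERNAL_TOOLS: Dict[str, Callable[[], str]] = {
    "fault_tree_analysis": lambda: "FTA result from classic safety tool",
}


def run_agent(task: Subtask) -> str:
    """Route a subtask: critical work goes to established tools;
    open-ended work may be delegated to the LLM."""
    if task.critical and task.name in EXTERNAL_TOOLS:
        return EXTERNAL_TOOLS[task.name]()
    if task.critical:
        # Refuse rather than guess: no verified tool means no LLM shortcut.
        raise ValueError(f"No verified tool for critical subtask: {task.name}")
    return llm_query(task.name)


print(run_agent(Subtask("fault_tree_analysis", critical=True)))
print(run_agent(Subtask("brainstorm hazard scenarios", critical=False)))
```

The key design choice in this sketch is that the agent fails loudly when a critical subtask has no verified tool, rather than silently falling back to the LLM.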
H. T. Hengl:
And what are the benefits of the Safety Companion?
Dr. Florian Geissler:
Complexity is significantly reduced: tasks that are difficult for developers to grasp, and monotonous tasks that are prone to error, are automated. As a result, efficiency increases significantly and development is faster, all while ensuring the best possible safety. In addition, human and AI creativity can complement each other well in many cases. In fact, we have seen that LLMs can be very good at thinking »out-of-the-box«. This helps human developers to validate their own thinking patterns.
H. T. Hengl:
Similar approaches already exist. So, what distinguishes the Safety Companion from other offerings already on the market?
Prof. Dr.-Ing. Mario Trapp:
As mentioned at the beginning, we do not simply »play« with GPT, but combine the strengths of GPT with those of established models and tools. Combining this classic engineering with the potential of AI and a dialog-based user experience enables a significant increase in automation with reliable results. And this is what makes the Fraunhofer IKS Safety Companion different from other solutions.
H. T. Hengl:
Can you give an illustrative example of the use of the Fraunhofer IKS Safety Companion?
Prof. Dr.-Ing. Mario Trapp:
Especially in the context of autonomous systems, it is hardly possible to manually analyze all conceivable and inconceivable environmental factors, from rain to the occlusion of objects. Here, the Fraunhofer IKS Safety Companion can perform very efficient preliminary work. When creating safety analyses and safety concepts, the Companion can automate many steps and draw on the company's knowledge to make qualified suggestions. This reduces development times. If, for example, a system update is carried out as part of DevOps, the Safety Companion can determine the impact of the change, identify the concrete evidence that needs to be provided and, in some cases, already suggest solutions. This means that modern update cycles can also be adhered to in safety-relevant applications.
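The change-impact step described above can be illustrated with a small sketch. This is a hypothetical simplification, not the Companion's actual method: the component names, dependency graph, and evidence map are invented, and a real analysis would of course draw on the company's models and tools rather than a hard-coded dictionary.

```python
# Hypothetical sketch: after a system update, traverse the transitive
# dependents of the changed component to collect the safety evidence
# that must be re-established. All data here is illustrative.
from collections import deque

# Which components depend on which (component -> direct dependents).
DEPENDENTS = {
    "perception": ["planning"],
    "planning": ["control"],
    "control": [],
}

# Safety evidence tied to each component.
EVIDENCE = {
    "perception": ["sensor FMEA"],
    "planning": ["trajectory safety case"],
    "control": ["actuator verification report"],
}


def impacted_evidence(changed: str) -> list:
    """Collect evidence to re-establish after a change by breadth-first
    traversal over the dependents of the changed component."""
    seen, queue, evidence = set(), deque([changed]), []
    while queue:
        comp = queue.popleft()
        if comp in seen:
            continue
        seen.add(comp)
        evidence.extend(EVIDENCE.get(comp, []))
        queue.extend(DEPENDENTS.get(comp, []))
    return evidence


print(impacted_evidence("perception"))
# -> ['sensor FMEA', 'trajectory safety case', 'actuator verification report']
```

A change to the perception component thus flags not only its own evidence but also that of every downstream component, which is exactly the kind of preliminary work that saves engineers time in short update cycles.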
H. T. Hengl:
What areas of application are you currently working on?
Dr. Florian Geissler:
There are many possible use cases in the field of mobility, such as the development or verification of complex system architectures. Complex error chains of individual components can thus be better analyzed and their safety arguments supported. In the field of medicine and health, we see that there are complex correlations between disease symptoms, medication, and patient data. Documentation is often scattered and incomplete. Here, agents can contribute suggestions on chains of action or propose treatment methods.
H. T. Hengl:
What are the next steps?
Dr. Florian Geissler:
In discussions with industry representatives, we often see similar »pain points«: the complexity of systems is increasing and becoming difficult to manage, while at the same time there is great time pressure to carry out safety certifications, i.e., the need for automation is increasing. Generative language models have huge support potential here. At the same time, there is a certain reluctance on the part of many users to integrate AI into their safety certification process. We would first like to show that generative AI and safety are compatible. This is the case if appropriate guidance is provided to the LLMs, e.g., by safety-aware agents. We do not want to solve the overall problem with LLMs, but rather support human decision-makers to the best of our ability. Together with our industry partners, we want to bring this concept into real-world application.