Interview
"We have to maintain a view of the overall safety concept" – Interview with apl. Prof. Dr. Mario Trapp

In this interview apl. Prof. Dr. habil. Mario Trapp, Director of the Fraunhofer Institute for Cognitive Systems IKS, talks about the competencies of the institute that make cognitive systems safe.


Hans-Thomas Hengl:

Mr. Trapp, as its name suggests, your institute is focused on cognitive systems. This term is not really established outside of the research world. What does it actually mean?


Mario Trapp:

When we talk about cognitive systems, we mean technical devices and machines that use artificial intelligence (AI) to open up a whole new world of opportunities. This happens for example when we analyze sensor data and utilize information from the network in order to represent the functionality of these machines and devices with learning methods. This leads to totally new options such as autonomous driving and autonomous flying, robots that literally cooperate hand-in-hand with people, and even new medical devices that feature unprecedented diagnosis and therapy capabilities.


apl. Prof. Dr. habil. Mario Trapp, Director of the Fraunhofer Institute for Cognitive Systems IKS


Hans-Thomas Hengl:

And where does Fraunhofer IKS actually focus its research activities in this area?


Mario Trapp:

The key term here is safe intelligence. To date, safety and intelligence have been viewed as two separate entities.
In other words, a cognitive system can be either safe or intelligent. The biggest challenge, and for the most part the decisive competitive edge in many industries, is actually bringing intelligence and safety together. This is exactly what safe intelligence and the research activities of Fraunhofer IKS represent.

Artificial intelligence is used successfully in a lot of industries, and most of the time it functions fairly well. But when human lives are involved, “most of the time” is not sufficient. Just think about autonomous driving. That means we need safety guarantees: an assurance that the software does not present a danger to the user. The mission of Fraunhofer IKS is to answer the question: How can I exploit the potential of the intelligence in the software without risking safety and dependability?


Hans-Thomas Hengl:

What role does dependability play in this context?


Mario Trapp:

When we talk about safe intelligence, we’re talking about safety of course. But by itself, safety is not sufficient. An automobile parked in a garage tends to be safe, but it’s not available, and it’s not dependable. What that means is that we always have to view safety in conjunction with dependability, because only then can I actually deliver a benefit. The bottom line is, it’s not enough to simply make a cognitive system safe, including the AI technology. We have to make it safe and dependable.


Hans-Thomas Hengl:

Can you briefly explain the approaches that you follow?


Mario Trapp:

To make AI safe, it’s important to focus on the system first. In other words, we have to make the system safe, not just the AI. For this reason it’s important to maintain a view of the overall safety concept. Fraunhofer IKS works across four levels here. At the first level, of course, I have to ensure that the AI itself is safe, explainable and robust, and that I can comprehend it. But AI will always be more susceptible to errors than conventional software. So at the second level, I have to monitor the AI using conventional principles and give it access to the control mechanisms only when this validation leads to a positive assessment. Since such a monitor always has to assume the worst case, however, it would restrict the AI so severely that I could no longer build a cost-effective product. That means, at the third level, we have to build dynamics into the system: directly monitoring the situation at runtime, carrying out dynamic risk assessments and dynamically adapting the safety concept, and thus the system. At the fourth level we establish a concept that we call »continuous safety management«, which gives organizations the capability to bring the system into the field quickly, learn from the field data step by step, expand the scope of the system in short update cycles and react quickly to errors.
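The second and third levels described above can be sketched in a few lines of code. This is purely an illustrative toy example, not Fraunhofer IKS software: all names, thresholds and the driving scenario are hypothetical. It shows a conventional (non-learned) monitor that gates the AI output, combined with a dynamic risk assessment that adapts the acceptable uncertainty to the current situation at runtime.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """Hypothetical runtime context for a driving function."""
    speed_kmh: float      # current vehicle speed
    visibility_m: float   # estimated sensor visibility range

def dynamic_risk_limit(situation: Situation) -> float:
    """Third level: derive the currently tolerable AI uncertainty at runtime.
    At low speed and good visibility, more uncertainty is acceptable than
    a static worst-case limit would allow."""
    base = 0.2  # hypothetical baseline uncertainty budget
    if situation.speed_kmh > 50 or situation.visibility_m < 30:
        return base / 2  # tighten the limit in riskier situations
    return base

def safety_monitor(ai_confidence: float, situation: Situation) -> bool:
    """Second level: conventional check deciding whether the AI output
    may reach the control path; otherwise the system must fall back to
    a safe behavior (e.g. slowing down)."""
    uncertainty = 1.0 - ai_confidence
    return uncertainty <= dynamic_risk_limit(situation)

calm = Situation(speed_kmh=30, visibility_m=100)
risky = Situation(speed_kmh=80, visibility_m=20)
print(safety_monitor(ai_confidence=0.85, situation=calm))   # True: output passes
print(safety_monitor(ai_confidence=0.85, situation=risky))  # False: fallback
```

The point of the sketch is the interplay: a purely static monitor would have to enforce the `risky` limit everywhere, which is the cost problem mentioned above; the runtime risk assessment is what lets the same AI output through in the calm situation.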


Hans-Thomas Hengl:

What core expertise from Fraunhofer ESK will Fraunhofer IKS develop further?


Mario Trapp:

Cognitive systems are not only about AI, but also about the systems themselves. In this case, the term »systems« is still synonymous with complex software systems. The intelligence stems from more than just the pure AI. We’re familiar with lots of processes, such as those from smartphones, which you can use to download apps. We’re familiar with the service-oriented world of cloud technologies. Transferring these to technical systems and machines is a major challenge. And this is exactly where Fraunhofer ESK’s experience lies: How can I build flexible and highly dependable architectures, or highly connected systems? This is all experience that we can utilize on the path to cognitive systems. When we expand this existing expertise with AI, we arrive at cognitive systems. And it’s only through this combination that we can actually develop them.


Hans-Thomas Hengl:

What industries are directly relevant to the institute’s research activities?


Mario Trapp:

Our work is ultimately relevant to all industries that truly need dependability guarantees for AI or intelligent software. These are industries in which human lives depend on this dependability. Autonomous driving is one example. But we’re also talking about industries facing serious business risks, for example. If a production system goes idle because of a misjudgment on the part of the AI, the company could face immense costs. As a result, we are active in any industry in which the success of the AI depends on quality guarantees.

Hans-Thomas Hengl