Interview with Simon Burton
“I’ve always had a great affinity for research.”

Science needs to prove that systems based on artificial intelligence (AI) are safe enough. That is what Prof. Simon Burton, who recently became Division Director for Safety at Fraunhofer IKS, is calling for. In this interview he explains the institute’s approaches to ensuring safety.

Hans-Thomas Hengl:

Simon, surveys on artificial intelligence (AI) regularly show that people have concerns about safety. Are those reservations justified?

Simon Burton:

In part, yes. We’re not talking about nightmare scenarios like in Hollywood movies, with hyperintelligent robots ruling over humans. It’s more about the type of AI currently being discussed for many safety-relevant functions: machine learning. Compared to existing software-based approaches, it creates some new challenges when it comes to safety.

Hans-Thomas Hengl:

Why is that?

Simon Burton:

There are a number of reasons. Firstly, we use AI in situations where we can’t specify the function in detail, such as recognizing objects in complex traffic situations. This means that the actual safety requirements for the function are difficult to define. Secondly, the calculations that the algorithm carries out are almost impossible for us to understand as humans. It’s therefore difficult for us to validate whether the function’s thought process was “right”. Thirdly, the algorithms produce imprecise statements like “the object is a human with 80 percent confidence”. Of course, there are often cases where the algorithms come close to 100 percent.
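To make the last point concrete: in a typical machine-learning classifier, such confidence statements come from turning the network’s raw outputs into probabilities, for example with a softmax. The following minimal sketch illustrates this; the class names and numbers are invented for illustration and do not come from the interview.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Turn raw network outputs (logits) into class probabilities."""
    shifted = logits - logits.max()  # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical raw outputs of an object classifier for one detection.
classes = ["human", "vehicle", "bicycle"]
logits = np.array([2.5, 0.5, 0.2])

probs = softmax(logits)
best = int(np.argmax(probs))
print(f"the object is a {classes[best]} "
      f"with {probs[best] * 100:.0f} percent confidence")
# -> the object is a human with 81 percent confidence
```

The output is inherently probabilistic: even a well-trained model rarely commits to exactly 100 percent, which is why such statements have to be interpreted by the rest of the system.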

Hans-Thomas Hengl:

What can science do to deal with people’s safety concerns?

Simon Burton:

I believe that science has the obligation not only to improve the performance of such approaches, but also to provide adequate proof that the resulting systems are safe enough. The accepted level of safety also depends on the perceived benefit of the system, so it needs to be discussed as part of a societal discourse. It’s important to establish an ethical position and determine what is expected of the system. It’s also important, however, for science and industry to show what the technical limits of these systems are in terms of their capabilities.

Hans-Thomas Hengl:

In what areas do you consider the topic of safety in relation to AI to be particularly relevant?

Simon Burton:

AI is currently used for many perception functions, from recognizing pedestrians in autonomous driving to monitoring safety clearances in production and diagnosing symptoms of disease in medical engineering. These functions are a kind of Achilles’ heel for the safety of the system if they’re not performed reliably. A system that doesn’t perceive its surroundings correctly won’t be able to take safe actions.

Simon Burton: "Perception is the Achilles’ heel for the safety of the system."

Hans-Thomas Hengl:

What approaches is the IKS taking to make AI safer?

Simon Burton:

We’re working at several different levels to ensure the safety of AI-based systems. At the level of the function, we’re teaching AI to question itself: we want the function to recognize when it’s in a situation that didn’t come up during the training phase, so that its response can’t be trusted. At the system level, we want to use this information to adaptively adjust the system’s behavior in that type of situation. The aim is to minimize risks caused by uncertain perception. To support this, we’re developing methods for collecting conclusive evidence of the performance of the AI functions. We then integrate that evidence into a structured argument that demonstrates their safety, which can be used in the approval process for the systems.
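A minimal sketch of the system-level idea Burton describes, under the assumption that the perception function reports a confidence value and a flag for inputs unlike its training data; the names, modes, and threshold below are invented for illustration and are not Fraunhofer IKS code.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"      # full functionality
    DEGRADED = "degraded"    # e.g. lower speed, larger safety margins
    SAFE_STOP = "safe_stop"  # hand control back or stop safely

@dataclass
class Perception:
    label: str             # e.g. "human"
    confidence: float      # self-assessed confidence in [0, 1]
    in_distribution: bool  # did the input resemble the training data?

def select_mode(p: Perception, conf_threshold: float = 0.9) -> Mode:
    """Adapt the system's behavior to how trustworthy perception is."""
    if not p.in_distribution:
        # The input is unlike anything seen in training, so the
        # function's own answer cannot be trusted at all.
        return Mode.SAFE_STOP
    if p.confidence < conf_threshold:
        return Mode.DEGRADED
    return Mode.NOMINAL

print(select_mode(Perception("human", 0.81, True)))   # Mode.DEGRADED
print(select_mode(Perception("human", 0.97, True)))   # Mode.NOMINAL
print(select_mode(Perception("human", 0.97, False)))  # Mode.SAFE_STOP
```

The design choice mirrors the interview: the function’s self-assessment is not used to "fix" the perception result but to switch the whole system into a behavior whose risk matches the current uncertainty.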

Hans-Thomas Hengl:

Can you give an example?

Simon Burton:

One concrete example is what’s called "uncertainty quantification": a way of calculating how confident we can be that an assessment made by the AI function is correct.
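The interview does not say which technique Fraunhofer IKS uses, but one common way to realize uncertainty quantification is a deep ensemble: several independently trained models judge the same input, and their disagreement is measured. The following sketch illustrates this under that assumption; all probabilities are invented.

```python
import numpy as np

def ensemble_uncertainty(member_probs: np.ndarray):
    """Quantify predictive uncertainty from an ensemble of models.

    member_probs has shape (n_members, n_classes): each row holds the
    class probabilities one ensemble member assigns to the same input.
    """
    mean = member_probs.mean(axis=0)  # averaged (combined) prediction
    # Entropy of the averaged prediction: low when the members agree
    # on one class, high when they disagree or are individually unsure.
    entropy = -np.sum(mean * np.log(mean + 1e-12))
    return mean, entropy

# Three hypothetical members judging one detection over the classes
# (human, vehicle, bicycle); the probabilities are invented.
agree = np.array([[0.90, 0.07, 0.03],
                  [0.88, 0.08, 0.04],
                  [0.92, 0.05, 0.03]])
disagree = np.array([[0.90, 0.07, 0.03],
                     [0.30, 0.60, 0.10],
                     [0.40, 0.20, 0.40]])

for name, probs in [("agree", agree), ("disagree", disagree)]:
    mean, h = ensemble_uncertainty(probs)
    print(f"{name}: p(human) = {mean[0]:.2f}, entropy = {h:.2f}")
# agree: p(human) = 0.90, entropy = 0.39
# disagree: p(human) = 0.53, entropy = 1.00
```

A high entropy on the "disagree" case is exactly the kind of signal that could feed the adaptive mode-switching sketched above.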

Hans-Thomas Hengl:

You’ve worked in industry while also being a visiting professor at the University of York. By moving to Fraunhofer IKS, you’ve decided to dedicate yourself entirely to science and research. What motivated you?

Simon Burton:

I’ve always had a great affinity for research. I find it incredibly motivating to introduce structure to key problems so I can do something valuable for society. The question of how to make complex, autonomous, AI-based systems safe is fascinating to me, and I’ve worked on it intensively in recent years. It’s a question that calls for interdisciplinary collaboration, and the different points of view constantly introduce you to new ways of thinking. I’m really looking forward to taking on this issue with the highly talented and diverse team at the IKS and making a lasting impact in this area of research.

Simon Burton: "I find it incredibly motivating to introduce structure to key problems so I can do something valuable for society."

Hans-Thomas Hengl:

What can you bring to your work at Fraunhofer IKS from your experience in industry?

Simon Burton:

During my time in industry, I was often responsible for research teams and scientific strategy. But I have just as much experience in the hands-on development of safety-relevant systems and in consulting, so I can easily relate to both perspectives. My work in industry has given me an understanding of the practical challenges of technology transfer, which will help us at the IKS to pass on our scientific results. I also have experience in founding new organizations that then go on to grow, which should help us in the current phase of developing the institute.

Hans-Thomas Hengl:

People say Germans are particularly sensitive when it comes to safety and have very, very high standards. As a German with British roots, can you confirm that?

Simon Burton:

Germans have many virtues that I appreciate a lot. One of them is a structured way of working with a focus on safety. That quality is not exclusive to Germans, though. About 20 years ago at the University of York in England, I wrote my doctoral thesis on formal approaches to ensuring the safety of safety-critical software.

As a visiting professor, I still maintain a very close relationship with York. When I am there, I work with colleagues from a range of disciplines, including philosophy and law, to consider the safety of complex and autonomous systems from a holistic perspective. Together with my dual citizenship and my experience of working in different countries, I’m sure that gives me a somewhat broader perspective on life, in both a professional and personal sense.
