Safe Intelligence Day 2025
Automation: Three Insights from the Industry

What are the key ideas shaping the current automation landscape? Read on to learn about three insights that emerged from discussions among participants at Fraunhofer IKS’ Safe Intelligence Day 2025.


In February, the Fraunhofer Institute for Cognitive Systems IKS opened its doors to industry guests for the Safe Intelligence Day 2025, providing an overview of the topics guiding current automation research at the institute. During showcases, roundtable discussions, and networking opportunities, participants shared ideas and experiences that are helping to define current automation technologies and practices. These insights, captured in more detail below, invite further discussion and suggest ways to strengthen the connection between artificial intelligence (AI) and automation.

Insight 1: Leveraging Human Expertise in AI

A crucial point raised in the discussions is that implementing AI is not about replacing workers or removing human insight from the equation. On the contrary, actively collecting feedback from people (i.e., following a human-in-the-loop approach) is essential to guarantee that the answer produced for a given problem fits the overall context and takes important assumptions into account. For instance, an AI-based assistant can work collaboratively with humans in safety-critical contexts, such as when performing a hazard analysis and risk assessment (HARA) for a system. To encourage cooperation between humans and AI, the following ideas were highlighted as particularly important:

  • Seamless workflow integration: The first aspect to consider is how seamlessly the AI tool integrates into the workflow of the end user. Interfaces should display what users expect to see: for example, dashboards for monitoring data and tabular visualizations for the results of a HARA. Effective interface design provides a concise summary of the relevant information while enabling the user to drill down into the details, for instance, explanations of how a given result was compiled and why it is expected to be valid. This ensures that users can develop a better understanding of how their data is being processed.
  • Iterative approach: Instead of generating a complete solution via AI and adopting it without question, a more sensible approach is to work iteratively. Once an initial solution is auto-generated, human feedback is vital for improving it and aligning the result with the user's intent. The feedback is then fed back to the AI tool, which incorporates the recommended changes and refines the overall solution. This process is repeated as often as necessary, until the human expert is satisfied with the outcome (see the sketch after this list). The software thus only supports the user, who remains in full control of the engineering process.
  • Training of people: Training and education enable humans to use AI tools most effectively and have recently become a mandatory measure under the EU AI Act (see, e.g., Recital 91). For users of AI, this includes gaining further insight into AI techniques and understanding their limitations. With this knowledge, users can identify which use cases are most suitable for AI and understand how to leverage the results.
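
To make the iterative approach more concrete, the following sketch outlines one possible human-in-the-loop refinement loop in Python. It is a minimal illustration, not part of any Fraunhofer IKS tool: the functions generate_solution, collect_feedback, and refine_solution are hypothetical placeholders for an AI back end and a review interface.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    approved: bool
    comments: str  # reviewer notes fed back into the next iteration

def generate_solution(task: str) -> str:
    """Hypothetical call to an AI back end that drafts a first solution."""
    return f"draft solution for: {task}"

def collect_feedback(solution: str) -> Feedback:
    """Hypothetical review step, e.g., a HARA expert inspecting the draft."""
    comments = input(f"Review '{solution}' (empty = approve): ")
    return Feedback(approved=(comments == ""), comments=comments)

def refine_solution(solution: str, feedback: Feedback) -> str:
    """Hypothetical call that feeds reviewer comments back to the AI tool."""
    return f"{solution} [revised per: {feedback.comments}]"

def human_in_the_loop(task: str, max_rounds: int = 5) -> str:
    """Iterate until the human expert approves or a round limit is reached."""
    solution = generate_solution(task)
    for _ in range(max_rounds):
        feedback = collect_feedback(solution)
        if feedback.approved:
            return solution
        solution = refine_solution(solution, feedback)
    raise RuntimeError("No approved solution within the round limit")

if __name__ == "__main__":
    print(human_in_the_loop("HARA for an automated guided vehicle"))
```

The key design choice is that approval is an explicit human decision: the loop cannot terminate successfully without it, so the expert remains in control.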

Insight 2: Trusting AI Results

Another key insight is that a significant obstacle to the wider adoption of AI is distrust of the results it produces. This reluctance has several potential causes, ranging from a lack of understanding of the underlying AI techniques to known limitations of certain approaches. For instance, for Large Language Models, hallucinations – i.e., content that is factually incorrect or irrelevant to the input provided – are an issue that requires close attention when relying on model output for further actions, especially if those actions are safety-critical. To increase trust in AI-generated solutions, the following approaches were mentioned:

  • Quality metrics: Well-defined quality metrics can be used to quantify the accuracy and relevance of AI-generated results. The specific choice of metrics depends on the use case; examples in the context of Large Language Models include topic coherence indicators and similarity indicators computed against reference content (a simplified similarity check is sketched after this list).
  • Testing and safety checks: The metrics mentioned above can be thought of as part of a larger testing and checking framework that provides regular, structured feedback on AI-generated solutions. For instance, in the context of code generation for programmable logic controllers (PLCs), AI-generated tests (duly reviewed by humans) could be used to verify that the intended functionality is captured by the solution; the second sketch below illustrates this idea. For safety-critical applications, the processes for generating and verifying AI results should foster compliance with existing standards.
  • Explainability: In addition to quantifying and testing the generated solutions, understanding the results in detail and building an overall picture of how the AI techniques produced them can increase users’ trust in the process. On the one hand, this can be accomplished by educating users on AI; on the other hand, the explainability of the techniques themselves can be improved (e.g., by tracing the decision steps that lead to a result and providing accompanying explanations).
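
As a minimal, illustrative stand-in for the similarity indicators mentioned above, the sketch below computes a cosine similarity between bag-of-words vectors of a generated answer and a reference answer. Production setups would typically use embedding-based similarity instead, and the threshold of 0.5 is an arbitrary assumption for the example.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity of simple bag-of-words vectors (0.0 to 1.0)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def passes_similarity_check(generated: str, reference: str,
                            threshold: float = 0.5) -> bool:
    """Flag generated content that drifts too far from the reference."""
    return cosine_similarity(generated, reference) >= threshold

if __name__ == "__main__":
    ref = "open the valve only when the pressure sensor reads below the limit"
    gen = "the valve may open only if the pressure sensor is below the limit"
    print(round(cosine_similarity(gen, ref), 2), passes_similarity_check(gen, ref))
```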
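For the testing idea, consider a hedged second sketch: suppose an AI tool generated the interlock logic below, here modeled in Python purely for illustration rather than in a PLC language such as Structured Text. The accompanying, human-reviewed test cases check the safety-relevant behavior before the logic is trusted; the function and signal names are hypothetical.

```python
def motor_enable(emergency_stop: bool, guard_closed: bool,
                 start_button: bool) -> bool:
    """Hypothetical AI-generated interlock: the motor may only run when
    the guard is closed, no emergency stop is active, and start is pressed."""
    return (not emergency_stop) and guard_closed and start_button

# Human-reviewed test cases covering the safety-relevant behavior.
def test_emergency_stop_always_wins():
    assert motor_enable(True, True, True) is False

def test_open_guard_blocks_motor():
    assert motor_enable(False, False, True) is False

def test_normal_start():
    assert motor_enable(False, True, True) is True

if __name__ == "__main__":
    test_emergency_stop_always_wins()
    test_open_guard_blocks_motor()
    test_normal_start()
    print("All interlock checks passed")
```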

Insight 3: Overcoming Infrastructure Challenges

Finally, a few recurring infrastructure obstacles to the implementation of AI were addressed. These are common digitalization pitfalls that can prevent AI from being deployed effectively, ranging from problems with data acquisition and protection to the dilemma of cloud-based versus on-premises AI execution. The following strategies were highlighted to overcome these issues:

  • Data connectors: To tackle a potential lack of data access and standardization, connectors can be used to assist in data retrieval and harmonization. They should be regarded as part of a larger pipeline that cleans, structures, and integrates the raw data, making it ready for further processing by AI (a minimal connector pipeline is sketched after this list).
  • Flexible integration: As manufacturers set up their production environments in a highly individualized manner, the integration of AI techniques into the existing software systems must be tailored accordingly. In this context, a service-oriented architecture may provide the modularity and flexibility needed to bring AI into the fold.
  • Plug-and-play solution: From the manufacturer’s perspective, especially that of small and medium-sized enterprises, the ideal solution is plug-and-play. This allows users to benefit from AI immediately while minimizing the setup effort. The packaging of the AI solution should therefore combine the necessary software and hardware while providing a simple interface for integration.
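
To illustrate the data connector idea, here is a minimal sketch assuming a CSV export as the raw source. The connector interface, the column names, and the harmonization rules are all hypothetical and would look different for OPC UA servers, historians, or other shop-floor sources.

```python
import csv
from typing import Iterable, Protocol

class DataConnector(Protocol):
    """Hypothetical interface: every source-specific connector yields
    records as plain dictionaries for the downstream pipeline."""
    def fetch(self) -> Iterable[dict]: ...

class CsvConnector:
    """Example connector for CSV exports from a machine logger."""
    def __init__(self, path: str):
        self.path = path

    def fetch(self) -> Iterable[dict]:
        with open(self.path, newline="") as f:
            yield from csv.DictReader(f)

def harmonize(record: dict) -> dict:
    """Cleaning/structuring step: normalize names and units so that
    data from different sources becomes comparable (assumed rules)."""
    return {
        "timestamp": record["time"].strip(),
        # assumed: the logger reports temperature in Celsius as a string
        "temperature_c": float(record["temp"]),
    }

def pipeline(connector: DataConnector) -> list[dict]:
    """Retrieve, clean, and structure raw data for further AI processing."""
    return [harmonize(r) for r in connector.fetch()]

if __name__ == "__main__":
    clean = pipeline(CsvConnector("machine_log.csv"))
    print(f"{len(clean)} harmonized records ready for the AI stage")
```

The point of the Protocol-based design is that new sources can be added by writing another connector, while the cleaning and AI stages stay unchanged, which is also what makes a service-oriented integration easier.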

Do you need assistance?
Would you like to learn more?

Whether you are collecting initial requirements for the implementation of AI tools or already handling the intricacies of integrating them into an existing process, the experts at Fraunhofer IKS are happy to provide guidance on the best way forward. Please contact us any time to discuss your specific issues and how we can support your project:

business.development@iks.fraunhofer.de

Together, these three insights offer a high-level roadmap for bringing AI into production: First, the necessary infrastructure is put in place, supplying the raw data and resources required to deploy an AI tool. Then, the chosen AI tool is customized for the desired use case, establishing a level of quality and trust in its results. Finally, the results obtained by AI are complemented by the knowledge of human experts, yielding the best possible course of action for the given automation context – including safety-critical scenarios.


This work was funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Institute for Cognitive Systems.
