EU AI Act
EU adopts AI law

The EU has developed a legal framework for artificial intelligence (AI) products. The aim is to protect citizens and companies from harm that the use of artificial intelligence may cause. What does this mean for companies that develop AI products?


EU AI Act training

The Fraunhofer IKS training on the EU AI Act and high-risk AI systems provides you with an overview of the EU AI Regulation and its implications. Learn more about the compliance approach for artificial intelligence and the verification methodology.

Register now and make your AI systems fit for the new legal requirements.

Register now

"The EU has delivered," says a delighted Dragos Tudorache MEP about the European Parliament's decision on the EU AI Act. "We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies."

Dr Ralf Wintergerst, President of the industry association Bitkom, is somewhat less euphoric: "Germany must focus on the opportunities offered by AI. Only 13 per cent of companies in Germany use artificial intelligence, although 82 per cent believe it to be of great importance for the future competitiveness of our economy."

EU regulates AI risk categories

What is it all about? The European Union's AI regulation sets out ethical guidelines and a legal framework for the development, deployment and use of artificial intelligence systems in the European Union. AI systems are categorised by risk level, ranging from minimal risk to unacceptable risk, with each level entailing its own regulatory obligations.

In detail: Limited-risk AI systems, such as chatbots, must fulfil the transparency obligations set out in the regulation. AI systems categorised as minimal-risk, including video games and spam filters, can follow a voluntary code of conduct set out in the law.

Unacceptable-risk AI systems, on the other hand, are banned outright. According to the European Parliament, these include:

  • Cognitive behavioural manipulation of individuals or specific vulnerable groups, for example voice-controlled toys that encourage dangerous behaviour in children;
  • Social scoring: classifying people based on behaviour, socio-economic status and personal characteristics;
  • Biometric identification and categorisation of natural persons;
  • Real-time remote biometric identification systems, for example facial recognition.

The EU Parliament expressly points out that some exceptions can be authorised for law enforcement purposes.
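
To make the tiered logic concrete, the following Python sketch maps each risk tier to the obligations described above. It is purely illustrative: the tier names follow the regulation, but the one-line summaries of the obligations are a simplified reading, not legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    MINIMAL = "minimal risk"            # e.g. video games, spam filters
    LIMITED = "limited risk"            # e.g. chatbots
    HIGH = "high risk"                  # e.g. medical devices, critical infrastructure
    UNACCEPTABLE = "unacceptable risk"  # e.g. social scoring

# Simplified, illustrative summary of what each tier entails -- not legal text.
OBLIGATIONS = {
    RiskTier.MINIMAL: "voluntary code of conduct",
    RiskTier.LIMITED: "transparency obligations",
    RiskTier.HIGH: "strict requirements plus conformity assessment before market entry",
    RiskTier.UNACCEPTABLE: "prohibited in the EU (narrow law-enforcement exceptions)",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the (simplified) regulatory consequence of a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))  # -> "transparency obligations"
```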

Caution with high-risk AI systems

High-risk AI systems are of particular interest to companies. They are divided into two main categories:

1. AI systems used in products that fall under EU product safety regulations. These include toys, aviation, vehicles, medical devices and lifts.

2. AI systems that fall into specific areas and must be registered in an EU database. These include, but are not limited to, management and operation of critical infrastructure, education and training, employment, labour management and access to self-employment, law enforcement, management of migration, asylum and border controls, and assistance in the interpretation and application of laws.

Strict requirements apply to such high-risk AI systems in the areas of risk management, data management, transparency and human oversight. Providers, but also companies that deploy such systems, are obliged to fulfil these requirements. High-risk AI systems, for example autonomous vehicles or medical devices, can pose a significant threat to the health, safety or fundamental rights of individuals. They must therefore be thoroughly tested and assessed before they are placed on the market, and this assessment is carried out by the provider of the AI product itself.
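
As a rough illustration of how a provider might track such a self-assessment internally, the sketch below models the four requirement areas as a checklist. The class, field names and pass criterion are hypothetical illustrations, not prescribed by the regulation.

```python
from dataclasses import dataclass, field

# The requirement areas for high-risk AI systems named above.
REQUIREMENT_AREAS = (
    "risk management",
    "data management",
    "transparency",
    "human oversight",
)

@dataclass
class SelfAssessment:
    """Hypothetical internal checklist for a provider's pre-market assessment."""
    system_name: str
    evidence: dict = field(default_factory=dict)  # area -> documented evidence

    def record(self, area: str, document: str) -> None:
        if area not in REQUIREMENT_AREAS:
            raise ValueError(f"unknown requirement area: {area}")
        self.evidence[area] = document

    def ready_for_market(self) -> bool:
        # Simplified criterion: evidence has been documented for every area.
        return all(area in self.evidence for area in REQUIREMENT_AREAS)

assessment = SelfAssessment("driver assistance perception module")
assessment.record("risk management", "hazard analysis v1.2")
print(assessment.ready_for_market())  # False -- three areas still lack evidence
```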

White paper:

The European Artificial Intelligence Act
Overview and Recommendations for Compliance


In this white paper, Fraunhofer IKS provides an overview of the most important provisions of the EU AI Act.

Read white paper

White paper categorises requirements for AI systems

Researchers at the Fraunhofer Institute for Cognitive Systems IKS have written a white paper that addresses the requirements for high-risk AI systems set out in Articles 9 to 15 of the EU AI Act. The focus is on their practical implementation in companies. The white paper also highlights gaps in relation to existing safety standards.

Building on this, Fraunhofer IKS has developed a framework to close these gaps. The framework derives specific requirements from the EU AI Act using a contract-based design approach. Fraunhofer IKS's expertise in trustworthy AI and safety forms the basis for deriving argumentation trees for generic properties of machine learning (ML) systems.
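
In contract-based design, a component is specified by what it assumes about its environment and what it guarantees in return; a requirement derived from the Act can then be phrased as a guarantee of the ML component. The sketch below shows only this general idea with invented example properties; it is not the Fraunhofer IKS framework itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    """Assume/guarantee contract: if every assumption holds on the input,
    the component must satisfy every guarantee on its output."""
    assumptions: list[Callable[[dict], bool]]
    guarantees: list[Callable[[dict, dict], bool]]

    def satisfied(self, inputs: dict, outputs: dict) -> bool:
        if not all(a(inputs) for a in self.assumptions):
            return True  # outside its assumptions, the contract is vacuously met
        return all(g(inputs, outputs) for g in self.guarantees)

# Invented contract for a pedestrian-detection ML component.
detector_contract = Contract(
    assumptions=[lambda i: i["image_brightness"] >= 0.2],  # daylight input assumed
    guarantees=[
        lambda i, o: o["confidence"] >= 0.9 or o["fallback_triggered"],
    ],
)

print(detector_contract.satisfied(
    {"image_brightness": 0.8},
    {"confidence": 0.95, "fallback_triggered": False},
))  # -> True
```

An argumentation tree then links such guarantees, via evidence from testing and analysis, to the top-level claim that the requirement is fulfilled.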

Read next

Interview with Reinhard Stolle
"We use AI to make systems safer“

Hans-Thomas Hengl
Safety engineering / Fraunhofer IKS