
Artificial intelligence

To date, the decision paths of artificial intelligence (AI) have been so opaque that its safety cannot be assessed directly. Yet AI must be transparent to people, because only then can it be trusted in safety-critical systems such as autonomous driving.

This is why Fraunhofer IKS is researching how to make artificial intelligence safer and more reliable. One of our research goals is to design transparent neural networks and to quantify the uncertainty in their predictions. Our software architecture augments AI-based systems and checks their decisions for plausibility, as sketched below. To avoid restricting the AI unnecessarily, it relies on dynamic procedures rather than conventional, static safety approaches.
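As a rough illustration of what quantifying uncertainty and checking decisions for plausibility can look like in practice, the sketch below uses Monte Carlo dropout, one common technique for estimating a neural network's predictive uncertainty. The model, the uncertainty threshold, and the fallback behavior are hypothetical examples, not Fraunhofer IKS code.

```python
# Minimal sketch: Monte Carlo dropout for uncertainty estimation,
# followed by a simple plausibility check on the decision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Hypothetical toy classifier with dropout, standing in for a perception model."""
    def __init__(self, in_features: int = 16, classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, samples: int = 30):
    """Run several stochastic forward passes (dropout kept active) and
    return the mean class probabilities and their standard deviation."""
    model.train()  # keep dropout active so each pass is a different sample
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
x = torch.randn(1, 16)  # one hypothetical sensor feature vector
mean_probs, std_probs = predict_with_uncertainty(model, x)

# Plausibility check: accept the decision only when uncertainty is low;
# otherwise hand control to a conservative fallback (threshold is illustrative).
if std_probs.max() > 0.15:
    print("Prediction too uncertain - trigger safe fallback")
else:
    print("Accepted class:", int(mean_probs.argmax()))
```

In a safety architecture of this kind, the dynamic element is that the system reacts to the measured uncertainty at run time instead of constraining the AI with a fixed, worst-case rule set.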

For more information about the safety aspects of artificial intelligence, visit our website.
