How visual concepts help us understand an AI’s decisions
Modern AI models have demonstrated remarkable capabilities on various computer vision tasks such as image classification. Their decision-making process, however, remains largely opaque — the so-called black box problem. This is particularly problematic in safety-critical applications such as medical imaging or autonomous driving, where understanding why a model made a decision is essential. Fraunhofer IKS explores ways to make this black box more transparent.