Applying AI and ML to real-world systems remains a challenge for many reasons. A major one is that their use tends to increase system complexity and reduce human understanding of, and trust in, those systems. Digital Twins is a paradigm that maps real-world objects and systems into corresponding virtual counterparts that collect and organise information in ways that are understandable from a user's point of view. Digital Twins also simplify data management in many ways, notably the use of data for ML purposes. Explainable AI (XAI), in turn, develops methods for ensuring the trustworthiness of the underlying AI/ML systems and, in the best case, for justifying their results to human end-users. In practice, a large part of XAI research focuses on explaining image classification, e.g. “I believe there’s a cat in this image, and I can show you where it is”. However, it is doubtful whether current XAI methods can provide sufficient insight for end-users who deal with real-world systems. The talk will present state-of-the-art XAI methods, as well as potential solutions to the current challenges.
September 6 @ 09:15
09:15 — 09:55 (40′)

Prof. Kary Främling (Umeå University)