The emerging field of Explainable AI (XAI) aims to bring transparency to today’s powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, identifying “where” important features occur but not what they represent, global explanation techniques visualize “what” concepts a model has generally learned to encode. Both types of methods thus provide only partial insights and leave the burden of interpreting the model’s reasoning to the user. Building on Layer-wise Relevance Propagation (LRP), one of the most popular local XAI techniques, this talk will connect the lines of local and global XAI research by introducing Concept Relevance Propagation (CRP), a next-generation XAI technique that explains individual predictions in terms of localized and human-understandable concepts. Unlike related state-of-the-art methods, CRP answers both the “where” and the “what” question, thereby providing deep insights into the model’s reasoning process. In the talk we will demonstrate, across multiple datasets, model architectures and application domains, that CRP-based analyses allow one to (1) gain insights into the representation and composition of concepts in the model and quantitatively investigate their role in prediction, (2) identify and counteract “Clever Hans” filters that latch onto spurious correlations in the data, and (3) analyze whole concept subspaces and their contributions to fine-grained decision making. By lifting XAI to the concept level, CRP opens up a new way to analyze, debug and interact with ML models, which is of particular interest in safety-critical applications and the sciences.
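To give a flavor of the mechanics discussed in the talk, below is a minimal numpy sketch, not the authors’ implementation, of the two ideas the abstract combines: the standard LRP-ε backward rule, and CRP-style conditioning, where relevance is only allowed to flow through a chosen subset of hidden units (a “concept”). The toy two-layer network, its random weights, and the concept channel indices are all hypothetical placeholders for illustration.

```python
import numpy as np

def lrp_linear(a, W, R_out, eps=1e-6):
    """LRP-epsilon rule for a linear layer: redistribute output relevance
    R_out onto inputs a, proportional to contributions z_ij = a_i * W_ij."""
    z = a @ W                                        # pre-activations, shape (J,)
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return a * (W @ s)                               # input relevance, shape (I,)

# Toy two-layer ReLU network with random weights (illustration only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 5))   # input -> hidden ("concept") layer
W2 = rng.normal(size=(5, 3))   # hidden -> output
x = rng.normal(size=8)

h = np.maximum(x @ W1, 0)      # hidden activations (ReLU is pass-through for LRP)
y = h @ W2                     # logits

# Standard LRP: initialize relevance at the predicted logit, propagate back.
R_out = np.zeros(3)
R_out[y.argmax()] = y.max()
R_hidden = lrp_linear(h, W2, R_out)

# CRP-style conditioning: mask the backward pass so that only relevance
# flowing through the selected concept channels reaches the input.
concept = [1, 3]               # hypothetical concept channel indices
mask = np.zeros_like(R_hidden)
mask[concept] = 1.0
R_input_concept = lrp_linear(x, W1, R_hidden * mask)

print("concept-conditional input relevance:", R_input_concept)
```

Summing the conditional attributions over a partition of the hidden layer into concepts recovers the unconditional LRP map, which is what lets CRP decompose a single “where” heatmap into per-concept “where and what” explanations.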
September 6, 10:50 – 11:30 (40 min)

Prof. Wojciech Samek (Fraunhofer HHI)