In this presentation, I will argue that research and practice in explainable AI should take a human-centered approach. Developing explainable AI approaches that work for technical developers results in approaches that work only for developers; this is akin to the 'inmates running the asylum'. I will give an overview of the intersection of explainable AI and the social sciences, and present key examples of how social science knowledge can be integrated into explainability methods for sequential decision-making problems.
September 6, 11:30–12:10 (40 min)

Prof. Tim Miller (Melbourne University, AU)