Tittle, J.¹, Niehaus, J.¹, Druce, J.¹, Harradon, M.¹, Roth, E.², and Voshell, M.¹
14th International Naturalistic Decision Making Conference, San Francisco, CA (June 2019)
Designing effective explainable artificial intelligence (XAI) systems represents a fundamental challenge for trusted and cooperative human-machine collaboration. In this paper we describe the development and evaluation of an explanation interface that addresses some of these new XAI design challenges. Our team developed an interface design approach for XAI that employs a causal model to describe the output of a machine learning (ML) classifier so that humans can understand, trust, and correctly interpret the AI system's output on a visual pedestrian detection task. This Causal Models to Explain Learning (CAMEL) approach incorporates a narrative-based interface, including multiple representations, to present explanations of different ML techniques. The results from a user study conducted with 22 participants performing a pedestrian classification task showed that the CAMEL explanation interface to the ML system led to enhanced user trust and system acceptance, but did not improve users' accuracy in predicting the system's output. These results suggest that our approach of combining causal models with a narrative-based interface has the potential to make powerful but opaque machine learning techniques more accessible to human users, but further work is needed to adequately assess users' underlying mental models of the explained system.
¹ Charles River Analytics
² Roth Cognitive Engineering
For More Information
To learn more or to request a copy of the paper (if available), contact James Niehaus.
(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)