Druce, J., Harradon, M., Tittle, J.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada
We consider the problem of providing users of deep Reinforcement Learning (RL) based systems with a better understanding of when their output can be trusted. We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation: a graphical depiction of the system's generalization and performance in the current game state, an assessment of how well the agent would perform in semantically similar environments, and a narrative explanation of what the graphical information implies. We created a user interface for our XAI framework and evaluated its efficacy via a human-user experiment. The results demonstrate a statistically significant increase in user trust and acceptance of the AI system with explanation, versus the AI system without explanation.
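As a rough illustration of the kind of analysis that underlies a "statistically significant increase in user trust" claim, the sketch below compares trust ratings between an explanation condition and a no-explanation condition using Welch's t-test. The ratings, group sizes, rating scale, and choice of test are all assumptions for illustration, not the authors' actual experimental protocol or data.

```python
# Hypothetical sketch: comparing user trust ratings between an
# AI-with-explanation group and an AI-without-explanation group.
# The ratings below are simulated placeholder data, NOT the study's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 7-point Likert-style trust ratings for two participant groups.
trust_with_explanation = rng.normal(loc=5.4, scale=1.0, size=30)
trust_without_explanation = rng.normal(loc=4.6, scale=1.0, size=30)

# Welch's t-test: does not assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(
    trust_with_explanation, trust_without_explanation, equal_var=False
)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in mean trust ratings is statistically significant.")
```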
For More Information
To learn more or request a copy of a paper (if available), contact Jeff Druce.
(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)