Druce, J.¹, Niehaus, J.², Moody, V.¹, Jensen, D.², Littman, M.³
arXiv:2106.05506v1 (June 2021)
The advances in artificial intelligence enabled by deep learning architectures are undeniable. In several cases, deep neural network-driven models have surpassed human-level performance in benchmark autonomy tasks. The underlying policies for these agents, however, are not easily interpretable. In fact, given their underlying deep models, it is impossible to directly understand the mapping from observations to actions for any reasonably complex agent. Producing the supporting technology to “open the black box” of these AI systems, while not sacrificing performance, was the fundamental goal of the DARPA XAI program. In our journey through this program, we have several “big picture” takeaways: 1) explanations need to be highly tailored to their scenario; 2) many seemingly high-performing RL agents are extremely brittle and are not amenable to explanation; 3) causal models allow for rich explanations, but how to present them isn’t always straightforward; and 4) human subjects conjure fantastically wrong mental models for AIs, and these models are often hard to break. This paper discusses the origins of these takeaways, provides amplifying information, and offers suggestions for future work.
1 Charles River Analytics
2 University of Massachusetts Amherst
3 Brown University
For More Information
To learn more or request a copy of a paper (if available), contact Jeff Druce.
(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)