Charles River Analytics was awarded a $1.8M Small Business Innovation Research (SBIR) contract from the Air Force Research Laboratory (AFRL) to develop a framework that provides explanations for deep reinforcement learning (DRL) agents. The effort aims to make AI systems usable in mission-critical environments by providing insight into how DRL-based autonomous agents make decisions.
The program, called Reinforcement Learning with Adaptive Explainability (RELAX), focuses on DRL, a type of AI that uses neural networks to make high-speed, high-performance decisions. DRL agents of the kind RELAX addresses have outperformed humans at chess and other strategy games, but until now their decision-making process has not been transparent to their human-in-the-loop counterparts. In addition, AI intended for Department of Defense operational environments must meet stringent verification and validation requirements.
“When one of these systems makes a recommendation or decision that is not intuitive to you, and without knowing how and why the DRL agents are formulating their strategies, you don’t know if you should go along with it,” said Dr. James Neihaus, Principal Scientist at Charles River Analytics and Program Manager on RELAX. “The only way to understand more in that situation is to get an explanation from the system.”
Since participating in DARPA’s Explainable AI (XAI) program from 2017 through 2021, Charles River has been pioneering advances in human-understandable autonomous agents. Now, the team is applying that knowledge and foundational technology to help the Air Force with mission planning and execution. For RELAX, the team is developing an “explanation dictionary” that defines key concepts operators can understand, using causal model learning to relate the AI system’s internal decision-making to those human-understandable concepts, and adding a summarization component that incorporates real-world conditions into the AI’s decisions over time. The information will be presented in a multimodal interface that gives operators transparent explanations of not only the AI’s reasoning, but also which features were critical to the agent’s decision-making and what the agent anticipates future states will look like.
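To make the explanation-dictionary idea more concrete, the sketch below is a minimal, purely illustrative Python example: internal agent features are mapped to operator-facing concepts, and the concepts that contributed most to a chosen action are summarized in plain language. The toy linear policy, the feature and function names, and the concept labels are all assumptions made for illustration; they do not reflect the actual RELAX implementation.

```python
# Hypothetical sketch: map internal model features to named, operator-facing
# concepts and summarize which concepts drove the chosen action. All names and
# the linear scoring model are illustrative assumptions, not the RELAX design.
from dataclasses import dataclass

# "Explanation dictionary": internal feature index -> human-understandable concept.
EXPLANATION_DICTIONARY = {
    0: "distance to objective",
    1: "fuel remaining",
    2: "threat proximity",
    3: "time on station",
}

@dataclass
class ToyPolicy:
    """Stand-in for a DRL policy: scores each action as a weighted sum of features."""
    weights: dict  # action name -> list of per-feature weights

    def choose(self, features):
        scores = {action: sum(w * f for w, f in zip(ws, features))
                  for action, ws in self.weights.items()}
        return max(scores, key=scores.get), scores

def explain(policy, features, top_k=2):
    """Rank concept contributions to the selected action and summarize them."""
    action, _ = policy.choose(features)
    contributions = [
        (EXPLANATION_DICTIONARY[i], w * f)
        for i, (w, f) in enumerate(zip(policy.weights[action], features))
    ]
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    top = ", ".join(f"{name} ({value:+.2f})" for name, value in contributions[:top_k])
    return f"Chose '{action}' mainly because of: {top}"

if __name__ == "__main__":
    policy = ToyPolicy(weights={
        "proceed": [-0.8, 0.5, -1.2, 0.1],
        "hold":    [0.2, -0.1, 0.9, 0.3],
    })
    print(explain(policy, features=[0.4, 0.9, 0.8, 0.2]))
```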
“It needs to be tailored and presented in a way that makes sense to a broad audience, not just computer scientists. Explanations should be clear and accessible to people with varying levels of experience and understanding of how the system works. We conducted several human subject studies to refine our approach and to ensure that the explanations are genuinely helpful to the users of the system,” said Dr. Jeff Druce, Senior Scientist and Principal Investigator on RELAX.
The RELAX framework is expected to increase operator trust in autonomous systems on the battlefield, as well as provide assistance with mission planning and internal military operations. Eventually, the technology could also help improve autonomous entities such as drones that assist with disaster relief efforts, self-driving vehicles, and more.
Contact us to learn more about RELAX and our capabilities in explainable AI.
This material is based upon work supported by the United States Air Force Materiel Command under Contract No. FA8750-24-C-B133. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force Materiel Command.