RELAX

An AI-driven system that explains how
DRL-based agents make decisions


Reinforcement Learning with Adaptive Explainability (RELAX)

RELAX aims to make AI systems usable in mission-critical environments by providing insight into how DRL-based autonomous agents make decisions. It focuses on deep reinforcement learning (DRL), a type of AI that uses neural networks to make high-speed, high-performance decisions.

DRL agents like those in the RELAX framework have outperformed humans in chess and other strategy games, but until now their decision-making process hasn’t been transparent to their human-in-the-loop counterparts.

In addition, AI intended for Department of Defense operational environments must undergo stringent verification and validation requirements.

Charles River Analytics is developing explainable AI to help the US Air Force make decisions in mission-critical environments.

“When one of these systems makes a recommendation or decision that is not intuitive to you, and without knowing how and why the DRL agents are formulating their strategies, you don’t know if you should go along with it. The only way to understand more in that situation is to get an explanation from the system.”

Dr. James Niehaus
Principal Scientist and Program Manager on RELAX

Since participating in DARPA’s Explainable AI (XAI) program from 2017 through 2021, Charles River has been pioneering advances in human-understandable autonomous agents. Now, they are applying this knowledge and foundational technology to help the Air Force with mission planning and execution.

For RELAX, the team is developing an “explanation dictionary” to define the key concepts that operators can understand, using causal model learning to relate the AI system’s internal decision-making to those human-understandable concepts, and adding a summarization component to include real-world conditions in AI’s decisions over time. The information will be presented in a multimodal interface to provide operators with transparent explanations of not only the AI’s reasoning, but also which features were critical to the agent’s decision-making and what anticipated future states look like to the agent.
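The explanation-dictionary idea above can be illustrated with a minimal sketch: a lookup table maps an agent's internal features to operator-friendly concepts, and feature-importance scores (produced by some upstream attribution method) are translated and ranked for presentation. All names, features, and weights here are hypothetical illustrations, not the RELAX implementation.

```python
# Hypothetical sketch of an "explanation dictionary" pipeline.
# Feature names, concepts, and attribution values are illustrative only.
from dataclasses import dataclass

# Explanation dictionary: agent-internal feature -> operator-understandable concept.
EXPLANATION_DICTIONARY = {
    "enemy_proximity": "Threat is nearby",
    "fuel_level": "Remaining fuel",
    "route_risk": "Risk along planned route",
}

@dataclass
class Explanation:
    action: str
    ranked_concepts: list  # (concept, importance) pairs, most important first

def explain_decision(action, feature_attributions):
    """Translate raw feature attributions (e.g., from a saliency method)
    into ranked, human-readable concepts for the operator."""
    ranked = sorted(
        ((EXPLANATION_DICTIONARY[f], abs(w))
         for f, w in feature_attributions.items()
         if f in EXPLANATION_DICTIONARY),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return Explanation(action=action, ranked_concepts=ranked)

# Example: attributions produced by some upstream attribution method.
attributions = {"enemy_proximity": 0.72, "fuel_level": -0.15, "route_risk": 0.40}
explanation = explain_decision("reroute_south", attributions)
print(explanation.action)  # reroute_south
for concept, weight in explanation.ranked_concepts:
    print(f"{concept}: {weight:.2f}")
```

A real system would learn the feature-to-concept mapping (e.g., via the causal model learning mentioned above) rather than hand-coding it; the dictionary here just shows the translation-and-ranking step in isolation.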

“It needs to be tailored and presented in a way that makes sense to a broad audience, not just computer scientists. Explanations should be clear and accessible to people with varying levels of experience and understanding of how the system works. We conducted several human subject studies to refine our approach and to ensure that the explanations are genuinely helpful to the users of the system.”

Dr. Jeff Druce
Senior Scientist and Principal Investigator on RELAX

The RELAX framework is expected to increase operator trust in autonomous systems on the battlefield, as well as provide assistance with mission planning and internal military operations. Eventually, the technology could also help improve autonomous entities such as drones that assist with disaster relief efforts, self-driving vehicles, and more.

Contact us to learn more about RELAX and our other explainable AI capabilities.

This material is based upon work supported by the United States Air Force Materiel Command under Contract No. FA8750-24-C-B133. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force Materiel Command.

Our passion for science and engineering drives us to find impactful, actionable solutions.