AI systems can make decisions and take action at superhuman speeds, but two obstacles limit their use in high-stakes military operations. First, many high-performing AIs are “black boxes”—the reasons behind their responses to different scenarios aren’t clear. Second, high-performing AIs may act in surprising ways or draw incorrect conclusions. These characteristics are common features of AIs created using a powerful technique called deep reinforcement learning (DRL).
That’s where Charles River comes in, with EXTRA (Explainability and Terrain Reasoning for Autonomy). By applying cutting-edge research in AI and best practices in human-machine interface design, Charles River scientists and engineers will create tools that make DRL agents understandable and trustworthy.
“The future of AI is human-machine teaming,” said Jeff Druce, Senior Scientist at Charles River Analytics and Co-Principal Investigator of the EXTRA effort. “For human-machine teams to be able to engage in novel military tactics together, people need explanations from the AI that they can understand and trust.”
Led by Druce and Co-Principal Investigator James Niehaus, Charles River scientists and engineers will identify simulation environments that reflect the complexity of Navy and Marine Corps operations. The project team will train robust DRL agents in these environments and create tools that map DRL agent decisions to human-understandable explanations. EXTRA will also integrate reasoning capabilities for mission context and terrain into the DRL agents. Finally, EXTRA’s intuitive user interface will combine all this information to deliver human-understandable explanations of the DRL agent’s behavior.
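To make the idea of mapping agent decisions to human-understandable explanations concrete, here is a minimal illustrative sketch—not the actual EXTRA architecture. A hand-built linear scorer stands in for a trained DRL policy, and a simple feature-ablation loop (a common explainability technique) estimates which state feature most influenced the chosen action. The state features, actions, and weights are all hypothetical.

```python
# Illustrative sketch only: a toy "policy + attribution" loop standing in
# for a trained DRL agent. All names and values here are hypothetical.

def policy_scores(state, weights):
    """Score each action as a weighted sum of state features."""
    return {action: sum(w * state[f] for f, w in feats.items())
            for action, feats in weights.items()}

def explain_choice(state, weights):
    """Ablate each feature to estimate its influence on the chosen action."""
    scores = policy_scores(state, weights)
    chosen = max(scores, key=scores.get)
    influence = {}
    for feature in state:
        perturbed = dict(state, **{feature: 0.0})  # zero out one feature
        new_score = policy_scores(perturbed, weights)[chosen]
        influence[feature] = scores[chosen] - new_score  # drop in score
    return chosen, influence

# Hypothetical mission state: terrain cover, threat distance, fuel (0..1).
state = {"cover": 0.9, "threat_distance": 0.2, "fuel": 0.7}
weights = {
    "advance": {"cover": 2.0, "threat_distance": 1.0},
    "hold":    {"fuel": 0.5, "threat_distance": 3.0},
}
action, influence = explain_choice(state, weights)
top = max(influence, key=influence.get)
print(f"Chose '{action}' mainly because of '{top}' "
      f"(influence {influence[top]:.2f})")
```

The point of the sketch is the shape of the output: instead of an opaque action, the operator sees the decision paired with the factor that drove it (here, available cover), which is the kind of human-understandable rationale EXTRA aims to surface through its interface.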
EXTRA has the potential to greatly benefit Navy and Marine Corps mission planning and execution. In addition, the technologies developed for EXTRA will enhance Charles River’s custom XAI (explainable AI) software solution. This solution increases the adoption and value of DRL systems by augmenting them with explainability technologies developed through our advanced R&D programs.
Contact us to learn more about EXTRA and our other capabilities in Explainable AI and reinforcement learning.
This material is based upon work supported by the Office of Naval Research under Contract No. N6833521C0321. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Office of Naval Research.