Charles River Analytics Inc., developer of intelligent systems solutions, is leading a team as part of a Defense Advanced Research Projects Agency (DARPA) program on Explainable Artificial Intelligence (XAI). In the XAI program, DARPA is funding multiple teams to make artificial intelligence more explainable through the interdependent development of machine learning techniques and human-computer interaction designs. The four-year broad agency announcement contract is valued at close to $8 million.
Charles River’s team for the Causal Models to Explain Learning effort, known as CAMEL, includes Brown University, the University of Massachusetts at Amherst, and Roth Cognitive Engineering. The CAMEL team will use the notion of causality as a key concept in creating explanations humans can understand and trust.
“In CAMEL, we’re developing causal models of the operation of highly complex machine learning systems, such as autonomous systems that use deep neural networks and deep reinforcement learning,” explained Dr. Brian Ruttenberg, Senior Scientist at Charles River. “On top of these causal models is a sophisticated human-machine interface that presents causal inferences about these machine learning systems in an intuitive and human-understandable medium. Using the tools and techniques developed under CAMEL, we hope to crack the opaqueness of these effective systems and give the user the means to understand why a machine learning system reached a particular conclusion. This will ultimately provide the justification, rationale, and verification to use these sophisticated machine learning systems in complex and adverse environments.”
Charles River will use the Figaro™ open-source probabilistic programming language to better explain complex machine learning models. Learn more about Figaro.
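To give a flavor of the causal-explanation idea described above, the toy Python sketch below models a hypothetical "sensor" variable that causally influences an opaque classifier's decision, then estimates the sensor's causal effect by intervening on it (the do-operator) and comparing outcome frequencies. This is purely illustrative and is not the CAMEL approach or the Figaro API; the variable names and the simple threshold model are invented for this example.

```python
import random

random.seed(0)

def decision(sensor_reading, noise):
    # Stand-in for an opaque learned model: it fires when the sensor
    # reading plus environmental noise exceeds a threshold.
    return sensor_reading + noise > 0.8

def causal_effect(n=10_000):
    # Estimate P(decision | do(sensor=1.0)) - P(decision | do(sensor=0.0))
    # by forcing the sensor to each value and sampling the noise.
    high = sum(decision(1.0, random.random()) for _ in range(n)) / n
    low = sum(decision(0.0, random.random()) for _ in range(n)) / n
    return high - low

effect = causal_effect()
print(f"Estimated causal effect of the sensor on the decision: {effect:.2f}")
```

A causal account like this supports explanations of the form "the system decided X chiefly because of the sensor reading," which is more actionable for a user than a raw list of learned weights.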
Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).