Explainable AI

Scientists and engineers at Charles River Analytics are generating new knowledge at the frontiers of this rapidly growing area, creating a future where humans and AIs collaborate to drive cars, respond to disasters, and make medical diagnoses.

Why XAI?

Artificial intelligence (AI) systems are increasingly part of high-stakes decision processes. In addition, many emerging applications of AI depend on collaboration between people and AI systems. These trends are creating a need for AI systems that people can trust. To be trustworthy, AI systems must be resilient, unbiased, accountable, and understandable. But most AI systems cannot explain how they reached their conclusions. Explainable AI (XAI) promotes trust by providing reasons for all system outputs: reasons that are accurate and human-understandable.

Adapted from Four Principles of Explainable Artificial Intelligence, a draft report from the National Institute of Standards and Technology

Our Approach

CAMEL Project: Using a Real-Time Strategy Game as an Environment for XAI

Our XAI does more than provide correlation-based explanations of AI system outputs: it offers deep insight into, and causal understanding of, the decision-making process. The value of this approach is backed by user studies demonstrating increased trust in the AI system and enhanced human-AI collaboration. Combining our cutting-edge XAI research with decades of experience applying AI to real problems for real users, we develop complete systems that work with the entire AI ecosystem: hardware, software, algorithms, individuals, teams, and the environmental context. We can also help you add XAI functionality to your existing system, supporting compliance with laws and regulations that require automated decision systems to explain the logic behind their decisions.
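
To make the distinction above concrete, here is a minimal, generic sketch of a perturbation-based feature attribution, the kind of correlation-style explanation many XAI tools produce. It is an illustration only, not Charles River's method; the model, feature values, and baseline are hypothetical stand-ins.

import numpy as np

def predict(x):
    # Hypothetical black-box model: a fixed linear scorer standing in
    # for any trained classifier whose internals we cannot inspect.
    weights = np.array([0.8, -0.2, 1.5, 0.05])
    return float(weights @ x)

def perturbation_importance(x, baseline=0.0):
    # Attribute importance to each feature by measuring how much the
    # prediction shifts when that feature is replaced with a baseline value.
    base_score = predict(x)
    scores = []
    for i in range(len(x)):
        x_perturbed = x.copy()
        x_perturbed[i] = baseline
        scores.append(abs(base_score - predict(x_perturbed)))
    return np.array(scores)

x = np.array([1.0, 2.0, 0.5, 3.0])
print(perturbation_importance(x))  # larger scores = more influential features

Attributions like these show which inputs correlate with a decision, but they do not by themselves establish why the model made it; that gap is what causal approaches to XAI aim to close.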

Making Machine Learning Trustworthy

Data-driven machine learning is the flavor of the day in artificial intelligence, but the “black box” nature of these systems makes them hard to trust. Charles River Analytics President Karen Harper explains how the new field of explainable AI can help us understand these systems and support informed human decision-making.

A Leading Laboratory

Charles River’s scientists and engineers have been conducting leading-edge research since our founding nearly 40 years ago. Our open, collegial lab collaborates with dozens of universities and research labs across the U.S. We are currently engaged in more than 200 R&D projects, focusing on some of the biggest challenges in AI. Find out more from these selected academic publications and presentations.

Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems
Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations
Brittle AI, Causal Confusion, and Bad Mental Models: Challenges and Successes in the XAI Program
Evaluation of an AI System Explanation Interface

Our News

DISCERN Helps Humans Understand Reinforcement Learning Agents

With ALPACA, Charles River Analytics Transforms AI from Tool to Collaborative Partner

Charles River Presents “Autonomy You Can Trust” at AUVSI XPONENTIAL 2020

Our People

Jeff Druce
Senior Scientist

Michael Harradon
Senior Scientist

Stephanie Kane
Principal Scientist and Division Director

James Niehaus
Principal Scientist and Division Director

Our passion for science and engineering drives us to find impactful, actionable solutions.