Explainable AI

Scientists and engineers at Charles River Analytics are generating new knowledge at the frontiers of explainable AI, a rapidly growing area, creating a future where humans and AIs collaborate—to drive cars, respond to disasters, and make medical diagnoses.

Why XAI?

Artificial intelligence (AI) systems are increasingly part of high-stakes decision processes, and many emerging applications of AI depend on collaboration between people and AI systems. These trends are creating a need for AI systems that people can trust.

To be trustworthy, AI systems must be resilient, unbiased, accountable, and understandable. But most AI systems cannot explain how they reached their conclusions. Explainable AI (XAI) promotes trust by providing reasons for all system outputs—reasons that are accurate and human-understandable.

Adapted from Four Principles of Explainable Artificial Intelligence, a draft report
from the National Institute of Standards and Technology

Our Approach

CAMEL Project: Using a Real-Time Strategy Game as an Environment for XAI

Our XAI does more than provide correlation-based explanations of AI system outputs: it offers deep insight into, and causal understanding of, the decision-making process. The value of this approach is backed by user studies demonstrating increased trust in the AI system and enhanced human-AI collaboration.

By combining our cutting-edge research on XAI with our decades of experience applying AI to real problems for real users, we develop complete systems that work with the entire AI ecosystem—hardware, software, algorithms, individuals, teams, and the environmental context. We can also help you add XAI functionality to your existing system, supporting compliance with laws and regulations that require automated systems to explain the logic behind their decisions.

Making Machine Learning Trustworthy

Data-driven machine learning is the flavor of the day in artificial intelligence, but the “black box” nature of these systems makes them hard to trust. Charles River Analytics President Karen Harper explains how the new field of explainable AI can help us understand these systems and support informed human decision-making.

Featured Projects

CAMEL

Part of the DARPA XAI Program

ALPACA

Part of the DARPA CAML Program

DISCERN

Sponsored by the Office of Naval Research

News

DISCERN Helps Humans Understand Reinforcement Learning Agents

With ALPACA, Charles River Analytics Transforms AI from Tool to Collaborative Partner

Charles River Presents “Autonomy You Can Trust” at AUVSI XPONENTIAL 2020

Publications

Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems

Brittle AI, Causal Confusion, and Bad Mental Models: Challenges and Successes in the XAI Program

Evaluation of an AI System Explanation Interface

Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations

Press

The Wall Street Journal

Inside DARPA’s Push to Make Artificial Intelligence Explain Itself

MIT Technology Review

The U.S. Military Wants Its Autonomous Machines to Explain Themselves

AUVSI Technology News

Assured Onboard Autonomy Architecture for AUVs

People

Jeff Druce
Senior Scientist

Michael Harradon
Senior Scientist

Vanessa Moody
Scientist

James Niehaus
Principal Scientist and Division Director

Our passion for science and engineering drives us to find impactful, actionable solutions.