A tool that strengthens trust in machine learning
Advancing Learning via Probabilistic Causal Analysis for Competency Awareness (ALPACA)
Charles River Analytics is partnering with the University of Massachusetts Amherst and the University of Texas at Austin to develop our Advancing Learning via Probabilistic Causal Analysis for Competency Awareness (ALPACA) framework under DARPA’s CAML program. ALPACA strengthens trust in machine learning systems by clearly communicating how an AI system’s competence is affected by different, complex environments. When human operators fully understand AI tools, those tools can become trusted, collaborative members of a human-machine team.
“Machine learning systems fail to provide their human teammates with insight into variables—such as weather or terrain—that affect performance. We’ve changed the game with ALPACA, which provides information the operator needs to make more informed decisions.”
Scientist and Principal Investigator on the ALPACA effort
Machine learning systems play an increasingly important role in both government and industry; they can even partner with humans to complete complex tasks, such as collaboratively classifying images and video, forming human-AI teams in video-simulated environments, or using swarms of drones to carry out specific subtasks for search and rescue missions. However, these systems must first earn the trust of their human counterparts through predictable performance, which many systems lack.
ALPACA learns probabilistic causal models that allow machine learning systems to assess their own competencies and relay that data to the operator. ALPACA’s intuitive interface provides rich measures of system performance and recommends when to make adjustments. With robotics expert Dr. Joydeep Biswas at UT Austin, we are taking ALPACA out of the lab and into the field with the Campus Jackal, a state-of-the-art autonomous mobile robot.
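To make the idea of competency self-assessment concrete, here is a minimal sketch of estimating task-success probability conditioned on environment variables from logged mission outcomes. This is an illustrative frequency-count model only, not ALPACA's actual causal approach, and all names (weather, terrain, `CompetencyModel`) are hypothetical:

```python
from collections import defaultdict

class CompetencyModel:
    """Illustrative sketch: estimate P(success | environment) from logged
    mission outcomes so a system can report its expected competence.
    The environment variables (weather, terrain) are hypothetical examples."""

    def __init__(self):
        # context tuple -> [number of successes, number of trials]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, context, success):
        # context is a tuple of environment variables, e.g. ("rain", "mud")
        s, n = self.counts[context]
        self.counts[context] = [s + int(success), n + 1]

    def competence(self, context):
        # Laplace-smoothed estimate of P(success | context);
        # an unseen context defaults to maximal uncertainty (0.5)
        s, n = self.counts[context]
        return (s + 1) / (n + 2)

model = CompetencyModel()
for outcome in [True, True, False, True]:
    model.observe(("clear", "pavement"), outcome)
for outcome in [False, False, True]:
    model.observe(("rain", "mud"), outcome)

print(round(model.competence(("clear", "pavement")), 2))  # higher estimated competence
print(round(model.competence(("rain", "mud")), 2))        # lower estimated competence
```

A report like this, attached to each recommendation, is the kind of information that lets an operator decide when to rely on the system and when to intervene.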
Our team has deep expertise in explainable artificial intelligence, machine learning, probabilistic and causal modeling, and autonomous systems. ALPACA builds on our Causal Models to Explain Learning approach, which supports dialogue between humans and artificial intelligence systems and was developed under DARPA’s XAI program. With ALPACA, operators can plan for missions with greater confidence than ever before.
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0031. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA.