PUBLICATIONS

Performance, Trust, and Decision Support

Gorman, J.

American Institute of Aeronautics and Astronautics (AIAA) Infotech@Aerospace, St. Louis, MO (March 2011)

Intelligent systems increase a decision maker’s ability to analyze and make sense of uncertain conditions and imprecise observations; however, these systems are challenging to formally validate and verify. Formal assessments are difficult to conduct, and performance assessment results are hard to relate to the reliability and mission relevance of an intelligent system’s products, both because of the heuristic and probabilistic nature of the analysis performed by intelligent systems and because of the difficulty of determining “ground truth” in many scenarios. Extensive evaluation procedures and objective performance assessment measures of merit have been developed that (1) characterize a system’s task performance, (2) measure a system’s use of resources, and (3) estimate the improvements in operator performance attributable to the system. Objective measures, however, do not immediately contribute to a user’s perception of trust in the intelligent system. Yet the cognitive systems engineering and knowledge management literature makes clear that user trust is necessary for the successful adoption of intelligent systems. Assessment approaches are needed that enable decision makers to effectively assess the performance and trustworthiness of intelligent systems based on the subjective priorities of the decision makers who use them.

In this paper, we introduce a novel approach to performance assessment of intelligent decision-supporting systems. Traditional performance assessment tests automatically collect objective measures of merit including: measures of performance (MOPs) that describe task performance (e.g., false positive rate), measures of effectiveness (MOEs) that assess resource utilization (e.g., input message rate), and measures of force effectiveness (MOFEs) that characterize changes in user performance. These measures are a necessary characterization of system performance, but they are not sufficient to establish user trust in the system. Decision maker trust is a product of the qualitative and subjective assessment of a system’s suitability for a user-defined task. We use an automated balanced scorecard to implement user-defined evaluation criteria and subjective measures of merit for each criterion. A symbolic argumentation network describes how users combine and weight the subjective measures of merit to evaluate each criterion. Fuzzy logic is used to automatically derive the subjective measures of merit from the objective MOPs, MOEs, and MOFEs generated by performance assessment. The combination of quantitative and qualitative system assessment enables decision makers to build trust in an intelligent system by (1) automatically collecting and reporting quantitative MOPs, MOEs, and MOFEs, (2) automatically interpreting quantitative measures based on the user’s subjective criteria, and (3) exposing the user’s subjective criteria for examination and understanding.

Keywords: Intelligent Systems, Performance Assessment, Trust, Decision Support, Symbolic Argumentation, Balanced Scorecards, Fuzzy Logic
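The pipeline the abstract describes (objective MOP → fuzzy linguistic interpretation → weighted scorecard aggregation) can be sketched as follows. This is a minimal illustrative example, not the authors’ implementation: the membership breakpoints, linguistic labels, criterion names, and weights are all hypothetical placeholders for user-defined values.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_fpr(fpr):
    """Map an objective MOP (false positive rate in [0, 1]) to memberships
    in hypothetical linguistic categories; breakpoints are illustrative."""
    return {
        "good":       triangular(fpr, -0.1, 0.0, 0.2),
        "acceptable": triangular(fpr, 0.1, 0.25, 0.4),
        "poor":       triangular(fpr, 0.3, 1.0, 1.7),
    }

def defuzzify(memberships, scores={"good": 1.0, "acceptable": 0.5, "poor": 0.0}):
    """Collapse fuzzy memberships into a single subjective score in [0, 1]
    via a membership-weighted average of category scores."""
    total = sum(memberships.values())
    if total == 0:
        return 0.0
    return sum(m * scores[label] for label, m in memberships.items()) / total

def scorecard(criterion_scores, weights):
    """Combine per-criterion subjective scores with user-defined weights,
    standing in for the balanced scorecard / argumentation network step."""
    return (sum(criterion_scores[c] * weights[c] for c in weights)
            / sum(weights.values()))

# An ideal false positive rate of 0.0 maps cleanly to "good":
accuracy = defuzzify(fuzzify_fpr(0.0))          # → 1.0
# Criterion scores are then blended by the user's subjective priorities:
overall = scorecard({"accuracy": accuracy, "timeliness": 0.5},
                    {"accuracy": 2.0, "timeliness": 1.0})
```

The overlapping membership functions let a borderline MOP value contribute partially to two linguistic categories, which is what allows the quantitative measure to be interpreted against graded, user-defined criteria rather than a hard threshold.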

For More Information

To learn more or request a copy of a paper (if available), contact J. Gorman.

(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)