Bisantz, A.¹, Endsley, M.², Hoffman, R.³, Klein, G.⁴, Militello, L.⁵, and Pfautz, J.⁶
Discussion panel at the 58th Annual Meeting of the Human Factors and Ergonomics Society, Chicago, IL (October 2014)
Cognitive Task Analysis (CTA) has become part of the standard tool set of cognitive engineering. CTAs are routinely used to understand the cognitive and collaborative demands that contribute to performance problems, the basis of expertise, and the opportunities to improve performance through new forms of training, user interfaces, or decision aids. While the need to conduct CTAs is well established, there is little available guidance on “best practice” for conducting a CTA or for evaluating the quality of a CTA conducted by others. This is an important gap, as the range of consumers of CTAs is expanding to include program managers and regulators who may need to make decisions based on CTA findings. This panel brings together some of the leaders in the development and application of CTA methods to address the question: given the variety of methods available, and the lack of rigid guidance on how to perform a CTA, how does one judge the quality of a CTA? The goal of the panel is to explore points of consensus on “best practice” in conducting and evaluating a CTA, despite differences among particular CTA methods, and to draw insights from unique and provocative perspectives.
Panel organizers and co-chairs: Emilie M. Roth (Roth Cognitive Engineering) and John O’Hara (Brookhaven National Laboratory)
¹ University at Buffalo, The State University of New York
² US Air Force
³ Florida Institute for Human and Machine Cognition
⁴ MacroCognition
⁵ Applied Decision Science
⁶ Charles River Analytics