PUBLICATIONS

An Approach for Multimodal Display Development for UAV Systems

Pfautz, J., Fouse, A., von Wiegand, T., Roth, E., & Fichtl, T.

Proceedings of the Third Annual Workshop on Human Factors of Unmanned Aerial Vehicles, Mesa, AZ (May 2006)

Different classes of UAV systems are expected to be controlled from a variety of operational environments, ranging from safe, remote locations to positions under immediate threat of enemy fire. This range of operational environments, combined with visions of single-operator, multiple-UAV control, leads to complex human-system interface needs that must be explored. We approached these needs for two classes of UAVs using cognitive engineering techniques to analyze the demands of the work environment and the operator’s cognitive tasks. The understanding we developed through this analysis led us to hypothesize that multimodal display techniques may improve operator performance.

Presenting information through two or more sensory channels has a dual benefit: it addresses high information loads, and it offers the ability to present information to the operator under a variety of environmental constraints. A critical issue with multimodal interface systems is the inherent complexity of designing systems that are integrated across display modalities and user input methods. In designing multi-sensory displays, the capacity of each sensory channel must be taken into account, along with the physical capabilities of the display hardware and the software methods by which the data are rendered for the operator. Similarly, the relationships between different modalities, and the dominance of some modalities over others, must be considered.
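To make the dominance consideration concrete, the following is a minimal sketch in Python of how per-channel capacity and cross-modal dominance might be encoded so that a display manager can resolve conflicts. The names (ModalityProfile, resolve_conflict) and the numeric values are illustrative assumptions only, not part of the work described here.

```python
# A minimal sketch (not the authors' toolkit) of encoding per-channel
# capacity and cross-modal dominance so a display manager can decide
# which modality should carry a cue when several could.
# All names and numeric values are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModalityProfile:
    name: str          # "visual", "aural", or "haptic"
    bandwidth: float   # rough relative information capacity
    dominance: int     # lower value dominates in cross-modal conflicts

PROFILES = {
    "visual": ModalityProfile("visual", bandwidth=1.0, dominance=0),
    "aural":  ModalityProfile("aural",  bandwidth=0.4, dominance=1),
    "haptic": ModalityProfile("haptic", bandwidth=0.1, dominance=2),
}

def resolve_conflict(candidates):
    """Pick the modality that should carry a cue when several compete."""
    return min(candidates, key=lambda m: PROFILES[m].dominance)

# Example: vision typically dominates audition and touch.
assert resolve_conflict(["aural", "haptic", "visual"]) == "visual"
```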

In this presentation, we will describe a toolkit we have developed to support the rapid prototyping of multimodal interfaces for UAV systems, along with the interface techniques we have built within it. This toolkit approach allows us to quickly examine new interface techniques, both individually and in combination, and therefore supports an iterative design and evaluation process. Using our toolkit, integrated multimodal displays can be rapidly configured by mapping incoming data types to three potential display modalities (visual, aural, and haptic) using a set of reconfigurable rendering methods. The toolkit is also designed to support experimental evaluation of these displays.
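As a rough illustration of the mapping idea, the sketch below shows one way a registry could route incoming data types to reconfigurable renderers across modalities. The MultimodalDisplay class, its methods, and the example bindings are hypothetical assumptions for illustration, not the toolkit's actual API.

```python
# A hypothetical sketch of the mapping idea described above: a registry
# that routes each incoming data type to one or more (modality, renderer)
# bindings. Names and bindings are assumptions, not the real toolkit.
from typing import Any, Callable, Dict, List, Tuple

Renderer = Callable[[Any], None]

class MultimodalDisplay:
    def __init__(self) -> None:
        # data type name -> list of (modality, renderer) bindings
        self._bindings: Dict[str, List[Tuple[str, Renderer]]] = {}

    def bind(self, data_type: str, modality: str, renderer: Renderer) -> None:
        """Add a reconfigurable mapping from a data type to a rendering method."""
        self._bindings.setdefault(data_type, []).append((modality, renderer))

    def present(self, data_type: str, value: Any) -> None:
        """Render a value on every modality bound to its data type."""
        for _modality, renderer in self._bindings.get(data_type, []):
            renderer(value)

# Example: route a threat warning to both an aural tone and a haptic pulse.
display = MultimodalDisplay()
display.bind("threat_warning", "aural", lambda v: print(f"tone: {v}"))
display.bind("threat_warning", "haptic", lambda v: print(f"pulse: {v}"))
display.present("threat_warning", "contact bearing 045")
```

Because such bindings could be added or replaced at runtime, alternative display configurations can be swapped and compared quickly, which is the property that makes an iterative design-and-evaluation cycle practical.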

For More Information

To learn more or request a copy of a paper (if available), contact J. Pfautz.

(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)