2019 Applied Human Factors and Ergonomics (AHFE) International Conference, Washington, DC (July 2019)
Charles River Analytics has led numerous research and development efforts in the Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) domains, collectively Extended Reality (XR), including an exploration of the human factors considerations for XR development. While the XR industry progresses rapidly, essentially reinventing the definitions of VR, AR, and MR with each new HMD, there remains an opportunity for human factors research on the interactions and displays in these augmented environments and, perhaps more uncharted, on the ways in which these synthetic realities can interface with one another.
As part of a larger effort to assess XR's utility as a training tool, specifically a VR-based combat medic training simulation that enhances naturalistic interaction and immersion, we are exploring how users wearing AR/MR HMDs and VR HMDs can effectively interact within the same networked XR environment. Because the effort is medical in nature, these networked interactions center largely on wounded casualties in the virtual training environment, which has led to an interesting investigation into the appropriate methods and fidelity of interaction among users in the augmented environment, users in the virtual environment, and the simulated agents (in this case, wounded casualties). Notably, we took a physiological approach to building these simulated casualties: each agent is backed by Charles River's Simulated Interpretive Model for Finite State Machines (SIM-FSM) engine, a state machine-based simulation engine we built internally for medical training projects and then integrated into XR. Through the SIM-FSM engine, each simulated agent has physiologically grounded vitals, injuries, and lifespan, updating dynamically in real time throughout the simulation scenario.
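SIM-FSM itself is proprietary and its internals are not described here, but the idea of a state machine-driven casualty whose vitals drift over time and whose states transition on physiological thresholds can be sketched in miniature. All state names, vital signs, rates, and thresholds below are illustrative assumptions, not SIM-FSM's actual model:

```python
import dataclasses
from enum import Enum, auto


class CasualtyState(Enum):
    """Hypothetical physiological states for a simulated casualty."""
    STABLE = auto()
    HEMORRHAGING = auto()
    SHOCK = auto()
    DECEASED = auto()


@dataclasses.dataclass
class Vitals:
    heart_rate: float = 80.0        # beats/min
    systolic_bp: float = 120.0      # mmHg
    respiration_rate: float = 14.0  # breaths/min


class CasualtyAgent:
    """Minimal sketch of a state machine-driven casualty.

    Each tick, vitals drift according to the current state, and
    threshold checks drive transitions to the next state.
    """

    def __init__(self) -> None:
        self.state = CasualtyState.HEMORRHAGING
        self.vitals = Vitals()

    def tick(self, dt: float) -> None:
        """Advance the simulation by dt seconds."""
        if self.state is CasualtyState.HEMORRHAGING:
            # Untreated bleeding: blood pressure falls, heart rate compensates.
            self.vitals.systolic_bp -= 2.0 * dt
            self.vitals.heart_rate += 1.5 * dt
            if self.vitals.systolic_bp < 90.0:
                self.state = CasualtyState.SHOCK
        elif self.state is CasualtyState.SHOCK:
            # Decompensation accelerates until treated.
            self.vitals.systolic_bp -= 4.0 * dt
            self.vitals.respiration_rate += 0.5 * dt
            if self.vitals.systolic_bp < 50.0:
                self.state = CasualtyState.DECEASED

    def treat(self) -> None:
        # A successful intervention (e.g., a tourniquet) halts the decline.
        if self.state is not CasualtyState.DECEASED:
            self.state = CasualtyState.STABLE
```

In a full engine of this kind, each networked client would read the agent's current state and vitals every frame to render observable cues (pallor, breathing) or numeric readouts, depending on the viewer's role.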
In our combat medic training scenario, the user in VR (i.e., the trainee) can see only the physical status of the casualties as an indication of their health (e.g., skin pallor, respiration, bleeding), while the observer in AR (i.e., the trainer) can see the physical status of the casualties (in the exact positions they occupy in VR) in addition to the SIM-FSM data, indicating each casualty's specific injuries, blood pressure, heart rate, respiration rate, etc. There are other differences as well, such as the VR user's ability to converse with and treat the casualties, and the AR user's ability to monitor the locations of the human user and simulated agents in the virtual environment through a mini-map. This novel integration of mixed realities and intelligent simulated agents has raised interesting design and development questions regarding the appropriate level of interaction between human users and simulated agents, the most effective modalities for that interaction, best practices for conveying information between networked augmented and virtual environments, and visualization techniques for representing human users, agents, and dynamic variables, among many others. This presentation will discuss our research questions, methodologies, and recommendations within this space. It may also address future considerations and research directions, such as human-agent collaboration between the AR/MR and VR environments.
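The asymmetric trainee/trainer views described above amount to role-based filtering of a shared, networked casualty state: both clients synchronize the same record, but each renders only the fields appropriate to its role. A minimal sketch of that filtering, with entirely hypothetical field names and values (not our actual network schema):

```python
from typing import Any

# Full networked state for one casualty (all field names are illustrative).
casualty_state = {
    "position": (3.2, 0.0, -1.5),        # shared: both views place the casualty here
    "skin_pallor": "pale",               # observable cue
    "visible_bleeding": True,            # observable cue
    "respiration_visible": "labored",    # observable cue
    "injuries": ["left-leg laceration"], # SIM-FSM data: trainer only
    "heart_rate": 128.0,                 # SIM-FSM data: trainer only
    "systolic_bp": 84.0,                 # SIM-FSM data: trainer only
    "respiration_rate": 22.0,            # SIM-FSM data: trainer only
}

# The observable-only subset the VR trainee is allowed to see.
TRAINEE_FIELDS = {"position", "skin_pallor", "visible_bleeding", "respiration_visible"}


def view_for_role(state: dict, role: str) -> dict:
    """Return the slice of casualty state appropriate to a role."""
    if role == "trainer":
        # The AR trainer sees everything, including the underlying vitals.
        return dict(state)
    # The VR trainee sees only physically observable cues.
    return {k: v for k, v in state.items() if k in TRAINEE_FIELDS}
```

The design choice here is that the filtering happens per viewer rather than per casualty, so a single simulated agent can serve both the trainee's immersive view and the trainer's instrumented one without duplicating state.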