PUBLICATIONS

Design of a Multimodal Interface Solution to Facilitate Nonmedical User Mediation of TBI Screening Using Virtual Reality

Presented at the International Symposium on Human Factors and Ergonomics in Health Care, Boston, MA (March 2018).

Vestibular dysfunction (VD) commonly accompanies traumatic brain injuries (TBI). Assessments of VD are currently conducted only in clinical settings by highly trained medical professionals (e.g., otolaryngologists). These professionals use either: (1) subjective assessments requiring specific expertise to expose and interpret the subtle symptoms that typically accompany VD; or (2) controlled laboratory assessments. The latter provide greater objectivity and diagnosticity, but take the form of obtrusive (e.g., caloric testing) and lengthy procedures requiring costly specialized equipment (e.g., rotational chairs). While numerous companies are working toward solutions to assess potentially concussed patients, these solutions all focus on screening only a narrow subset of overall symptom areas (e.g., oculomotor, cognitive, or auditory) and, most often, completely overlook vestibular function. Consequently, there are no currently available methods for objectively assessing VD as a component of a more robust TBI assessment at the point of injury (POI), or even early in the chain of care. If such a capability could be developed and deployed, personnel with little to no expertise in otolaryngology could make more appropriate return-to-duty (RTD; in the case of military applications) or return-to-play (RTP; in the case of athletic applications) decisions and document acute symptoms early in the chain of care to enhance medical care further down the line. However, to enable this type of controlled screening for often subtle TBI symptoms by non-medical personnel, a solution must be designed that minimizes the likelihood of user error while also facilitating appropriate understanding of patient screening results to inform RTD, RTP, or other application-specific decisions.

To address this challenge, Charles River Analytics and our teammates are currently funded via a US Army contract to develop and evaluate a prototype system for the Assessment and Diagnosis of Vestibular Indicators of Soldier Operational Readiness (ADVISOR™). ADVISOR is a patent-pending (62/373,083; PCT/US2017/046266) solution that addresses all of the requirements above, providing POI TBI screening to support RTD/RTP decisions and ongoing monitoring throughout the chain of care to track symptom progression. This is accomplished via a battery of clinically validated assessments that have been translated for deployment using only a smartphone application and a suite of commercial off-the-shelf (COTS) hardware. This hardware includes a stereoscopic head-mounted display (HMD) with video oculography capabilities (to objectively measure patient ocular response) that leverages the smartphone to present configurable binocular visual stimuli in a virtual reality (VR) environment. This is combined with stereo headphones (for presentation of auditory stimuli) and a suite of additional COTS sensors (e.g., motion, acceleration, and positioning sensors to measure patient posture and sway) to enable objective measurement of patient responses. Combining these capabilities, ADVISOR provides a low-cost and portable solution to screen individuals for TBI-related symptoms at the POI via objective assessment of oculomotor, vestibular, and auditory function, with the explicit intention of informing non-medical personnel making RTD/RTP decisions.
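To illustrate the kind of structure such a battery implies, the minimal Kotlin sketch below pairs each assessment with its stimulus modality and the sensor channels needed to record the patient's response. This is purely a hypothetical sketch: `Assessment`, `StimulusModality`, `SensorChannel`, and the example entries are our illustrative assumptions, not the actual ADVISOR design.

```kotlin
// Hypothetical model of a screening battery: each assessment couples the
// stimulus presented via the HMD/headphones with the COTS sensor channels
// needed to objectively record the patient's response. All names are
// illustrative assumptions, not from the ADVISOR codebase.

enum class StimulusModality { VISUAL_VR, AUDITORY, NONE }

enum class SensorChannel {
    VIDEO_OCULOGRAPHY, HEAD_MOTION, POSTURAL_SWAY, AUDIO_RESPONSE
}

data class Assessment(
    val name: String,
    val stimulus: StimulusModality,
    val sensors: Set<SensorChannel>,
)

// Example entries standing in for clinically validated assessments.
val screeningBattery = listOf(
    Assessment("Smooth pursuit", StimulusModality.VISUAL_VR,
        setOf(SensorChannel.VIDEO_OCULOGRAPHY)),
    Assessment("Dynamic visual acuity", StimulusModality.VISUAL_VR,
        setOf(SensorChannel.VIDEO_OCULOGRAPHY, SensorChannel.HEAD_MOTION)),
    Assessment("Postural sway / balance", StimulusModality.NONE,
        setOf(SensorChannel.POSTURAL_SWAY)),
    Assessment("Auditory function", StimulusModality.AUDITORY,
        setOf(SensorChannel.AUDIO_RESPONSE)),
)

fun main() {
    // List which stimulus and sensors each assessment requires,
    // e.g., to verify the contents of a screening kit.
    screeningBattery.forEach { a ->
        println("${a.name}: stimulus=${a.stimulus}, sensors=${a.sensors}")
    }
}
```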

This presentation will provide an overview of our design process and the resultant system interface, which combines a smartphone application, a VR patient interface, and a number of body-worn sensors. This collective system interface was designed to promote consistent patient screening administration, leveraging intuitive graphical user interfaces (GUIs) and a wizard-like workflow that guides users in selecting appropriate sets of assessments (e.g., based on the context of the patient, the injury, the available screening kit components, and the screening environment). It then provides on-screen and in-VR instructions and automated guidance for the selected assessments, presented to either the patient or the caregiver depending on the workflow context and the next step to be executed. This presentation will also provide initial results from our efforts to design visualizations and information displays that present high-level screening results as well as deeper-level details to facilitate caregiver understanding, interpretation, and decision making. Finally, we will briefly describe our planned study (currently under IRB review) to validate these designs against the goals we set for the system's integrated multimodal interfaces.
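To make the wizard-style selection step concrete, the sketch below (reusing the illustrative types from the previous sketch) filters the battery down to assessments whose required sensors are present in the kit and that suit the patient's state. `ScreeningContext`, `selectAssessments`, and the filtering rules are assumptions for illustration only, not the actual ADVISOR workflow logic.

```kotlin
// Hypothetical wizard-style selection: narrow the battery based on the
// available kit components and the screening context. Illustrative only.

data class ScreeningContext(
    val availableSensors: Set<SensorChannel>,
    val patientCanStand: Boolean, // e.g., rules out sway-based balance tests
)

fun selectAssessments(
    battery: List<Assessment>,
    ctx: ScreeningContext,
): List<Assessment> = battery.filter { a ->
    // Keep an assessment only if the kit supplies every sensor it needs...
    val kitSupports = ctx.availableSensors.containsAll(a.sensors)
    // ...and the patient's state permits it (illustrative rule).
    val patientSuitable =
        ctx.patientCanStand || SensorChannel.POSTURAL_SWAY !in a.sensors
    kitSupports && patientSuitable
}
```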

The significance of this work lies in its novel application of human factors not only to the design of a smartphone application for a medical use case intended for non-expert users, but also to the design of VR user interfaces (typically experienced by VR-naive users) and of the hardware itself, promoting a holistically user-friendly and error-resilient integrated system interface (combining the smartphone, VR, and hardware) in which both the caregiver and the patient must move between interfaces of completely different modalities.

For More Information

To learn more or request a copy of a paper (if available), contact Michael Jenkins.

(Please include your name, address, organization, and the paper reference. Requests without this information will not be honored.)