Proceedings from the Interservice/Industry Training, Simulation & Education Conference (I/ITSEC), Orlando, FL (December 2010)
Current methods for controlling one's avatar in a virtual environment and for interacting with intelligent virtual agents (IVAs) are unnatural, typically requiring a complex set of keyboard commands to control the avatar and dialog menus to interact with IVAs. Recent advances in markerless body and motion tracking and in speech and gesture recognition, coupled with intelligent agent/behavior modeling and speech synthesis technologies, now make it possible to control one's avatar naturally through the movement of one's body and to interact with IVAs through speech and gesture. These capabilities are just beginning to emerge in computer gaming and offer great promise for military training. In this paper we describe our recent work integrating motion capture, gesture recognition, speech recognition, natural language understanding, and intelligent agent/behavior modeling technologies to produce more natural mechanisms for avatar control, as well as IVAs that can understand relatively unconstrained speech and recognize human movement and gesture. We illustrate these capabilities in the domain of roadside security checkpoint training, where trainees can gesture (e.g., wave forward, stop, point to a location) and speak to IVAs (drivers and passengers) in the scene.
1 Raytheon BBN Technologies
2 Charles River Analytics
3 Smart Information Flow Technologies (SIFT)