Presented at the 2013 SPIE Defense, Security, and Sensing Conference, Baltimore, MD (29 April – 3 May 2013)
While many leader-follower technologies for robotic mules have been developed in recent years, the problem of reliably tracking and re-acquiring a human leader through cluttered environments continues to pose a challenge to widespread acceptance of these systems. Recent approaches to leader tracking rely either on leader-worn equipment, such as radio transmitters or special clothing, that may be damaged, hidden from view, or lost, or on specialized sensing hardware such as high-resolution LIDAR. We present a vision-based approach for robustly tracking a leader using a simple monocular camera. The proposed method requires no modification to the leader’s equipment, nor any specialized sensors on board the host platform. The system learns a discriminative model of the leader’s appearance to robustly track him or her through long occlusions, changing lighting conditions, and cluttered environments. We demonstrate the system’s tracking capabilities on publicly available benchmark datasets, as well as in representative scenarios captured using a small unmanned ground vehicle (SUGV).
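The abstract's core idea, learning a discriminative model of the leader's appearance online, can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal, hypothetical example in which a linear scorer over color histograms is updated each frame to separate the tracked (positive) window from surrounding background (negative) windows. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def color_histogram(patch, bins=4):
    """Quantize an RGB patch into a normalized joint color histogram."""
    q = patch.astype(np.int64) // (256 // bins)  # per-channel bin index
    q = q.reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / (hist.sum() + 1e-9)

class OnlineDiscriminativeTracker:
    """Toy discriminative appearance model: a linear scorer over
    histogram features, nudged toward the leader's appearance and
    away from the background each time it is updated."""

    def __init__(self, dim, lr=0.5):
        self.w = np.zeros(dim)  # linear weights over histogram bins
        self.lr = lr

    def score(self, feat):
        return float(self.w @ feat)

    def update(self, pos_feat, neg_feats):
        # Simple perceptron-style step: reward the leader window,
        # penalize the mean background window.
        self.w += self.lr * (pos_feat - np.mean(neg_feats, axis=0))

# Synthetic demo: a red "leader" patch on a green background.
bins = 4
frame = np.full((40, 40, 3), (0, 200, 0), dtype=np.uint8)  # green scene
frame[10:20, 10:20] = (220, 0, 0)                          # red leader

tracker = OnlineDiscriminativeTracker(dim=bins ** 3)
tracker.update(color_histogram(frame[10:20, 10:20], bins),
               [color_histogram(frame[0:10, 0:10], bins)])

# After one update, the leader window outscores a background window.
s_leader = tracker.score(color_histogram(frame[10:20, 10:20], bins))
s_bg = tracker.score(color_histogram(frame[25:35, 25:35], bins))
```

A real system would, as the abstract notes, have to cope with long occlusions and lighting changes, which this color-only sketch deliberately ignores.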