Proceedings SPIE Defense & Security, vol 5807. Orlando, FL (April, 2005)
A major challenge for ATR evaluation is developing accurate image truth that can be compared to an ATR algorithm’s decisions to assess performance. While many standard truthing methods and scoring metrics exist for stationary targets in still imagery, techniques for dealing with motion imagery and moving targets are less prevalent. This is partly because the moving-imagery / moving-targets scenario introduces the data association problem of assigning targets to tracks. Video datasets typically contain far more imagery than static collections, increasing the size of the truthing task. Specifying the types and locations of the targets present in a large number of images is tedious, time consuming, and error prone. In this paper, we present an updated version of a complete truthing system we call the Scoring, Truthing, And Registration Toolkit (START). The application consists of two components: a truthing component that assists in the automated construction of image truth, and a scoring component that assesses the performance of a given algorithm relative to the specified truth. In motion imagery, both stationary and moving targets can be detected and tracked over portions of a clip. We summarize the capabilities of START with emphasis on target tracking and truthing diagnostics. The user manually truths certain key frames; truth for intermediate frames is then inferred, and sets of diagnostics verify its quality. If an ambiguous situation is encountered in an intermediate frame, the diagnostics flag the problem so that the user can intervene manually. This approach can dramatically reduce the effort required to truth video data while maintaining high fidelity in the truth data. We present the results of two user evaluations of START, one addressing accuracy and the other focusing on the human-factors aspects of the design.
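The key-frame approach described above can be illustrated with a minimal sketch. The function names, the bounding-box representation, and the linear-interpolation scheme below are assumptions for illustration only, not START's actual implementation; the paper does not specify how intermediate-frame truth is inferred or how the diagnostics decide when to flag a frame.

```python
# Hypothetical sketch of key-frame truthing: the user truths two key frames,
# intermediate-frame truth is inferred by linear interpolation, and a simple
# diagnostic flags large inter-key-frame motion for manual review.
# All names and thresholds here are illustrative assumptions.

def interpolate_truth(box_a, box_b, frame_a, frame_b, frame):
    """Linearly interpolate a target's bounding box (x, y, w, h) between
    two manually truthed key frames frame_a < frame < frame_b."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

def flag_ambiguous(box_a, box_b, max_shift=50.0):
    """Diagnostic sketch: flag the span for user intervention if the target
    moved farther between key frames than interpolation can be trusted for."""
    dx, dy = box_b[0] - box_a[0], box_b[1] - box_a[1]
    return (dx * dx + dy * dy) ** 0.5 > max_shift
```

For example, a target truthed at (0, 0, 10, 10) in frame 0 and (100, 50, 20, 10) in frame 10 would be placed at (50, 25, 15, 10) in frame 5, and the large displacement would trip the diagnostic so the user could truth an additional key frame in between.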