Proceedings of SPIE , Vol. 3375, Aerosense, Orlando, FL (April 1998)
To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e., in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2-D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye-tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.
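The abstract's model combines a bottom-up term (local contrast) with a top-down term (horizon proximity) into a fixation-probability map. The sketch below illustrates one plausible form of such a combination; the window size, the Gaussian falloff, the multiplicative combination rule, and all parameter values are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def fixation_probability_map(image, horizon_row, sigma=20.0, win=7):
    """Illustrative sketch: combine local contrast (saliency) with
    horizon proximity (cognitive bias) into a normalized
    fixation-probability map. The combination rule and parameters
    are assumptions, not the paper's published model."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # Saliency term: local contrast, measured as the standard
    # deviation of intensities in a win x win neighborhood.
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    contrast = np.empty_like(img)
    for r in range(h):
        for c in range(w):
            contrast[r, c] = padded[r:r + win, c:c + win].std()
    # Cognitive term: Gaussian falloff with vertical distance from
    # the horizon row (distant targets cluster near the horizon).
    rows = np.arange(h)[:, None]
    proximity = np.exp(-((rows - horizon_row) ** 2) / (2.0 * sigma ** 2))
    # Combine multiplicatively and normalize to a probability map.
    p = contrast * proximity
    total = p.sum()
    return p / total if total > 0 else p
```

Setting `sigma` very large effectively disables the horizon-proximity term, mirroring the ablation described in the abstract.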