Citation
Tu Z, Abel A, Zhang L, Luo B & Hussain A (2016) A New Spatio-Temporal Saliency-Based Video Object Segmentation. Cognitive Computation, 8 (4), pp. 629-647. https://doi.org/10.1007/s12559-016-9387-7
Abstract
Humans and animals can segment visual scenes because of a natural cognitive ability to quickly identify salient objects in both static and dynamic environments. In this paper, we present a new spatio-temporal approach to video object segmentation that combines motion- and image-based saliency in a weighted scheme capable of segmenting both static and dynamic objects. We first compute fast optical flow and derive motion saliency from this temporal information, detecting the presence of global motion and adjusting the initial optical flow results accordingly. This motion saliency is then fused with a region-based contrast image saliency map, with both cues weighted. Finally, the joint weighted saliency map drives a foreground–background labelling approach that produces the final segmented video. Results across a wide range of environments show that our spatio-temporal system is more robust and consistent than a number of state-of-the-art approaches.
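For illustration only, the sketch below (not the authors' implementation) shows one way the weighted spatio-temporal fusion described in the abstract could look in OpenCV/NumPy: a motion saliency map derived from dense optical flow is linearly combined with a simple colour-contrast map standing in for the region-based contrast method, and the joint map is thresholded as a crude foreground–background labelling step. The function names, weights (`w_motion`, `w_spatial`) and the threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of weighted spatio-temporal saliency fusion (illustrative only).
import cv2
import numpy as np

def motion_saliency(prev_gray, curr_gray):
    # Dense optical flow (Farneback); saliency proxy = normalised flow magnitude.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return mag / (mag.max() + 1e-8)

def spatial_saliency(frame_bgr):
    # Simple global colour-contrast stand-in for the paper's region-based
    # contrast method: distance of each pixel's Lab colour from the mean colour.
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    dist = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)
    return dist / (dist.max() + 1e-8)

def fused_saliency(prev_bgr, curr_bgr, w_motion=0.6, w_spatial=0.4):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Weighted joint saliency map from the temporal and spatial cues.
    s = (w_motion * motion_saliency(prev_gray, curr_gray)
         + w_spatial * spatial_saliency(curr_bgr))
    # Crude foreground/background labelling by thresholding the joint map.
    return s, (s > 0.5).astype(np.uint8)
```

In the paper the weighting adapts to detected global motion and the labelling is a dedicated foreground–background inference step; the fixed weights and hard threshold above are only placeholders for those components.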
Keywords
Video object segmentation; Global motion; Spatio-temporal saliency; Foreground–background labelling
Journal
Cognitive Computation: Volume 8, Issue 4
| Field | Value |
|---|---|
| Status | Published |
| Funders | |
| Publication date | 31/08/2016 |
| Publication date online | 08/03/2016 |
| Date accepted by journal | 19/02/2016 |
| Publisher | Springer |
| ISSN | 1866-9956 |
| eISSN | 1866-9964 |