Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/127513
Title: Combining where and what in change detection for unsupervised foreground learning in surveillance

Authors: Huerta, Iván; Pedersoli, Marco; Gonzàlez, Jordi; Sanfeliu, Alberto
Keywords: Multiple appearance models
Video surveillance
Support vector machine
Latent variables
Motion segmentation
Object detection
Unsupervised learning
Issue Date: 2015
Publisher: Elsevier
Citation: Pattern Recognition 48(3): 709-719 (2015)
Abstract: Change detection is the most important task for video surveillance analytics such as foreground and anomaly detection. Current foreground detectors learn models from annotated images since the goal is to generate a robust foreground model able to detect changes in all possible scenarios. Unfortunately, manual labelling is very expensive. Most advanced supervised learning techniques based on generic object detection datasets currently exhibit very poor performance when applied to surveillance datasets because of the unconstrained nature of such environments in terms of types and appearances of objects. In this paper, we take advantage of change detection for training multiple foreground detectors in an unsupervised manner. We use statistical learning techniques which exploit the use of latent parameters for selecting the best foreground model parameters for a given scenario. In essence, the main novelty of our proposed approach is to combine the where (motion segmentation) and what (learning procedure) in change detection in an unsupervised way for improving the specificity and generalization power of foreground detectors at the same time. We propose a framework based on latent support vector machines that, given a noisy initialization based on motion cues, learns the correct position, aspect ratio, and appearance of all moving objects in a particular scene. Specificity is achieved by learning the particular change detections of a given scenario, and generalization is guaranteed since our method can be applied to any possible scene and foreground object, as demonstrated in the experimental results outperforming the state-of-the-art.
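
Note: the record itself contains no code. As a rough illustration of the pipeline the abstract describes (motion segmentation supplying noisy candidate windows, which then drive unsupervised training of a scene-specific detector), the Python sketch below substitutes a plain HOG descriptor and linear SVM for the paper's latent SVM formulation; every function name, threshold, and parameter here is an assumption for illustration, not the authors' implementation.

# Minimal sketch, NOT the paper's method: motion cues (the "where") give
# noisy foreground windows that serve as positives for a scene-specific
# detector on appearance features (the "what"), learned without manual labels.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

def noisy_foreground_windows(video_path, min_area=500):
    """Collect candidate bounding boxes from background subtraction."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    windows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        # Drop shadow pixels (value 127) and clean up small speckles.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h >= min_area:
                windows.append((frame, (x, y, w, h)))
    cap.release()
    return windows

def hog_features(frame, box, size=(64, 128)):
    """Describe a candidate window with HOG features (default 64x128 window)."""
    x, y, w, h = box
    patch = cv2.resize(frame[y:y + h, x:x + w], size)
    return cv2.HOGDescriptor().compute(patch).ravel()

def train_scene_detector(video_path):
    """Train a scene-specific linear SVM from noisy motion-derived positives
    and random windows from the same frames as negatives (a stand-in for
    the latent SVM refinement described in the abstract)."""
    X, y = [], []
    rng = np.random.default_rng(0)
    for frame, box in noisy_foreground_windows(video_path):
        X.append(hog_features(frame, box))
        y.append(1)
        H, W = frame.shape[:2]
        nx = rng.integers(0, max(1, W - 64))
        ny = rng.integers(0, max(1, H - 128))
        X.append(hog_features(frame, (nx, ny, 64, 128)))
        y.append(0)
    return LinearSVC(C=0.01).fit(np.array(X), np.array(y))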
Publisher version (URL): http://dx.doi.org/10.1016/j.patcog.2014.09.023
URI: http://hdl.handle.net/10261/127513
DOI: 10.1016/j.patcog.2014.09.023
Identifiers: issn: 0031-3203
Appears in Collections: (IRII) Artículos
Files in This Item:
File: surveillance.pdf (6,13 MB, Adobe PDF)
WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.