Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/30597


Title: Dependent multiple cue integration for robust tracking

Authors: Moreno-Noguer, Francesc ; Sanfeliu, Alberto ; Samaras, Dimitris
Keywords: Bayesian tracking
Multiple cue integration
Pattern recognition
Issue Date: 2008
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence 30(4): 670-685 (2008)
Abstract: We propose a new technique for fusing multiple cues to robustly segment an object from its background in video sequences that suffer from abrupt changes in both the illumination and the position of the target. Robustness is achieved by integrating appearance and geometric object features and by estimating them with Bayesian filters, such as Kalman or particle filters. In particular, each filter estimates the state of a specific object feature, conditionally dependent on another feature estimated by a distinct filter. This dependence provides improved target representations, permitting us to segment the target out from the background even in nonstationary sequences. Considering that the procedure of the Bayesian filters may be described by a "hypotheses generation-hypotheses correction" strategy, the major novelty of our methodology compared to previous approaches is that the mutual dependence between filters is considered during the feature observation, that is, in the "hypotheses correction" stage, instead of when generating the hypotheses. This proves much more effective in terms of accuracy and reliability. The proposed method is analytically justified and applied to develop a robust tracking system that adapts, online and simultaneously, the color space in which the image points are represented, the color distributions, the contour of the object, and its bounding box. Results with synthetic data and real video sequences demonstrate the robustness and versatility of our method.
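The coupling described in the abstract can be illustrated with a toy sketch (not the paper's actual implementation): two 1-D particle filters, one per cue, each following the "hypotheses generation-hypotheses correction" cycle, where the correction step of each filter conditions its likelihood on the other filter's current estimate. The coupling term, noise levels, and constant measurements below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, noise):
    # Hypotheses generation: propagate particles with a random-walk model.
    return particles + rng.normal(0.0, noise, particles.shape)

def correct(particles, measurement, conditioning, sigma):
    # Hypotheses correction: weight each hypothesis by a likelihood that
    # depends on the *other* filter's current estimate (the conditioning
    # value) -- this is where the dependence between cues enters.
    expected = measurement + 0.1 * conditioning  # hypothetical coupling
    w = np.exp(-0.5 * ((particles - expected) / sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]  # resampled particle set

# Two toy 1-D filters, e.g. a color-mean cue and a contour-position cue.
color = rng.normal(0.0, 1.0, 500)
contour = rng.normal(0.0, 1.0, 500)

for t in range(20):
    z_color, z_contour = 1.0, 2.0  # synthetic, constant measurements
    color = correct(predict(color, 0.2), z_color, contour.mean(), 0.5)
    contour = correct(predict(contour, 0.2), z_contour, color.mean(), 0.5)

print(color.mean(), contour.mean())
```

With this coupling, the two estimates settle near the fixed point of the mutually dependent corrections (about 1.22 and 2.12) rather than at the raw measurements, which is the point of conditioning one filter's observation model on the other.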
Publisher version (URL): http://dx.doi.org/10.1109/TPAMI.2007.70727
URI: http://hdl.handle.net/10261/30597
DOI: 10.1109/TPAMI.2007.70727
ISSN: 0162-8828
Appears in Collections:(IRII) Artículos
Files in This Item:
File: Dependent multiple cue.pdf (3.38 MB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.