Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/96350
Title: Dense segmentation-aware descriptors

Authors: Trulls, Eduard; Kokkinos, Iasonas; Sanfeliu, Alberto; Moreno-Noguer, Francesc
Issue Date: 2013
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Computer Society Conference on Computer Vision and Pattern Recognition: 2890-2897 (2013)
Abstract: In this work we exploit segmentation to construct appearance descriptors that can robustly deal with occlusion and background changes. For this, we downplay measurements coming from areas that are unlikely to belong to the same region as the descriptor's center, as suggested by soft segmentation masks. Our treatment is applicable to any image point, i.e. dense, and its computational overhead is on the order of a few seconds. We integrate this idea with Dense SIFT, and also with Dense Scale and Rotation Invariant Descriptors (SID), delivering descriptors that are densely computable, invariant to scaling and rotation, and robust to background changes. We apply our approach to standard benchmarks on large-displacement motion estimation using SIFT-flow and wide-baseline stereo, systematically demonstrating that the introduction of segmentation yields clear improvements.
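
The weighting scheme the abstract describes, downplaying descriptor measurements from pixels unlikely to share a region with the descriptor's center, can be sketched in a few lines. The sketch below is a minimal illustration under assumed inputs, not the authors' implementation: it assumes per-pixel soft-segmentation vectors (embeddings) whose dot product approximates the probability that two pixels belong to the same region, and all names are hypothetical.

import numpy as np

def segmentation_weights(embeddings, center, offsets):
    # embeddings: H x W x K array of per-pixel soft-segmentation vectors
    # (assumed input). The affinity between each sample point and the
    # descriptor center becomes a weight, so points that likely lie
    # across a region boundary contribute less.
    cy, cx = center
    e_center = embeddings[cy, cx]
    weights = np.empty(len(offsets))
    for i, (dy, dx) in enumerate(offsets):
        weights[i] = np.dot(e_center, embeddings[cy + dy, cx + dx])
    return weights

def segmentation_aware_descriptor(measurements, weights):
    # measurements: S x B array, one B-dimensional measurement (e.g. a
    # SIFT orientation histogram) per spatial sample. Downweight the
    # samples, then renormalize the descriptor.
    d = measurements * weights[:, None]
    norm = np.linalg.norm(d)
    return d / norm if norm > 0 else d

# Toy usage: 5x5 embeddings of dimension 8, four spatial samples.
rng = np.random.default_rng(0)
embeddings = rng.random((5, 5, 8))
offsets = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
w = segmentation_weights(embeddings, (2, 2), offsets)
descriptor = segmentation_aware_descriptor(rng.random((4, 8)), w)

Computed densely at every pixel, this preserves the layout of Dense SIFT or SID while making each descriptor less sensitive to background changes on the far side of a segmentation boundary.
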
Description: Paper presented at CVPR, held in Portland, June 23-28, 2013.
Publisher version (URL): http://dx.doi.org/10.1109/CVPR.2013.372
URI: http://hdl.handle.net/10261/96350
DOI: http://dx.doi.org/10.1109/CVPR.2013.372
Identifiers: doi: 10.1109/CVPR.2013.372; issn: 1063-6919
Appears in Collections: (IRII) Artículos
Files in This Item:
Dense segmentation-aware.pdf (4.3 MB, Adobe PDF)