Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/167087
Title: Modal space: A physics-based model for sequential estimation of time-varying shape from monocular video

Authors: Agudo, Antonio; Montiel, J. M. M.; Agapito, Lourdes; Calvo, Begoña
Keywords: Finite elements; Modal analysis; Sequential nonrigid structure from motion; Dense reconstruction
Publication date: 2017
Publisher: Springer Nature
Citation: Journal of Mathematical Imaging and Vision 57(1): 75-98 (2017)
Abstract: This paper describes two sequential methods for recovering the camera pose together with the 3D shape of highly deformable surfaces from a monocular video. The nonrigid 3D shape is modeled as a linear combination of mode shapes with time-varying weights that define the shape at each frame and are estimated on the fly. The low-rank constraint is combined with standard smoothness priors to optimize the model parameters over a sliding window of image frames. We propose to obtain a physics-based shape basis from the initial frames of the video to code the time-varying shape along the sequence, reducing the problem from trilinear to bilinear. To this end, the 3D shape is discretized by means of a soup of elastic triangular finite elements to which we apply a force-balance equation. This equation is solved using modal analysis via a simple eigenvalue problem to obtain a shape basis that encodes the modes of deformation. Even though this strategy can be applied in a wide variety of scenarios, when the observations become denser the solution can become prohibitive in terms of computational load. We avoid this limitation by proposing two efficient coarse-to-fine approaches that allow us to easily handle dense 3D surfaces. This results in a scalable solution that estimates a small number of parameters per frame and could potentially run in real time. We show results on both synthetic and real videos with ground-truth 3D data, while robustly dealing with artifacts such as noise and missing data.
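
To make the modal-analysis step concrete, the sketch below shows how a low-rank shape basis can be obtained from a generalized eigenvalue problem of the form K phi = lambda M phi, the standard formulation in linear modal analysis. This is a minimal illustration under stated assumptions, not the paper's implementation: the stiffness matrix K and mass matrix M are random placeholders standing in for the matrices the authors assemble from the triangular finite elements, and all variable names are hypothetical.

    import numpy as np
    from scipy.linalg import eigh

    # Placeholder stiffness and mass matrices (hypothetical; the paper
    # assembles these from a soup of elastic triangular finite elements).
    rng = np.random.default_rng(0)
    n_dof = 30                            # 3 coordinates x 10 mesh vertices (toy size)
    A = rng.standard_normal((n_dof, n_dof))
    K = A @ A.T + n_dof * np.eye(n_dof)   # symmetric positive-definite stand-in for stiffness
    M = np.eye(n_dof)                     # lumped (diagonal) mass matrix

    # Modal analysis: solve the generalized symmetric eigenproblem K phi = lambda M phi.
    # eigh returns eigenvalues in ascending order, so the leading columns are the
    # low-frequency deformation modes. With a real FEM stiffness matrix the first
    # six (zero-frequency, rigid-body) modes would be discarded first.
    eigvals, modes = eigh(K, M)

    # Keep a small rank-k basis of mode shapes: the low-rank constraint.
    k = 5
    Phi = modes[:, :k]                    # n_dof x k shape basis

    # Each frame t is then coded by only k time-varying weights w_t:
    #   x_t = x_rest + Phi @ w_t
    # so sequential estimation only has to recover w_t (and the camera pose) per frame.
    x_rest = np.zeros(n_dof)              # rest shape (placeholder)
    w_t = rng.standard_normal(k)          # example weights for one frame
    x_t = x_rest + Phi @ w_t
    print(x_t.shape)                      # (30,)

Because the basis is computed once from the initial frames, per-frame estimation reduces to a bilinear problem in the camera pose and the k weights, which is what makes the sequential, potentially real-time operation described in the abstract plausible.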
Publisher's version: https://doi.org/10.1007/s10851-016-0668-2
URI: http://hdl.handle.net/10261/167087
DOI: 10.1007/s10851-016-0668-2
e-ISSN: 1573-7683
ISSN: 0924-9907
Appears in collections: (IRII) Artículos
Files in this item:
File                 Description    Size     Format
ModaSpaceVideo.pdf                  8.8 MB   Adobe PDF

Scopus™ citations: 16 (checked on 16-Apr-2024)
Web of Science™ citations: 14 (checked on 28-Feb-2024)
Page views: 393 (checked on 22-Apr-2024)
Downloads: 281 (checked on 22-Apr-2024)

NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.