Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/167214


DC Field: Value
dc.contributor.author: Amor-Martinez, Adrian
dc.contributor.author: Santamaria-Navarro, Àngel
dc.contributor.author: Herrero, Fernando
dc.contributor.author: Ruiz, Alberto
dc.contributor.author: Sanfeliu, Alberto
dc.date.accessioned: 2018-06-29T10:07:05Z
dc.date.available: 2018-06-29T10:07:05Z
dc.date.issued: 2016
dc.identifier: doi: 10.1109/SSRR.2016.7784271
dc.identifier: isbn: 978-1-5090-4350-7
dc.identifier.citation: IEEE International Symposium on Safety, Security, and Rescue Robotics: 15-20 (2016)
dc.identifier.uri: http://hdl.handle.net/10261/167214
dc.description: Paper presented at the IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), held in Lausanne (Switzerland), October 23-27, 2016.
dc.description.abstract: We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinguished features such as corners or intersecting edges. Instead of using n correspondences (e.g. extracted with a feature detector), we use the raw polygonal representation of the observed shape and directly estimate the pose in the pose space of the camera. Compared with a general PnP method, this method requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information of the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize the non-use of n point correspondences between the projected template and the observed contour shape, we call the method Planar PØP. The method is demonstrated both in simulation and in a real application consisting of a UAV localization task, where comparisons with a precise ground truth are provided.
dc.description.sponsorship: This work has been partially funded by the AEROARMS EU project H2020-ICT-2014-1-644271, the CICYT project DPI2013-42458-P, and by Spanish MINECO and EU FEDER grant TIN2015-66972-C5-3-R.
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation: info:eu-repo/grantAgreement/EC/H2020/644271
dc.relation: MINECO/ICTI2013-2016/DPI2013-42458-P
dc.relation: MINECO/ICTI2013-2016/TIN2015-66972-C5-3-R
dc.relation.isversionof: Postprint
dc.rights: openAccess
dc.title: Planar PØP: Feature-less pose estimation with applications in UAV localization
dc.type: conference paper
dc.relation.publisherversion: https://doi.org/10.1109/SSRR.2016.7784271
dc.date.updated: 2018-06-29T10:07:05Z
dc.description.version: Peer Reviewed
dc.language.rfc3066: eng
dc.contributor.funder: Comisión Interministerial de Ciencia y Tecnología, CICYT (Spain)
dc.contributor.funder: Ministerio de Economía y Competitividad (Spain)
dc.contributor.funder: European Commission
dc.relation.csic: -
dc.identifier.funder: http://dx.doi.org/10.13039/501100007273 (es_ES)
dc.identifier.funder: http://dx.doi.org/10.13039/501100003329 (es_ES)
dc.identifier.funder: http://dx.doi.org/10.13039/501100000780 (es_ES)
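The abstract above describes estimating pose by minimizing the area between the projected template contour and the observed contour, without point correspondences. As a hedged illustration only (not the authors' implementation, which is given in the paper itself), the sketch below shows two generic building blocks such a method rests on: projecting a planar contour through a pinhole camera and measuring polygon area with the shoelace formula. All function names and camera parameters here are hypothetical.

```python
import numpy as np

def project_planar_contour(points_xy, R, t, K):
    """Project points lying on the world Z=0 plane into the image
    with a pinhole camera (intrinsics K, extrinsics R, t)."""
    pts = np.column_stack([points_xy, np.zeros(len(points_xy))])  # lift to 3-D
    cam = R @ pts.T + t.reshape(3, 1)   # world frame -> camera frame
    img = K @ cam                       # camera frame -> homogeneous pixels
    return (img[:2] / img[2]).T         # perspective divide

def shoelace_area(poly):
    """Polygon area via the shoelace formula (vertices as an (N, 2) array)."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Toy setup: a 1 m square template seen fronto-parallel from 2 m away.
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 2.])
proj = project_planar_contour(square, R, t, K)
print(shoelace_area(proj))  # → 62500.0 (the square spans 250×250 px)
```

A full area-based method would instead evaluate the area of the mismatch region between the projected and observed contours and minimize it over the camera pose; the two helpers above are only the geometric primitives such an objective would reuse.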
Appears in Collections:(IRII) Libros y partes de libros
Files in This Item:
File: POPplanar.pdf (1,37 MB, format unknown)
WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.