Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/96813

View other formats: MARC | Dublin Core | RDF | ORE | MODS | METS | DIDL | DATACITE

DC Field: Value
dc.contributor.author: Aksoy, Eren Erdal
dc.contributor.author: Abramov, Alexey
dc.contributor.author: Dörr, Johannes
dc.contributor.author: Ning, Kejun
dc.contributor.author: Dellen, Babette
dc.contributor.author: Wörgötter, Florentin
dc.date.accessioned: 2014-05-16T12:06:52Z
dc.date.available: 2014-05-16T12:06:52Z
dc.date.issued: 2011
dc.identifier: doi: 10.1177/0278364911410459
dc.identifier: issn: 0278-3649
dc.identifier: e-issn: 1741-3176
dc.identifier.citation: International Journal of Robotics Research 30(10): 1229-1249 (2011)
dc.identifier.uri: http://hdl.handle.net/10261/96813
dc.description.abstract: Recognizing manipulations performed by a human, and transferring and executing them with a robot, is a difficult problem. We address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scenery in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this, we continuously track image segments in the video and construct a dynamic graph sequence. Topological transitions of those graphs occur whenever a spatial relation between some segments changes in a discontinuous way, and these moments are stored in a transition matrix called the semantic event chain (SEC). We demonstrate that these time points are highly descriptive for distinguishing between different manipulations. Using simple sub-string search algorithms, SECs can be compared and type-similar manipulations can be recognized with high confidence. As the approach is generic, statistical learning can be used to find the archetypal SEC of a given manipulation class. The performance of the algorithm is demonstrated on a set of real videos showing hands manipulating various objects and performing different actions. In experiments with a robotic arm, we show that the SEC can be learned by observing human manipulations, transferred to a new scenario, and then reproduced by the machine. © SAGE Publications 2011.
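
As a rough illustration of the idea described in the abstract (this is not the authors' implementation; the two-symbol relation alphabet, the toy "push"/"pick" manipulations, and the scoring below are assumptions of the sketch): a SEC can be pictured as a matrix with one row per pair of tracked segments, where each entry records the spatial relation between that pair at one decisive time point, so that comparing two manipulations reduces to simple sub-string matching over the rows. A minimal Python sketch:

# Illustrative sketch of a semantic event chain (SEC); not the authors' code.
# Assumed relation alphabet for this sketch:
#   N = not touching, T = touching  (the paper's alphabet is richer)

def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest common substring, via simple dynamic programming."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, start=1):
            if ca == cb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def sec_similarity(sec_a: dict, sec_b: dict) -> float:
    """Crude SEC comparison: best normalized sub-string match per row, averaged.

    The paper's matching is more elaborate; this only conveys the idea that
    SECs reduce manipulation comparison to string search over matrix rows.
    """
    scores = []
    for row_a in sec_a.values():
        best = max(
            longest_common_substring(row_a, row_b) / max(len(row_a), len(row_b))
            for row_b in sec_b.values()
        )
        scores.append(best)
    return sum(scores) / len(scores)

# Toy SECs: rows are segment pairs; columns are the decisive time points
# at which some spatial relation changed discontinuously.
sec_push = {
    ("hand", "object"): "NTTN",   # hand approaches, pushes, withdraws
    ("object", "table"): "TTTT",  # object stays on the table throughout
}
sec_pick = {
    ("hand", "object"): "NTTT",   # hand grasps and keeps holding
    ("object", "table"): "TTNN",  # object is lifted off the table
}

print(f"push vs pick: {sec_similarity(sec_push, sec_pick):.2f}")  # ~0.75
print(f"push vs push: {sec_similarity(sec_push, sec_push):.2f}")  # 1.00

Compared with itself, a SEC scores 1.00, while the two different toy manipulations score lower, mirroring the paper's point that type-similar manipulations can be recognized by comparing their SECs with string search.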
dc.description.sponsorship: The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013), Challenge 2 (Cognitive Systems, Interaction, Robotics), under grant agreement no. 247947 (GARNICS). B.D. acknowledges support from the Spanish Ministry for Science and Innovation via a Ramón y Cajal Fellowship.
dc.publisher: Sage Publications
dc.relation: info:eu-repo/grantAgreement/EC/FP7/247947
dc.relation.isversionof: Postprint
dc.rights: openAccess
dc.subject: Action recognition
dc.subject: Affordances
dc.subject: Object categorization
dc.subject: Semantic scene graphs
dc.subject: Unsupervised learning
dc.subject: Object–action complexes (OACs)
dc.title: Learning the semantics of object-action relations by observation
dc.type: article
dc.identifier.doi: http://dx.doi.org/10.1177/0278364911410459
dc.relation.publisherversion: http://dx.doi.org/10.1177/0278364911410459
dc.date.updated: 2014-05-16T12:06:52Z
dc.description.version: Peer Reviewed
dc.language.rfc3066: eng
Appears in Collections: (IRII) Artículos
Files in This Item:
File: Learning the semantics.pdf | Size: 8.57 MB | Format: Adobe PDF
WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.