Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/170685

DC Field | Value | Language
dc.contributor.author | Roldán Gómez, Juan J. | -
dc.contributor.author | Peña-Tapia, E. | -
dc.contributor.author | Martín-Barrio, A. | -
dc.contributor.author | Olivares Méndez, Miguel A. | -
dc.contributor.author | Cerro, Jaime del | -
dc.contributor.author | Barrientos, Antonio | -
dc.date.accessioned | 2018-10-05T13:08:53Z | -
dc.date.available | 2018-10-05T13:08:53Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | Sensors 17 (2017) | -
dc.identifier.issn | 1424-8220 | -
dc.identifier.uri | http://hdl.handle.net/10261/170685 | -
dc.description.abstract | Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation. | -
dc.description.sponsorship | This work is framed within the SAVIER (Situational Awareness Virtual EnviRonment) Project, which is both supported and funded by Airbus Defence & Space. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU, and from the DPI2014-56985-R project (Protección robotizada de infraestructuras críticas), funded by the Ministerio de Economía y Competitividad of the Gobierno de España. We would like to thank the students of the Technical University of Madrid who took part in the experiments and provided us with valuable information. | -
dc.publisher | Molecular Diversity Preservation International | -
dc.relation | S2013/MIT-2748/RoboCity2030-III | -
dc.relation | MINECO/ICTI2013-2016/DPI2014-56985-R | -
dc.relation.isversionof | Publisher's version | -
dc.rights | openAccess | -
dc.subject | multi-robot | -
dc.subject | machine learning | -
dc.subject | virtual reality | -
dc.subject | prediction | -
dc.subject | immersion | -
dc.subject | situational awareness | -
dc.subject | operator interface | -
dc.title | Multi-robot interfaces and operator situational awareness: Study of the impact of immersion and prediction | -
dc.type | artículo | -
dc.identifier.doi | http://dx.doi.org/10.3390/s17081720 | -
dc.date.updated | 2018-10-05T13:08:54Z | -
dc.description.version | Peer Reviewed | -
dc.language.rfc3066 | eng | -
dc.rights.license | http://creativecommons.org/licenses/by/4.0/ | -
dc.contributor.funder | Comunidad de Madrid | -
dc.relation.csic | | -
dc.identifier.funder | http://dx.doi.org/10.13039/100012818 | es_ES
Appears in Collections: (CAR) Artículos
Files in This Item:
File | Description | Size | Format
Roldan_Multi-Robot_sensors-17-01720.pdf | | 9,58 MB | Adobe PDF