Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/170685


DC Field: Value
dc.contributor.author: Roldán Gómez, Juan J.
dc.contributor.author: Peña-Tapia, E.
dc.contributor.author: Martín-Barrio, A.
dc.contributor.author: Olivares Méndez, Miguel A.
dc.contributor.author: Cerro, Jaime del
dc.contributor.author: Barrientos, Antonio
dc.identifier.citation: Sensors 17 (2017)
dc.description.abstract: Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the appropriate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can range from inefficiencies to accidents. This work focuses on the study of future operator interfaces for multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated through the performance of twenty-four operators who supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves situational awareness without increasing operator workload, whereas the effects of the predictive components are not significant and depend on their implementation.
dc.description.sponsorship: This work is framed within the SAVIER (Situational Awareness Virtual EnviRonment) Project, which is both supported and funded by Airbus Defence & Space. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU, and from the DPI2014-56985-R project (Protección robotizada de infraestructuras críticas), funded by the Ministerio de Economía y Competitividad of Gobierno de España. We would like to thank the students of the Technical University of Madrid who took part in the experiments and provided us with valuable information.
dc.publisher: Molecular Diversity Preservation International
dc.relation.isversionof: Publisher's version
dc.subject: machine learning
dc.subject: virtual reality
dc.subject: situational awareness
dc.subject: operator interface
dc.title: Multi-robot interfaces and operator situational awareness: Study of the impact of immersion and prediction
dc.description.version: Peer Reviewed
dc.contributor.funder: Comunidad de Madrid
Appears in Collections:(CAR) Artículos
Files in This Item:
File: Roldan_Multi-Robot_sensors-17-01720.pdf
Size: 9,58 MB
Format: Adobe PDF

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.