Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/133222
Title: Safe robot execution in model-based reinforcement learning

Author(s): Martínez, David; Alenyà, Guillem; Torras, Carme
Publication date: 2015
Publisher: Institute of Electrical and Electronics Engineers
Citation: IROS 2015
Abstract: Task learning in robotics requires repeatedly executing the same actions in different states to learn the model of the task. However, in real-world domains there are usually sequences of actions that, if executed, may produce unrecoverable errors (e.g., breaking an object). Robots should avoid repeating such errors while learning and should therefore explore the state space more intelligently. This requires identifying dangerous action effects so that such actions are excluded from generated plans, while at the same time ensuring that the learned models are complete enough for the planner not to fall into dead-ends. We therefore propose a new learning method that allows a robot to reason about dead-ends and their causes. Some of these causes may be dangerous action effects (i.e., effects leading to unrecoverable errors if the action were executed in the given state), so the method lets the robot skip the exploration of risky actions and guarantees the safety of planned actions. If a plan might lead to a dead-end (e.g., one that includes a dangerous action effect), the robot tries to find an alternative safe plan and, if none is found, it actively asks a teacher whether the risky action should be executed. This method permits learning safe policies while minimizing unrecoverable errors during the learning process. The approach is validated experimentally in two scenarios: a robotic task and a simulated problem from the international planning competition. Our approach greatly increases success ratios in problems where previous approaches had high probabilities of failing.
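The decision rule outlined in the abstract — prefer a safe plan, and only consult the teacher when every candidate plan contains a risky action — can be sketched as follows. This is a minimal illustration, not the paper's implementation; all names (`choose_action`, `is_risky`, `ask_teacher`) are hypothetical.

```python
from typing import Callable, List, Optional

def choose_action(
    candidate_plans: List[List[str]],
    is_risky: Callable[[str], bool],
    ask_teacher: Callable[[str], bool],
) -> Optional[List[str]]:
    """Pick a plan to execute, following the abstract's safety rule.

    Hypothetical sketch: prefer plans with no risky actions; if none
    exists, ask the teacher whether a risky action may be executed.
    """
    # First, prefer any plan whose actions are all considered safe.
    for plan in candidate_plans:
        if not any(is_risky(a) for a in plan):
            return plan
    # No safe alternative exists: consult the teacher about the first
    # risky action in each remaining plan.
    for plan in candidate_plans:
        risky_action = next(a for a in plan if is_risky(a))
        if ask_teacher(risky_action):
            return plan
    # Teacher vetoed every risky plan: treat the state as a dead-end.
    return None

# Toy usage: "drop" is flagged as risky, so the safe plan is chosen
# without consulting the teacher.
plans = [["push", "drop"], ["push", "place"]]
print(choose_action(plans, lambda a: a == "drop", lambda a: False))
```

In the paper's setting the risk labels themselves are learned from observed dead-ends rather than given, but the control flow above captures the fallback order described in the abstract.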
Description: Paper presented at the International Conference on Intelligent Robots and Systems, held in Hamburg (Germany), 28 September – 2 October 2015.
Publisher's version: http://dx.doi.org/10.1109/IROS.2015.7354295
URI: http://hdl.handle.net/10261/133222
DOI: 10.1109/IROS.2015.7354295
Appears in collections: (IRII) Comunicaciones congresos




Files in this item:
File: reinforcement learning.pdf (694.99 kB)

Page view(s): 236 (checked on 23-Apr-2024)
Download(s): 487 (checked on 23-Apr-2024)

NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.