Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/30550

DC Field | Value | Language
dc.contributor.author | Porta, Josep M. | -
dc.contributor.author | Celaya, Enric | -
dc.date.accessioned | 2010-12-17T13:20:07Z | -
dc.date.available | 2010-12-17T13:20:07Z | -
dc.date.issued | 2005 | -
dc.identifier.citation | Journal of Artificial Intelligence Research 23: 79-122 (2005) | -
dc.identifier.issn | 1076-9757 | -
dc.identifier.uri | http://hdl.handle.net/10261/30550 | -
dc.description.abstract | In this paper, we confront the problem of applying reinforcement learning to agents that perceive the environment through many sensors and that can perform parallel actions using many actuators, as is the case in complex autonomous robots. We argue that reinforcement learning can only be successfully applied to this case if strong assumptions are made on the characteristics of the environment in which the learning is performed, so that the relevant sensor readings and motor commands can be readily identified. The introduction of such assumptions leads to strongly biased learning systems that can eventually lose the generality of traditional reinforcement-learning algorithms. In this line, we observe that, in realistic situations, the reward received by the robot depends only on a reduced subset of all the executed actions, and that only a reduced subset of the sensor inputs (possibly different in each situation and for each action) are relevant to predict the reward. We formalize this property in the so-called 'categorizability assumption' and we present an algorithm that takes advantage of the categorizability of the environment, allowing a decrease in the learning time with respect to existing reinforcement-learning algorithms. Results of the application of the algorithm to a couple of simulated realistic robotic problems (landmark-based navigation and six-legged robot gait generation) are reported to validate our approach and to compare it to existing flat and generalization-based reinforcement-learning approaches. | -
dc.description.sponsorship | This work was supported by the project 'Sistema reconfigurable para la navegación basada en visión de robots caminantes y rodantes en entornos naturales.'(00). The second author has been partially supported by the Spanish Ministerio de Ciencia y Tecnología and FEDER funds, under the project DPI2003-05193-C02-01 of the Plan Nacional de I+D+I. | -
dc.language.iso | eng | -
dc.publisher | Association for the Advancement of Artificial Intelligence | -
dc.relation.isversionof | Publisher's version | -
dc.rights | openAccess | -
dc.title | Reinforcement learning for agents with many sensors and actuators acting in categorizable environments | -
dc.type | article | -
dc.description.peerreviewed | Peer Reviewed | -
dc.relation.publisherversion | https://www.jair.org/papers/paper1437.html | -
dc.type.coar | http://purl.org/coar/resource_type/c_6501 | es_ES
item.languageiso639-1 | en | -
item.fulltext | With Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
item.grantfulltext | open | -
item.openairetype | article | -
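The abstract's 'categorizability assumption' (the reward depends only on a small subset of the many sensor inputs) can be illustrated with a toy sketch. This is not the paper's algorithm; all names and the naive flip-based relevance test below are hypothetical, chosen only to make the idea concrete:

```python
import random

# Toy illustration of the categorizability assumption: an agent with
# many binary sensors whose reward actually depends on only two of them.
NUM_SENSORS = 10

def reward(sensors):
    # Hypothetical reward: ignores 8 of the 10 sensor readings.
    return 1.0 if sensors[2] == 1 and sensors[7] == 0 else 0.0

def find_relevant_sensors(num_samples=2000, seed=0):
    """Naive relevance test (not the paper's method): a sensor is
    flagged relevant if flipping it changes the reward for some
    randomly sampled sensor vector."""
    rng = random.Random(seed)
    relevant = set()
    for _ in range(num_samples):
        s = [rng.randint(0, 1) for _ in range(NUM_SENSORS)]
        base = reward(s)
        for i in range(NUM_SENSORS):
            flipped = list(s)
            flipped[i] ^= 1  # toggle one sensor reading
            if reward(flipped) != base:
                relevant.add(i)
    return sorted(relevant)

print(find_relevant_sensors())  # only sensors 2 and 7 are detected
```

A learner that restricts its attention to the detected subset searches a space of 2^2 sensor configurations instead of 2^10, which is the kind of learning-time reduction the abstract claims for categorizable environments.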
Appears in collections: (IRII) Artículos
Files in this item:
File | Description | Size | Format
Reinforcement learning.pdf | | 2,2 MB | Adobe PDF


Page view(s): 352 (checked on 24-Apr-2024)
Download(s): 210 (checked on 24-Apr-2024)



NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.