Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/30126


Exploiting domain symmetries in reinforcement learning with continuous state and action spaces

Authors: Celaya, Enric; Agostini, Alejandro
Keywords: Domain symmetries; Reinforcement learning
Issue Date: 2009
Publisher: Institute of Electrical and Electronics Engineers
Citation: International Conference on Machine Learning and Applications: 331-336 (2009)
Abstract: A central problem in Reinforcement Learning is how to deal with large state and action spaces. When the problem domain presents intrinsic symmetries, exploiting them can be key to achieving good performance. We analyze the gains that can effectively be achieved by exploiting different kinds of symmetries, and the effect of combining them, in a test case: the stand-up and stabilization of an inverted pendulum.
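As a rough illustration of the general idea (a sketch, not the paper's actual implementation), the inverted pendulum's dynamics are invariant under the reflection (theta, theta_dot, a) -> (-theta, -theta_dot, -a), so the value function satisfies Q(s, a) = Q(-s, -a). One simple way to exploit this is to store values only for a canonical representative of each symmetric pair, effectively halving the state-action space. The `canonicalize` helper and `SymmetricQ` class below are hypothetical names introduced for this sketch:

```python
from collections import defaultdict

def canonicalize(state, action):
    """Map a (state, action) pair to its canonical representative under the
    reflection symmetry (theta, theta_dot, a) -> (-theta, -theta_dot, -a).
    We pick the half-space theta > 0 (breaking ties on theta_dot)."""
    theta, theta_dot = state
    if theta < 0 or (theta == 0 and theta_dot < 0):
        return (-theta, -theta_dot), -action
    return (theta, theta_dot), action

class SymmetricQ:
    """Q-table that keys entries on canonical pairs only, so every update
    to a pair automatically transfers to its mirror image."""
    def __init__(self):
        self.table = defaultdict(float)

    def get(self, state, action):
        return self.table[canonicalize(state, action)]

    def update(self, state, action, target, alpha=0.5):
        # Standard incremental update toward the TD target, applied to
        # the shared canonical entry.
        key = canonicalize(state, action)
        self.table[key] += alpha * (target - self.table[key])

q = SymmetricQ()
q.update((0.3, -1.0), +1, target=2.0)
# The mirrored pair ((-0.3, 1.0), -1) reads the same entry:
print(q.get((-0.3, 1.0), -1))  # prints 1.0 after one update with alpha=0.5
```

With a continuous function approximator (as in the paper's setting) the same trick applies by canonicalizing each sample before it is fed to the approximator, so experience gathered on one side of the symmetry also trains the other.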
Description: Presented at ICMLA'09, held in Miami (USA), December 13-15.
Publisher version (URL): http://dx.doi.org/10.1109/ICMLA.2009.41
Appears in Collections: (IRII) Libros y partes de libros
Files in This Item:
File: Exploiting domain symmetries.pdf (190,01 kB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.