Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/30078
DC Field: Value (Language)
dc.contributor.author: Agostini, Alejandro
dc.contributor.author: Celaya, Enric
dc.date.accessioned: 2010-12-15T13:40:37Z
dc.date.available: 2010-12-15T13:40:37Z
dc.date.issued: 2006
dc.identifier.citation: IRI-TR-06-01 (2006)
dc.identifier.uri: http://hdl.handle.net/10261/30078
dc.description.abstract: A Reinforcement Learning problem is formulated as trying to find the action policy that maximizes the accumulated reward received by the agent through time. One of the most popular algorithms used in RL is Q-Learning, which uses an action-value function q(s,a) to evaluate the expectation of the maximum future cumulative reward obtained from executing action a in situation s. Q-Learning, like conventional RL techniques in general, is defined for discrete environments with a finite set of states and actions. The action-value function is explicitly represented by storing a value for each state-action (s,a) pair. In order to reach a good approximation of the value function, all the (s,a) pairs must be experienced many times, but in practical applications the amount of experience required for learning to take place is unfeasible. Therefore, the value function must be generalized to make inferences in situations never experienced before. The generalization problem has been widely treated in the field of machine learning. Supervised learning directly addresses this issue, and many generalization techniques have been developed in that field. Any of the representations used in supervised learning could, in principle, be applied to RL. But there are some important issues to take into account that make good generalization in RL very hard to achieve. One of the most remarkable is that the value function is learned while it is being represented. In this work we propose an RL approach that uses a new representation of the Q function that allows good generalization by capturing function regularities in decision rules. The representation is a kind of Decision List where each rule covers a subspace of the state-action space and provides an approximation of the Q function in its covered region. Rule selection for action evaluation is given by the rule with both good accuracy in the estimation and high confidence in the associated statistics. (For illustration, a minimal tabular Q-Learning sketch follows the metadata listing below.)
dc.language.iso: eng
dc.publisher: CSIC-UPC - Instituto de Robótica e Informática Industrial (IRII)
dc.relation.isversionof: Publisher's version
dc.rights: openAccess
dc.subject: Reinforcement learning
dc.subject: Generalization
dc.subject: Categorization
dc.subject: Decision list
dc.subject: Automatic theorem proving
dc.subject: Intelligent robots and autonomous agents
dc.subject: Machine learning
dc.title: Generalization in reinforcement learning with a task-related world description using rules
dc.type: technical report
dc.relation.publisherversion: http://www.iri.upc.edu/publications/show/811
dc.relation.csic:
dc.type.coar: http://purl.org/coar/resource_type/c_18gh (es_ES)
item.cerifentitytype: Publications
item.grantfulltext: open
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.fulltext: With Fulltext
item.languageiso639-1: en
item.openairetype: technical report
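
The abstract above refers to Q-Learning and its action-value function q(s,a). The following is a minimal sketch, in Python, of the standard tabular Q-Learning update the abstract describes; it is illustrative only and does not implement the rule-based Decision List representation proposed in the report. The environment interface (env.reset(), env.step()), the actions list, and the hyperparameter values are assumptions made for the example.

import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-Learning sketch: one stored value per (state, action) pair."""
    q = defaultdict(float)  # q[(s, a)] estimates the max future cumulative reward

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy selection over the stored action values.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])

            s_next, r, done = env.step(a)

            # Q-Learning update: move q(s,a) toward r + gamma * max_a' q(s',a').
            best_next = 0.0 if done else max(q[(s_next, act)] for act in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s_next

    return q

Because every (s,a) pair must be stored and visited many times, this exhaustive tabular representation is exactly what motivates the generalization into decision rules discussed in the abstract.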
Appears in Collections: (IRII) Informes y documentos de trabajo
Files in This Item:
File: Generalization in reinforcement.pdf | Size: 628,19 kB | Format: Adobe PDF

NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.