Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/30078
Title: Generalization in reinforcement learning with a task-related world description using rules
Authors: Agostini, Alejandro; Celaya, Enric
Keywords: Reinforcement learning; Generalization; Categorization; Decision list; Automatic theorem proving; Intelligent robots and autonomous agents; Machine learning
Issue Date: 2006
Publisher: CSIC-UPC - Instituto de Robótica e Informática Industrial (IRII)
Citation: IRI-TR-06-01 (2006)
Abstract: A Reinforcement Learning (RL) problem is formulated as finding the action policy that maximizes the reward accumulated by the agent over time. One of the most popular RL algorithms is Q-Learning, which uses an action-value function Q(s,a) to estimate the maximum expected cumulative future reward obtained from executing action a in situation s. Q-Learning, like conventional RL techniques in general, is defined for discrete environments with finite sets of states and actions. The action-value function is represented explicitly by storing a value for each state-action pair (s,a). To reach a good approximation of the value function, every (s,a) pair must be experienced many times, but in practical applications the amount of experience this requires is unfeasible to obtain. Therefore, the value function must be generalized so that it can be inferred in situations never experienced before. The generalization problem has been widely treated in the field of machine learning. Supervised learning addresses this issue directly, and many generalization techniques have been developed in that field. Any of the representations used in supervised learning could, in principle, be applied to RL, but there are important issues that make good generalization in RL very hard to achieve. One of the most remarkable is that the value function is learned while it is being represented. In this work we propose an RL approach that uses a new representation of the Q function that achieves good generalization by capturing function regularities in decision rules. The representation is a kind of Decision List in which each rule defines a subspace of the state-action space and provides an approximation of the Q function in its covered region. Rule selection for action evaluation chooses the rule with both good accuracy in the estimation and high confidence in the associated statistics.
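The report itself is not reproduced on this page, but as a rough illustration of the idea sketched in the abstract, the following Python sketch shows one way a decision-list representation of the Q function could be organized. All names (Rule, DecisionListQ, q_estimate, min_samples) are assumptions for illustration, not the authors' implementation; the accuracy/confidence criterion described in the abstract is approximated here by sample variance (accuracy) and sample count (confidence), and the learning target is the standard Q-Learning target r + gamma * max_a' Q(s',a').

from dataclasses import dataclass

@dataclass
class Rule:
    """One rule of the decision list: it covers a subspace of the
    state-action space and keeps running statistics of the Q value there."""
    condition: callable      # predicate over (state, action) defining the covered subspace
    q_sum: float = 0.0       # accumulated sampled targets
    q_sq_sum: float = 0.0    # accumulated squared targets (for variance)
    n: int = 0               # number of samples seen

    def covers(self, state, action):
        return self.condition(state, action)

    def q_estimate(self):
        return self.q_sum / self.n if self.n else 0.0

    def variance(self):
        if self.n < 2:
            return float("inf")
        mean = self.q_estimate()
        return self.q_sq_sum / self.n - mean * mean

    def update(self, target):
        self.q_sum += target
        self.q_sq_sum += target * target
        self.n += 1


class DecisionListQ:
    """Hypothetical Q-function approximator: a list of rules, where Q(s,a)
    is taken from the covering rule with the lowest estimated variance
    among those with enough samples to be considered confident."""

    def __init__(self, rules, min_samples=10):
        self.rules = rules
        self.min_samples = min_samples

    def q(self, state, action):
        covering = [r for r in self.rules if r.covers(state, action)]
        if not covering:
            return 0.0
        confident = [r for r in covering if r.n >= self.min_samples] or covering
        best = min(confident, key=lambda r: r.variance())
        return best.q_estimate()

    def learn(self, state, action, reward, next_state, actions, gamma=0.9):
        # Q-Learning style target: r + gamma * max over a' of Q(s', a')
        target = reward + gamma * max(self.q(next_state, a) for a in actions)
        # Every rule covering (s, a) refines its local estimate with this target.
        for r in self.rules:
            if r.covers(state, action):
                r.update(target)


# Toy usage: two rules splitting a 1-D state space, binary action set.
rules = [Rule(lambda s, a: s < 0.5), Rule(lambda s, a: s >= 0.5)]
approx = DecisionListQ(rules)
approx.learn(state=0.2, action=0, reward=1.0, next_state=0.7, actions=[0, 1])

In this sketch each rule covers a fixed region; in the approach described by the abstract, the rules themselves configure the subspaces and are selected by trading off estimation accuracy against statistical confidence, which the variance/count proxy above only approximates.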
Publisher version (URL): http://www.iri.upc.edu/publications/show/811
URI: http://hdl.handle.net/10261/30078
Appears in Collections: (IRII) Informes y documentos de trabajo
Files in This Item:
File: Generalization in reinforcement.pdf (628,19 kB, Adobe PDF)


WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.