2019-08-18T02:34:56Z
https://digital.csic.es/dspace-oai/request
oai:digital.csic.es:10261/96699
2019-06-10T14:44:59Z
com_10261_106
com_10261_4
col_10261_1241
A competitive strategy for function approximation in Q-learning
Agostini, Alejandro
Celaya, Enric
Paper presented at the 22nd IJCAI, held in Barcelona, July 16-22, 2011.
In this work we propose an approach for generalization in continuous-domain Reinforcement Learning that, instead of using a single function approximator, tries many different function approximators in parallel, each one defined in a different region of the domain. Associated with each approximator is a relevance function that locally quantifies the quality of its approximation, so that, at each input point, the approximator with the highest relevance can be selected. The relevance function is defined using parametric estimations of the variance of the q-values and of the density of samples in the input space, which quantify the accuracy of and the confidence in the approximation, respectively. These parametric estimations are obtained from a probability density distribution represented as a Gaussian Mixture Model embedded in the input-output space of each approximator. In our experiments, the proposed approach required fewer experiences for learning and produced more stable convergence profiles than a single function approximator.
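The selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the 1-D Gaussian density, and the particular way density and q-value variance are combined into a relevance score are all assumptions made for the example.

```python
import math

class LocalApproximator:
    """Hypothetical local approximator covering one region of the input space.

    In the paper, accuracy and confidence estimates come from a Gaussian
    Mixture Model in each approximator's input-output space; here they are
    reduced to a single 1-D Gaussian sample density and a scalar q-variance.
    """

    def __init__(self, center, width, q_variance):
        self.center = center          # center of the region this approximator covers
        self.width = width            # spread of the observed-sample density
        self.q_variance = q_variance  # estimated variance of the q-values

    def density(self, x):
        # Gaussian density of observed samples around the region center,
        # used as a confidence measure (more samples -> more confidence).
        z = (x - self.center) / self.width
        return math.exp(-0.5 * z * z) / (self.width * math.sqrt(2.0 * math.pi))

    def relevance(self, x):
        # Higher sample density (confidence) and lower q-value variance
        # (accuracy) yield higher relevance; this exact ratio is an
        # assumption for illustration only.
        return self.density(x) / (self.q_variance + 1e-8)

def select_approximator(approximators, x):
    # At each input point, pick the competing approximator whose
    # relevance is highest, as the abstract describes.
    return max(approximators, key=lambda a: a.relevance(x))

approxs = [LocalApproximator(center=0.0, width=1.0, q_variance=0.5),
           LocalApproximator(center=2.0, width=0.5, q_variance=0.1)]
best = select_approximator(approxs, 1.8)  # query point near the second region
```

At `x = 1.8` the second approximator wins: its sample density there is higher and its q-value variance is lower, so its relevance dominates.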
2014-05-14T11:08:20Z
2014-05-14T11:08:20Z
2011
2014-05-14T11:08:20Z
Conference paper
International Joint Conference on Artificial Intelligence 2: 1146-1151 (2011)
http://hdl.handle.net/10261/96699
eng
Postprint
http://ijcai.org/papers11/contents.php
openAccess
AAAI Press