2019-12-16T09:37:28Z
http://digital.csic.es/dspace-oai/request
oai:digital.csic.es:10261/30568
2019-06-10T14:58:41Z
com_10261_106
com_10261_4
col_10261_359
Porta, Josep M.
Vlassis, Nikos
Spaan, Matthijs T. J.
Poupart, Pascal
2010-12-17T13:28:59Z
2010-12-17T13:28:59Z
2006
Journal of Machine Learning Research 7: 2329-2367 (2006)
1532-4435
http://hdl.handle.net/10261/30568
http://dx.doi.org/10.13039/501100000780
We propose a novel approach to optimize Partially Observable Markov Decision Processes (POMDPs) defined on continuous spaces. To date, most algorithms for model-based POMDPs are restricted to discrete states, actions, and observations, but many real-world problems, such as robot navigation, are naturally defined on continuous spaces. In this work, we demonstrate that the value function for continuous POMDPs is convex in the beliefs over continuous state spaces, and piecewise-linear convex for the particular case of discrete observations and actions but still continuous states. We also demonstrate that continuous Bellman backups are contracting and isotonic, ensuring the monotonic convergence of value-iteration algorithms. Relying on those properties, we extend the PERSEUS algorithm, originally developed for discrete POMDPs, to work in continuous state spaces by representing the observation, transition, and reward models using Gaussian mixtures, and the beliefs using Gaussian mixtures or particle sets. With these representations, the integrals that appear in the Bellman backup can be computed in closed form and, therefore, the algorithm is computationally feasible. Finally, we further extend PERSEUS to deal with continuous action and observation sets by designing effective sampling approaches.
eng
openAccess
Planning under uncertainty
Continuous state space
Continuous action space
Continuous observation space
Point-based value iteration for continuous POMDPs
article