Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/18061

Should I Trust my Teammates? An experiment in Heuristic Multiagent Reinforcement Learning

Authors: Bianchi, Reinaldo; López de Mántaras, Ramón
Keywords: Reinforcement Learning; Multiagent Learning
Publication date: 2009
Citation: IJCAI'09, W12: Grand Challenges for Reasoning from Experiences, Los Angeles, California, July 11, 2009, pp. 11-15.
Abstract: Trust and reputation are concepts that have traditionally been studied in domains such as electronic markets, e-commerce, game theory and bibliometrics, among others. More recently, researchers have started to investigate the benefits of using these concepts in multi-robot domains: when one robot has to decide whether to cooperate with another to accomplish a task, should its trust in the other be taken into account? This paper proposes using a trust model to decide when an agent may take an action that depends on other agents in its team. To implement this idea, a Heuristic Multiagent Reinforcement Learning algorithm is modified to take the trust in the other agents into account before selecting an action that depends on them. Simulations were run in a robot soccer domain that extends the well-known domain proposed by Littman by increasing its size and number of agents, and by using heterogeneous agents. The results show that the performance of a team of agents can be improved even with very simple trust models.
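The core mechanism the abstract describes — gating cooperative actions on a trust value before the usual heuristically accelerated action selection — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names (`trust`, `cooperative_actions`, `trust_threshold`) and the exact gating rule are assumptions for illustration.

```python
import random

def select_action(state, q, heuristic, actions, trust, cooperative_actions,
                  trust_threshold=0.5, epsilon=0.1):
    """Epsilon-greedy action selection over Q-values plus a heuristic bonus
    (HAQL-style). Cooperative actions are only considered when the trust in
    the teammate they depend on meets the threshold (illustrative rule)."""
    # Filter out cooperative actions whose required teammate is not trusted.
    allowed = [a for a in actions
               if a not in cooperative_actions
               or trust[cooperative_actions[a]] >= trust_threshold]
    if random.random() < epsilon:  # explore among the allowed actions
        return random.choice(allowed)
    # Exploit: greedy over Q plus the heuristic term.
    return max(allowed, key=lambda a: q[(state, a)] + heuristic[(state, a)])
```

For example, an agent holding the ball might only keep "pass to teammate b" among its candidate actions when its trust in `b` is high enough; otherwise it falls back to actions it can perform alone, regardless of how attractive the pass looks to the learned Q-function.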
Appears in collections: (IIIA) Conference communications
Files in this item:
File: WS12_IJCAI_09_Bi_RLM.pdf (111.16 kB, Adobe PDF)

NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.