Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/155293

Managing different sources of uncertainty in a BDI framework in a principled way with tractable fragments

Authors: Bauters, Kim; McAreavey, Kevin; Liu, Weiru; Hong, Jun; Godo, Lluis; Sierra, Carles
Keywords: Intelligent agents; Intelligent systems
Issue Date: 2017
Publisher: AI Access Foundation
Citation: Journal of Artificial Intelligence Research 58: 731-775 (2017)
Abstract: The Belief-Desire-Intention (BDI) architecture is a practical approach for modelling large-scale intelligent systems. In the BDI setting, a complex system is represented as a network of interacting agents, or components, each one modelled based on its beliefs, desires and intentions. However, current BDI implementations are not well-suited for modelling more realistic intelligent systems which operate in environments pervaded by different types of uncertainty. Furthermore, existing approaches for dealing with uncertainty typically do not offer syntactical or tractable ways of reasoning about uncertainty. This complicates their integration with BDI implementations, which rely heavily on fast and reactive decisions. In this paper, we advance the state of the art in handling different types of uncertainty in BDI agents. The contributions of this paper are, first, a new way of modelling the beliefs of an agent as a set of epistemic states. Each epistemic state can use a distinct underlying uncertainty theory and revision strategy, and commensurability between epistemic states is achieved through a stratification approach. Second, we present a novel syntactic approach to revising beliefs given unreliable input. We prove that this syntactic approach agrees with the semantic definition, and we identify expressive fragments that are particularly useful for resource-bounded agents. Third, we introduce full operational semantics that extend Can, a popular semantics for BDI, to establish how reasoning about uncertainty can be tightly integrated into the BDI framework. Fourth, we provide comprehensive experimental results to highlight the usefulness and feasibility of our approach, and explain how the generic epistemic state can be instantiated into various representations. © 2017 AI Access Foundation.
Identifiers:
DOI: 10.1613/jair.5287
ISSN: 1076-9757
URI: http://jair.org/papers/paper5287.html
Appears in Collections:(IIIA) Artículos
Files in This Item:
File: JAIR(2017)58_731-75.pdf (576,23 kB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.