Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/130599


Collaborative Judgement

Authors: Andrejczuk, Ewa; Rodríguez-Aguilar, Juan Antonio; Sierra, Carles
Keywords: Ranking algorithm; Self assessment; Object rankings; Network security; Data privacy
Issue Date: 26-Oct-2015
Citation: Lecture Notes in Computer Science, 18th International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2015), Bertinoro, Italy, 26-30 October 2015; vol. 9387: 631-639, 2015
Abstract: In this paper we introduce a new ranking algorithm, called Collaborative Judgement (CJ), that takes into account peer opinions of agents and/or humans on objects (e.g. products, exams, papers) as well as peer judgements over those opinions. Previous work on producing object rankings has not studied the combination of these two types of information. We apply CJ to the use case of scientific paper assessment and validate it over simulated data. The results show that the rankings produced by our algorithm improve on current scientific paper ranking practice, which is based on averages of opinions weighted by the reviewers' self-assessments. © Springer International Publishing Switzerland 2015.
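The baseline the abstract compares against — ranking papers by the average of their review scores, each opinion weighted by its reviewer's self-assessed confidence — can be sketched briefly. This is a minimal illustration of that baseline only (the CJ algorithm itself is not specified here); all names, the score scale, and the example data are illustrative assumptions, not taken from the paper.

```python
# Baseline practice from the abstract: rank objects by the self-assessment-
# weighted average of their review scores. Data layout and names are
# hypothetical, chosen only to illustrate the weighting.

def self_weighted_ranking(reviews):
    """reviews: {object_id: [(score, self_assessment), ...]}
    Returns object ids sorted from best to worst weighted average."""
    averages = {}
    for obj, opinions in reviews.items():
        total_weight = sum(w for _, w in opinions)
        averages[obj] = sum(s * w for s, w in opinions) / total_weight
    # Higher weighted average ranks first.
    return sorted(averages, key=averages.get, reverse=True)

# Two papers, each with (review score, reviewer self-assessment) pairs.
papers = {
    "paper_A": [(7, 0.9), (5, 0.4)],
    "paper_B": [(6, 0.8), (8, 0.6)],
}
print(self_weighted_ranking(papers))  # -> ['paper_B', 'paper_A']
```

CJ's contribution, per the abstract, is to additionally weigh each opinion by peer judgements over that opinion rather than by the reviewer's self-assessment alone.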
Identifiers: DOI: 10.1007/978-3-319-25524-8_46; ISSN: 0302-9743; ISBN: 978-3-319-25523-1
Appears in Collections: (IIIA) Conference communications
Files in This Item:
File: accesoRestringido.pdf (15.38 kB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.