Please use this identifier to cite or link to this item:
http://hdl.handle.net/10261/131930
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zaidi, Nayyar A. | - |
dc.contributor.author | Carman, Mark J. | - |
dc.contributor.author | Cerquides, Jesús | - |
dc.contributor.author | Webb, Geoffrey I. | - |
dc.date.accessioned | 2016-05-10T11:27:43Z | - |
dc.date.available | 2016-05-10T11:27:43Z | - |
dc.date.issued | 2014-12-14 | - |
dc.identifier | doi: 10.1109/ICDM.2014.53 | - |
dc.identifier | issn: 1550-4786 | - |
dc.identifier | isbn: 978-1-4799-4303-6 | - |
dc.identifier.citation | IEEE International Conference on Data Mining, ICDM, 14th IEEE International Conference on Data Mining, ICDM 2014; Shenzhen; China; 14 December 2014 through 17 December 2014. Proceedings. pp. 1097-1101. | - |
dc.identifier.uri | http://hdl.handle.net/10261/131930 | - |
dc.description.abstract | We propose an alternative parameterization of Logistic Regression (LR) for the categorical data, multi-class setting. LR optimizes the conditional log-likelihood over the training data and is based on an iterative optimization procedure to tune this objective function. The optimization procedure employed may be sensitive to scale and hence an effective pre-conditioning method is recommended. Many problems in machine learning involve arbitrary scales or categorical data (where simple standardization of features is not applicable). The problem can be alleviated by using optimization routines that are invariant to scale such as (second-order) Newton methods. However, computing and inverting the Hessian is a costly procedure and not feasible for big data. Thus one must often rely on first-order methods such as gradient descent (GD), stochastic gradient descent (SGD) or approximate second-order such as quasi-Newton (QN) routines, which are not invariant to scale. This paper proposes a simple yet effective pre-conditioner for speeding-up LR based on naive Bayes conditional probability estimates. The idea is to scale each attribute by the log of the conditional probability of that attribute given the class. This formulation substantially speeds-up LR's convergence. It also provides a weighted naive Bayes formulation which yields an effective framework for hybrid generative-discriminative classification. © 2014 IEEE. | - |
dc.description.sponsorship | This research has been supported by the Australian Research Council (ARC) under grant DP140100087 and Asian Office of Aerospace Research and Development, Air Force Office of Scientific Research under contract FA23861214030. | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.rights | closedAccess | - |
dc.subject | Stochastic gradient descent | - |
dc.subject | Pre-conditioning | - |
dc.subject | Logistic regression | - |
dc.subject | Discriminative-generative learning | - |
dc.subject | Classification | - |
dc.subject | Weighted naive bayes | - |
dc.title | Naive-Bayes Inspired Effective Pre-Conditioner for Speeding-Up Logistic Regression | - |
dc.type | conference paper | - |
dc.identifier.doi | 10.1109/ICDM.2014.53 | - |
dc.date.updated | 2016-05-10T11:27:43Z | - |
dc.description.version | Peer Reviewed | - |
dc.language.rfc3066 | eng | - |
dc.relation.csic | Yes | - |
dc.type.coar | http://purl.org/coar/resource_type/c_5794 | es_ES |
item.fulltext | No Fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.cerifentitytype | Publications | - |
item.grantfulltext | none | - |
item.openairetype | conference paper | - |
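The abstract describes the pre-conditioning idea: scale each categorical attribute by the log of its naive Bayes conditional probability given the class, so that a first-order optimizer for logistic regression sees well-scaled inputs. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation; the function names, the Laplace smoothing, and the per-class feature layout are my own assumptions.

```python
import numpy as np

def nb_log_cond_probs(X, y, n_vals, n_classes, alpha=1.0):
    """Laplace-smoothed log P(x_i = v | y = c) for each attribute i.

    X: integer-coded categorical data, shape (n_examples, n_attrs).
    Returns an array of shape (n_classes, n_attrs, n_vals).
    """
    n_attrs = X.shape[1]
    logp = np.zeros((n_classes, n_attrs, n_vals))
    for c in range(n_classes):
        Xc = X[y == c]
        for i in range(n_attrs):
            counts = np.bincount(Xc[:, i], minlength=n_vals).astype(float)
            # Smoothed conditional probabilities sum to 1 over values.
            logp[c, i] = np.log((counts + alpha) / (counts.sum() + alpha * n_vals))
    return logp

def precondition(X, logp):
    """Map categorical X (n, d) to per-class scaled features (n, C, d).

    Feature (c, i) of example n is log P(x_{n,i} | class c); feeding these
    values (instead of raw one-hot indicators) to a gradient-based LR
    optimizer is the pre-conditioning step the abstract describes.
    """
    n, d = X.shape
    C = logp.shape[0]
    out = np.empty((n, C, d))
    for c in range(C):
        # logp[c] has shape (d, n_vals); pick the entry for each observed value.
        out[:, c, :] = np.take_along_axis(logp[c], X.T, axis=1).T
    return out
```

With weights fixed at 1 on these features, the model reduces to naive Bayes; letting LR tune a weight per feature recovers the weighted-naive-Bayes view of the hybrid generative-discriminative framework mentioned in the abstract.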
Appears in collections: | (IIIA) Conference communications |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
accesoRestringido.pdf | | 15.38 kB | Adobe PDF | View/Open |
Page view(s): 354 (checked on 24-Apr-2024)
Download(s): 104 (checked on 24-Apr-2024)
NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.