Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/131930


Naive-Bayes Inspired Effective Pre-Conditioner for Speeding-Up Logistic Regression

Authors: Zaidi, Nayyar A.; Carman, Mark J.; Cerquides, Jesús; Webb, Geoffrey I.
Keywords: Stochastic gradient descent; Logistic regression; Discriminative-generative learning; Weighted naive Bayes
Publication date: 14-Dec-2014
Publisher: Institute of Electrical and Electronics Engineers
Citation: 14th IEEE International Conference on Data Mining (ICDM 2014), Shenzhen, China, 14-17 December 2014. Proceedings, pp. 1097-1101.
Abstract: We propose an alternative parameterization of Logistic Regression (LR) for the multi-class, categorical-data setting. LR optimizes the conditional log-likelihood over the training data via an iterative optimization procedure. That procedure may be sensitive to the scale of the features, so an effective pre-conditioning method is recommended. Many problems in machine learning involve arbitrary scales or categorical data, where simple standardization of features is not applicable. The problem can be alleviated by using optimization routines that are invariant to scale, such as (second-order) Newton methods. However, computing and inverting the Hessian is costly and not feasible for big data. Thus one must often rely on first-order methods such as gradient descent (GD) and stochastic gradient descent (SGD), or approximate second-order methods such as quasi-Newton (QN) routines, none of which are invariant to scale. This paper proposes a simple yet effective pre-conditioner for speeding up LR based on naive Bayes conditional probability estimates. The idea is to scale each attribute by the log of the conditional probability of that attribute given the class. This formulation substantially speeds up LR's convergence. It also provides a weighted naive Bayes formulation which yields an effective framework for hybrid generative-discriminative classification. © 2014 IEEE.
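The core idea from the abstract can be sketched in a few lines: estimate naive Bayes log-conditional probabilities from counts, use log P(x_i | y) as per-class features, and initialize the per-attribute weights to one so the starting point is exactly naive Bayes before gradient ascent on the conditional log-likelihood tunes the weights. The tiny dataset, learning rate, and iteration count below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical tiny categorical dataset: 2 attributes, 2 classes.
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0], [0, 1], [1, 1]])
y = np.array([0, 0, 1, 1, 0, 1])
n_classes, n_attrs = 2, X.shape[1]
n_vals = X.max(axis=0) + 1  # number of values per attribute

# Laplace-smoothed naive Bayes estimates: log P(y=c) and log P(x_i=v | y=c).
log_prior = np.log((np.bincount(y, minlength=n_classes) + 1)
                   / (len(y) + n_classes))
log_cond = []  # log_cond[i][v, c] = log P(x_i = v | y = c)
for i in range(n_attrs):
    counts = np.ones((n_vals[i], n_classes))  # Laplace smoothing
    for v, c in zip(X[:, i], y):
        counts[v, c] += 1
    log_cond.append(np.log(counts / counts.sum(axis=0)))

def scores(X, w):
    """Class scores: log P(c) + sum_i w[c, i] * log P(x_i | c).

    With w = 1 everywhere this is exactly naive Bayes; the log-conditional
    probabilities act as the scale-correcting pre-conditioner for LR."""
    s = np.tile(log_prior, (len(X), 1))
    for i in range(n_attrs):
        s += w[:, i] * log_cond[i][X[:, i], :]
    return s

def softmax(s):
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Start from w = 1 (exactly naive Bayes) and take full-batch gradient
# ascent steps on the conditional log-likelihood.
w = np.ones((n_classes, n_attrs))
onehot = np.eye(n_classes)[y]
for _ in range(200):
    p = softmax(scores(X, w))
    for i in range(n_attrs):
        w[:, i] += 0.1 * ((onehot - p) * log_cond[i][X[:, i], :]).sum(axis=0)
```

Setting all weights to one recovers naive Bayes exactly, so the optimizer begins from a sensible generative model and moves toward the discriminative LR optimum; this is the weighted naive Bayes reading of the parameterization that the abstract mentions.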
Identifiers:
doi: 10.1109/ICDM.2014.53
issn: 1550-4786
isbn: 978-1-4799-4303-6
Appears in collections: (IIIA) Conference communications
Files in this item:
File: accesoRestringido.pdf  Size: 15.38 kB  Format: Adobe PDF

Related articles:

NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.