Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/131930
DC Field | Value | Language
dc.contributor.author | Zaidi, Nayyar A. | -
dc.contributor.author | Carman, Mark J. | -
dc.contributor.author | Cerquides, Jesús | -
dc.contributor.author | Webb, Geoffrey I. | -
dc.date.accessioned | 2016-05-10T11:27:43Z | -
dc.date.available | 2016-05-10T11:27:43Z | -
dc.date.issued | 2014-12-14 | -
dc.identifier | doi: 10.1109/ICDM.2014.53 | -
dc.identifier | issn: 1550-4786 | -
dc.identifier | isbn: 978-1-4799-4303-6 | -
dc.identifier.citation | 14th IEEE International Conference on Data Mining (ICDM 2014), Shenzhen, China, 14-17 December 2014. Proceedings, pp. 1097-1101. | -
dc.identifier.uri | http://hdl.handle.net/10261/131930 | -
dc.description.abstract | We propose an alternative parameterization of Logistic Regression (LR) for the multi-class, categorical-data setting. LR optimizes the conditional log-likelihood over the training data and relies on an iterative optimization procedure to tune this objective function. The optimization procedure employed may be sensitive to scale, so an effective pre-conditioning method is recommended. Many problems in machine learning involve arbitrary scales or categorical data (where simple standardization of features is not applicable). The problem can be alleviated by using optimization routines that are invariant to scale, such as (second-order) Newton methods. However, computing and inverting the Hessian is costly and not feasible for big data. Thus one must often rely on first-order methods such as gradient descent (GD) and stochastic gradient descent (SGD), or approximate second-order methods such as quasi-Newton (QN) routines, none of which are invariant to scale. This paper proposes a simple yet effective pre-conditioner for speeding up LR based on naive Bayes conditional probability estimates. The idea is to scale each attribute by the log of the conditional probability of that attribute given the class. This formulation substantially speeds up LR's convergence. It also provides a weighted naive Bayes formulation, which yields an effective framework for hybrid generative-discriminative classification. © 2014 IEEE. [A minimal illustrative sketch of this pre-conditioner follows the metadata table.] | -
dc.description.sponsorship | This research has been supported by the Australian Research Council (ARC) under grant DP140100087 and by the Asian Office of Aerospace Research and Development, Air Force Office of Scientific Research, under contract FA23861214030. | -
dc.publisher | Institute of Electrical and Electronics Engineers | -
dc.rights | closedAccess | -
dc.subject | Stochastic gradient descent | -
dc.subject | Pre-conditioning | -
dc.subject | Logistic regression | -
dc.subject | Discriminative-generative learning | -
dc.subject | Classification | -
dc.subject | Weighted naive Bayes | -
dc.title | Naive-Bayes Inspired Effective Pre-Conditioner for Speeding-Up Logistic Regression | -
dc.type | conference paper | -
dc.identifier.doi | 10.1109/ICDM.2014.53 | -
dc.date.updated | 2016-05-10T11:27:43Z | -
dc.description.version | Peer Reviewed | -
dc.language.rfc3066 | eng | -
dc.relation.csic | | -
dc.type.coar | http://purl.org/coar/resource_type/c_5794 | es_ES
item.fulltext | No Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
item.openairetype | conference paper | -
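
Illustrative sketch of the pre-conditioner described in the abstract: each categorical attribute value is replaced by the log of its naive-Bayes class-conditional probability, so LR weights act on log P(x_j | c) features, and setting all weights to 1 recovers naive Bayes itself (the weighted naive Bayes view). The following minimal NumPy sketch assumes Laplace-smoothed estimates, toy categorical data, and plain batch gradient descent; it illustrates the idea and is not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy categorical data (illustrative, not from the paper): n instances,
    # d attributes, each taking values in {0, ..., n_vals - 1}.
    n, d, n_vals, n_classes = 200, 4, 3, 2
    X = rng.integers(0, n_vals, size=(n, d))
    y = (X.sum(axis=1) + rng.integers(0, 2, size=n)) % n_classes

    # Naive Bayes estimates with Laplace smoothing: log P(c), log P(x_j = v | c).
    class_counts = np.bincount(y, minlength=n_classes).astype(float)
    log_prior = np.log((class_counts + 1.0) / (n + n_classes))

    cond = np.ones((d, n_classes, n_vals))              # Laplace pseudo-counts
    for j in range(d):
        for c in range(n_classes):
            cond[j, c] += np.bincount(X[y == c, j], minlength=n_vals)
    log_cond = np.log(cond / cond.sum(axis=2, keepdims=True))

    # Pre-conditioned features: Z[i, c, j] = log P(x_ij | c). LR weights act on
    # these log-conditionals instead of raw, arbitrarily scaled indicators.
    Z = np.stack([log_cond[j][:, X[:, j]].T for j in range(d)], axis=2)

    # Weighted naive Bayes = multi-class LR over the log-conditional features.
    # With w == 1 and b == 1 the scores are exactly the naive Bayes log-posteriors.
    w = np.ones((n_classes, d))
    b = np.ones(n_classes)
    step = 0.1
    for _ in range(500):                                # plain gradient descent
        s = b * log_prior + (w * Z).sum(axis=2)         # (n, n_classes) scores
        p = np.exp(s - s.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)               # softmax posteriors
        err = p.copy()
        err[np.arange(n), y] -= 1.0                     # gradient of negative CLL
        w -= step * np.einsum('ic,icj->cj', err, Z) / n
        b -= step * (err * log_prior).sum(axis=0) / n

    s = b * log_prior + (w * Z).sum(axis=2)
    print("training accuracy:", (s.argmax(axis=1) == y).mean())

Because every transformed feature is already on the scale of a log-probability, first-order updates make comparable progress across attributes, which is the convergence benefit the abstract claims.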
Appears in collections: (IIIA) Comunicaciones congresos
Files in this item:
File | Description | Size | Format
accesoRestringido.pdf | | 15.38 kB | Adobe PDF


Page view(s): 354 (checked on 24-Apr-2024)
Download(s): 104 (checked on 24-Apr-2024)



NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.