Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/30547


Neural learning methods yielding functional invariance

Authors: Ruiz de Angulo, Vicente; Torras, Carme
Keywords: Neural learning; Functional invariance; Input noise addition
Issue Date: 2004
Citation: Theoretical Computer Science 320(1): 111-121 (2004)
Abstract: This paper investigates the functional invariance of neural network learning methods that incorporate a complexity reduction mechanism, such as a regularizer. By functional invariance we mean the property of producing functionally equivalent minima as the size of the network grows, when the smoothing parameters are fixed. We study three different principles on which functional invariance can be based, and try to delimit the conditions under which each of them acts. We find that, surprisingly, some of the most popular neural learning methods, such as weight decay and input noise addition, exhibit this interesting property.
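The two complexity-reduction mechanisms the abstract names, weight decay and input noise addition, can be illustrated with a minimal sketch. This is not the paper's construction: the toy network, data, and hyperparameters below are assumptions chosen only to show where each mechanism enters an ordinary gradient-descent loop, and how one might compare the learned functions of networks of different sizes under fixed smoothing parameters.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): a one-hidden-layer
# tanh network trained by gradient descent with the two mechanisms the
# abstract mentions: an L2 weight-decay penalty and additive input noise.
rng = np.random.default_rng(0)

def train(n_hidden, weight_decay=1e-3, input_noise=0.1, steps=500, lr=0.1):
    # Toy 1-D regression problem (an assumption for illustration)
    X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
    y = np.sin(np.pi * X)
    W1 = rng.normal(0.0, 0.5, (1, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(steps):
        # Input noise addition: perturb the inputs each step
        Xn = X + rng.normal(0.0, input_noise, X.shape)
        H = np.tanh(Xn @ W1 + b1)
        pred = H @ W2 + b2
        err = pred - y
        # Backpropagation for the mean-squared error
        gW2 = H.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1.0 - H**2)
        gW1 = Xn.T @ dH / len(X)
        gb1 = dH.mean(axis=0)
        # Weight decay: add the gradient of (weight_decay/2) * ||W||^2
        W1 -= lr * (gW1 + weight_decay * W1)
        b1 -= lr * gb1
        W2 -= lr * (gW2 + weight_decay * W2)
        b2 -= lr * gb2
    # Return the learned function evaluated on the clean inputs
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Functional invariance asks whether, with the smoothing parameters held
# fixed, networks of different sizes settle on functionally equivalent
# solutions; here one could compare the two returned function values.
f_small = train(n_hidden=5)
f_large = train(n_hidden=20)
```

The comparison of `f_small` and `f_large` is where the paper's question lives; the sketch merely sets up the two training runs under identical smoothing parameters.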
Description: A preliminary version of this work was presented at the International Conference on Artificial Neural Networks (ICANN'01), Vienna, August 2001.
Publisher version (URL): http://dx.doi.org/10.1016/j.tcs.2004.03.046
Appears in Collections: (IRII) Artículos
Files in This Item: Neural learning methods.pdf (155,62 kB, Adobe PDF)
WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.