Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/30522

Title: Architecture-independent approximation of functions

Authors: Ruiz de Angulo, Vicente; Torras, Carme
Keywords: Learning; Function approximation; Network size; Robots
Issue Date: 2001
Publisher: Massachusetts Institute of Technology
Citation: Neural Computation 13(5): 1119-1135 (2001)
Abstract: We show that minimizing the expected error of a feedforward network over a distribution of weights yields an approximation that tends to be independent of network size as the number of hidden units grows. This minimization can be performed easily, and the complexity of the function implemented by the network is regulated by the variance of the weight distribution. For a fixed variance, there is a number of hidden units above which the implemented function either does not change or changes only slightly, with the change tending to zero as the network grows. In sum, complexity is controlled by the variance alone, not by the architecture, provided the network is large enough.
Publisher version (URL): http://dx.doi.org/10.1162/08997660151134352
URI: http://hdl.handle.net/10261/30522
DOI: 10.1162/08997660151134352
ISSN: 0899-7667
Appears in Collections: (IRII) Artículos
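The minimization the abstract describes, the expected error over a distribution of weights, can be approximated numerically by averaging the loss over Monte Carlo samples of perturbed weights. The sketch below is an illustration under assumptions not taken from the paper: the Gaussian weight distribution centered at the current weights, the tanh network, the sin target, and all sizes and hyperparameters are hypothetical choices for demonstration only.

```python
import numpy as np

# Hedged sketch: train a one-hidden-layer network by gradient descent on a
# Monte Carlo estimate of the loss averaged over Gaussian perturbations of
# the weights. The variance (sigma**2) plays the complexity-controlling role
# the abstract attributes to the weight distribution; everything concrete
# here is an illustrative assumption, not the paper's procedure.

rng = np.random.default_rng(0)

H = 50          # number of hidden units (illustrative)
sigma = 0.1     # std. dev. of the weight distribution
lr = 0.05       # learning rate (illustrative)
n_samples = 8   # Monte Carlo samples per step

# Toy target: approximate sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 64)
Y = np.sin(X)

W = rng.normal(0.0, 1.0, H)   # input-to-hidden weights
b = rng.normal(0.0, 1.0, H)   # hidden biases
c = np.zeros(H)               # hidden-to-output weights

def mse(W, b, c):
    pred = np.tanh(np.outer(X, W) + b) @ c
    return np.mean((pred - Y) ** 2)

for step in range(500):
    gW = np.zeros_like(W); gb = np.zeros_like(b); gc = np.zeros_like(c)
    for _ in range(n_samples):
        # Sample a weight vector from the distribution centered at the
        # current weights, then accumulate the gradient of its squared error.
        Wn = W + rng.normal(0.0, sigma, H)
        bn = b + rng.normal(0.0, sigma, H)
        cn = c + rng.normal(0.0, sigma, H)
        h = np.tanh(np.outer(X, Wn) + bn)          # (64, H) hidden activations
        err = h @ cn - Y                           # (64,) residuals
        gc += h.T @ err
        dh = np.outer(err, cn) * (1.0 - h ** 2)    # backprop through tanh
        gW += dh.T @ X
        gb += dh.sum(axis=0)
    n = n_samples * len(X)
    W -= lr * gW / n
    b -= lr * gb / n
    c -= lr * gc / n

print(mse(W, b, c))  # error of the mean weights after training
```

On this toy problem the error of the mean weight vector drops well below its initial value; raising `sigma` smooths (simplifies) the implemented function, which is the sense in which the variance, rather than the number of hidden units, regulates complexity.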
Files in This Item:
doc1.pdf (781,68 kB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.