Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/84656



Neocortical frame-free vision sensing and processing through scalable Spiking ConvNet hardware

Authors: Camuñas-Mesa, L.; Perez-Carrasco, J. A.; Zamarreño-Ramos, Carlos; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé
Issue Date: 2010
Publisher: Institute of Electrical and Electronics Engineers
Citation: International Joint Conference on Neural Networks (IJCNN): 1-8 (2010)
Abstract: This paper summarizes how Convolutional Neural Networks (ConvNets) can be implemented in hardware using spiking neural network Address-Event Representation (AER) technology for sophisticated pattern and object recognition tasks operating at millisecond-delay throughputs. Although such hardware would require hundreds of individual convolutional modules and is therefore not yet available, we discuss methods and technologies for implementing it in the near future. In the meantime, we provide precise behavioral simulations of large-scale spiking AER convolutional hardware and evaluate its performance, using performance figures from already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results for systems trained for people recognition, showing recognition delays of a few milliseconds from stimulus onset. ConvNets show good up-scaling behavior and are promising candidates for efficient implementation with new nanoscale hybrid CMOS/non-CMOS technologies.
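The event-driven convolution the abstract refers to can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: `aer_convolve`, the simple integrate-to-threshold-and-reset neuron model, and all parameter names here are illustrative assumptions about how an AER convolution module processes address-events one at a time.

```python
import numpy as np

def aer_convolve(events, kernel, shape, threshold=1.0):
    """Toy event-driven convolution (hypothetical sketch, not the paper's design).

    Each input address-event (x, y) adds the kernel, centered at that
    location, to an array of neuron membrane states. Any neuron whose
    state reaches the threshold emits an output event and resets.
    """
    state = np.zeros(shape)
    kh, kw = kernel.shape
    out_events = []
    for (x, y) in events:
        # Project the kernel around the event, clipping at the borders.
        x0, y0 = x - kh // 2, y - kw // 2
        xs, ys = max(x0, 0), max(y0, 0)
        xe, ye = min(x0 + kh, shape[0]), min(y0 + kw, shape[1])
        state[xs:xe, ys:ye] += kernel[xs - x0:xe - x0, ys - y0:ye - y0]
        # Neurons at or above threshold fire an output spike and reset.
        for fx, fy in np.argwhere(state >= threshold):
            out_events.append((int(fx), int(fy)))
            state[fx, fy] = 0.0
    return out_events
```

Because computation happens only when events arrive, latency scales with input activity rather than frame rate, which is the property behind the millisecond recognition delays reported in the abstract.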
Identifiers: doi: 10.1109/IJCNN.2010.5596366
ISBN: 978-1-4244-6916-1
Appears in Collections: (IMSE-CNM) Books and parts of books
Files in This Item:
accesoRestringido.pdf (15.38 kB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.