Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/83167

DC Field | Value | Language
dc.contributor.author | Camuñas-Mesa, L. | -
dc.contributor.author | Acosta, Antonio José | -
dc.contributor.author | Zamarreño-Ramos, Carlos | -
dc.contributor.author | Serrano-Gotarredona, Teresa | -
dc.contributor.author | Linares-Barranco, Bernabé | -
dc.identifier.citation | IEEE Transactions on Circuits and Systems I: Regular Papers 58(4): 777-790 (2010) | es_ES
dc.description.abstract | This paper describes a convolution chip for event-driven vision sensing and processing systems. As opposed to conventional frame-constrained vision systems, in event-driven vision there is no need for frames. In frame-free event-based vision, information is represented by a continuous flow of self-timed asynchronous events. Such events can be processed on the fly by event-based convolution chips, providing at their output a continuous event flow representing the 2-D filtered version of the input flow. In this paper we present a 32 × 32 pixel 2-D convolution event processor whose kernel can have arbitrary shape and size up to 32 × 32. Arrays of such chips can be assembled to process larger pixel arrays. Event latency between input and output event flows can be as low as 155 ns. Input event throughput can reach 20 Meps (mega events per second), and output peak event rate can reach 45 Meps. The chip can be configured to discriminate between two simulated propeller-like shapes rotating simultaneously in the field of view at a speed as high as 9400 rps (revolutions per second). Achieving this with a frame-constrained system would require a sensing and processing capability of about 100 K frames per second. The prototype chip has been built in 0.35 μm CMOS technology, occupies 4.3 × 5.4 mm², and consumes a peak power of 200 mW at maximum kernel size at maximum input event rate. | es_ES
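The abstract's event-driven convolution scheme (each incoming address event projects the kernel onto an array of integrators, and integrators crossing a threshold fire output events) can be sketched in software as follows. This is a simplified illustrative model, not the chip's implementation: the threshold, kernel, and reset behavior here are assumptions for demonstration only.

```python
# Simplified software model of frame-free, event-driven convolution:
# each input address event adds the kernel (centered on the event's
# address) to a 2-D array of integrators; any integrator crossing a
# threshold emits an output event and resets. All numeric values are
# illustrative, not taken from the chip.

SIZE = 32            # 32 x 32 pixel array, as in the chip
THRESHOLD = 3.0      # illustrative firing threshold (assumed)

def make_state(size=SIZE):
    """Zero-initialized integrator array."""
    return [[0.0] * size for _ in range(size)]

def process_event(state, kernel, ex, ey, threshold=THRESHOLD):
    """Process one input event at address (ex, ey): accumulate the
    kernel around that address and return the output events fired."""
    kh, kw = len(kernel), len(kernel[0])
    out_events = []
    for ky in range(kh):
        for kx in range(kw):
            y = ey + ky - kh // 2
            x = ex + kx - kw // 2
            if 0 <= y < len(state) and 0 <= x < len(state[0]):
                state[y][x] += kernel[ky][kx]
                if state[y][x] >= threshold:
                    out_events.append((x, y))  # emit output address event
                    state[y][x] = 0.0          # reset integrator on firing
    return out_events

# Usage: with a 3x3 all-ones kernel, three input events at the same
# address push the whole 3x3 neighborhood past the threshold, so nine
# output events fire during the third input event.
state = make_state()
kernel = [[1.0] * 3 for _ in range(3)]
outputs = []
for _ in range(3):
    outputs += process_event(state, kernel, 16, 16)
```

Note how output events are produced on the fly, per input event, rather than once per frame; this is the property that lets the chip keep latency at the single-event level (155 ns) instead of the frame level.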
dc.description.sponsorship | This work was supported by EU Grant 216777 (NABAB), Spanish Grants (with support from the European Regional Development Fund) TEC2006-11730-C03-01 (SAMANTA2) and TEC2009-10639-C04-01 (VULCANO), and Andalusian Grant P06TIC01417 (Brain System). The work of C. Zamarreño-Ramos was supported by a national FPU scholarship. | -
dc.publisher | Institute of Electrical and Electronics Engineers | es_ES
dc.title | A 32 × 32 pixel convolution processor chip for address event vision sensors with 155 ns event latency and 20 Meps throughput | es_ES
dc.description.peerreviewed | Peer reviewed | es_ES
Appears in Collections:(IMSE-CNM) Artículos
Files in This Item:
File | Description | Size | Format
32x32_pixel.pdf | | 2,72 MB | Adobe PDF

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.