Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/168891

DC Field: Value
dc.contributor.author: Rodríguez-Vázquez, Ángel
dc.contributor.author: Fernández-Berni, J.
dc.contributor.author: Leñero-Bardallo, J. A.
dc.contributor.author: Vornicu, Ion
dc.contributor.author: Carmona-Galán, R.
dc.date.accessioned: 2018-08-20T09:52:15Z
dc.date.available: 2018-08-20T09:52:15Z
dc.date.issued: 2018
dc.identifier: doi: 10.1109/MCAS.2018.2821772
dc.identifier: issn: 1531-636X
dc.identifier.citation: IEEE Circuits and Systems Magazine 18: 90-107 (2018)
dc.identifier.uri: http://hdl.handle.net/10261/168891
dc.description.abstract: CMOS Image Sensors (CIS) are key for imaging technologies. These chips are conceived to capture optical scenes focused on their surface and to deliver electrical images, commonly in digital format. CISs may incorporate intelligence; however, their smartness basically concerns calibration, error correction and other similar tasks. The term CVIS (CMOS VIsion Sensor) defines another class of sensor front-ends, which are aimed at performing vision tasks right at the focal plane. They have been running under names such as computational image sensors, vision sensors and silicon retinas, among others. CVISs and CISs are similar regarding physical implementation. However, while the inputs of both CISs and CVISs are images captured by photo-sensors placed at the focal plane, the primary outputs of CVISs may not be images but either image features or even decisions based on the spatial-temporal analysis of the scenes. We may hence state that CVISs are more "intelligent" than CISs, as they focus on information instead of on raw data. Actually, CVIS architectures capable of extracting and interpreting the information contained in images, and prompting reaction commands thereof, have been explored for years in academia, and industrial applications are recently ramping up. One of the challenges for CVIS architects is incorporating computer vision concepts into the design flow. The endeavor is ambitious because the imaging and computer vision communities are rather disjoint groups talking different languages. The Cellular Nonlinear Network Universal Machine (CNNUM) paradigm, proposed by Profs. Chua and Roska, defined an adequate framework for such conciliation, as it is particularly well suited for hardware-software co-design [1]-[4]. This paper overviews CVIS chips that were conceived and prototyped at the IMSE Vision Lab over the past twenty years. Some of them fit the CNNUM paradigm while others are tangential to it. All of them employ per-pixel mixed-signal processing circuitry to achieve sensor-processing concurrency in the quest of fast operation with a reduced energy budget.
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.isversionof: Postprint
dc.rights: openAccess (en_EN)
dc.title: CMOS Vision Sensors: Embedding Computer Vision at Imaging Front-Ends
dc.type: article
dc.embargo.terms: 2019-05-29
dc.date.updated: 2018-08-20T09:52:15Z
dc.description.version: Peer Reviewed
dc.language.rfc3066: eng
dc.relation.csic:
Appears in Collections:(IMSE-CNM) Artículos
Files in This Item:
File: 2018 IEEETCASMagazine_LeonIssue_ARV-V2.pdf
Size: 2,67 MB
Format: Adobe PDF
WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.