Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/168891

Title: CMOS Vision Sensors: Embedding Computer Vision at Imaging Front-Ends

Authors: Rodríguez-Vázquez, Ángel; Fernández-Berni, J.; Leñero-Bardallo, J. A.; Vornicu, Ion; Carmona-Galán, R.
Issue Date: 2018
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Circuits and Systems Magazine 18: 90-107 (2018)
Abstract: CMOS Image Sensors (CISs) are key to imaging technologies. These chips are conceived for capturing optical scenes focused on their surface and for delivering electrical images, commonly in digital format. CISs may incorporate intelligence; however, their smartness basically concerns calibration, error correction and other similar tasks. The term CVIS (CMOS VIsion Sensor) defines another class of sensor front-end aimed at performing vision tasks right at the focal plane. Such devices have been running under names such as computational image sensors, vision sensors and silicon retinas, among others. CVISs and CISs are similar regarding physical implementation. However, while the inputs of both CISs and CVISs are images captured by photo-sensors placed at the focal plane, the primary outputs of CVISs may not be images but either image features or even decisions based on the spatio-temporal analysis of the scenes. We may hence state that CVISs are more "intelligent" than CISs, as they focus on information instead of on raw data. Actually, CVIS architectures capable of extracting and interpreting the information contained in images, and of prompting reaction commands thereof, have been explored for years in academia, and industrial applications are recently ramping up. One of the challenges for CVIS architects is incorporating computer vision concepts into the design flow. The endeavor is ambitious because the imaging and computer vision communities are rather disjoint groups talking different languages. The Cellular Nonlinear Network Universal Machine (CNNUM) paradigm, proposed by Profs. Chua and Roska, defined an adequate framework for such conciliation, as it is particularly well suited for hardware-software co-design [1]-[4]. This paper overviews CVIS chips that were conceived and prototyped at the IMSE Vision Lab over the past twenty years. Some of them fit the CNNUM paradigm while others are tangential to it. All of them employ per-pixel mixed-signal processing circuitry to achieve sensor-processing concurrency in the quest for fast operation with a reduced energy budget.
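The CNNUM paradigm cited in the abstract models each pixel as a dynamical cell coupled to its neighbors, which is what makes it a natural fit for per-pixel focal-plane processing. The following is a minimal simulation sketch of the standard Chua-Yang cell dynamics, x' = -x + A*y + B*u + z with y = 0.5(|x+1| - |x-1|); the edge-extraction template values, test image, and Euler step size are illustrative textbook assumptions, not taken from the paper.

import numpy as np
from scipy.signal import convolve2d

def cnn_run(u, A, B, z, steps=200, dt=0.1):
    """Forward-Euler integration of the Chua-Yang CNN state equation."""
    x = u.copy()  # common convention: initial state equals the input image
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # piecewise-linear output
        dx = (-x + convolve2d(y, A, mode='same')
                 + convolve2d(u, B, mode='same') + z)
        x += dt * dx
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# Textbook EDGE template (illustrative values; A and B are symmetric,
# so convolution and correlation coincide here):
A = np.zeros((3, 3)); A[1, 1] = 1.0
B = np.array([[-1., -1., -1.],
              [-1.,  8., -1.],
              [-1., -1., -1.]])
z = -1.0

# Binary test image: +1 object square on a -1 background.
u = -np.ones((16, 16)); u[4:12, 4:12] = 1.0
edges = cnn_run(u, A, B, z)
print((edges > 0).astype(int))  # +1 only along the square's border

Each cell settles to +1 (edge) or -1 (no edge) depending on the weighted mismatch with its neighborhood; in a CVIS chip this iteration is not computed digitally but emerges from the analog dynamics of the per-pixel circuitry.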
URI: http://hdl.handle.net/10261/168891
Identifiers: doi: 10.1109/MCAS.2018.2821772
ISSN: 1531-636X
Appears in Collections: (IMSE-CNM) Artículos
Files in This Item:
2018 IEEETCASMagazine_LeonIssue_ARV-V2.pdf (2.67 MB, Adobe PDF)


WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.