Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/92701


Title: A hierarchical vision processing architecture oriented to 3D integration of smart camera chips

Authors: Carmona-Galán, R.; Zarandy, A.; Rekeczky, Csaba; Földesy, P.; Rodríguez-Pérez, Alberto; Domínguez-Matas, Carlos; Fernández-Berni, J.; Liñán-Cembrano, G.; Pérez-Verdú, Belén; Kárász, Zoltán; Suárez, Marta; Brea, V. M.; Roska, Tamás; Rodríguez-Vázquez, Ángel
Keywords: Adapted architectures
MOPS/mW
3D integrated circuits
Vision chips
Hierarchical vision
Issue Date: 2013
Publisher: Elsevier
Citation: Journal of Systems Architecture 59(10A): 908-919 (2013)
Abstract: This paper introduces a vision processing architecture that is directly mappable onto a 3D chip integration technology. Due to the aggregated nature of the information contained in the visual stimulus, adapted architectures are more efficient than conventional processing schemes. Given the relatively minor importance of the value of an isolated pixel, converting every pixel to digital prior to any processing is inefficient. Instead, our system relies on focal-plane image filtering and key point detection for feature extraction. The originally large amount of data representing the image is thereby reduced to a smaller number of abstracted entities, simplifying the operation of the subsequent digital processor. There are certain limitations to the implementation of such a hierarchical scheme. Incorporating processing elements close to the photo-sensing devices in a planar technology has a negative influence on the fill factor, pixel pitch and image size, and therefore affects the sensitivity and spatial resolution of the image sensor. A fundamental tradeoff needs to be solved: the larger the amount of processing conveyed to the sensor plane, the larger the pixel pitch. Conversely, a smaller pixel pitch pushes more processing circuitry to the periphery of the sensor and tightens the data bottleneck between the sensor plane and the memory plane. 3D integration technologies with a high density of through-silicon vias can help overcome these limitations. Vertical integration of the sensor plane with the processing and memory planes through a fully parallel connection eliminates data bottlenecks without compromising fill factor or pixel pitch. A case study is presented: a smart vision chip designed on a 3D integration technology provided by MIT Lincoln Labs, whose base process is 0.15 μm FD-SOI. Simulation results indicate performance improvements with respect to the state of the art in smart vision chips. © 2013 Elsevier B.V. All rights reserved.
URI: http://hdl.handle.net/10261/92701
DOI: 10.1016/j.sysarc.2013.03.002
ISSN: 1383-7621
Appears in Collections: (IMSE-CNM) Artículos
Files in This Item:
accesoRestringido.pdf (15,38 kB, Adobe PDF)
WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.