Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/92701

Title:

A hierarchical vision processing architecture oriented to 3D integration of smart camera chips

Authors: Carmona-Galán, R.; Zarandy, A.; Rekeczky, Csaba; Földesy, P.; Rodríguez-Pérez, Alberto; Domínguez-Matas, Carlos; Fernández-Berni, J.; Liñán-Cembrano, G.; Pérez-Verdú, Belén; Kárász, Zoltán; Suárez, Marta; Brea, V. M.; Roska, Tamás; Rodríguez-Vázquez, Ángel
Keywords: Adapted architectures; MOPS/mW; 3D integrated circuits; Vision chips; Hierarchical vision
Publication date: 2013
Publisher: Elsevier
Citation: Journal of Systems Architecture 59(10A): 908-919 (2013)
Abstract: This paper introduces a vision processing architecture that is directly mappable onto a 3D chip integration technology. Due to the aggregated nature of the information contained in the visual stimulus, adapted architectures are more efficient than conventional processing schemes. Given the relatively minor importance of the value of an isolated pixel, converting every one of them to digital prior to any processing is inefficient. Instead, our system relies on focal-plane image filtering and key point detection for feature extraction. The originally large amount of data representing the image is thus reduced to a smaller number of abstracted entities, simplifying the operation of the subsequent digital processor. There are certain limitations to the implementation of such a hierarchical scheme. The incorporation of processing elements close to the photo-sensing devices in a planar technology has a negative influence on the fill factor, pixel pitch and image size. It therefore affects the sensitivity and spatial resolution of the image sensor. A fundamental tradeoff needs to be solved: the larger the amount of processing conveyed to the sensor plane, the larger the pixel pitch. Conversely, using a smaller pixel pitch pushes more processing circuitry to the periphery of the sensor and tightens the data bottleneck between the sensor plane and the memory plane. 3D integration technologies with a high density of through-silicon-vias can help overcome these limitations. Vertical integration of the sensor plane and the processing and memory planes with a fully parallel connection eliminates data bottlenecks without compromising fill factor and pixel pitch. A case study is presented: a smart vision chip designed on a 3D integration technology provided by MIT Lincoln Labs, whose base process is 0.15 μm FD-SOI. Simulation results advance performance improvements with respect to the state-of-the-art in smart vision chips. © 2013 Elsevier B.V. All rights reserved.
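The data-reduction idea in the abstract (filter near the sensor, then hand only a few key points to the digital processor instead of every digitized pixel) can be illustrated with a small sketch. This is a hypothetical toy model, not the authors' design: the box filter stands in for analog focal-plane filtering, and a simple gradient-magnitude threshold stands in for key point detection; all names and thresholds are assumptions.

```python
def smooth(img):
    """3x3 box filter, emulating early analog focal-plane filtering."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def key_points(img, thresh=20.0):
    """Keep only pixels whose gradient magnitude exceeds a threshold
    (a stand-in for on-chip key point / feature extraction)."""
    pts = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                pts.append((x, y))
    return pts

# A toy 8x8 frame: dark background with one bright 3x3 square.
frame = [[0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        frame[y][x] = 200

pts = key_points(smooth(frame))
print(len(frame) * len(frame[0]), "pixels ->", len(pts), "key points")
```

The point of the sketch is the ratio: 64 raw pixel values shrink to a handful of abstracted entities (edge points of the square), which is the workload the subsequent digital processor actually sees.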
URI: http://hdl.handle.net/10261/92701
DOI: 10.1016/j.sysarc.2013.03.002
ISSN: 1383-7621
Appears in collections: (IMSE-CNM) Artículos
Files in this item:
accesoRestringido.pdf (15.38 kB, Adobe PDF)
 



NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.