2020-01-23T23:20:08Z
http://digital.csic.es/dspace-oai/request
oai:digital.csic.es:10261/135874
2016-08-26T00:52:48Z
Carmona-Galán, R.
Fernández-Berni, J.
Rodríguez-Vázquez, Ángel
2016-08-25T07:22:26Z
2016
International Workshop on Cellular Nanoscale Networks and their Applications (CNNA), Dresden, Germany, August 2016
http://hdl.handle.net/10261/135874
Speeding up algorithm execution can be achieved by increasing the number of processing cores working in parallel. Of course, this speedup is limited by the degree to which the algorithm can be parallelized. Equivalently, lowering the operating frequency of the elementary processors allows the algorithm to be executed in the same amount of time but with measurable power savings. An additional consequence of parallelization is that employing a larger number of processors yields a more efficient implementation in terms of GOPS/W. We have found experimental evidence for this in the study of massively parallel array processors, mainly dedicated to image processing. Their distributed architecture reduces the energy overhead devoted to data handling, thus resulting in a power-efficient implementation.
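The trade-off described in the abstract can be sketched numerically. A minimal illustration, assuming Amdahl's law for the speedup and the usual dynamic-power model (P ∝ C·V²·f, with supply voltage scaled linearly with frequency); the parallel fraction and core counts below are hypothetical, not taken from the paper:

```python
# Sketch of the trade-off: run N cores in parallel, slow each core's
# clock so the algorithm finishes in the same time as one fast core,
# and compare total dynamic power. All numbers are illustrative.

def amdahl_speedup(n_cores: int, parallel_fraction: float) -> float:
    """Amdahl's law: speedup is bounded by the serial fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

def relative_power(n_cores: int, parallel_fraction: float) -> float:
    """Total dynamic power relative to a single full-speed core.

    If N cores yield speedup S, each core can run at f/S and still
    meet the single-core execution time. Assuming voltage scales with
    frequency, per-core power (~ C * V^2 * f) drops by S^3, so the
    N-core total is N / S^3 of the baseline.
    """
    s = amdahl_speedup(n_cores, parallel_fraction)
    return n_cores / s**3

if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        s = amdahl_speedup(n, 0.95)
        p = relative_power(n, 0.95)
        print(f"{n:3d} cores: speedup {s:6.2f}x, relative power {p:6.3f}")
```

Under these assumptions the relative power drops well below 1 for moderate core counts, which is consistent with the abstract's claim that a wider, slower array can be the more power-efficient implementation.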
eng
openAccess
Parallel processing
Cellular Processing Array
Multicore processing
Computational efficiency
Experimental Evidence of Power Efficiency due to Architecture in Cellular Processor Array Chips
conference paper