Authors: Carmona-Galán, R.; Fernández-Berni, J.; Rodríguez-Vázquez, Ángel
Year: 2016
Abstract: Algorithm execution can be sped up by increasing the number of processing cores working in parallel. This speedup is, of course, limited by the degree to which the algorithm can be parallelized. Alternatively, the operating frequency of the elementary processors can be lowered so that the algorithm runs in the same amount of time but with measurable power savings. A further consequence of parallelization is that a larger number of processors yields a more efficient implementation in terms of GOPS/W. We have found experimental evidence for this in the study of massively parallel array processors, mainly dedicated to image processing. Their distributed architecture reduces the energy overhead devoted to data handling, resulting in a power-efficient implementation.
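The trade-off described in the abstract can be sketched numerically. The model below is illustrative only and is not taken from the paper: it combines Amdahl's law with the common assumption that dynamic power scales as P ~ C·f·V², with supply voltage scaling roughly linearly with frequency (so per-core power ~ f³). The 95% parallel fraction is an arbitrary example value.

```python
def amdahl_speedup(n_cores, parallel_fraction):
    """Speedup of an algorithm of which `parallel_fraction` is parallelizable."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

def relative_power(n_cores, parallel_fraction):
    """Relative dynamic power when the speedup is traded for a lower clock.

    Assumption: per-core dynamic power scales as f**3 (P ~ C*f*V**2 with
    V roughly proportional to f). Running n cores at f/speedup keeps the
    same wall-clock time, giving total power ~ n * (1/speedup)**3.
    """
    s = amdahl_speedup(n_cores, parallel_fraction)
    return n_cores * (1.0 / s) ** 3

if __name__ == "__main__":
    # More cores at a proportionally lower clock: same run time, less power.
    for n in (1, 4, 16, 64):
        s = amdahl_speedup(n, 0.95)
        p = relative_power(n, 0.95)
        print(f"{n:3d} cores: speedup {s:6.2f}x, relative power {p:6.3f}")
```

Under these assumptions the relative power falls as cores are added, which is the qualitative effect (better GOPS/W with wider, slower arrays) that the paper reports experimental evidence for.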
Conference: International Workshop on Cellular Nanoscale Networks and their Applications (CNNA), Dresden, Germany, August 2016
http://hdl.handle.net/10261/135874
Keywords: Parallel processing; Cellular processing array; Multicore processing; Computational efficiency
Title: Experimental Evidence of Power Efficiency due to Architecture in Cellular Processor Array Chips