
Automatic parallelization, also auto parallelization, autoparallelization, or parallelization, the last one of which implies automation when used in context, refers to converting sequential code into multi-threaded or vectorized (or even both) code in order to utilize multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. The goal of automatic parallelization is to relieve programmers from the tedious and error-prone manual parallelization process. Though the quality of automatic parallelization has improved in the past several decades, fully automatic parallelization of sequential programs by compilers remains a grand challenge due to its need for complex program analysis and the unknown factors (such as input data range) during compilation.[1]

The programming control structures on which autoparallelization places the most focus are loops, because, in general, most of the execution time of a program takes place inside some form of loop. There are two main approaches to parallelization of loops: pipelined multi-threading and cyclic multi-threading.[2]

If we have, for example, a loop that on each iteration applies a hundred operations, and it runs for a thousand iterations, we can imagine a big grid of 100 columns x 1000 rows, a total of 100,000 operations. Cyclic multi-threading assigns each row to a different thread. Pipelined multi-threading assigns each column to a different thread.
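As a minimal sketch of cyclic multi-threading (not taken from the cited sources), the following C code writes out by hand what an auto-parallelizing compiler might generate for the grid example. An explicit OpenMP parallel-for directive stands in for the compiler's transformation: each of the 1000 iterations (rows) is handed to some thread, which then performs all 100 operations (columns) of that row. The names grid, ROWS, and COLS are invented for the illustration, and the body of the inner loop is a stand-in for real work.

```c
#include <stdio.h>

#define ROWS 1000   /* iterations of the loop                */
#define COLS 100    /* operations performed in each iteration */

int main(void) {
    static double grid[ROWS][COLS];

    /* Cyclic multi-threading: each row of the 1000 x 100 grid of operations
     * goes to a different thread.  The pragma mimics what an auto-parallelizing
     * compiler could emit once it has proved that no iteration depends on
     * another.  Compile with -fopenmp to actually run it in parallel;
     * without it, the pragma is ignored and the loop runs sequentially. */
    #pragma omp parallel for
    for (int i = 0; i < ROWS; i++) {
        for (int j = 0; j < COLS; j++) {
            grid[i][j] = (double)i * j;   /* stand-in for the j-th operation */
        }
    }

    printf("%f\n", grid[ROWS - 1][COLS - 1]);
    return 0;
}
```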
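Pipelined multi-threading does not reduce to a single directive, so the sketch below uses plain pthreads, again purely as a hypothetical illustration: stage1, stage2, put, and get are names invented here. Each of two threads owns one "column" of the grid: thread 1 performs the first operation of every iteration and streams its results, in order, through a small bounded queue to thread 2, which performs the second operation. Compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>

#define N 1000          /* iterations ("rows") flowing through the pipeline */
#define QUEUE_SIZE 64

/* A small bounded queue connecting the two pipeline stages. */
static double buf[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void put(double v) {
    pthread_mutex_lock(&lock);
    while (count == QUEUE_SIZE) pthread_cond_wait(&not_full, &lock);
    buf[tail] = v; tail = (tail + 1) % QUEUE_SIZE; count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static double get(void) {
    pthread_mutex_lock(&lock);
    while (count == 0) pthread_cond_wait(&not_empty, &lock);
    double v = buf[head]; head = (head + 1) % QUEUE_SIZE; count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return v;
}

/* Stage 1: the first "column" of operations, producing one value per iteration. */
static void *stage1(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        put(i * 2.0);               /* stand-in for the first operation */
    return NULL;
}

/* Stage 2: the second "column", consuming stage 1's results in order. */
static void *stage2(void *arg) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        sum += get() + 1.0;         /* stand-in for the second operation */
    *(double *)arg = sum;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    double result = 0.0;
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, &result);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%f\n", result);
    return 0;
}
```

The contrast with the previous sketch is the point: cyclic multi-threading divides the work by iteration and needs the iterations to be independent, whereas pipelined multi-threading divides it by operation and tolerates a dependence flowing forward from one stage to the next, at the cost of explicit hand-off between threads.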
