
Piotr Holc (ph1109)
Parallel Algorithms
CW I


Execution time (seconds) for the Pi approximation:

Processors    1,000,000    10,000,000    100,000,000 trapezoids
    1           0.026         0.131          1.239
    2           0.014         0.066          0.628
    4           0.007         0.041          0.335
    8           0.005         0.030          0.165

Speedup for the Pi approximation (from the execution times above):

Processors    1,000,000    10,000,000    100,000,000 trapezoids
    1           1.000         1.000          1.000
    2           1.785         1.993          1.973
    4           3.484         3.206          3.695
    8           5.133         4.348          7.524

Efficiency for the Pi approximation (from the execution times above):

Processors    1,000,000    10,000,000    100,000,000 trapezoids
    1           1.000         1.000          1.000
    2           0.893         0.996          0.987
    4           0.871         0.801          0.924
    8           0.642         0.543          0.940
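The speedup and efficiency tables follow directly from the execution times: Sp = T1 / Tp and Ep = Sp / p. A minimal Python sketch of that derivation (the last digits can differ slightly from the tables, which were presumably computed from unrounded timings):

```python
# Speedup and efficiency derived from measured execution times:
#   Sp = T1 / Tp   (serial time over p-processor time)
#   Ep = Sp / p    (fraction of ideal linear speedup achieved)

def speedup(t1, tp):
    """Speedup of the p-processor run relative to the serial run."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Per-processor efficiency on p processors."""
    return speedup(t1, tp) / p

# Rounded timings for the 100,000,000-trapezoid runs (table above).
t1, t8 = 1.239, 0.165
print(f"Sp(8) = {speedup(t1, t8):.3f}")        # ~7.51 (table: 7.524)
print(f"Ep(8) = {efficiency(t1, t8, 8):.3f}")  # ~0.94 (table: 0.940)
```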

[Figure: "Speedup (Sp) for Pi approximation" — speedup plotted against processors (1, 2, 4, 8), one series each for 1,000,000, 10,000,000, and 100,000,000 trapezoids.]

The speedup can be clearly seen in all three cases (1 million, 10 million, and 100
million trapezoids) as the number of processors is increased. As we increase the
problem size, the speedup approaches the ideal linear trend (best seen in the 100
million trapezoid progression). This is the expected result, since doubling the
processing power should in theory double the speedup. The 100 million trapezoid
execution shows this doubling best because, with a bigger problem, actually computing
the approximation takes the majority of the time, so the communication costs become
negligible by comparison.
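The quantity being timed is, presumably, the standard trapezoidal-rule estimate of Pi via the integral of 4/(1+x^2) over [0, 1] (the exact integrand is an assumption; the coursework source is not shown here). A minimal serial Python sketch — in the parallel version, each of the p processors would sum n/p of these trapezoids and the partial sums would be reduced to one total:

```python
def approx_pi(n):
    """Trapezoidal-rule estimate of pi = integral of 4/(1+x^2) over [0, 1].

    Assumed integrand -- the coursework source itself is not shown here.
    """
    f = lambda x: 4.0 / (1.0 + x * x)
    h = 1.0 / n  # width of each trapezoid
    # Endpoints count with weight 1/2; interior points with weight 1.
    total = (f(0.0) + f(1.0)) / 2.0
    for i in range(1, n):
        total += f(i * h)
    return total * h

# In the parallel version, each processor would sum its own contiguous
# block of i values, and the partial sums would then be reduced.
print(approx_pi(1_000_000))  # approaches 3.141592653589793 as n grows
```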

[Figure: "Efficiency (Ep) for Pi approximation" — efficiency (0.5 to 1.1) plotted against processors (1, 2, 4, 8), one series each for 1,000,000, 10,000,000, and 100,000,000 trapezoids.]

The efficiency (the fraction of time a processor spends doing useful work) falls as
the number of processors increases. This is due in part to Amdahl's law: some parts
of the computation cannot be parallelized. Obvious examples are the communication
costs and the barriers that synchronize all processors. For our Pi approximation, the
computation is most efficient when the problem size (number of trapezoids) is largest
(100 million trapezoids). The efficiency does fall, but only to 0.924 (the processors
do something useful 92.4% of the time). This is because calculating the approximation
takes longer, so waiting on synchronization with other processors and on message
passing becomes negligible. In theory the biggest efficiency decrease should occur
for the smallest problem size; according to the graph, however, this is not the case:
the 10 million trapezoid problem's efficiency decreases faster than the 1 million
trapezoid problem's. The likely reason is a measurement issue: each configuration was
executed only once, so the load on the processors running the computations could have
differed between runs. In general, however, efficiency decreases more for smaller
problems and less for bigger ones.
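Amdahl's law makes this argument quantitative: with parallelizable fraction f, the speedup on p processors is bounded by S(p) = 1 / ((1 - f) + f / p). A short sketch with illustrative (not measured) fractions, showing why a larger f — as in the 100 million trapezoid runs, where computation dominates communication — keeps efficiency high:

```python
def amdahl_speedup(f, p):
    """Amdahl's-law speedup bound: the serial fraction (1 - f) never shrinks."""
    return 1.0 / ((1.0 - f) + f / p)

# Illustrative parallel fractions only -- not measured from these runs.
for f in (0.90, 0.99):
    s = amdahl_speedup(f, 8)
    print(f"f = {f}: S(8) = {s:.2f}, E(8) = {s / 8:.3f}")
```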
