
ANSWER PAST YEAR JANUARY 2013 CST 331

(1)
a)

#include <stdio.h>

int main()
{
    int i;
    int sum = 0, sum1 = 0, sum2 = 0;
    int x[] = {2,4,6,8,10,12,14,16,18,20};
    int y[] = {1,3,5,7,9,11,13,15,17,19};
    /* accumulate the sum of squares of each array */
    for (i = 0; i <= 9; i++)
    {
        sum += x[i] * x[i];
        sum1 += y[i] * y[i];
    }
    sum2 = sum + sum1;
    printf("Cumulative sum of the two arrays is: %d\n", sum2);
    return 0;
}

b)

#include <stdio.h>
#include <omp.h>
int main()
{
    int i;
    int sum = 0, sum1 = 0, sum2 = 0;
    int x[] = {2,4,6,8,10,12,14,16,18,20};
    int y[] = {1,3,5,7,9,11,13,15,17,19};
    /* reduction(+:...) gives each thread private copies of sum and sum1
       and combines them after the loop; plain private/firstprivate
       copies would be discarded at the end of the parallel region */
    #pragma omp parallel for reduction(+:sum, sum1)
    for (i = 0; i <= 9; i++)
    {
        sum += x[i] * x[i];
        sum1 += y[i] * y[i];
    }
    sum2 = sum + sum1;
    printf("Cumulative sum of the two arrays is: %d\n", sum2);
    return 0;
}

c)

(2)
a)

Data parallelism or domain decomposition: the data are divided into pieces of approximately the same size and then mapped to different processors. Each processor then works only on the portion of the data assigned to it.
Task parallelism or functional decomposition: the problem is decomposed into a large number of smaller tasks, and the tasks are assigned to the processors as they become available. Processors that finish quickly are simply assigned more work.
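
A minimal OpenMP sketch contrasting the two (the arrays and the operations on them are illustrative, not from the question):

#include <stdio.h>
#include <omp.h>

#define N 8

int main(void)
{
    int a[N] = {1,2,3,4,5,6,7,8};
    int b[N] = {8,7,6,5,4,3,2,1};

    /* Data parallelism: the iterations (and hence the elements of a)
       are divided among the threads, each handling its own portion. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = a[i] * a[i];

    /* Task parallelism: two independent units of work run concurrently,
       each section taken by whichever thread becomes available. */
    #pragma omp parallel sections
    {
        #pragma omp section
        for (int i = 0; i < N; i++)
            a[i] = a[i] + 1;

        #pragma omp section
        for (int i = 0; i < N; i++)
            b[i] = b[i] * b[i];
    }

    printf("a[0] = %d, b[0] = %d\n", a[0], b[0]);
    return 0;
}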

b)
c)
(3)
a)

Race condition: a situation where a device or system attempts to perform two or more operations at the same time but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
Critical section: a piece of code that accesses a shared resource (data structure or device) that must not be accessed simultaneously by more than one thread of execution.
Mutual exclusion: the requirement that no two concurrent processes are in their critical sections at the same time. It is a basic requirement in concurrency control, used to prevent race conditions.
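
A minimal sketch tying the three terms together (the shared counter and thread count are illustrative):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    int counter = 0;    /* shared resource */

    #pragma omp parallel num_threads(4)
    {
        for (int i = 0; i < 100000; i++)
        {
            /* The increment is a read-modify-write; without mutual
               exclusion the threads race and updates are lost. The
               critical construct makes this a critical section that
               only one thread executes at a time. */
            #pragma omp critical
            counter = counter + 1;
        }
    }
    printf("counter = %d\n", counter);   /* always 400000 */
    return 0;
}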

b)
OpenMP                Pthreads
omp_init_lock         pthread_mutex_init
omp_destroy_lock      pthread_mutex_destroy
omp_set_lock          pthread_mutex_lock
omp_unset_lock        pthread_mutex_unlock
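
The Pthreads side in use, sketched with an illustrative shared counter (same pattern as the OpenMP example above):

#include <stdio.h>
#include <pthread.h>

static int counter = 0;          /* shared resource */
static pthread_mutex_t lock;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
    {
        pthread_mutex_lock(&lock);     /* enter the critical section */
        counter = counter + 1;
        pthread_mutex_unlock(&lock);   /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    pthread_mutex_init(&lock, NULL);
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    pthread_mutex_destroy(&lock);
    printf("counter = %d\n", counter); /* always 400000 */
    return 0;
}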

c)
(4)
a)

Speedup = T1 / Tp
where T1 is the execution time of the sequential algorithm
and Tp is the execution time of the parallel algorithm with p processors.
- Speedup is the ratio of the serial run time of the best sequential algorithm for solving a problem to the time taken by the parallel algorithm to solve the same problem on p processors.
Efficiency = Speedup / p
where p is the number of processors.
- Efficiency is the fraction of time for which a processor is usefully employed.
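
A quick worked example with illustrative numbers (not from the exam): if the best serial program takes T1 = 12 s and the parallel version on p = 4 processors takes Tp = 4 s, then Speedup = 12 / 4 = 3 and Efficiency = 3 / 4 = 0.75, i.e. each processor is usefully employed 75% of the time.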

b)

Speedup:

Number of processors    1      2      3      4
OpenMP                  1      1.80   2.74   3.72
MPI                     1      1.87   2.62   3.12

Efficiency:

Number of processors    1      2      3      4
OpenMP                  1      0.9    0.91   0.93
MPI                     1      0.935  0.873  0.78
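
Each efficiency entry is the corresponding speedup divided by the number of processors (Efficiency = Speedup / p), e.g. 3.72 / 4 = 0.93 for OpenMP on 4 processors.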
c)
d)

Four steps in designing a parallel program:

1. Partitioning: the computation to be performed and the data operated on by that computation are decomposed into small tasks. The focus is on recognizing opportunities for parallel execution.
2. Communication: the communication required to coordinate task execution is determined, and appropriate communication structures and algorithms are defined.
3. Agglomeration: the tasks and communication structures are evaluated with respect to performance and implementation cost; tasks may be combined into larger ones.
4. Mapping: tasks are assigned to processors with the aim of maximizing processor utilization and minimizing communication cost.
How do they affect speedup and efficiency?
- Partitioning and mapping determine how evenly the work is balanced across the processors, while agglomeration reduces communication overhead; a well-balanced load with little communication pushes speedup toward p and efficiency toward 1, whereas load imbalance or excessive communication lowers both.
