
PARALLEL PROGRAMMING MODELS

General

1. The basic von Neumann architecture lays down the basis for the sequential execution of a program. It falls short, however, when we turn to parallel computing. Parallel programming introduces additional sources of software complexity, most of which arise from giving multiple processors access to memory. When more than one processor is able to write concurrently to the same memory location, a race condition may arise, producing unpredictable results and an incorrect program; access to that memory must therefore be synchronized. An incorrect program can also arise when one processor attempts to read a memory location before its value has been set by another processor. Again, program correctness requires that access to memory be synchronized.

2. A Parallel Programming Model is a conceptual view of programs that can be compiled and executed in parallel. Its value is judged on its generality, i.e. how wide a range of problems it can express and how well those programs can be executed on different architectures.

Aspects of Parallel Programming Models

3. There are different aspects that it is necessary to understand before we discuss the various types of programming models. These are:-

a. Concurrency. The property of systems in which several computational processes are executing at the same time, potentially interacting with each other.

b. Parallelism. Computations in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently, i.e. in parallel.

c. Distributed. Tasks running on different interconnected computers.

4. With these aspects in mind, we come to two basic questions:-

a. How can we write programs that run faster on a multi-core CPU?

b. How can we write programs that do not crash on a multi-core CPU?

5. The answer lies in the different Parallel Programming Models that can be used for implementation. These models exist as an abstraction above hardware and memory architectures, and the choice of model is usually governed by the available resources. There is no "best" model, although there certainly are better implementations of some models than of others. Most commonly, they are divided into two main categories:-

a. Process Interaction.

b. Problem Decomposition.

6. Process Interaction. Process interaction refers to the ways through which a number of parallel processes communicate with each other. The most common forms of interaction are Shared Memory and Message Passing, but interaction can also be Implicit or hybrid.

a. Shared Memory

(1) In the shared-memory programming model, tasks share a common address space, which they read and write to asynchronously.
(2) Shared-state concurrency involves the idea of mutable state (literally, memory that can be changed). This is fine as long as only one process or thread is doing the changing.

(3) If multiple processes share and modify the same memory without coordination, they are bound to corrupt data and eventually crash the system. To protect against the simultaneous modification of shared memory, a locking mechanism is used, which may be termed a mutex, a synchronized method or simply a lock.

(4) An advantage of this model from the programmer's point of view is that there is no data "ownership", so there is no need to specify explicitly the communication of data between tasks. This simplifies program development.

(5) A disadvantage in terms of performance is that it becomes more difficult to understand and manage data locality.

b. Distributed Memory

(1) Also termed Message Passing, in this model there is no shared state. All computations are done inside the processes or threads, and the only way to exchange data is through asynchronous message passing.

(2) Essentially, it consists of a set of tasks that use their own local memory during computation. Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines.

(3) Data is exchanged between tasks by sending and receiving messages. Data transfer usually requires cooperative operations to be performed by each process; for example, a send operation must have a matching receive operation.

(4) From a programming perspective, message-passing implementations commonly comprise a library of subroutines that are embedded in the source code. The programmer is responsible for determining all parallelism.

c. Implicit. In an implicit model, no process interaction is visible to the programmer; instead the compiler is responsible for performing it. The programming language allows a compiler or interpreter to automatically exploit the parallelism inherent in the computations expressed by some of the language's constructs.

7. Problem Decomposition. Any parallel program is composed of simultaneously executing processes, and Problem Decomposition relates to the way in which these processes are identified and used. Parallelism may be exploited in tasks/threads, in data, or implicitly.

a. Task-Parallel Model. Also known as the Threads Model, a task-parallel model focuses on processes or threads of execution and extracts parallelism therein.

(1) In this model, a single process can have multiple, concurrent execution paths. The simplest analogy that can be used to describe threads is a single program that includes a number of subroutines.

(2) The main program loads and acquires all of the necessary system and user resources to run. It performs some serial work, and then creates a number of tasks (threads) that can be scheduled and run by the OS concurrently.

(3) Each thread has local data, but also shares the entire resources of the main program, instead of those resources being replicated for each thread.

(4) Threads communicate with each other through global memory (by updating address locations). This requires synchronization constructs to ensure that no more than one thread is updating the same global address at any time.

(5) Task parallelism is a natural way to express message-passing communication. It is usually classified as MIMD/MPMD or MISD.
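Points (2) to (4) can be illustrated with a short Python sketch; the thread names and data are invented for the example:

```python
import threading

shared_results = {}          # resource of the main program, visible to all threads
lock = threading.Lock()

def subroutine(name, data):
    local_total = sum(data)  # local data, private to this thread
    with lock:               # synchronize updates to shared (global) memory
        shared_results[name] = local_total

# Serial part: the main program sets up the work.
chunks = {"t0": [1, 2], "t1": [3, 4], "t2": [5, 6]}

# Parallel part: threads are scheduled and run concurrently by the OS.
threads = [threading.Thread(target=subroutine, args=item) for item in chunks.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_results)        # {'t0': 3, 't1': 7, 't2': 11}
```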

b. Data-Parallel Model

(1) A data-parallel model focuses on performing operations on a data set, usually one regularly structured as an array.

(2) A set of tasks operates on this data, but each task works independently on a separate partition of it.

(3) Tasks perform the same operation on their partition of the work, for example, "add 4 to every array element".

(4) In a shared-memory system, the data is accessible to all tasks; in a distributed-memory system it is divided into chunks between memories and worked on locally.
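The "add 4 to every array element" example from (3) might look like this in Python; the partition size and worker count are arbitrary choices for the sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def add_four(partition):
    # Every task performs the same operation on its own partition.
    return [x + 4 for x in partition]

data = list(range(8))                                     # the structured data set
chunks = [data[i:i + 2] for i in range(0, len(data), 2)]  # one partition per task

with ThreadPoolExecutor(max_workers=4) as pool:
    pieces = list(pool.map(add_four, chunks))             # same op, different data

result = [x for piece in pieces for x in piece]
print(result)   # [4, 5, 6, 7, 8, 9, 10, 11]
```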

c. Implicit. As with process interaction, in an implicit model no parallelism is visible to the programmer; the compiler or interpreter is responsible for automatically exploiting the parallelism inherent in the computations expressed by some of the language's constructs.

8. Other Categories. Other types of Parallel Programming Models also exist, including Hybrid, Single Program Multiple Data (SPMD) and Multiple Program Multiple Data (MPMD). These are briefly described below:-

a. Hybrid

(1) In this model, any two or more parallel programming models are combined.

(2) A common example of a hybrid model is the combination of the Message Passing Model (MPI) with either the Threads Model (POSIX threads) or the Shared Memory Model (OpenMP).

(3) Another common example of a hybrid model is combining Data Parallel with Message Passing.

b. Single Program Multiple Data (SPMD)

(1) SPMD is actually a "high-level" programming model that can be built upon any combination of the previously mentioned parallel programming models.

(2) A single program is executed by all tasks simultaneously. At any moment in time, tasks can be executing the same or different instructions within the same program.

(3) SPMD programs usually have the necessary logic programmed into them to allow different tasks to branch or conditionally execute selected parts of the program, and all tasks may use different data.

c. Multiple Program Multiple Data (MPMD)

(1) Like SPMD, MPMD is actually a "high-level" programming model that can be built upon any combination of the previously mentioned parallel programming models.

(2) MPMD applications typically have multiple executable object files (programs). While the application is being run in parallel, each task can be executing the same or a different program as the other tasks, and all tasks may use different data.
