
Submitted by: Lokesh Kumar Pawar 0101CS091042 Vth Sem CSE (A)

Submitted to: Prof. Manish Ahirwar Sir Department of Computer Science UIT- RGPV Bhopal

A program in execution is a process. A thread is the smallest unit of processing that can be scheduled. Threads are a subset of a process. Processes do not share resources with one another, but threads of the same process do.

An independent process cannot affect or be affected by the execution of another process. A cooperating process can affect or be affected by the execution of another process.
Advantages of process cooperation:
• Information sharing
• Computation speed-up
• Modularity
• Convenience
Dangers of process cooperation:
• Data corruption, deadlocks, increased complexity
• Requires processes to synchronize their processing

Inter-process communication (IPC) is the exchange of data between two or more separate, independent processes or threads. It is mostly used in multiprocessor and distributed systems.
[Diagram: Process 1 (sender) passes data to Process 2 (receiver)]

IPC is used for:
• Data transfer
• Sharing data
• Event notification
• Resource sharing and synchronization
• Process control

In distributed computing, two or more processes engage in IPC using a protocol agreed upon by the processes. A process may be a sender at some points during a protocol, a receiver at other points. When communication is from one process to a single other process, the IPC is said to be a unicast, e.g., Socket communication. When communication is from one process to a group of processes, the IPC is said to be a multicast.
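A unicast link can be sketched with Python's `socket.socketpair`, which creates a pair of already-connected sockets. For brevity this sketch keeps both endpoints in one process; in a real system each end would belong to a different process.

```python
import socket

# One-to-one (unicast) IPC: exactly one sender and one receiver
# per message, over a single connected link.
a, b = socket.socketpair()

a.sendall(b"hello")     # endpoint A acts as the sender...
msg = b.recv(1024)      # ...and endpoint B as the receiver
b.sendall(b"world")     # roles swap later in the protocol
reply = a.recv(1024)

a.close()
b.close()
print(msg, reply)  # b'hello' b'world'
```

Note how each endpoint is a sender at one point in the exchange and a receiver at another, as the text describes. Multicast would require one send to reach several receivers, which plain connected sockets do not provide.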

[Diagram: unicast — P1 sends message m to P2 alone; multicast — P1 sends message m to P2, P3, ..., P4]

Analogy: assembling a 1,000-piece puzzle alone takes 10 hours. Two people sharing the puzzle on the same table take 6 hours, not 5, because of communication and contention.
[Diagram: Process 1, Process 2 and Process 3 all attached to a shared memory region]

Processes share a common memory region. When one process changes the memory, all the other processes see the modification. Shared memory gives fast local communication, but we have to provide synchronization methods ourselves. It permits fast bidirectional communication among any number of processes.

Advantages:
• Globally shared memory provides a user-friendly programming perspective.
• Ease of programming for complex or dynamic communication patterns.
• Lower communication overhead and better use of bandwidth for small items.
• Memory mapping implements protection in hardware.

Disadvantages:
• Lack of scalability (adding processors changes the traffic requirement of the interconnect).
• Not easy to build big ones.
• The interconnect is usually custom built: expensive!
• Writing correct shared-memory parallel programs is not straightforward.

Message passing is an inter-process communication mechanism: data transfer plus synchronization.


[Diagram: Process 0 asks Process 1 "May I Send?"; Process 1 answers "Yes"; Process 0 then transfers the data over time]
Message-passing operations are system calls, not language constructs.


Messages can be passed in one direction at a time:
• One process is the sender and the other is the receiver.
Message passing can be bidirectional:
• Each process can act as either a sender or a receiver.
Messages can be blocking or non-blocking:
• Blocking requires the receiver to notify the sender when the message is received.
• Non-blocking enables the sender to continue with other processing.
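Both flavours can be sketched with Python's `queue.Queue`; the example stays inside one process (using threads) for brevity, but the blocking semantics carry over to process-level message passing.

```python
import queue
import threading
import time

mailbox = queue.Queue(maxsize=1)  # room for exactly one message
received = []

def receiver():
    time.sleep(0.1)                 # receiver is busy for a moment
    received.append(mailbox.get())  # blocking receive: waits for a message

t = threading.Thread(target=receiver)
t.start()

mailbox.put("report")          # blocking send: returns once there is room
overflowed = False
try:
    mailbox.put_nowait("extra")  # non-blocking send: fails fast...
except queue.Full:
    overflowed = True            # ...so the sender can do other work
t.join()
print(received, overflowed)  # ['report'] True
```

The blocking `get` parks the receiver until data arrives, while the non-blocking `put_nowait` lets the sender detect a full channel immediately and continue with other processing instead of waiting.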

In a message-passing system there are no shared variables. The IPC facility provides two operations for fixed- or variable-sized messages:
• send(message)
• receive(message)

If processes P and Q wish to communicate, they need to:
• establish a communication link
• exchange messages via send and receive

Message passing is comparatively slower than shared memory.

Pros: scalable, flexible.
Cons: arguably more difficult to program than DSM (distributed shared memory).

MPI is a standard message-passing specification for vendors to implement. Its context is distributed-memory parallel computers:
• Each processor has its own memory and cannot access the memory of other processors.
• Any data to be shared must be explicitly transmitted from one processor to another.
Most message-passing programs use the single program, multiple data (SPMD) model:
• Each processor executes the same set of instructions.
• Parallelization is achieved by letting each processor operate on a different piece of data.
• SPMD is a special case of MIMD (Multiple Instruction, Multiple Data).

Small:
• Many programs can be written with only 6 basic functions.
Large:
• MPI's extensive functionality comes from its many functions.
Scalable:
• Point-to-point communication.
Flexible:
• No need to rewrite parallel programs across platforms.

Every MPI process must be able to answer: How many processes are working? What is my role (rank)? How do I send and receive data?

Factors to consider:
• Communication speed
• Communication link
• Number of processes to communicate
• Read/write operations at a particular time
• Whether processes are synchronized or not

Advantages:
The hardware can be simpler. Communication is explicit, which:
• is simpler to understand
• focuses attention on the costly aspect of parallel computation
Synchronization is associated with messages, which:
• reduces the potential for errors introduced by incorrect synchronization
It is easier to implement sender-initiated communication models, which may have some performance advantages.
