
Interprocess synchronization

Concurrent Processes
Concurrency refers to the overlapped or parallel execution of programs. The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. A process is cooperating if it can affect or be affected by the other processes executing in the system.

Why cooperate?
Cooperation is required for several reasons: information sharing, computation speedup, modularity, and convenience.

Forms of inter process interaction


There are three forms of explicit inter-process interaction:
Inter-process synchronization - the set of protocols and mechanisms used to preserve system integrity and consistency when concurrent processes share resources; shared data can be corrupted if it is accessed concurrently.
Inter-process signaling - the exchange of timing signals among concurrent processes or threads, used to coordinate their collective progress.
Inter-process communication - concurrent cooperating processes must communicate with each other to exchange data.

Need for inter process synchronization


Concurrent access to shared data may result in data inconsistency. Cooperating processes must synchronize with each other when they use shared resources or shared variables (variables that can be referenced by more than one process). Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

void echo() {
    chin = getchar();
    chout = chin;
    putchar(chout);
}

1. Process P1 invokes the echo procedure and is interrupted immediately after getchar returns its value and stores it in chin. At this point, the most recently entered character, x, is stored in variable chin.

2. Process P2 is activated and invokes the echo procedure, which runs to conclusion, inputting and then displaying a single character, y, on the screen.

3. Process P1 is resumed. By this time, the value x has been overwritten in chin and therefore lost. Instead, chin contains y, which is transferred to chout and displayed.

Race condition
A situation where several processes access and manipulate shared data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition. To prevent race conditions, concurrent processes must be synchronized: to guard against the race condition we need to ensure that only one process at a time accesses the shared data.

Suppose that two processes, P1 and P2, share the global variable a. At some point in its execution, P1 updates a to the value 1, and at some point in its execution, P2 updates a to the value 2. Thus, the two tasks are in a race to write variable a: the process that updates last determines the final value of a.

The Critical-Section problem


Each process has a segment of code, called a critical section, in which the shared data is accessed. When one process is executing in its critical section, no other process is to be allowed to execute in its critical section. The critical-section problem is to design a protocol that the processes can use to cooperate.

A Critical Section Environment contains:


Entry Section - code requesting entry into the critical section; each process must request permission to enter its critical section.
Critical Section - code in which only one process can execute at any one time.
Exit Section - the end of the critical section; code that releases the critical section, allowing others in.
Remainder Section - the rest of the code, after the critical section.

General structure of process Pi

A solution to the critical-section problem must satisfy three requirements:
1. Mutual Exclusion - if process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.

3. Bounded Waiting - a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. We can make no assumption concerning the relative speeds of the n processes.

Mutual exclusion
Mutual exclusion is a way of making sure that if one process is using shared modifiable data, the other processes are excluded from doing the same thing. While one process accesses the shared variable, all other processes desiring to do so at the same time are kept waiting; when that process has finished with the shared variable, one of the waiting processes is allowed to proceed. In this fashion, each process accessing the shared data (variables) excludes all others from doing so simultaneously.

Mutual exclusion needs to be enforced only when processes access shared modifiable data. When processes are performing operations that do not conflict with one another, they should be allowed to proceed concurrently. A mutual exclusion policy ensures that only one process at a time is in its critical section.

For instance, two processes, P1 and P2, are running on the same computer. Process P1 enters its critical section, and before it is done process P2 is scheduled to run. P2 runs until it reaches its critical section, but may go no further until P1 has exited its critical section. In this case, P2 is blocked until P1 leaves its critical section.

Requirements for Mutual Exclusion


Any solution to the mutual exclusion problem should meet these requirements:
1. Only one process is allowed in the CS at a time.
2. No assumptions are made regarding the relative speeds or the number of CPUs.
3. A process remains in its CS for a finite time only.
4. A process requesting access to the CS should not wait indefinitely.
5. A process waiting to enter the CS must not block a process in the CS, or any other process.

Mutual Exclusion Algorithms Two-Process Solutions


These algorithms are applicable to only two processes at a time.
Algorithm 1: The processes are numbered 0 and 1 and share a common integer variable turn, initialized to 0 (or 1). If turn == i, then process Pi is allowed to execute in its critical section.

Structure of Process Pi
do {
    while (turn != i)
        ;               /* busy-wait until it is our turn */
    /* critical section */
    turn = j;
    /* remainder section */
} while (1);

If Pj is in the CS, Pi waits: mutual exclusion is satisfied. The progress requirement is not satisfied, however, since the algorithm requires strict alternation of the CS. For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so, even though P0 may be in its remainder section. The algorithm does not keep sufficient information about the state of each process; it remembers only which process is allowed to enter its critical section next.

Algorithm 2
Algorithm 2 replaces the variable turn with the array boolean flag[2], whose elements are initialized to false. Pi is ready to enter its critical section if flag[i] == true.

Structure of process Pi in Algorithm 2:
do {
    flag[i] = true;
    while (flag[j])
        ;               /* wait while Pj is ready */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);

Here process Pi first sets flag[i] to true, signaling that it is ready to enter its critical section. Then Pi checks that process Pj is not also ready to enter its critical section. If Pj were ready, Pi would wait until Pj exits its critical section (i.e., until flag[j] is false). At that point, Pi would enter the critical section. On exiting the critical section, Pi sets flag[i] to false, allowing the other process (if it is waiting) to enter its critical section. The mutual-exclusion requirement is satisfied, but the progress requirement is not: if both processes set their flags to true at the same time, each waits for the other indefinitely.

Algorithm 3 (Peterson's Solution)


Peterson's solution combines Algorithm 1 and Algorithm 2. The processes share two variables: int turn; boolean flag[2]. The variable turn indicates whose turn it is to enter the critical section: if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate whether a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.

Structure of process Pi in Algorithm 3:
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;               /* wait while Pj is ready and it is Pj's turn */
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur but will be overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its critical section first. This algorithm meets all three requirements of the critical-section problem.

Bakery Algorithm
The purpose of this algorithm is to provide mutually exclusive access to the CS for a collection of N processes. The basic idea is that of a bakery: customers take numbers, and whoever holds the lowest number gets served next. Before entering its critical section, a process receives a number (as in a bakery), and the holder of the smallest number enters the critical section. The algorithm cannot guarantee that two processes do not receive the same number. If processes Pi and Pj receive the same number and i < j, then Pi is served first; otherwise Pj is served first.

The data structures are boolean choosing[n]; int number[n]; initialized to false and 0, respectively. Here, choosing[i] is true if Pi is choosing a number, and the number that Pi will use to enter the critical section is in number[i].

do {
    choosing[i] = true;
    number[i] = max(number[0], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)))   /* lexicographic order */
            ;
    }
    /* critical section */
    number[i] = 0;
    /* remainder section */
} while (1);

Each process i executes two phases before entering its critical section, in which it can access the resource. First, it chooses a ticket number greater than zero: it sets a bit called choosing[i] to tell other processes it is in the choosing phase, loops through all process indices to find the maximum ticket number held by any process, picks a number[i] for itself that is greater than the maximum value it saw, and clears its choosing bit. Then the process that enters the critical section is selected: Pi waits until it has the lowest number of all the processes waiting to enter the critical section; if two processes have the same number, the one with the smaller index goes first. On leaving, Pi is no longer interested in entering its critical section, so it sets number[i] to 0.

Hardware implementation to mutual exclusion

Synchronization Hardware
The critical-section problem has both software and hardware solutions.

Drawbacks of software solutions: processes requesting to enter their critical section busy-wait (consuming processor time needlessly); if critical sections are long, it would be more efficient to block the waiting processes; and software solutions are very delicate. Hardware solutions to the critical-section problem were therefore introduced. Hardware features can make the programming task easier and improve system efficiency. On a uniprocessor system, one can simply disable interrupts, so that the currently running code executes without preemption; but this solution is not feasible for multiprocessor systems.

Disabling Interrupts
The simplest solution is to have each process disable all interrupts just after entering its CS and re-enable them just before leaving it. With interrupts disabled, the processor cannot switch to another process.

enter_region() { disable_interrupts(); }
leave_region() { enable_interrupts(); }

This approach is generally unattractive because it is unwise to give user processes the power to turn off interrupts: if one of them did so and never turned them on again, that could be the end of the system. If the system has more than one processor, the method fails as well, since a process can disable the interrupts only of the processor on which it is executing.

Test and set instruction


The test-and-set (TS) instruction provides direct hardware support for mutual exclusion: TestAndSet atomically returns the old value of its operand and sets the operand to true.

Shared data: boolean lock = false;

Process Pi:
do {
    while (TestAndSet(lock))
        ;               /* spin until the old value was false */
    /* critical section */
    lock = false;
    /* remainder section */
} while (1);

Semaphore

Semaphore
A semaphore is a synchronization tool that does not require busy waiting. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. These operations were originally termed P (for wait) and V (for signal).

The definition of wait() is:
wait(S) {
    while (S <= 0)
        ;
    S--;
}

The definition of signal() is:
signal(S) {
    S++;
}

When one process modifies a semaphore value, no other process can simultaneously modify the same semaphore value.

Mutual exclusion implementation with semaphores


do {
    wait(mutex);
    /* critical section */
    signal(mutex);
    /* remainder section */
} while (true);

Semaphore Implementation with no Busy waiting


With each semaphore there is an associated waiting queue. In addition, two simple utility operations are used: block() suspends the process that invokes it, and wakeup(P) resumes the execution of a blocked process P.

When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. Instead of busy waiting, the process can block itself: the block() operation places the process into a waiting queue associated with the semaphore, and its state is changed to waiting. Control is then transferred to the CPU scheduler, which selects another process to execute. A process that is blocked, waiting on semaphore S, should be restarted when some other process executes a signal() operation. The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state and places it in the ready queue.

wait(S) {
    S.value--;
    if (S.value < 0) {
        add this process to S's waiting queue;
        block();
    }
}

signal(S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S's waiting queue;
        wakeup(P);
    }
}

implementation
To implement semaphores under this definition, we define a semaphore as a C struct:

typedef struct {
    int value;
    struct process *list;
} semaphore;

Each semaphore has an integer value and a list of processes. When a process must wait on a semaphore, it is added to the list of processes. A signal() operation removes one process from the list of waiting processes and awakens it.

Types of semaphores
Counting semaphore - the integer value can range over an unrestricted domain.
Binary semaphore - the integer value can range only between 0 and 1; binary semaphores can be simpler to implement.
A counting semaphore S can be implemented using binary semaphores.

Binary semaphores
waitB(S):
    if (S.value == 1)
        S.value = 0;
    else {
        block this process;
        place this process in S.queue;
    }

signalB(S):
    if (S.queue is empty)
        S.value = 1;
    else {
        remove a process P from S.queue;
        place process P on the ready list;
    }

Classical Problems of Synchronization

Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem(producer consumer problem)


The buffer is an intermediate storage area of N slots, and one data item can be stored in each slot. The producer process inserts data items into the buffer, in the first available empty slot; the consumer removes data items from the full slots.

Examples: (1) a print program produces characters that are consumed by the printer driver; (2) an assembler produces object modules that are consumed by a loader. To allow producer and consumer processes to run concurrently, a buffer of items that can be filled by the producer and emptied by the consumer must be available. Producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced; the consumer must wait until an item is produced. The buffer is coded by the application programmer with the use of shared memory.

An unbounded buffer places no practical limit on the size of the buffer: the consumer may have to wait for new items, but the producer can always produce new items. A bounded buffer assumes a fixed buffer size: the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

Bounded buffer problem using semaphores


Three semaphores are used: mutex, empty, and full. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to 1. The empty and full semaphores count the number of empty and full buffer slots; empty is initialized to n and full is initialized to 0.

Structure of the producer process


while (true) {
    /* generate a new item into nextp */
    wait(empty);
    wait(mutex);
    /* add nextp to the buffer */
    signal(mutex);
    signal(full);
}

Structure of the consumer process


do {
    wait(full);
    wait(mutex);
    /* remove an item from the buffer into nextc */
    signal(mutex);
    signal(empty);
    /* consume the item in nextc */
} while (1);

READERS-WRITERS PROBLEM
Some processes may want only to read the content of a shared object, whereas others may want to update (i.e., read and write) it. We distinguish between these two types of processes by referring to those interested only in reading as readers, and to the rest as writers. Any number of readers can use the shared data concurrently, but writers must be granted exclusive access to the shared data.

EXAMPLE: AIRLINE RESERVATION


Readers are those who want flight information; they don't modify it, so many readers can be active at the same time with no need for mutual exclusion. Writers are those who are making reservations on a particular flight; mutual exclusion must be enforced between a group of readers and a writer, or between several writers, because writers modify existing data in the database. Of course, the system must be fair when it enforces this policy, to avoid indefinite postponement of readers or writers.

Solution 1: readers have priority; if a reader is in the CS, any number of readers may enter, irrespective of any writer waiting to enter the CS.
Solution 2: if a writer wants the CS, it enters as soon as the CS is available.
Starvation can occur with both solutions: in the first case a writer may starve, in the second case a reader may starve.

Two semaphores

Two semaphores are used, mutex and wrt, both initialized to 1, plus one integer variable readcount, initialized to 0. wrt is common to both readers and writers. The mutex semaphore ensures mutual exclusion when the variable readcount is updated; readcount keeps track of how many processes are currently reading the object.

The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also used by the first reader that enters and the last reader that exits the critical section; it is not used by readers who enter or exit while other readers are in their critical sections. If a writer is in the critical section and n readers are waiting, then one reader is queued on wrt, and n - 1 readers are queued on mutex. Also, when a writer executes signal(wrt), we may resume the execution of either the waiting readers or a single waiting writer; the selection is made by the scheduler.

Structure of reader process


do {
    wait(mutex);               /* allow one reader into the entry section */
    readcount = readcount + 1;
    if (readcount == 1)
        wait(wrt);             /* first reader locks out writers */
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount = readcount - 1;
    if (readcount == 0)
        signal(wrt);           /* last reader frees writers */
    signal(mutex);
} while (TRUE);

Structure of writer process


do {
    wait(wrt);
    /* writing is performed */
    signal(wrt);
} while (TRUE);

Dining-Philosophers Problem

Consider five philosophers who spend their lives thinking and eating. The philosophers share a common circular table, surrounded by five chairs, each belonging to one philosopher. In the centre of the table there is a bowl of food, and the table is laid with five single forks. A fork is placed between each pair of philosophers, so each philosopher has one fork to his or her left and one fork to his or her right.

It is assumed that a philosopher must eat with two forks and can only use the forks on his or her immediate left and right. Obviously, a philosopher cannot pick up a fork that is already in the hands of a neighbor. When a hungry philosopher has both forks at the same time, he eats without releasing them.

When he is finished eating, he puts down both of his forks, and starts thinking again.

This is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

solution
Represent each fork (chopstick) with a semaphore. A philosopher tries to grab a fork by executing a wait() operation on that semaphore, and releases the forks by executing signal() operations on the appropriate semaphores. Thus the shared data are semaphore chopstick[5]; all elements of chopstick are initialized to 1, indicating that the forks are free.

The structure of philosopher i:
while (true) {
    wait(chopstick[i]);               /* pick up left fork */
    wait(chopstick[(i + 1) % 5]);     /* pick up right fork */
    /* eat */
    signal(chopstick[i]);             /* put back left fork */
    signal(chopstick[(i + 1) % 5]);   /* put back right fork */
    /* think */
}

Monitors
A monitor is a high-level synchronization tool and a predecessor of the class concept. A monitor is a collection of procedures, variables, and data structures that are all grouped together in a special kind of module or package. The monitor ensures that only one process can execute a monitor operation at any time.

characteristics of a monitor
A process enters the monitor by invoking one of its procedures The local data variables are accessible only by the monitor's procedures and not by any external procedure. Only one process may be executing in the monitor at a time; any other process that has invoked the monitor is blocked, waiting for the monitor to become available.

General monitor structure


monitor monitor-name {
    /* variable declarations */
    procedure P1() { . . . }
    procedure P2() { . . . }
    ...
    procedure Pn() { . . . }
    /* initialization of monitor data */
}

The monitor supports synchronization through condition variables, which are accessible only within the monitor. To allow a process to wait within the monitor, a condition variable must be declared, as in: condition x, y; A condition variable can only be used with the operations wait and signal. The operation x.wait() means that the process invoking it is suspended until another process invokes x.signal(). The x.signal() operation resumes exactly one suspended process; if no process is suspended, the signal operation has no effect. Thus x.wait() makes the calling process wait on x's queue, and x.signal() wakes up one process from x's queue.

Schematic View of a Monitor

Monitor With Condition Variables

INTER-PROCESS COMMUNICATION

Inter-Process Communication

Cooperating processes require an IPC (inter-process communication) mechanism that allows them to exchange data and information.

IPC can be based on the use of shared variables, i.e., variables that can be referenced by more than one process.

There are two fundamental models of IPC: shared memory and message passing.

Shared memory systems


Shared memory: a region of memory that is shared by cooperating processes is established, and processes exchange information by reading and writing data in the shared region. The shared-memory region resides in the address space of the process creating the shared-memory segment; processes that wish to communicate using this segment must attach it to their own address space. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.

Message-Passing Systems
A message-passing facility provides at least two operations: send(message) and receive(message). Every data item sent by a sender is copied from the sender process's address space into kernel space, from where the receiver process copies the data item into its own address space.

For example, if P0 wants to send a message to P1, P0 executes a send system call, which copies the message into a buffer, and waits for the execution of a receive call by process P1. When P1 executes a receive system call, the OS delivers the message to P1. Messages can be of fixed or variable size. A communication link must exist between processes P and Q for them to send and receive messages from each other. There are several ways to establish the communication link physically: shared memory, a hardware bus, etc.

Methods for logically implementing a link:
Direct or indirect communication
Synchronous or asynchronous communication
Automatic or explicit buffering
Send by copy or send by reference
Fixed-size or variable-size messages

Naming
Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication.

Direct communication: each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are defined as:
send(P, message) - send a message to process P
receive(Q, message) - receive a message from process Q

A communication link in this scheme has the following properties: a link is established automatically between every pair of processes that want to communicate, and the processes need to know only each other's identity to communicate; a link is associated with exactly two processes; and exactly one link exists between each pair of processes.

Communication can be symmetric or asymmetric.
Symmetric: both the sender and the receiver processes must name the other to communicate.
Asymmetric: only the sender names the recipient; the recipient is not required to name the sender:
send(P, message) - send a message to process P
receive(id, message) - receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

Indirect Communication
With indirect communication, messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. In this scheme, two processes can communicate only if they share a mailbox, and a process can communicate with several other processes via a number of different mailboxes.

In this scheme, a communication link has the following properties: a link is established between a pair of processes only if both members of the pair share a mailbox; a link may be associated with many processes; and a number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.

buffering
Whether the communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. This queue can be implemented in three ways:
Zero capacity: the queue has maximum length 0, so the link cannot hold any waiting messages. In this case, the sender must block until the recipient receives the message.
Bounded capacity: the queue has finite length n, so at most n messages can reside in it.

If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting; if the queue is full, the sender must block until space is available.
Unbounded capacity: the queue has potentially infinite length, so any number of messages can wait in it and the sender never blocks.

The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are referred to as automatic buffering.

Message length

Message length is either fixed or variable. Fixed-size messages have low overhead, because the related buffers can be of fixed size; the problem is that messages come in different lengths, and fitting a larger message into smaller chunks adds overhead: a short message wastes buffer space, and a long message must be split and sent in installments. The alternative is variable-length messages, dynamically creating buffers to fit the size of each individual message; but allocation of variable-size buffers is costly.
