
Producer–consumer problem

The producer–consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem.

The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate a piece of data, put it into the buffer and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.

The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer. The solution can be reached by means of inter-process communication, typically using semaphores. An inadequate solution could result in a deadlock where both processes are waiting to be awakened. The problem can also be generalized to have multiple producers and consumers.

Implementations

Inadequate implementation

To solve the problem, a less than perfect programmer might come up with a solution shown below. In the solution two library routines are used, sleep and wakeup. When sleep is called, the caller is blocked until another process wakes it up by using the wakeup routine. The global variable itemCount holds the number of items in the buffer.

int itemCount = 0;

procedure producer() {
    while (true) {
        item = produceItem();

        if (itemCount == BUFFER_SIZE) {
            sleep();
        }

        putItemIntoBuffer(item);
        itemCount = itemCount + 1;

        if (itemCount == 1) {
            wakeup(consumer);
        }
    }
}

procedure consumer() {
    while (true) {
        if (itemCount == 0) {
            sleep();
        }

        item = removeItemFromBuffer();
        itemCount = itemCount - 1;

        if (itemCount == BUFFER_SIZE - 1) {
            wakeup(producer);
        }

        consumeItem(item);
    }
}

The problem with this solution is that it contains a race condition that can lead to a deadlock. Consider the following scenario:

1. The consumer has just read the variable itemCount, noticed it's zero and is just about to move inside the if block.
2. Just before calling sleep, the consumer is interrupted and the producer is resumed.
3. The producer creates an item, puts it into the buffer, and increases itemCount.
4. Because the buffer was empty prior to the last addition, the producer tries to wake up the consumer.
5. Unfortunately the consumer wasn't yet sleeping, and the wakeup call is lost. When the consumer resumes, it goes to sleep and will never be awakened again. This is because the consumer is only awakened by the producer when itemCount is equal to 1.
6. The producer will loop until the buffer is full, after which it will also go to sleep.

Since both processes will sleep forever, we have run into a deadlock. This solution therefore is unsatisfactory.

An alternative analysis is that if the programming language does not define the semantics of concurrent accesses to shared variables (in this case itemCount) without the use of synchronization, then the solution is unsatisfactory for that reason alone, without needing to explicitly demonstrate a race condition.

Using semaphores

Semaphores solve the problem of lost wakeup calls. In the solution below we use two semaphores, fillCount and emptyCount, to solve the problem. fillCount is the number of items already in the buffer and available to be read, while emptyCount is the number of available spaces in the buffer where items could be written. fillCount is incremented and emptyCount decremented when a new item is put into the buffer. If the producer tries to decrement emptyCount when its value is zero, the producer is put to sleep. The next time an item is consumed, emptyCount is incremented and the producer wakes up. The consumer works analogously.

semaphore fillCount = 0;            // items produced
semaphore emptyCount = BUFFER_SIZE; // remaining space

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        putItemIntoBuffer(item);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        item = removeItemFromBuffer();
        up(emptyCount);
        consumeItem(item);
    }
}

The solution above works fine when there is only one producer and one consumer. With multiple producers sharing the same memory space for the item buffer, or multiple consumers sharing the same memory space, this solution contains a serious race condition that could result in two or more processes reading or writing into the same slot at the same time. To understand how this is possible, imagine how the procedure putItemIntoBuffer() can be implemented. It could contain two actions, one determining the next available slot and the other writing into it (a sketch of such an implementation follows the list below). If the procedure can be executed concurrently by multiple producers, then the following scenario is possible:

1. Two producers decrement emptyCount.
2. One of the producers determines the next empty slot in the buffer.
3. The second producer determines the next empty slot and gets the same result as the first producer.
4. Both producers write into the same slot.
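To make the race concrete, here is a minimal sketch of how putItemIntoBuffer() could be written as exactly those two actions. The names nextIn and item_t are hypothetical, introduced only for this illustration; they do not appear in the solution above.

#define BUFFER_SIZE 10
typedef int item_t;                    /* placeholder item type for this sketch */

item_t buffer[BUFFER_SIZE];
int nextIn = 0;                        /* index of the next free slot */

void putItemIntoBuffer(item_t item) {
    int slot = nextIn;                 /* action 1: determine the next available slot */
    buffer[slot] = item;               /* action 2: write into it */
    nextIn = (slot + 1) % BUFFER_SIZE; /* a second producer preempted between the two */
                                       /* actions reads the same nextIn and later     */
                                       /* overwrites the same slot                    */
}

The mutual exclusion introduced next closes the window between these two actions.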

To overcome this problem, we need a way to make sure that only one producer is executing putItemIntoBuffer() at a time. In other words, we need a way to execute a critical section with mutual exclusion. To accomplish this we use a binary semaphore called mutex. Since the value of a binary semaphore can only be either one or zero, only one process can be executing between down(mutex) and up(mutex). The solution for multiple producers and consumers is shown below.

semaphore mutex = 1;
semaphore fillCount = 0;
semaphore emptyCount = BUFFER_SIZE;

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        down(mutex);
        putItemIntoBuffer(item);
        up(mutex);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        down(mutex);
        item = removeItemFromBuffer();
        up(mutex);
        up(emptyCount);
        consumeItem(item);
    }
}

Notice that the order in which different semaphores are incremented or decremented is essential: changing the order might result in a deadlock, as the sketch below illustrates.
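As an illustration of why the ordering matters, the following sketch swaps the two down operations in the producer, rendered here with POSIX semaphores (sem_wait/sem_post stand in for down/up; produceItem and putItemIntoBuffer are assumed to exist as in the pseudocode above, and item_t is a placeholder type).

#include <semaphore.h>

typedef int item_t;                /* placeholder item type for this sketch */
item_t produceItem(void);          /* assumed, as in the pseudocode above */
void putItemIntoBuffer(item_t);    /* assumed, as in the pseudocode above */

sem_t mutex;                       /* initialized elsewhere to 1 */
sem_t fillCount;                   /* initialized elsewhere to 0 */
sem_t emptyCount;                  /* initialized elsewhere to BUFFER_SIZE */

/* WRONG: the mutex is taken before waiting for free space. With a full   */
/* buffer this thread sleeps in sem_wait(&emptyCount) while holding the   */
/* mutex; every consumer then blocks in sem_wait(&mutex), emptyCount is   */
/* never posted, and both sides wait forever.                             */
void producer_wrong_order(void) {
    while (1) {
        item_t item = produceItem();
        sem_wait(&mutex);          /* swapped with the line below */
        sem_wait(&emptyCount);
        putItemIntoBuffer(item);
        sem_post(&mutex);
        sem_post(&fillCount);
    }
}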

Using monitors

The following pseudocode shows a solution to the producer–consumer problem using monitors. Since mutual exclusion is implicit with monitors, no extra effort is necessary to protect the critical section. In other words, the solution shown below works with any number of producers and consumers without any modifications. It is also noteworthy that using monitors makes race conditions much less likely than when using semaphores.[citation needed]

monitor ProducerConsumer {
    int itemCount;
    condition full;
    condition empty;

    procedure add(item) {
        while (itemCount == BUFFER_SIZE) {
            wait(full);
        }

        putItemIntoBuffer(item);
        itemCount = itemCount + 1;

        if (itemCount == 1) {
            notify(empty);
        }
    }

    procedure remove() {
        while (itemCount == 0) {
            wait(empty);
        }

        item = removeItemFromBuffer();
        itemCount = itemCount - 1;

        if (itemCount == BUFFER_SIZE - 1) {
            notify(full);
        }

        return item;
    }
}

procedure producer() {
    while (true) {
        item = produceItem();
        ProducerConsumer.add(item);
    }
}

procedure consumer() {
    while (true) {
        item = ProducerConsumer.remove();
        consumeItem(item);
    }
}

Note the use of while statements in the above code, both when testing whether the buffer is full and when testing whether it is empty. With multiple consumers, there is a race condition where one consumer gets notified that an item has been put into the buffer but another consumer is already waiting on the monitor and removes it from the buffer instead. If the while were instead an if, too many items might be put into the buffer or a remove might be attempted on an empty buffer.

Without semaphores or monitors

The producer–consumer problem, particularly in the case of a single producer and single consumer, strongly relates to implementing a FIFO or a communication channel. The producer–consumer pattern can provide highly efficient data communication without relying on semaphores, mutexes, or monitors for data transfer. Use of those primitives can cause performance problems, as they are expensive to implement. Channels and FIFOs are popular precisely because they avoid the need for end-to-end atomic synchronization. A basic example coded in C is shown below. Note that:

- Atomic read-modify-write access to shared variables is avoided: each of the two Count variables is updated by a single thread only. These variables are only incremented; this remains correct when their values wrap around on integer overflow.
- This compact example should be refined for an actual implementation by adding a memory barrier between the line that accesses the buffer and the line that updates the Count variable (a C11 sketch of this follows the example below).
- This example does not put threads to sleep, which might be OK depending on the system context. The sched_yield() call is there just to behave nicely and could be removed. Thread libraries typically require semaphores or condition variables to control the sleep/wakeup of threads.
- In a multi-processor environment, thread sleep/wakeup would occur much less frequently than the passing of data tokens, so avoiding atomic operations on data passing is beneficial.

volatile unsigned int produceCount, consumeCount;
TokenType buffer[BUFFER_SIZE];

void producer(void) {
    while (1) {
        while (produceCount - consumeCount == BUFFER_SIZE)
            sched_yield(); // buffer is full

        buffer[produceCount % BUFFER_SIZE] = produceToken();
        // memory_barrier;
        produceCount += 1;
    }
}

void consumer(void) {
    while (1) {
        while (produceCount - consumeCount == 0)
            sched_yield(); // buffer is empty

        consumeToken(buffer[consumeCount % BUFFER_SIZE]);
        // memory_barrier;
        consumeCount += 1;
    }
}
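As a rough sketch of how those memory barriers could be made explicit, the variant below replaces the volatile counters with C11 <stdatomic.h> acquire/release operations. It is still strictly one producer and one consumer; BUFFER_SIZE, TokenType, produceToken and consumeToken are assumed as in the example above.

#include <stdatomic.h>
#include <sched.h>

#define BUFFER_SIZE 16                         /* assumed; any fixed size works */
typedef int TokenType;                         /* placeholder token type */

TokenType produceToken(void);                  /* assumed, as in the example above */
void consumeToken(TokenType t);                /* assumed, as in the example above */

static atomic_uint produceCount, consumeCount;
static TokenType buffer[BUFFER_SIZE];

void producer(void) {
    while (1) {
        /* acquire pairs with the consumer's release store of consumeCount */
        while (atomic_load_explicit(&produceCount, memory_order_relaxed)
               - atomic_load_explicit(&consumeCount, memory_order_acquire) == BUFFER_SIZE)
            sched_yield();                     /* buffer is full */

        unsigned in = atomic_load_explicit(&produceCount, memory_order_relaxed);
        buffer[in % BUFFER_SIZE] = produceToken();
        /* release: the buffer write becomes visible before the new count */
        atomic_store_explicit(&produceCount, in + 1, memory_order_release);
    }
}

void consumer(void) {
    while (1) {
        /* acquire pairs with the producer's release store of produceCount */
        while (atomic_load_explicit(&produceCount, memory_order_acquire)
               - atomic_load_explicit(&consumeCount, memory_order_relaxed) == 0)
            sched_yield();                     /* buffer is empty */

        unsigned out = atomic_load_explicit(&consumeCount, memory_order_relaxed);
        consumeToken(buffer[out % BUFFER_SIZE]);
        /* release: the slot is marked free only after the token has been read */
        atomic_store_explicit(&consumeCount, out + 1, memory_order_release);
    }
}

The acquire/release pairing plays the role of the // memory_barrier comments in the example above.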

Examples

Example in C++

/**
 * C++ Producer Consumer using C++11 thread facilities
 * To compile: g++ -std=c++11 <program name> -pthread -lpthread -o pc
 */
#include <iostream>
#include <sstream>
#include <vector>
#include <stack>
#include <thread>
#include <mutex>
#include <atomic>
#include <condition_variable>
#include <chrono>

using namespace std;

// print function for "thread safe" printing using a stringstream
void print(ostream& s) {
    cout << s.rdbuf();
    cout.flush();
    s.clear();
}

//
// Constants
//
const int num_producers = 5;
const int num_consumers = 10;
const int producer_delay_to_produce = 10;   // in milliseconds
const int consumer_delay_to_consume = 30;   // in milliseconds

const int consumer_max_wait_time = 200;     // in milliseconds - max time that a consumer
                                            // can wait for a product to be produced.

const int max_production = 10;              // When producers has produced this quantity
                                            // they will stop to produce
const int max_products = 10;                // Maximum number of products that can be stored

//
// Variables
//
atomic<int> num_producers_working(0);       // When there's no producer working the consumers
                                            // will stop, and the program will stop.
stack<int> products;                        // The products stack, here we will store our products
mutex xmutex;                               // Our mutex, without this mutex our program will cry
condition_variable is_not_full;             // to indicate that our stack is not full between
                                            // the thread operations
condition_variable is_not_empty;            // to indicate that our stack is not empty between
                                            // the thread operations

//
// Functions
//

// Produce function, producer_id will produce a product
void produce(int producer_id) {
    unique_lock<mutex> lock(xmutex);
    int product;
    is_not_full.wait(lock, [] { return products.size() != max_products; });
    product = products.size();
    products.push(product);
    print(stringstream() << "Producer " << producer_id << " produced " << product << "\n");
    is_not_empty.notify_all();
}

// Consume function, consumer_id will consume a product
void consume(int consumer_id) {
    unique_lock<mutex> lock(xmutex);
    int product;
    if (is_not_empty.wait_for(lock, chrono::milliseconds(consumer_max_wait_time),
                              [] { return products.size() > 0; })) {
        product = products.top();
        products.pop();
        print(stringstream() << "Consumer " << consumer_id << " consumed " << product << "\n");
        is_not_full.notify_all();
    }
}

// Producer function, this is the body of a producer thread
void producer(int id) {
    ++num_producers_working;
    for (int i = 0; i < max_production; ++i) {
        produce(id);
        this_thread::sleep_for(chrono::milliseconds(producer_delay_to_produce));
    }

    print(stringstream() << "Producer " << id << " has exited\n");
    --num_producers_working;
}

// Consumer function, this is the body of a consumer thread
void consumer(int id) {
    // Wait until there is any producer working
    while (num_producers_working == 0)
        this_thread::yield();

    while (num_producers_working != 0 || products.size() > 0) {
        consume(id);
        this_thread::sleep_for(chrono::milliseconds(consumer_delay_to_consume));
    }

    print(stringstream() << "Consumer " << id << " has exited\n");
}

//
// Main
//
int main() {
    vector<thread> producers_and_consumers;

    // Create producers
    for (int i = 0; i < num_producers; ++i)
        producers_and_consumers.push_back(thread(producer, i));

    // Create consumers
    for (int i = 0; i < num_consumers; ++i)
        producers_and_consumers.push_back(thread(consumer, i));

    // Wait for consumers and producers to finish
    for (auto& t : producers_and_consumers)
        t.join();
}

Example in Java

import java.util.Stack;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * 1 producer and 3 consumers producing/consuming 10 items
 */
public class ProducerConsumer {
    Stack<Integer> items = new Stack<Integer>();
    final static int NO_ITEMS = 10;

    public static void main(String args[]) {
        ProducerConsumer pc = new ProducerConsumer();
        Thread t1 = new Thread(pc.new Producer());
        Consumer consumer = pc.new Consumer();
        Thread t2 = new Thread(consumer);
        Thread t3 = new Thread(consumer);
        Thread t4 = new Thread(consumer);
        t1.start();

        try {
            Thread.sleep(100);
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        }
        t2.start();
        t3.start();
        t4.start();
        try {
            t2.join();
            t3.join();
            t4.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    class Producer implements Runnable {
        public void produce(int i) {
            System.out.println("Producing value " + i);
            items.push(new Integer(i));
        }

        public void run() {
            int i = 0;
            // produce 10 items
            while (i++ < NO_ITEMS) {
                synchronized (items) {
                    produce(i);
                    items.notifyAll();
                }
                try {
                    // sleep for some time
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                }
            }
        }
    }

    class Consumer implements Runnable {
        // consumed counter to allow the thread to stop
        AtomicInteger consumed = new AtomicInteger();

        public void consume() {
            if (!items.isEmpty()) {
                System.out.println("Consuming " + items.pop());
                consumed.incrementAndGet();
            }
        }

        private boolean theEnd() {
            return consumed.get() >= NO_ITEMS;
        }

        public void run() {
            while (!theEnd()) {
                synchronized (items) {
                    while (items.isEmpty() && (!theEnd())) {
                        try {
                            items.wait(10);
                        } catch (InterruptedException e) {
                            Thread.interrupted();
                        }
                    }
                    consume();
                }
            }
        }
    }
}

Example in Objective-C (ARC enabled)

#import <Foundation/Foundation.h>

#define QUEUE_SIZE 5

@implementation NSMutableArray (Queue)
- (id)pop {
    if (self.count == 0) { return nil; }
    id obj = self[0];
    [self removeObjectAtIndex:0];
    return obj;
}

- (void)push:(id)obj {
    [self addObject:obj];
}
@end

@interface ProducerConsumer : NSObject {
    NSMutableArray *queue;
    dispatch_semaphore_t fillCount;
    dispatch_semaphore_t emptyCount;
    int produce_counter;
}
@end

@implementation ProducerConsumer
- (id)init {
    self = [super init];
    if (!self) { return nil; }
    queue = [[NSMutableArray alloc] init];
    fillCount = dispatch_semaphore_create(0);
    emptyCount = dispatch_semaphore_create(QUEUE_SIZE);
    return self;
}

- (void)producer:(int)producer_id {
    for (int i = 0; i < 10; i++) {
        @autoreleasepool {
            NSNumber *val;
            @synchronized(self) {
                val = [NSNumber numberWithInt:produce_counter++];
            }
            dispatch_semaphore_wait(emptyCount, DISPATCH_TIME_FOREVER);
            @synchronized(self) {
                NSLog(@"Producer %d produced %@", producer_id, val);
                [queue push:val];
            }
            dispatch_semaphore_signal(fillCount);
        }
    }
}

- (void)produce_end {
    for (int i = 0; i < 100 /* equal or more than maximum consumer */; i++) {
        dispatch_semaphore_signal(fillCount);
    }
}

- (void)consumer:(int)consumer_id {
    while (true) {
        @autoreleasepool {
            sleep(1);
            dispatch_semaphore_wait(fillCount, DISPATCH_TIME_FOREVER);
            id val;
            @synchronized(self) {
                val = [queue pop];
                NSLog(@"Consumer %d consumed %@", consumer_id, val);
            }
            dispatch_semaphore_signal(emptyCount);
            if (val == nil) {
                break;
            }
        }
    }
}
@end

int main(int argc, const char * argv[]) {
    ProducerConsumer *producerConsumer = [[ProducerConsumer alloc] init];
    dispatch_group_t group_producer = dispatch_group_create(),
                     group_consumer = dispatch_group_create();
    @autoreleasepool {
        for (int i = 0; i < 3; i++) {
            dispatch_group_async(group_producer,
                                 dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                [producerConsumer producer:i];
            });
        }

        for (int i = 0; i < 3; i++) {
            dispatch_group_async(group_consumer,
                                 dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                [producerConsumer consumer:i];
            });
        }

        dispatch_group_wait(group_producer, DISPATCH_TIME_FOREVER);
        [producerConsumer produce_end];
        dispatch_group_wait(group_consumer, DISPATCH_TIME_FOREVER);
        NSLog(@"ProducerConsumer finished.");
    }
    return 0;
}
