
Child Processes, Parent Processes and Sibling Processes
A process can create another process by a system call such as CreateProcess();

the former is called the parent process and the latter is called a child of the parent. All immediate children of a process are called sibling processes. Every process, whether parent or child, has its own private address space, which contains its program and data. A child (subprocess) may obtain resources (CPU time, memory, disk files, I/O devices) directly from the OS, or it may inherit a subset of its parent's resources.

Possibilities when a child process is created:
With respect to execution:
i) The parent executes concurrently (in parallel) with its children.
ii) The parent waits until some or all of its children have terminated.
With respect to address space:
i) The child has the same address space: it duplicates the address space (code and data sections) of its parent.
ii) The child has a different address space: it loads a new program into its address space.
In terms of resource sharing (e.g. open files which need to be written):
i) Parent and children share all resources.
ii) Children share a subset of the parent's resources.
iii) Parent and child share no resources.

Process Tree
A process may create several other processes in an OS. The creating process is called the parent process and all the newly created processes are called children of the parent. The children can in turn create other processes, which forms a process tree in memory.

Process Creation: the fork() and CreateProcess() system calls
Both fork() and CreateProcess() create a new child process when called from the parent process. fork() is defined in UNIX, whereas CreateProcess() is defined in the Win32 API.
Return values of fork(): fork() returns the process identifier (pid) of the newly created process, i.e. pid = fork(). The pid is a unique number across the system. Each time fork() is executed, another copy of the same program runs, and the call returns the pid of the child process. There are three possibilities for its return value:
i) If pid == 0, the code is running in the child process; the child sees the value 0.
ii) If pid > 0, the code is running in the parent process; the value is actually the pid of the child it has just created.
iii) If pid < 0, an error occurred and fork() failed to create a child process.
Difference between the fork() and CreateProcess() system calls: fork() creates a new child process by cloning the address space of the parent by default, i.e. it creates a clone of the parent having the same data and code sections. In the case of CreateProcess() we must explicitly specify which program (an external program, or the parent itself) will occupy the child process's address space; it does not create a clone of the parent by default.

Q. What is needed to load a new program into the child's address space, other than the parent's image, when fork() just creates a copy of the parent process in the child's address space?
Ans:- We have to call the exec() system call after fork() in the child process to replace the child's memory space with a new program. execlp(), which is a version of exec(), loads a binary executable file into the child's address space, destroying the copy of the parent placed there by fork(), and starts executing it. E.g. execlp("/bin/ls", "ls", NULL) loads the UNIX directory-listing program ls from the directory path /bin into the child's address space and starts executing it.
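Since the notes contrast CreateProcess() with fork(), here is a minimal illustrative sketch of the Win32 call, not part of the original notes; the mspaint.exe path and the option values are assumptions chosen for the example:

#include <windows.h>
#include <stdio.h>

int main(void) {
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    char cmd[] = "C:\\Windows\\System32\\mspaint.exe"; /* program to load; illustrative path */

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* Unlike fork(), we must explicitly name the program that will occupy
       the child's address space. */
    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed (%lu)\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE); /* parent waits for the child to finish */
    CloseHandle(pi.hProcess);                   /* release handles to the child */
    CloseHandle(pi.hThread);
    return 0;
}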

Execution sequence of the fork() system call to create a process
fork() is a system call defined in UNIX to create a new process. Process creation with fork() proceeds as follows. First a process calls fork() and creates a new child process, and then it waits for the termination of the child using the wait() system call. It waits until it gets a signal from the child. The child process informs the parent of its termination through the wait() system call written in the parent. In the child process, the exec() system call loads a new program into the child's address space and starts executing it. After finishing its execution, the child calls the exit() system call to terminate itself and sends a signal to the parent through the wait() system call. (Note: the parent was halted on the line where the wait() system call is written, expecting a return value (signal) from the child.) After wait() returns a value, the parent resumes its execution from the point where it was halted.

Making a process wait/suspend: using the wait() system call we can change the state of a process from running to waiting.

Terminating a process: there are two types of process termination.
i) A process executes its final statement and asks the OS to delete its space using the exit() system call. All resources allocated to it, like open files and memory, are released.
ii) One process can terminate another process using a system call such as TerminateProcess(). Usually only the parent process may terminate a child with this call; otherwise, if it were called from a child, it could kill the parent and the whole process tree would collapse. [Note: wait() returns the process identifier (pid) of the child so that the parent can identify which of its children has terminated.]

Q. When/why does a parent process decide to terminate a child?

Ans:- i) The child has exceeded some of its allocated resources.
ii) The task assigned to the child is no longer required.
iii) The parent itself is terminating: in that case the OS will not allow the child to continue without its parent, and cascading termination of the children occurs.

Q. What is cascading termination of processes?
Ans:- Some OSs do not allow a child to survive when its parent is terminated or killed. In such systems, if the parent is to be terminated then all its children must also be terminated. This phenomenon is referred to as cascading termination.

Q. Show with an example how to create a process using fork().
Ans:-
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h> /* POSIX library */
int main() {
    pid_t pid;
    pid = fork(); /* Create a new child process. [Note: the child will be a copy of this program] */
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork failed\n"); /* Write a message to the standard error stream */
        return 1;
    }
    else if (pid == 0) { /* child process's code here */
        execlp("/bin/ls", "ls", NULL); /* Load the program ls into the child's address space */
    }
    else { /* pid > 0: parent process's code here */
        wait(NULL); /* The parent waits for the child process (whose pid is > 0) to finish */
        printf("Child is finished\n"); /* Report the child's termination */
    }
    return 0;
}

Q. How can you create a process in Windows OS?
Ans:- A process can be created from a user program (say, a program written in C) by using system calls defined in the Win32 API (application programming interface). The system call to create a process in Windows is CreateProcess(). This call must populate the address space of the newly created child process with another external program, e.g. mspaint.exe (Microsoft Paint), as shown in the sketch earlier.

Independent process: a process that cannot affect or be affected by the execution of another process. Independent processes do not share data.
Co-operating process: a process that can affect or be affected by the execution of another process. Cooperating processes can share data with other processes.

Advantages of and reasons for process cooperation:
i) Information sharing: several users may wish to share the same information, e.g. a shared file. The OS needs to provide a way of allowing concurrent access.

ii) Computation speed-up: some problems can be solved more quickly by sub-dividing them into smaller tasks that can be executed in parallel on several processors.
iii) Modularity: the solution of a problem is structured into parts with well-defined interfaces, and the parts run in parallel.
iv) Convenience: a user may be running multiple processes to achieve a single goal, or a utility may invoke multiple components which interconnect via a pipe structure that attaches the stdout of one stage to the stdin of the next, etc.

Q. State and explain the producer-consumer problem.
Ans:- A very simple example of two cooperating processes. The problem is called the producer-consumer problem and it uses two processes:
Producer process: produces information that will be consumed by the consumer.
Consumer process: consumes information produced by the producer.
Both processes run concurrently. The issues here are that:
i) the consumer needs to wait if the producer has not produced any data, and
ii) in the case of a bounded buffer, the producer needs to wait if there is no space in the buffer (with an unbounded buffer it can continue to produce items).

// Shared data:
#define BUFFERSIZE 5       // the buffer holds at most BUFFERSIZE - 1 items
int buffer[BUFFERSIZE];
int in = 0, out = 0;       // both the producer and consumer initially look at buffer element 0

// Producer process
Producer() {
    while (true) {
        while ((in + 1) % BUFFERSIZE == out) {
            // Buffer is full: do nothing.
            // Loop continuously and wait for the consumer to consume an item from the buffer.
            // Since the buffer is a circular queue, fullness is tested with (in + 1) % BUFFERSIZE == out;
            // in a linear buffer the test would be in == BUFFERSIZE - 1.
        }
        buffer[in] = producedItem;  // store the newly produced item at position 'in' for the consumer
        in = (in + 1) % BUFFERSIZE; // advance 'in' circularly; in a linear buffer it would be in = in + 1
    }
}

// Consumer process
Consumer() {
    while (true) {
        while (in == out) {
            // Buffer is empty: do nothing; the consumer waits for the producer to produce an item.
        }
        consumedItem = buffer[out];   // consume the item at position 'out' into a local variable
        out = (out + 1) % BUFFERSIZE; // advance 'out' circularly; in a linear buffer it would be out = out + 1
    }
}

The producer basically just checks whether there is any space in which to put a newly produced item (the inner while loop). If there is no space, it waits until there is some. The consumer waits while the buffer is empty; if there is something, it grabs the next item and consumes it. One drawback of this solution is that one element of the buffer is always wasted.

Interprocess communication
Processes have their own address spaces. In principle, one process neither shares its address space with another process nor can it modify the data and code sections of another process. But sometimes processes need to share information, for example to achieve computation speed-up through parallelism. IPC is a mechanism for cooperating processes to communicate and to synchronize their actions. An IPC facility basically provides two operations: i) send(message), where the message size is fixed or variable, and ii) receive(message). If processes P and Q want to communicate they must: 1) establish a communication link between them, and 2) exchange messages via the send/receive primitives.

Q. How are links established?
Ans:- i) Physically (e.g. shared memory, a hardware bus, a network). ii) Logically (e.g. by logical properties).

IPC implementation questions: How are links established? Can a link be associated with more than two processes? How many links can there be between every pair of communicating processes? What is the capacity of a link? Is the size of a message that the link can accommodate fixed or variable? Is a link unidirectional or bidirectional?

Models of IPC: 1) shared memory, 2) message passing.
1) Shared memory: a common memory region is established by the cooperating processes. One process creates a memory portion which other processes (if permitted) can access. A process creates a shared memory segment using the shmget() system call. One process writes into this region and another reads from it; thus the message is passed to the other process.
int shmget(key_t key, size_t size, int shmflg);
where the key argument is an access value associated with the shared memory segment, the size argument is the size in bytes of the requested shared memory, and the shmflg argument specifies the initial access permissions (read/write) and creation control flags. shmget() returns the segment_id number of the region.
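As an illustrative sketch (not part of the original notes) of the shmget() usage described above, together with shmat() to attach the segment; the key value 1234 and the message text are assumptions for the example:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    /* Create a 4096-byte shared segment identified by the (arbitrary) key 1234 */
    int segment_id = shmget((key_t)1234, 4096, IPC_CREAT | 0666);
    if (segment_id < 0) { perror("shmget"); return 1; }

    /* Attach the segment to this process's address space */
    char *shared = shmat(segment_id, NULL, 0);
    if (shared == (void *)-1) { perror("shmat"); return 1; }

    strcpy(shared, "Hello from the writer process"); /* writer side */
    printf("Read back: %s\n", shared);               /* a reader process would do this */

    shmdt(shared);                      /* detach the segment */
    shmctl(segment_id, IPC_RMID, NULL); /* remove the segment */
    return 0;
}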

2) Message passing: in the message-passing mechanism, cooperating processes exchange information and synchronize their actions without sharing the same address space. The processes can reside on different computers connected through a network, so message passing is useful in a distributed computing environment. Message passing is useful when a small amount of data is exchanged.
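A minimal sketch of the send/receive idea using a UNIX pipe between a parent and a child process; a pipe is one simple message-passing channel, and this example is illustrative rather than part of the original notes:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                          /* fd[0]: read end, fd[1]: write end */

    if (fork() == 0) {                 /* child: the receiver */
        char buf[32];
        close(fd[1]);                  /* receiver does not write */
        read(fd[0], buf, sizeof(buf)); /* receive(message) */
        printf("child received: %s\n", buf);
        return 0;
    }
    close(fd[0]);                      /* parent: the sender */
    write(fd[1], "ping", 5);           /* send(message), including the NUL byte */
    close(fd[1]);
    wait(NULL);                        /* wait for the child to finish */
    return 0;
}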

Q. Why is shared memory faster than the message-passing technique?
Ans:- Message-passing systems are implemented using system calls, from establishing the communication link between processes to transferring data; this takes more time and needs kernel intervention. In shared memory, system calls are required only to establish the shared memory region; data transfer is then done by user-level routines without any kernel intervention. Thus it is faster.

Message Passing Systems


Direct Communication:
Symmetric direct addressing: processes must name each other explicitly.
Primitives:
send(P, message) - send a message to process P.
receive(Q, message) - receive a message from process Q.
Properties of the communication link: links are established automatically; a link is associated with exactly one pair of communicating processes; between each pair there exists exactly one link; the link may be unidirectional, but is usually bidirectional.

Process Q:
while (TRUE) {
    produce an item
    send(P, item)
}

Process P:
while (TRUE) {
    receive(Q, item)
    consume item
}

Asymmetric direct addressing: only the sender names the recipient; the recipient is not required to name the sender and need not know the sender.
Primitives:
send(P, message) - send a message to process P.
receive(id, message) - receive from any process; id is set to the sender.
Disadvantages of direct communication:
i) Limited modularity - changing the name of a process means changing every sender and receiver process to match.
ii) The processes need to know each other's names.

Indirect Communication:
Messages are sent to and received from mailboxes (also referred to as ports). Each mailbox has a unique id (e.g. a POSIX message queue; see the sketch at the end of this section). Processes can communicate only if they share a mailbox.
Properties of the communication link: a link is established only if the processes share a common mailbox; a link may be associated with many processes; it allows one-to-many, many-to-one and many-to-many communication.
one-to-many: any of several processes may receive from the mailbox, e.g. a broadcast of some sort.
many-to-one: many processes send to one receiving process, e.g. a server providing service to a collection of processes, like a file server, network server or mail server.
many-to-many: multiple senders request service and a pool of receiving servers offers service (a server farm).
Each pair of processes may share several communication links. A link may be unidirectional or bidirectional.

Q. Discuss the mailbox technique. Mention some operations and primitives on mailboxes.
Ans:- A mailbox is an IPC technique in an OS by which one process can put (post or send) a message block in an area and another process removes it. An example is a mobile phone LCD's multiline display task, in which the time and a list of phone numbers are shown on a single display and the user can delete the entries.
Primitives are defined as:
send(A, message) - send a message to mailbox A.
receive(A, message) - receive a message from mailbox A.
Operations: create a new mailbox; send and receive messages through the mailbox; destroy a mailbox.
Mailbox ownership: a mailbox may be owned by a process or by the OS.
Process mailbox ownership: the mailbox is part of the process's address space. Only the process which owns the mailbox may receive messages from it; other processes may send to it. The mailbox can be created with the process and destroyed when the process dies - a process sending to a dead process's mailbox will need to be signalled - or through separate create_mailbox and destroy_mailbox system calls, possibly declaring variables of type mailbox.
System mailbox ownership: mailboxes have their own independent existence, not attached to any process. Processes connect to a mailbox dynamically, for send and/or receive.

Blocking (Synchronous) and Nonblocking (Asynchronous) Communication
Q. Discuss blocking and non-blocking primitives.
Ans:- The primitives are send() and receive().
Synchronous: the send() and receive() operations are blocking. The sender is suspended (blocked) until the receiving process does a corresponding receive(); the receiver is suspended until a message is sent for it to receive.
Properties: the processes are tightly synchronized; when both send() and receive() are blocking, the scheme is called a rendezvous (e.g. the rendezvous in Ada). The sender gets effective confirmation of receipt. At most one message can be outstanding for any process pair, so there are no buffer-space problems. It is easy to implement, with low overhead.
Disadvantages: the sending process might want to continue after its send operation without waiting for confirmation of receipt; the receiving process might want to do something else if no message is waiting to be received.
Asynchronous: the send() and receive() operations are non-blocking. The sender continues even when no corresponding receive is outstanding; the receiver continues even when no message has been sent.
Properties: messages need to be buffered until they are received, and the amount of buffer space to allocate can be problematic - a process running amok could clog the system with messages if not careful. It is often very convenient not to be forced to wait, particularly for senders; it can increase concurrency; and some awkward kernel decisions are avoided, e.g. whether to swap a waiting process out to disc or not.
Buffering: rather than sending messages directly to the receiver (as in synchronous systems), the sender can send them to a temporary memory buffer; when the receiver is ready, it takes messages from the buffer. The number of messages that can reside in a link temporarily can be:
Zero capacity - queue length 0. The sender must wait (block) until the receiver is ready to take the message.
Bounded capacity - finite-length queue. Messages can be queued as long as the queue is not full; otherwise the sender will have to wait. A bounded buffer can be implemented as a circular queue.
Unbounded capacity - any number of messages can be queued, and the sender is never delayed (blocked).
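As a hedged sketch of the mailbox primitives above using a POSIX message queue; the mailbox name "/demo_mbox" and the queue sizes are assumptions for the example, and on Linux the program is linked with -lrt:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* create a new mailbox (or open it if it already exists) */
    mqd_t mbox = mq_open("/demo_mbox", O_CREAT | O_RDWR, 0666, &attr);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(A, message): post a message to the mailbox */
    mq_send(mbox, "hello", strlen("hello") + 1, 0);

    /* receive(A, message): another process sharing the mailbox would normally do this;
       the buffer must be at least mq_msgsize bytes */
    char buf[64];
    mq_receive(mbox, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/demo_mbox"); /* destroy the mailbox */
    return 0;
}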

Threads
A thread is a single sequential flow of control within a process. A process contains at least one thread; if no separate threads are defined in a process, the process itself executes as one thread. Threads execute independently inside a process to cooperatively solve the problem defined in the process.

Q. What do threads share?
Ans:- A thread shares: i) the code section, ii) the data section, and iii) OS resources like open files and signals, with the other threads of the same process. Thus a process can be single-threaded or multithreaded (containing multiple threads).

Q. What are the advantages/benefits of using threads?
Ans:- 1) Responsiveness: when a process is executed as multiple threads, one part of the process can be blocked, say writing some data to a file, while another part is interacting with the user, say showing an image.
2) Resource sharing: when using multiple threads instead of separate processes, the threads share a single address space, all open files and other resources, which saves memory.
3) Economy: allocating resources like memory for a new process is costly, but threads share the resources of their process, so it is more economical to create a thread and to context-switch between threads.
4) Scalability: threads improve the performance (throughput, computational speed) of a program. Multithreading is especially beneficial on multiprocessor architectures, where different threads can run on different processors, increasing parallelism.
5) Potential simplicity: multiple threads may reduce the complexity of applications that are suited to being implemented with threads.

Q. Discuss different types of threads.
Ans:- TYPES OF THREADS:
User-Level Threads: user-level threads are implemented in user-level libraries rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages the processes containing them as if they were single-threaded.

Advantages: the most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Some other advantages are:
User-level threads do not require modification of the operating system.
Simple representation: each thread is represented simply by a PC, registers, a stack and a small control block, all stored in the user process's address space.
Simple management: creating a thread, switching between threads and synchronizing between threads can all be done without intervention of the kernel.
Fast and efficient: thread switching is not much more expensive than a procedure call.
Disadvantages: there is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice irrespective of whether it has one thread or 1000 threads within it. It is up to each thread to relinquish control to the other threads.
User-level threads require non-blocking system calls, i.e. a multithreaded kernel. Otherwise, the entire process will block in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Kernel-Level Threads: in this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.

Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
Kernel-level threads are especially good for applications that frequently block.

Disadvantages:
Kernel-level threads are slow and inefficient; for instance, thread operations are hundreds of times slower than those of user-level threads.
Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about threads. As a result there is significant overhead and increased kernel complexity.

Different thread models (relationships between user threads and kernel threads)
Many-to-One: many user-level threads map to one kernel thread.
Advantages: thread management is done by the thread library in user space, so it is efficient.
Disadvantages: 1) If one user thread blocks while communicating with the kernel thread, it blocks all the other user threads from communicating with the kernel thread. 2) Only one thread can access the kernel thread at a time, so multiple threads cannot run in parallel on multiple processors.
E.g. Green threads in Solaris, GNU Portable Threads.

One-to-One: each user thread is mapped to one kernel thread.
Advantages: 1) Allows multiple user threads to communicate with kernel threads running on separate processors. 2) A blocking system call by one thread does not block the others.
Disadvantages: creating a user thread requires creating the corresponding kernel thread to communicate with.

Many-to-Many: many user threads are mapped onto a smaller or equal number of kernel threads.
Advantages: the developer can create as many user threads as needed without depending on how many kernel threads exist.
Disadvantages: true concurrency is not attained, since the kernel can schedule only one user thread at a time on each kernel thread.

Q. Discuss Windows and Linux threads. How are threads created in these two environments?
Ans:- Linux threads: in Linux, threads are created using the clone() system call. Linux does not differentiate between threads and processes; all are called tasks. clone() takes flags indicating whether the new task will share the following with the parent: i) file-system information, ii) the same memory space, iii) signal handlers, iv) the set of open files. If no sharing flags are passed, it acts like the fork() system call and a new task is created with a separate address space.
Windows XP threads: Windows XP implements the Win32 API library for creating threads. Windows XP supports a one-to-one mapping between user and kernel threads, since the Windows XP OS can run on multicore processors. Threads are created in the Win32 API by the CreateThread() function. The parent thread can wait for a child thread to finish with the WaitForSingleObject() function.

Q. What is thread affinity?
Ans:- In a multiprocessor system, if a thread executing on one processor migrates to another processor, the contents of the cache memory of the first processor must be invalidated and the cache memory of the second processor must be loaded; this wastes time. The tendency of a thread to adhere to a single processor and not move to another, to avoid the cost of invalidating cache memory, is called thread affinity.

Q. How is a thread cancelled?
Ans:- A thread can be cancelled in two different ways: i) Asynchronous cancellation: one thread immediately terminates the target thread. ii) Deferred cancellation: the target thread terminates itself when required, by periodically checking whether it should terminate.

Q. What is thread pooling?
Ans:- Thread pooling is the practice of creating multiple threads when a process first starts and storing them in a pool. All initially sit in a waiting state; when a job arises, an available thread is activated to service it. It is used heavily in web-server environments, where creating a new thread for each new request can exhaust system resources like memory and CPU time, while the threads that serviced previous requests finish their jobs and then sit idle, unused.
Advantages: 1) Servicing a request with an existing thread is usually faster than wasting time creating a new thread. 2) Thread pooling can limit the number of threads existing in a system, which makes managing threads easier.

Thread libraries: the most commonly used thread libraries nowadays are as follows.
i) POSIX Pthreads: Pthreads are POSIX-standard threads. The POSIX standard provides an API for the creation and management of Pthreads. Numerous operating systems like Solaris, Linux, Mac OS X and Tru64 UNIX implement Pthreads. Creating a Pthread: the pthread_create() function creates a Pthread. Waiting for a child Pthread to finish: the pthread_join() function is invoked by the parent Pthread to wait for the child to finish. Terminating: pthread_exit(). (A sketch using these calls follows below.)
ii) Win32: Windows XP implements the Win32 API library for creating threads, as described above: threads are created with CreateThread(), and the parent thread can wait for a child thread to finish with WaitForSingleObject().
iii) Java: Java programs run on the JVM, which sits on top of the OS. A thread is created in one of two ways: a) implementing the Runnable interface in a class, or b) extending the Thread class in a subclass.
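As an illustrative minimal sketch of the Pthread calls named above (not from the original notes; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

/* Thread function: runs as a separate flow of control inside the process */
void *worker(void *arg) {
    int n = *(int *)arg;
    printf("child thread got %d\n", n);
    pthread_exit(NULL); /* terminate this thread */
}

int main(void) {
    pthread_t tid;
    int value = 42;

    pthread_create(&tid, NULL, worker, &value); /* create the child Pthread */
    pthread_join(tid, NULL);                    /* parent waits for the child to finish */
    printf("child thread finished\n");
    return 0;
}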
