Operating System | Memory Management | Partition Allocation Method
In an operating system, the following are four common memory management techniques.
Single contiguous allocation: The simplest allocation method, used by MS-DOS. All memory (except some reserved for the OS) is available to a single process.
Partitioned allocation: Memory is divided into different blocks or partitions.
Paged memory management: Memory is divided into fixed-sized units called page frames, used in a virtual memory environment.
Segmented memory management: Memory is divided into different segments (a segment is a logical grouping of a process's data or code). In this management, allocated memory doesn't have to be contiguous.

Most operating systems (for example Windows and Linux) use segmentation with paging. A process is divided into segments, and individual segments have pages.
In partition allocation, when there is more than one partition freely available to accommodate a process's request, a partition must be selected. To choose a particular partition, a partition allocation method is needed. A partition allocation method is considered better if it avoids internal fragmentation.
Below are the various partition allocation schemes (a small C sketch of these strategies follows the list):
1. First Fit: The partition allocated is the first sufficient block from the top of main memory.
2. Best Fit: Allocate the process to the smallest sufficient partition among the freely available partitions.
3. Worst Fit: Allocate the process to the largest sufficient partition among the freely available partitions in main memory.
4. Next Fit: Next fit is similar to first fit, but it searches for the first sufficient partition starting from the last allocation point.
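The following is a minimal sketch of first fit and best fit over an array of free block sizes. It is written for this note rather than taken from the original article; the block sizes and helper names are made up, and a real allocator would also split and coalesce blocks.

#include <stdio.h>

/* Returns the index of the chosen free block, or -1 if none fits. */
int first_fit(const int *blocks, int n, int request)
{
    for (int i = 0; i < n; i++)
        if (blocks[i] >= request)      /* first block large enough */
            return i;
    return -1;
}

int best_fit(const int *blocks, int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)        /* smallest block large enough */
        if (blocks[i] >= request && (best == -1 || blocks[i] < blocks[best]))
            best = i;
    return best;
}

int main(void)
{
    int blocks[] = {150, 350};         /* free block sizes in KB, as in the exercise below */
    printf("first fit for 300K -> block %d\n", first_fit(blocks, 2, 300));
    printf("best fit  for 300K -> block %d\n", best_fit(blocks, 2, 300));
    return 0;
}

Worst fit and next fit differ only in the selection loop (largest sufficient block, or first sufficient block starting from the previous allocation point).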

Is Best-Fit really best?


Although best fit minimizes wasted space, it consumes a lot of processor time searching for the block that is closest to the required size. Also, best fit may perform worse than other algorithms in some cases. For an example, see the exercise below.

Exercise: Consider the requests from processes in the given order 300K, 25K, 125K and 50K. Let there be two blocks of memory available, of size 150K followed by a block of size 350K.
Which of the following partition allocation schemes can satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both first fit and best fit.
D) Neither first fit nor best fit.
Solution: Let us try all options.
Best Fit:
300K is allocated from the block of size 350K. 50K is left in the block.
25K is allocated from the remaining 50K block. 25K is left in the block.
125K is allocated from the 150K block. 25K is left in this block as well.
50K can't be allocated even though 25K + 25K space is available.
First Fit:
300K request is allocated from the 350K block, 50K is left out.
25K is allocated from the 150K block, 125K is left out.
Then 125K and 50K are allocated to the remaining left-out partitions.
So, first fit can handle all requests.
So option B is the correct choice.
Operating System | Page Replacement Algorithms

In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in. Whenever a new page is referenced and is not present in memory, a page fault occurs.
Page Fault
A page fault is a type of interrupt, raised by the hardware when a running program accesses a memory
page that is mapped into the virtual address space, but not loaded in physical memory.
Page Replacement Algorithms
First In First Out
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue; the oldest page is at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
For example, consider the page reference string 1, 3, 0, 3, 5, 6 and 3 page slots.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots --> 3 page faults.
When 3 comes, it is already in memory, so --> 0 page faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 --> 1 page fault.
Finally 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 --> 1 page fault.

Belady's anomaly
Belady's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots, we get 9 total page faults, but if we increase the number of slots to 4, we get 10 page faults.
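A minimal FIFO simulator makes both the walk-through and the anomaly easy to check. The sketch below was written for this note (it is not code from the original article) and assumes at most 16 frames.

#include <stdio.h>

int fifo_faults(const int *ref, int n, int frames)
{
    int frame[16], next = 0, faults = 0;          /* frames assumed <= 16 */
    for (int i = 0; i < frames; i++) frame[i] = -1;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < frames; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                               /* replace the oldest page */
            frame[next] = ref[i];
            next = (next + 1) % frames;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int ref[] = {3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4};
    int n = sizeof(ref) / sizeof(ref[0]);
    printf("3 frames: %d faults\n", fifo_faults(ref, n, 3));  /* prints 9  */
    printf("4 frames: %d faults\n", fifo_faults(ref, n, 4));  /* prints 10 */
    return 0;
}

Running it on the reference string above prints 9 faults with 3 frames and 10 with 4 frames, reproducing Belady's anomaly.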
Optimal Page replacement
In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.
Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots --> 4 page faults.
0 is already there, so --> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future --> 1 page fault.
0 is already there, so --> 0 page faults.
4 takes the place of 1 --> 1 page fault.
For the rest of the reference string --> 0 page faults, because the pages are already in memory.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed.

Least Recently Used


In this algorithm, the page that is least recently used is replaced.
Let us say the page reference string is 7 0 1 2 0 3 0 4 2 3 0 3 2 and initially we have 4 empty page slots.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots --> 4 page faults.
0 is already there, so --> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is least recently used --> 1 page fault.
0 is already in memory, so --> 0 page faults.
4 takes the place of 1 --> 1 page fault.
For the rest of the reference string --> 0 page faults, because the pages are already in memory.
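One straightforward way to implement LRU is to stamp every resident page with the time of its last use and evict the page with the oldest stamp. The sketch below is written for this note (real kernels usually use approximations such as the clock or aging algorithms); it reproduces the 6 page faults counted above.

#include <stdio.h>

int lru_faults(const int *ref, int n, int frames)
{
    int page[16], last_used[16] = {0}, faults = 0;   /* frames assumed <= 16 */
    for (int i = 0; i < frames; i++) page[i] = -1;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < frames; j++)
            if (page[j] == ref[t]) { hit = j; break; }
        if (hit >= 0) { last_used[hit] = t; continue; }

        /* miss: use an empty frame if any, else evict the least recently used page */
        int victim = 0;
        for (int j = 0; j < frames; j++) {
            if (page[j] == -1) { victim = j; break; }
            if (last_used[j] < last_used[victim]) victim = j;
        }
        page[victim] = ref[t];
        last_used[victim] = t;
        faults++;
    }
    return faults;
}

int main(void)
{
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    printf("faults = %d\n", lru_faults(ref, 13, 4));   /* prints 6, as in the walk-through */
    return 0;
}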
Below are previous year GATE Questions
1. Which of the following page replacement algorithms suffers from Belady's anomaly?
(A) FIFO
(B) LRU
(C) Optimal Page Replacement
(D) Both LRU and FIFO
Answer: (A)

Explanation: Belady's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm.
See the example given on the Wikipedia page.
In computer storage, Bélády's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. This phenomenon is commonly experienced when using the First In First Out (FIFO) page replacement algorithm. László Bélády demonstrated this in 1969.
In common computer memory management, information is loaded in specific-sized chunks. Each chunk is referred to as a page. Main memory can only hold a limited number of pages at a time; it requires a frame for each page it can load. A page fault occurs when a page is not found, and might need to be loaded from disk into memory.
When a page fault occurs and all frames are in use, one must be cleared to make room for the new page. A simple algorithm is FIFO: whichever page has been in the frames the longest is the one that is cleared.
Until Bélády's anomaly was demonstrated, it was believed that an increase in the number of page frames would always result in the same number of, or fewer, page faults.
2. Page fault occurs when
(A) a requested page is in memory
(B) a requested page is not in memory
(C) a page is corrupted
(D) an exception is thrown
Answer: (B)
Explanation: A page fault occurs when a requested page is mapped in the virtual address space but is not present in memory.
3. Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4,
2, 1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is__________.
(A) 5
(B) 6
(C) 7
(D) 8
Answer: (C)
Explanation: In the optimal page replacement policy, we replace the page which is not used for the longest duration in the future.
Given three page frames.
Reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6
Initially, there are three page faults and the entries are 1 2 3.
Page 4 causes a page fault and replaces 3 (3 is the most distant in the future), entries become 1 2 4.
Total page faults = 3 + 1 = 4
Pages 2 and 1 don't cause any fault.
5 causes a page fault and replaces 1, entries become 5 2 4.
Total page faults = 4 + 1 = 5
3 causes a page fault and replaces 5, entries become 3 2 4.
Total page faults = 5 + 1 = 6
3, 2 and 4 don't cause any page fault.
6 causes a page fault.
Total page faults = 6 + 1 = 7
4. Consider the virtual page reference string
1, 2, 3, 2, 4, 1, 3, 2, 4, 1

On a demand-paged virtual memory system running on a computer with a main memory of 3 page frames which are initially empty, let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page replacement policies. Then
(A) OPTIMAL < LRU < FIFO
(B) OPTIMAL < FIFO < LRU
(C) OPTIMAL = LRU
(D) OPTIMAL = FIFO
Answer: (B)
The OPTIMAL will be 5, FIFO 6 and LRU 9.
(http://www.geeksforgeeks.org/operating-systems-set-5/)
5. A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed
number of frames to a process. Consider the following statements:
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference.
Which one of the following is TRUE?
(A) Both P and Q are true, and Q is the reason for P
(B) Both P and Q are true, but Q is not the reason for P.
(C) P is false, but Q is true
(D) Both P and Q are false
Answer: (B)
P is true. Increasing the number of page frames allocated to a process may increase the number of page faults (see Belady's anomaly).
Q is also true, but Q is not the reason for P, as Belady's anomaly occurs only for some specific patterns of page references.
6. A process has been allocated 3 page frames. Assume that none of the pages of the process are available
in the memory initially. The process makes the following sequence of page references (reference
string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
If optimal page replacement policy is used, how many page faults occur for the above reference string?
(A) 7
(B) 8
(C) 9
(D) 10
Answer: (A)
Explanation: The optimal replacement policy looks forward in time to see which frame to replace on a page fault.
Reference string: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1 with 3 frames.
1 2 3  -> pages 1, 2, 3 cause page faults (3 faults)
1 7 3  -> 7 replaces 2 (1 fault)
1 4 3  -> 4 replaces 7 (1 fault)
1 5 3  -> 5 replaces 4 (1 fault)
1 6 3  -> 6 replaces 5 (1 fault)
3 and 1 are hits.
Total = 7
So the answer is (A).

7. Consider the data given in the above question.

Least Recently Used (LRU) page replacement policy is a practical approximation to optimal page replacement. For the above reference string, how many more page faults occur with LRU than with the optimal page replacement policy?
(A) 0
(B) 1
(C) 2
(D) 3
Answer: (C)
Explanation: LRU replacement policy: the page that is least recently used is replaced.
Given string: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
1 2 3  -> pages 1, 2, 3 cause page faults (3 faults)
1 7 3  -> 7 replaces 2 (1 fault)
4 7 3  -> 4 replaces 1 (1 fault)
4 5 3  -> 5 replaces 7 (1 fault)
4 5 6  -> 6 replaces 3 (1 fault)
3 5 6  -> 3 replaces 4 (1 fault)
3 1 6  -> 1 replaces 5 (1 fault)
Total = 9
In the question above (http://quiz.geeksforgeeks.org/gate-gate-cs-2007-question-82/), optimal replacement gives a total of 7 page faults.
Therefore LRU causes 2 more page faults.
The answer is (C).
8. A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently
Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the
total number of page faults that will occur while processing the page reference string given below?
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
(A) 4
(B) 5
(C) 6
(D) 7
Answer: (C)
Explanation: What is a page fault? An interrupt that occurs when a program requests data that is not currently in real memory. The interrupt triggers the operating system to fetch the data from virtual memory and load it into RAM.
Now, 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 is the reference string; you can think of it as data requests made by a program.
The system uses 3 page frames for storing process pages in main memory, and it uses the Least Recently Used (LRU) page replacement policy.
[ ] - Initially the page frames are empty, i.e. there are no process pages in main memory.
[ 4 ] - Now 4 is brought into the 1st frame (1st page fault).
Explanation: Process page 4 was requested by the program, but it was not in main memory (in the form of page frames), which resulted in a page fault; after that, process page 4 was brought into main memory by the operating system.
[ 4 7 ] - Now 7 is brought into the 2nd frame (2nd page fault) - same explanation.
[ 4 7 6 ] - Now 6 is brought into the 3rd frame (3rd page fault).
[ 1 7 6 ] - Now 1 is brought into the 1st frame, as the 1st frame was least recently used (4th page fault).
After this, 7, 6 and 1 are already present in the frames, hence no page replacements.
[ 1 2 6 ] - Now 2 is brought into the 2nd frame, as the 2nd frame was least recently used (5th page fault).
[ 1 2 7 ] - Now 7 is brought into the 3rd frame, as the 3rd frame was least recently used (6th page fault).
Hence, the total number of page faults is 6. Therefore, (C) is the answer.
9. The optimal page replacement algorithm will select the page that
(A) Has not been used for the longest time in the past.
(B) Will not be used for the longest time in the future.
(C) Has been used least number of times.
(D) Has been used most number of times.
Answer: (B)
Explanation: The optimal page replacement algorithm will select the page whose next occurrence
will be after the longest time in future. For example, if we need to swap a page and there are two
options from which we can swap, say one would be used after 10s and the other after 5s, then the
algorithm will swap out the page that would be required 10s later. Thus, B is the correct choice.
10. Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access
pattern, increasing the number of page frames in main memory will
(A) always decrease the number of page faults
(B) always increase the number of page faults
(C) sometimes increase the number of page faults
(D) never affect the number of page faults
Answer: (C)
Explanation: Incrementing the number of page frames doesn't always decrease the page faults (Belady's anomaly).
11. A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin
with. The system first accesses 100 distinct pages in some order and then accesses the same 100
pages but now in the reverse order. How many page faults will occur?
(A) 196
(B) 192
(C) 197
(D) 195
Answer: (A) (http://www.geeksforgeeks.org/operating-systems-set-7/)
Explanation:
Access to the 100 pages will cause 100 page faults. When these pages are accessed in reverse order, the first four accesses will not cause page faults. All other accesses will cause page faults. So the total number of page faults will be 100 + 96.

http://quiz.geeksforgeeks.org/operating-systems-memory-management-question-1/
http://quiz.geeksforgeeks.org/operating-systems-memory-management-question-7/
http://quiz.geeksforgeeks.org/gate-gate-cs-2014-set-1-question-43/
http://quiz.geeksforgeeks.org/gate-gate-cs-2012-question-42/
http://quiz.geeksforgeeks.org/gate-gate-cs-2007-question-56/
http://quiz.geeksforgeeks.org/gate-gate-cs-2007-question-82/
http://quiz.geeksforgeeks.org/gate-gate-cs-2007-question-83/
http://quiz.geeksforgeeks.org/gate-gate-cs-2014-set-3-question-30/
http://quiz.geeksforgeeks.org/gate-gate-cs-2002-question-23/
http://quiz.geeksforgeeks.org/gate-gate-cs-2001-question-21/
http://quiz.geeksforgeeks.org/gate-gate-cs-2010-question-24/

Operating System | User Level Thread vs Kernel Level Thread

User level thread vs kernel level thread:
User threads are implemented by user-level code; kernel threads are implemented by the OS.
The OS does not recognize user level threads; kernel threads are recognized by the OS.
Implementation of user threads is easy; implementation of kernel threads is complicated.
Context switch time is less for user threads; context switch time is more for kernel threads.
A context switch between user threads requires no hardware support; for kernel threads, hardware support is needed.
If one user level thread performs a blocking operation, the entire process is blocked; if one kernel thread performs a blocking operation, another thread can continue execution.
Examples of user level threads: Java threads, POSIX threads. Examples of kernel level threads: Windows, Solaris.
1. Consider the following statements about user level threads and kernel level threads. Which one of the following statements is FALSE?
(A) Context switch time is longer for kernel level threads than for user level threads.
(B) User level threads do not need any hardware support.
(C) Related kernel level threads can be scheduled on different processors in a multi-processor system.
(D) Blocking one kernel level thread blocks all related threads.
Answer: (D)
Since kernel level threads are managed by the kernel, blocking one thread doesn't cause all related threads to block. That is a problem with user level threads.
Threads
Despite the fact that a thread must execute within a process, the process and its associated threads are different concepts. Processes are used to group resources together, while threads are the entities scheduled for execution on the CPU.
A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within a process. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready or Terminated). Each thread has its own stack. Since a thread will generally call different procedures and thus have a different execution history, each thread needs its own stack. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread has, or consists of, a program counter (PC), a register set, and a stack space. Threads are not independent of one another like processes are; as a result, threads share with other threads their code section, data section, and OS resources (also known as a task), such as open files and signals.
Processes Vs Threads
As mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:
Similarities
Like processes, threads share the CPU and only one thread is active (running) at a time.
Like processes, threads within a process execute sequentially.
Like processes, a thread can create children.
And like processes, if one thread is blocked, another thread can run.
Differences
Unlike processes, threads are not independent of one another.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another because processes may originate from different users.
Why Threads?
Following are some reasons why we use threads in designing operating systems.
1. A process with multiple threads makes a great server, for example a printer server.
2. Because threads can share common data, they do not need to use interprocess communication.
3. Because of their very nature, threads can take advantage of multiprocessors.
Threads are cheap in the sense that
1. They only need a stack and storage for registers, therefore threads are cheap to create.
2. Threads use very few resources of the operating system in which they are working. That is, threads do not need a new address space, global data, program code or operating system resources.
3. Context switching is fast when working with threads. The reason is that we only have to save and/or restore the PC, SP and registers.
But this cheapness does not come for free - the biggest drawback is that there is no protection between threads.
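To make the points above concrete, here is a hedged sketch using the POSIX thread API. pthread_create, pthread_join and the mutex calls are real pthread functions; the shared counter and worker logic are made up for illustration. On Linux it compiles with gcc file.c -pthread.

#include <stdio.h>
#include <pthread.h>

/* All threads share the process's globals but each gets its own stack. */
int shared_counter = 0;                     /* illustrative shared data */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    long id = (long)arg;
    pthread_mutex_lock(&lock);              /* shared data, so synchronize */
    shared_counter++;
    printf("thread %ld sees counter = %d\n", id, shared_counter);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);           /* wait for all threads to finish */
    return 0;
}

Note that there is no memory protection between the threads: any of them could, for example, overwrite the others' data, which is the drawback mentioned above.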
User-Level Threads
User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages:
The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Some other advantages are:
User-level threads do not require modification to the operating system.
Simple Representation:
Each thread is represented simply by a PC, registers, a stack and a small control block, all stored in the user process address space.
Simple Management:
This simply means that creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.
Fast and Efficient:
Thread switching is not much more expensive than a procedure call.

Disadvantages:
There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to the other threads.
User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will be blocked in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the process blocks.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.
<DIAGRAM: general structure of kernel-level threads>
Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than those of user-level threads.
Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about threads. As a result there is significant overhead and increased kernel complexity.
Advantages of Threads over Multiple Processes
Context Switching: Threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space to share memory information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads. In other words, a context switch is relatively easier using threads.
Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example, sharing the code section, the data section, and operating system resources like open files.
Disadvantages of Threads over Multiprocesses
Blocking: The major disadvantage is that if the kernel is single threaded, a system call of one thread will block the whole process, and the CPU may be idle during the blocking period.
Security: Since there is extensive sharing among threads, there is a potential problem of security. It is quite possible that one thread overwrites the stack of another thread (or damages shared data), although it is very unlikely since threads are meant to cooperate on a single task.
Application that Benefits from Threads
A proxy server satisfying the requests of a number of computers on a LAN would benefit from a multi-threaded process. In general, any program that has to do more than one task at a time could benefit from multitasking. For example, a program that reads input, processes it, and produces output could have three threads, one for each task.
Application that cannot Benefit from Threads
Any sequential process that cannot be divided into parallel tasks will not benefit from threads, as the tasks would block until the previous one completes. For example, a program that displays the time of the day would not benefit from multiple threads.
Resources used in Thread Creation and Process Creation

When a new thread is created it shares its code section, data section and operating system resources like open files with the other threads. But it is allocated its own stack, register set and program counter.
The creation of a new process differs from that of a thread mainly in the fact that all the resources shared by threads are needed explicitly for each process. So though two processes may be running the same piece of code, they need to have their own copy of the code in main memory to be able to run. Two processes also do not share other resources with each other. This makes the creation of a new process very costly compared to that of a new thread.
Context Switch
To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using a scheduling algorithm) to run on the CPU at equal intervals. Each time a clock interrupt occurs, the interrupt handler checks how much time the current running process has used. If it has used up its entire time slice, then the CPU scheduling algorithm (in the kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch.
Major Steps of Context Switching
The values of the CPU registers are saved in the process table of the process that was running just
before the clock interrupt occurred.
The registers are loaded from the process picked by the CPU scheduler to run next.
In a multiprogrammed uniprocessor computing system, context switches occur frequently enough that
all processes appear to be running concurrently. If a process has more than one thread, the Operating
System can use the context switching technique to schedule the threads so they appear to execute in
parallel. This is the case if threads are implemented at the kernel level. Threads can also be
implemented entirely at the user level in run-time libraries. Since in this case no thread scheduling is
provided by the Operating System, it is the responsibility of the programmer to yield the CPU
frequently enough in each thread so all threads in the process can make progress.
Action of Kernel to Context Switch Among Threads
The threads share a lot of resources with other peer threads belonging to the same process, so a context switch among threads of the same process is easy. It involves a switch of the register set, the program counter and the stack. It is relatively easy for the kernel to accomplish this task.
Action of kernel to Context Switch Among Processes

Context switches among processes are expensive. Before a process can be switched its process
control block (PCB) must be saved by the operating system. The PCB consists of the following
information:
The process state.
The program counter, PC.
The values of the different registers.
The CPU scheduling information for the process.
Memory management information regarding the process.
Possible accounting information for this process.
I/O status information of the process.
When the PCB of the currently executing process has been saved, the operating system loads the PCB of the next process that has to be run on the CPU. This is a heavy task and it takes a lot of time.

Multithreading Models

Many operating systems support kernel threads and user threads in a combined way. An example of such a system is Solaris. Multithreading models are of three types:
Many to many model.
Many to one model.
One to one model.
Many to Many Model
In this model, multiple user threads are multiplexed onto the same or a smaller number of kernel level threads. The number of kernel level threads is specific to the machine. The advantage of this model is that if a user thread is blocked, we can schedule other user threads onto other kernel threads. Thus, the system doesn't block if a particular thread is blocked.

Many to One Model

In this model, multiple user threads are mapped to one kernel thread. In this model, when a user thread makes a blocking system call the entire process blocks. As we have only one kernel thread and only one user thread can access the kernel at a time, multiple threads are not able to access multiple processors at the same time.

One to One Model

In this model, there is a one to one relationship between kernel and user threads. In this model multiple threads can run on multiple processors. The problem with this model is that creating a user thread requires creating the corresponding kernel thread.

fork() in C
The fork() system call is used to create a new process. The newly created process becomes the child of the caller process. It takes no parameters and returns an integer value. Below are the different values returned by fork().
Negative value: creation of a child process was unsuccessful.
Zero: returned to the newly created child process.
Positive value: returned to the parent or caller. The value contains the process ID of the newly created child process.
Examples:
1) Output of the below program.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();
    if (pid == 0)
        printf("Child process created\n");
    else
        printf("Parent process created\n");
    return 0;
}
Output:
Parent process created
Child process created

In the above code, a child process is created; fork() returns 0 in the child process and a positive integer to the parent process.

2) Calculate the number of times "hello" is printed.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    fork();
    fork();
    fork();
    printf("hello\n");
    return 0;
}
Output:
hello
hello
hello
hello
hello
hello
hello
hello

The number of times "hello" is printed is equal to the number of processes created. Total number of processes = 2^n, where n is the number of fork system calls. So here n = 3, and 2^3 = 8.
Let us put some label names on the three lines:
fork();   // Line 1
fork();   // Line 2
fork();   // Line 3

          L1                 // There will be 1 child process created by line 1
         /  \
       L2    L2              // There will be 2 child processes created by line 2
      /  \   /  \
    L3   L3 L3   L3          // There will be 4 child processes created by line 3
So there are a total of eight processes (seven new child processes and one original process).
Please note that the above programs don't compile in a Windows environment.
fork() vs exec()
The fork system call creates a new process. The new process created by fork() is a copy of the current process except for the returned value. The exec system call replaces the current process with a new program.
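A minimal sketch contrasting the two calls (running /bin/ls here is only an example chosen for this note, not something from the original article): the child replaces its image with exec while the parent waits for it.

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                           /* duplicate the current process   */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* child: replace image with ls    */
        perror("execlp");                         /* reached only if exec fails      */
        return 1;
    }
    wait(NULL);                                   /* parent: wait for child to exit  */
    printf("child finished\n");
    return 0;
}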

Exercise:
1) A process executes the following code
for (i = 0; i < n; i++)
    fork();
The total number of child processes created is: (GATE CS 2008)
(A) n
(B) 2^n - 1
(C) 2^n
(D) 2^(n+1) - 1
Answer: (B)
Explanation:
            F0
          /    \
        F1      F1           // There will be 1 child process created by the first fork (i = 0)
       /  \    /  \
     F2    F2 F2    F2       // There will be 2 child processes created by the second fork (i = 1)
    / \   / \  / \   / \
   ......................    // There will be 4 child processes created by the third fork (i = 2), and so on
If we sum all levels of the above tree for i = 0 to n-1, we get 2^n - 1. So there will be 2^n - 1 child processes.
fork() and Binary Tree
Given a program using the fork() system call:
#include <stdio.h>
#include <unistd.h>

int main()
{
    fork();
    fork() && fork() || fork();
    fork();

    printf("forked\n");
    return 0;
}
How many processes will be spawned after executing the above program?
A fork() system call spawns processes as leaves of a growing binary tree. If we call fork() twice, it will spawn 2^2 = 4 processes. All these 4 processes form the leaf children of the binary tree. In general, if we are at level l and fork() is called unconditionally, we will have 2^l processes at level (l+1). This is equivalent to the maximum number of child nodes in a binary tree at level (l+1).
As another example, assume that we have invoked fork() 3 times unconditionally. We can represent the spawned processes using a full binary tree with 3 levels. At level 3, we will have 2^3 = 8 child nodes, which corresponds to the number of processes running.
A note on C/C++ logical operators:
The logical operator && has higher precedence than ||, and both have left to right associativity. After executing the left operand, the final result is estimated and execution of the right operand depends on the outcome of the left operand as well as the type of operation.
In case of AND (&&), after evaluation of the left operand, the right operand will be evaluated only if the left operand evaluates to non-zero. In case of OR (||), after evaluation of the left operand, the right operand will be evaluated only if the left operand evaluates to zero.
Return value of fork():
The man pages of fork() cite the following excerpt on the return value:
"On success, the PID of the child process is returned in the parent, and 0 is returned in the child. On failure, -1 is returned in the parent, no child process is created, and errno is set appropriately."
A PID is like a handle to a process and is represented as an unsigned int. We can conclude that fork() returns a non-zero value in the parent and zero in the child. Let us analyse the program. For easy notation, label each fork() as shown below,
#include <stdio.h>
#include <unistd.h>

int main()
{
    fork();      /* A */
    (fork()      /* B */ &&
     fork()      /* C */) ||   /* B and C are grouped according to precedence */
    fork();      /* D */

    fork();      /* E */

    printf("forked\n");
    return 0;
}
The following diagram provides a pictorial representation of forking new processes. All newly created processes are propagated on the right side of the tree, and parents are propagated on the left side of the tree, in consecutive levels.

The first two fork() calls are called unconditionally.


At level 0, we have only the main process. The main process (m in the diagram) will create child C1 and both will continue execution. The children are numbered in increasing order of their creation.
At level 1, we have m and C1 running, ready to execute fork() B. (Note that B, C and D are the operands of the && and || operators.) The initial expression B will be executed in every child and parent process running at this level.
At level 2, due to fork() B executed by m and C1, we have m and C1 as parents and C2 and C3 as children.
The return value of fork() B is non-zero in the parent, and zero in the child. Since the first operator is &&, because of the zero return value the children C2 and C3 will not execute the next expression (fork() C). The parent processes m and C1 will continue with fork() C. The children C2 and C3 will directly execute fork() D, to evaluate the value of the logical OR operation.

At level 3, we have m, C1, C2, C3 as running processes and C4, C5 as children. The expression is now simplified to ((B && C) || D), and at this point the value of (B && C) is obvious. In the parents it is non-zero and in the children it is zero. Hence the parents, aware of the outcome of the overall B && C || D, will skip execution of fork() D. Since in the children (B && C) evaluated to zero, they will execute fork() D. We should note that children C2 and C3, created at level 2, will also run fork() D as mentioned above.
At level 4, we will have m, C1, C2, C3, C4, C5 as running processes and C6, C7, C8 and C9 as child processes. All these processes unconditionally execute fork() E, and spawn one child each.
At level 5, we will have 20 processes running. The program (on Ubuntu Maverick, GCC 4.4.5) printed "forked" 20 times: once by the root parent (main) and the rest by its descendants. Overall there will be 19 processes spawned.
A note on order of evaluation:
The evaluation order of operands in binary operators is unspecified. For details, read the post on evaluation order of operands. However, the logical operators are an exception: they are guaranteed to evaluate from left to right.
2) Consider the following code fragment:
if (fork() == 0)
{
    a = a + 5;
    printf("%d,%d\n", a, &a);
}
else
{
    a = a - 5;
    printf("%d, %d\n", a, &a);
}
Let u, v be the values printed by the parent process, and x, y be the values printed by the
child process. Which one of the following is TRUE? (GATE-CS-2005)
(A) u = x + 10 and v = y
(B) u = x + 10 and v != y
(C) u + 10 = x and v = y
(D) u + 10 = x and v != y
Answer: (C)

fork() returns 0 in the child process and the process ID of the child in the parent process. In the child (x), a = a + 5. In the parent (u), a = a - 5; therefore x = u + 10. The physical addresses of 'a' in the parent and child must be different. But our program accesses virtual addresses (assuming we are running on an OS that uses virtual memory). The child process gets an exact copy of the parent process, and the virtual address of 'a' doesn't change in the child process. Therefore, we get the same addresses in both parent and child.
3) Predict the output of the below program.
#include <stdio.h>
#include <unistd.h>

int main()
{
    fork();
    fork() && fork() || fork();
    fork();
    printf("forked\n");
    return 0;
}
See this for solution

References:
http://www.csl.mtu.edu/cs4411.ck/www/NOTES/process/fork/create.html

Operating System | Paging

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.

Logical Address or Virtual Address (represented in bits): An address generated by the CPU

Logical Address Space or Virtual Address Space( represented in words or bytes): The set of all
logical addresses generated by a program

Physical Address (represented in bits): An address actually available on memory unit

Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses

Example:

If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)

If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits

If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)

If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits

The mapping from virtual addresses to physical addresses is done by the memory management unit (MMU), which is a hardware device, and this mapping is known as the paging technique.

The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.

The Logical Address Space is also split into fixed-size blocks, called pages.

Page Size = Frame Size

Let us consider an example:

Physical Address = 12 bits, then Physical Address Space = 4 K words

Logical Address = 13 bits, then Logical Address Space = 8 K words

Page size = frame size = 1 K words (assumption)

Address generated by CPU is divided into

Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number

Page offset(d): Number of bits required to represent particular word in a page or page size of
Logical Address Space or word number of a page or page offset.

Physical Address is divided into

Frame number(f): Number of bits required to represent the frame of Physical Address Space or
Frame number.

Frame offset(d): Number of bits required to represent particular word in a frame or frame size of
Physical Address Space or word number of a frame or frame offset.

The hardware implementation of the page table can be done by using dedicated registers. But the usage of registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries then we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.

The TLB is associative, high speed memory.

Each entry in the TLB consists of two parts: a tag and a value.

When this memory is used, an item is compared with all tags simultaneously. If the item is found, then the corresponding value is returned.

Main memory access time = m

If the page table is kept in main memory,
Effective access time = m (to access the page table) + m (to access the page itself in main memory)
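Using the example above (a 13-bit logical address and 1 K-word pages, so a 3-bit page number and a 10-bit offset), the following sketch shows how an address is split and the physical address is formed. The page table contents are made up for illustration, and a real table would also carry a valid/invalid bit per entry.

#include <stdio.h>

#define OFFSET_BITS 10                          /* 1 K-word pages -> 10-bit offset */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void)
{
    /* Hypothetical page table: 8 pages mapping into 4 frames (values made up). */
    unsigned page_table[8] = {2, 3, 1, 0, 2, 3, 1, 0};

    unsigned logical = 6783;                    /* some 13-bit logical address */
    unsigned p = logical >> OFFSET_BITS;        /* page number: upper 3 bits   */
    unsigned d = logical & OFFSET_MASK;         /* page offset: lower 10 bits  */
    unsigned f = page_table[p];                 /* frame number from the table */
    unsigned physical = (f << OFFSET_BITS) | d; /* physical = frame * 1K + d   */

    printf("logical %u -> page %u, offset %u -> frame %u, physical %u\n",
           logical, p, d, f, physical);
    return 0;
}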

Operating Systems | Segmentation

Segmentation is a memory management technique in which memory is divided into variable sized chunks which can be allocated to processes. Each chunk is called a segment.
A table stores the information about all such segments and is called the segment table.

Segment Table: It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:

Base Address: It contains the starting physical address where the segment resides in memory.

Limit: It specifies the length of the segment.
Translation of a two-dimensional logical address to a one-dimensional physical address (a small C sketch follows the list below).

The address generated by the CPU is divided into:

Segment number (s): Number of bits required to represent the segment.

Segment offset (d): Number of bits required to represent the offset within the segment.
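A small sketch of the translation described above; the segment table values are made up for illustration. The hardware adds the offset to the segment's base after checking it against the limit.

#include <stdio.h>

struct segment { unsigned base, limit; };       /* one segment table entry */

int main(void)
{
    /* Hypothetical segment table (base and limit values are made up). */
    struct segment seg_table[3] = {
        {1400, 1000},   /* segment 0 */
        {6300,  400},   /* segment 1 */
        {4300, 1100},   /* segment 2 */
    };

    unsigned s = 1, d = 53;                     /* logical address (segment, offset) */

    if (d >= seg_table[s].limit) {              /* offset must be within the segment */
        printf("trap: segment %u offset %u out of range\n", s, d);
        return 1;
    }
    unsigned physical = seg_table[s].base + d;  /* physical = base + offset */
    printf("(%u, %u) -> physical address %u\n", s, d, physical);
    return 0;
}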

Advantages of Segmentation:

No internal fragmentation.

The segment table consumes less space in comparison to the page table in paging.

Disadvantages of Segmentation:

As processes are loaded and removed from memory, the free memory space is broken into little pieces, causing external fragmentation.

Operating System | Banker's Algorithm

The Banker's algorithm is a resource allocation and deadlock avoidance algorithm that tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, and then makes a safe-state check to test for possible activities, before deciding whether allocation should be allowed to continue.
The following data structures are used to implement the Banker's algorithm:
Let n be the number of processes in the system and m be the number of resource types.
Available:

It is a 1-d array of size m indicating the number of available resources of each type.

Available[ j ] = k means there are k instances of resource type Rj.

Max:

It is a 2-d array of size n*m that defines the maximum demand of each process in the system.

Max[ i, j ] = k means process Pi may request at most k instances of resource type Rj.

Allocation:

It is a 2-d array of size n*m that defines the number of resources of each type currently allocated to each process.

Allocation[ i, j ] = k means process Pi is currently allocated k instances of resource type Rj.

Need:

It is a 2-d array of size n*m that indicates the remaining resource need of each process.

Need[ i, j ] = k means process Pi may need k more instances of resource type Rj.

Need[ i, j ] = Max[ i, j ] - Allocation[ i, j ]

Allocation_i specifies the resources currently allocated to process Pi and Need_i specifies the additional resources that process Pi may still request to complete its task.
The Banker's algorithm consists of a Safety algorithm and a Resource-Request algorithm.

Safety Algorithm

Resource-Request Algorithm
Let Request_i be the request array for process Pi. Request_i[ j ] = k means process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
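The numbered steps of the Safety and Resource-Request algorithms appear only as figures in the original article and are not reproduced here. As a hedged illustration, the following is a minimal C sketch of the safety check that follows the data structures defined above; the snapshot values in main() are the classic textbook numbers, used only for illustration and not necessarily the ones in the original figures.

#include <stdio.h>
#include <string.h>

#define N 5   /* number of processes      */
#define M 3   /* number of resource types */

/* Returns 1 and prints a safe sequence if the state is safe, 0 otherwise. */
int is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int work[M], finish[N] = {0}, safe_seq[N], count = 0;
    memcpy(work, available, sizeof(work));

    while (count < N) {
        int found = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                                /* Pi can finish with current work */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];     /* it then releases its resources  */
                finish[i] = 1;
                safe_seq[count++] = i;
                found = 1;
            }
        }
        if (!found) return 0;                        /* no runnable process: unsafe */
    }
    printf("Safe sequence:");
    for (int i = 0; i < N; i++) printf(" P%d", safe_seq[i]);
    printf("\n");
    return 1;
}

int main(void)
{
    /* Illustrative snapshot only (assumed values, not from the original figure). */
    int allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[N][M]        = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int available[M]     = {3, 3, 2};
    int need[N][M];

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - allocation[i][j];   /* Need = Max - Allocation */

    printf("System is %s\n",
           is_safe(available, allocation, need) ? "in a safe state" : "unsafe");
    return 0;
}

The Resource-Request algorithm would first check Request_i <= Need_i and Request_i <= Available, tentatively allocate the resources, and then call the same safety check to decide whether the request can be granted.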

Example:

Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource type A has 10 instances, B has 5 instances and C has 7 instances. Suppose that at time t0 the following snapshot of the system has been taken:

Question1. What will be the content of the Need matrix?


Need [i, j] = Max [i, j] - Allocation [i, j]
So, the content of Need Matrix is:

Question2. Is the system in safe state? If Yes, then what is the safe sequence?
Applying the Safety algorithm on the given system,

Question3. What will happen if process P1 requests one additional instance of resource type A and
two instances of resource type C?

We must determine whether this new system state is safe. To do so, we again execute the Safety algorithm on the above data structures.

Hence the new system state is safe, so we can immediately grant the request of process P1.
Gate question:
http://quiz.geeksforgeeks.org/gate-gate-cs-2014-set-1-question-41/
Reference:
Operating System Concepts 8th Edition by Abraham Silberschatz, Peter B. Galvin, Greg Gagne

Readers-Writers Problem | Set 1 (Introduction and Readers Preference Solution)

Consider a situation where we have a file shared between many people.

If one of the people tries editing the file, no other person should be reading or writing at the same time, otherwise the changes will not be visible to him/her.

However, if some person is reading the file, then others may read it at the same time.

Precisely in OS terms, we call this situation the readers-writers problem.

Problem parameters:

One set of data is shared among a number of processes

Once a writer is ready, it performs its write. Only one writer may write at a time.

If a process is writing, no other process can read it.

If at least one reader is reading, no other process can write.

Readers may not write; they only read.


Solution when Reader has the Priority over Writer

Here priority means that no reader should wait if the share is currently opened for reading.
Three variables are used to implement the solution: mutex, wrt, readcnt.
1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion when readcnt is updated, i.e. when any reader enters or exits the critical section, and semaphore wrt is used by both readers and writers
2. int readcnt; // readcnt tells the number of processes performing reads in the critical section, initially 0
Functions for semaphores:
wait() : decrements the semaphore value.
signal() : increments the semaphore value.
Writer process:

1. The writer requests entry to the critical section.
2. If allowed, i.e. wait() gives a true value, it enters and performs the write. If not allowed, it keeps on waiting.
3. It exits the critical section.

do {
    // writer requests for critical section
    wait(wrt);

    // performs the write

    // leaves the critical section
    signal(wrt);

} while(true);

Reader process:
1. The reader requests entry to the critical section.
2. If allowed:
   - it increments the count of the number of readers inside the critical section. If this reader is the first reader entering, it locks the wrt semaphore to restrict the entry of writers while any reader is inside.
   - It then signals mutex, as any other reader is allowed to enter while others are already reading.
   - After performing the read, it exits the critical section. When exiting, it checks whether no more readers are inside; if so, it signals the semaphore wrt, as now a writer can enter the critical section.
3. If not allowed, it keeps on waiting.

do {
    // Reader wants to enter the critical section
    wait(mutex);

    // The number of readers has now increased by 1
    readcnt++;

    // there is at least one reader in the critical section
    // this ensures no writer can enter if there is even one reader
    // thus we give preference to readers here
    if (readcnt == 1)
        wait(wrt);

    // other readers can enter while this current reader is inside
    // the critical section
    signal(mutex);

    // current reader performs reading here

    wait(mutex);   // a reader wants to leave

    readcnt--;

    // that is, no reader is left in the critical section,
    // writers can enter
    if (readcnt == 0)
        signal(wrt);

    signal(mutex); // reader leaves

} while(true);

Thus, the semaphore wrt is waited on by both readers and writers in a manner such that preference is given to readers if writers are also waiting. Thus, no reader waits simply because a writer has requested to enter the critical section.
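The pseudocode above can be turned into a runnable program with POSIX semaphores and pthreads. The sketch below keeps the same variable names (mutex, wrt, readcnt); the thread counts and the shared integer are made up for illustration, and it compiles on Linux with gcc file.c -pthread.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex, wrt;        /* mutex guards readcnt, wrt guards the shared data */
int readcnt = 0;
int shared_data = 0;     /* hypothetical shared resource */

void *reader(void *arg)
{
    long id = (long)arg;
    sem_wait(&mutex);
    readcnt++;
    if (readcnt == 1)            /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    printf("Reader %ld read %d\n", id, shared_data);

    sem_wait(&mutex);
    readcnt--;
    if (readcnt == 0)            /* last reader lets writers in */
        sem_post(&wrt);
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    long id = (long)arg;
    sem_wait(&wrt);              /* exclusive access for the writer */
    shared_data++;
    printf("Writer %ld wrote %d\n", id, shared_data);
    sem_post(&wrt);
    return NULL;
}

int main(void)
{
    pthread_t r[3], w[2];
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);

    for (long i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, (void *)i);
    for (long i = 0; i < 2; i++) pthread_create(&w[i], NULL, writer, (void *)i);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    for (int i = 0; i < 2; i++) pthread_join(w[i], NULL);

    sem_destroy(&mutex);
    sem_destroy(&wrt);
    return 0;
}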

What's the difference between Priority Inversion and Priority Inheritance?

Both of these concepts come under priority scheduling in operating systems. But are they the same?
In one line, Priority Inversion is a problem while Priority Inheritance is a solution. Literally, Priority Inversion means that the priorities of tasks get inverted and Priority Inheritance means that the priority of a task gets inherited. Both of these phenomena happen in priority scheduling. Basically, in Priority Inversion, a higher priority task (H) ends up waiting for a middle priority task (M) when H is sharing a critical section with a lower priority task (L) and L is already in the critical section. Effectively, H waiting for M results in inverted priorities, i.e. Priority Inversion. One of the solutions for this problem is Priority Inheritance. In Priority Inheritance, when L is in the critical section, L inherits the priority of H at the time when H starts pending for the critical section. By doing so, M doesn't interrupt L and H doesn't wait for M to finish. Please note that inheriting of priority is done temporarily, i.e. L goes back to its old priority when it comes out of the critical section.
More details on these can be found here.
Priority Inversion: What the heck!

Let us first put priority inversion in the context of the big picture, i.e. where this comes from.
In operating systems, one of the important concepts is task scheduling. There are several scheduling methods such as First Come First Serve, Round Robin, priority based scheduling, etc. Each scheduling method has its pros and cons. As you might have guessed, Priority Inversion comes under priority based scheduling. Basically, it's a problem which arises sometimes when priority based scheduling is used by the OS. In priority based scheduling, different tasks are given different priorities so that higher priority tasks can intervene in lower priority tasks if possible. So, in priority based scheduling, if a lower priority task (L) is running and a higher priority task (H) also needs to run, the lower priority task (L) would be preempted by the higher priority task (H). Now, suppose both the lower and higher priority tasks need to share a common resource (say access to the same file or device) to achieve their respective work. In this case, since there's resource sharing and task synchronization is needed, several methods/techniques can be used for handling such scenarios. For the sake of our topic on Priority Inversion, let us mention a synchronization method, say mutex. Just to recap on mutex: a task acquires the mutex before entering the critical section (CS) and releases the mutex after exiting the critical section (CS). While running in the CS, a task accesses this common resource. More details on this can be referred to here. Now, say both L and H share a common critical section (CS), i.e. the same mutex is needed for this CS.
Coming to our discussion of priority inversion, let us examine some scenarios.
1) L is running but not in the CS; H needs to run; H preempts L; H starts running; H relinquishes or releases control; L resumes and starts running.
2) L is running in the CS; H needs to run but not in the CS; H preempts L; H starts running; H relinquishes control; L resumes and starts running.
3) L is running in the CS; H also needs to run in the CS; H waits for L to come out of the CS; L comes out of the CS; H enters the CS and starts running.
Please note that the above scenarios don't show the problem of any Priority Inversion (not even scenario 3). Basically, as long as the lower priority task isn't running in the shared CS, the higher priority task can preempt it. But if L is running in the shared CS and H also needs to run in the CS, H waits until L comes out of the CS. The idea is that the CS should be small enough that it doesn't result in H waiting for a long time while L is in the CS. That's why writing CS code requires careful consideration. In any of the above scenarios, priority inversion (i.e. reversal of priority) didn't occur because the tasks ran as per the design.
Now let us add another task of middle priority, say M. Now the task priorities are in the order L < M < H. In our example, M doesn't share the same critical section (CS). In this case, the following sequence of task execution results in the Priority Inversion problem.
4) L is running in the CS; H also needs to run in the CS; H waits for L to come out of the CS; M interrupts L and starts running; M runs till completion and relinquishes control; L resumes and starts running till the end of the CS; H enters the CS and starts running.
Note that neither L nor H shares the CS with M.
Here, we can see that the running of M has delayed the running of both L and H. Precisely speaking, H is of higher priority and doesn't share the CS with M, but H had to wait for M. This is where priority based scheduling didn't work as expected, because the priorities of M and H got inverted in spite of not sharing any CS. This problem is called Priority Inversion. This is what the heck Priority Inversion was! In a system with priority based scheduling, higher priority tasks can face this problem and it can result in unexpected behavior/results. In general purpose OSes, it can result in slower performance. In RTOSes, it can result in more severe outcomes. The most famous Priority Inversion problem was what happened at the Mars Pathfinder.
If we have a problem, there has to be a solution for it. For Priority Inversion as well, there are different solutions such as Priority Inheritance, etc. This is going to be our next article.

Process Management
Question 1
Consider the following code fragment:
if (fork() == 0)
{ a = a + 5; printf("%d,%d\n", a, &a); }
else { a = a - 5; printf("%d, %d\n", a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process.
Which one of the following is TRUE?
A) u = x + 10 and v = y
B) u = x + 10 and v != y
C) u + 10 = x and v = y
D) u + 10 = x and v != y
Question 2
The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y without allowing any intervening access to the memory location x. Consider the following implementation of the P and V functions on a binary semaphore.
void P (binary_semaphore *s) {
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}

void V (binary_semaphore *s) {
    s->value = 0;
}
Which one of the following is true?
A) The implementation may not work if context switching is disabled in P.
B) Instead of using fetch-and-set, a pair of normal load/store can be used.
C) The implementation of V is wrong.
D) The code does not implement a binary semaphore.
Question 2 Explanation:
Let us talk about the operation P(). It points x at s->value, then it fetches the old value of *x into y and sets *x to 1. The while loop of a process will continue forever if some other process doesn't execute V() and set the value of s to 0. If context switching is disabled in P, the while loop may run forever as no other process will be able to execute V(). So the correct option is (A).
Question 3
Three concurrent processes X, Y, and Z execute three different code segments that access and update
certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process
Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores
c, d, and a before entering the respective code segments. After completing the execution of its code
segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes? (GATE CS 2013)

A) X: P(a)P(b)P(c)  Y: P(b)P(c)P(d)  Z: P(c)P(d)P(a)
B) X: P(b)P(a)P(c)  Y: P(b)P(c)P(d)  Z: P(a)P(c)P(d)
C) X: P(b)P(a)P(c)  Y: P(c)P(b)P(d)  Z: P(a)P(c)P(d)
D) X: P(a)P(b)P(c)  Y: P(c)P(b)P(d)  Z: P(c)P(d)P(a)

Question 3 Explanation:
Option A can cause deadlock. Imagine a situation where process X has acquired a, process Y has acquired b, and process Z has acquired c and d. There is a circular wait now. Option C can also cause deadlock. Imagine a situation where process X has acquired b, process Y has acquired c, and process Z has acquired a. There is a circular wait now. Option D can also cause deadlock. Imagine a situation where process X has acquired a and b, and process Y has acquired c. X and Y are circularly waiting for each other. See http://www.eee.metu.edu.tr/~halici/courses/442/Ch5%20Deadlocks.pdf. Consider option A) for example: all 3 processes are concurrent, so X will get semaphore a, Y will get b and Z will get c; now X is blocked on b, Y is blocked on c, and Z gets d and is blocked on a. Thus it will lead to deadlock. Similarly one can figure out that for B) the completion order is Z, X, then Y. This question is a duplicate of http://geeksquiz.com/gate-gate-cs-2013-question-16/
Question 4
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution? (GATE CS 2013)
A) -2
B) -1
C) 1
D) 2
Question 4 Explanation:
Processes can run in many ways; below is one interleaving in which x attains its maximum value.
Semaphore S is initialized to 2.
Process W enters (S = 1), reads x = 0 and computes 1, but does not store it back yet.
Then process Y enters (S = 0), reads x = 0, decrements by two and stores x = -2, then signals the
semaphore (S = 1).
Now process Z enters (S = 0), reads x = -2, stores x = -4 and signals the semaphore (S = 1).
Now process W stores its computed value, x = 1, and signals the semaphore (S = 2).
Then process X enters, reads x = 1, increments it and stores x = 2.
So the correct option is (D).
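As a rough, runnable illustration (not part of the question), the pthreads sketch below mimics the four processes with threads and a POSIX counting semaphore initialized to 2. Because S admits two threads into the read-modify-write region at once, the printed final value of x depends on the interleaving; one interleaving yields the maximum of 2 derived above.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int x = 0;       /* shared variable, initially zero        */
static sem_t S;         /* counting semaphore, initialized to two */

static void *inc_by_one(void *arg) {     /* behaviour of W and X */
    (void)arg;
    sem_wait(&S);                        /* P(S) */
    int t = x;                           /* read x from memory   */
    t = t + 1;                           /* increment by one     */
    x = t;                               /* store back to memory */
    sem_post(&S);                        /* V(S) */
    return NULL;
}

static void *dec_by_two(void *arg) {     /* behaviour of Y and Z */
    (void)arg;
    sem_wait(&S);
    int t = x;
    t = t - 2;
    x = t;
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t w, xp, y, z;
    sem_init(&S, 0, 2);                  /* S initialized to 2 */
    pthread_create(&w,  NULL, inc_by_one, NULL);
    pthread_create(&xp, NULL, inc_by_one, NULL);
    pthread_create(&y,  NULL, dec_by_two, NULL);
    pthread_create(&z,  NULL, dec_by_two, NULL);
    pthread_join(w, NULL);  pthread_join(xp, NULL);
    pthread_join(y, NULL);  pthread_join(z, NULL);
    printf("final x = %d\n", x);         /* value depends on the interleaving */
    return 0;
}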
Question 5
A certain computation generates two arrays a and b such that a[i] = f(i) for 0 <= i < n and b[i] = g(a[i]) for
0 <= i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that X
computes the array a and Y computes the array b. The processes employ two binary semaphores R and S,
both initialized to zero. The array a is shared by the two processes. The structures of the processes are
shown below.
Process X:
private i;
for (i = 0; i < n; i++) {
    a[i] = f(i);
    ExitX(R, S);
}

Process Y:
private i;
for (i = 0; i < n; i++) {
    EntryY(R, S);
    b[i] = g(a[i]);
}

Which one of the following represents the CORRECT implementations of ExitX and EntryY?
(A)
ExitX(R, S) {
    P(R);
    V(S);
}
EntryY(R, S) {
    P(S);
    V(R);
}
(B)
ExitX(R, S) {
    V(R);
    V(S);
}
EntryY(R, S) {
    P(R);
    P(S);
}
(C)
ExitX(R, S) {
    P(S);
    V(R);
}
EntryY(R, S) {
    V(S);
    P(R);
}
(D)
ExitX(R, S) {
    V(R);
    P(S);
}
EntryY(R, S) {
    V(S);
    P(R);
}


Question 5 Explanation:
The requirement is that neither deadlock occurs nor the binary semaphores ever take a value greater than one.
Option A leads to deadlock: both semaphores start at 0, so X blocks on P(R) and Y blocks on P(S).
Option B can increase the value of the semaphores beyond 1 (up to n), since X may run far ahead of Y.
Option D may increase the value of semaphores R and S to 2 in some cases.
Hence option C is the answer.
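To see option (C) in action, here is a runnable pthreads sketch (an illustration, not part of the question): POSIX semaphores stand in for the binary semaphores R and S (both initialized to zero), threads stand in for processes X and Y, and f() and g() are placeholder functions chosen only for this demo.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
static int a[N], b[N];
static sem_t R, S;

static int f(int i) { return 2 * i; }      /* placeholder for f */
static int g(int v) { return v + 1; }      /* placeholder for g */

static void *processX(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        a[i] = f(i);
        /* ExitX(R, S) from option (C): */
        sem_wait(&S);                      /* P(S) */
        sem_post(&R);                      /* V(R) */
    }
    return NULL;
}

static void *processY(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        /* EntryY(R, S) from option (C): */
        sem_post(&S);                      /* V(S) */
        sem_wait(&R);                      /* P(R) */
        b[i] = g(a[i]);                    /* a[i] is guaranteed to be ready here */
    }
    return NULL;
}

int main(void) {
    pthread_t x, y;
    sem_init(&R, 0, 0);
    sem_init(&S, 0, 0);
    pthread_create(&x, NULL, processX, NULL);
    pthread_create(&y, NULL, processY, NULL);
    pthread_join(x, NULL);
    pthread_join(y, NULL);
    for (int i = 0; i < N; i++)
        printf("b[%d] = %d\n", i, b[i]);
    return 0;
}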
http://quiz.geeksforgeeks.org/operating-systems/process-synchronization/
http://quiz.geeksforgeeks.org/cpu-scheduling/
http://quiz.geeksforgeeks.org/operating-systems/memory-management/
http://quiz.geeksforgeeks.org/operating-systems/iinput-output-systems/


Commonly Asked Operating Systems Interview Questions | Set 1

What is a process and process table? What are the different states of a process?
A process is an instance of a program in execution. For example, a web browser is a process, and a shell (or
command prompt) is a process.
The operating system is responsible for managing all the processes that are running on a computer and
allocates each process a certain amount of time to use the processor. In addition, the operating system
also allocates various other resources that processes will need, such as computer memory or disks. To
keep track of the state of all the processes, the operating system maintains a table known as the process
table. In this table, every process is listed along with the resources it is using and its current state.
Processes can be in one of three states: running, ready, or waiting. The running state means that the
process has all the resources it needs for execution and has been given permission by the operating
system to use the processor. Only one process can be in the running state at any given time. The
remaining processes are either in a waiting state (i.e., waiting for some external event to occur such as
user input or a disk access) or a ready state (i.e., waiting for permission to use the processor). In a real
operating system, the waiting and ready states are implemented as queues which hold the processes in
these states. A simple representation of the life cycle of a process is given at
http://courses.cs.vt.edu/csonline/OS/Lessons/Processes/index.html
What is a Thread? What are the differences between process and thread?
A thread is a single sequential stream of execution within a process. Because threads have some of the
properties of processes, they are sometimes called lightweight processes. Threads are a popular way to
improve application performance through parallelism. For example, in a browser, multiple tabs can be
different threads; MS Word uses multiple threads, one thread to format the text, another thread to process
inputs, and so on.
A thread has its own program counter (PC), register set, and stack space. Unlike processes, threads are not
independent of one another; as a result, a thread shares with other threads of the same process its code
section, data section and OS resources like open files and signals. See
http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/threads.htm for more details.
What is deadlock?
Deadlock is a situation where two or more processes wait for each other to finish and none of them ever
finishes. Consider an example where two trains are coming toward each other on the same track and there
is only one track: neither train can move once they are in front of each other. A similar situation occurs
in operating systems when two or more processes hold some resources and wait for resources held by the
other(s).
What are the necessary conditions for deadlock?
Mutual Exclusion: There is a resource that cannot be shared.
Hold and Wait: A process is holding at least one resource and waiting for another resource which is held
by some other process.
No Preemption: The operating system is not allowed to take a resource back from a process until the
process gives it back.
Circular Wait: A set of processes are waiting for each other in circular form.
What is Virtual Memory? How is it implemented?
Virtual memory creates an illusion that each user has one or more contiguous address spaces, each
beginning at address zero. The size of such a virtual address space is generally very large.
The idea of virtual memory is to use disk space to extend the RAM. Running processes don't need to care
whether the memory comes from RAM or disk. The illusion of such a large amount of memory is created
by subdividing the virtual memory into smaller pieces, which can be loaded into physical memory
whenever they are needed by a process.
What is Thrashing?
Thrashing is a situation when the performance of a computer degrades or collapses. Thrashing occurs
when a system spends more time servicing page faults than executing transactions. While servicing page
faults is necessary in order to realize the benefits of virtual memory, thrashing has a negative effect on
the system. As the page fault rate increases, more transactions need servicing from the paging device, so
the queue at the paging device grows and the service time for a page fault increases.
(Source: http://cs.gmu.edu/cne/modules/vm/blue/thrash.html)
What is Belady's Anomaly?
Belady's anomaly is an anomaly of some page replacement policies in which increasing the number of
page frames results in an increase in the number of page faults. It occurs when the First In First Out
(FIFO) page replacement algorithm is used. See the Wikipedia page for an example and more details.
Differences between mutex and semaphore?
See http://www.geeksforgeeks.org/mutex-vs-semaphore/

LAST MINUTE NOTES


Operating Systems: It is the interface between the user and the computer hardware.
Types of OS:

Batch OS: A set of similar jobs are stored in the main memory for execution. A job gets assigned
to the CPU, only when the execution of the previous job completes.

Multiprogramming OS: The main memory consists of jobs waiting for CPU time. The OS
selects one of the processes and assigns it the CPU. Whenever the executing process needs to
wait for any other operation (like I/O), the OS selects another process from the job queue and
assigns it the CPU. This way, the CPU is never kept idle and the user gets the impression that
multiple tasks are being done at once.

Multitasking OS: Multitasking OS combines the benefits of Multiprogramming OS and CPU
scheduling to perform quick switches between jobs. The switch is so quick that the user can
interact with each program as it runs.

Time Sharing OS: Time sharing systems require interaction with the user to instruct the OS to
perform various tasks. The OS responds with an output. The instructions are usually given
through an input device like the keyboard.

Real Time OS : Real Time OS are usually built for dedicated systems to accomplish a specific set
of tasks within deadlines.
Threads

A thread is a lightweight process and forms a basic unit of CPU utilization. A process can perform more
than one task at the same time by including multiple threads.
o A thread has its own program counter, register set, and stack
o A thread shares with other threads of the same process the code section, the data section,
files and signals.
A child process of a given process can be created using the fork() system call. A process that makes n
fork() system calls generates 2^n - 1 child processes.
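A minimal sketch of that count (illustrative only): two unconditional fork() calls create 2^2 - 1 = 3 child processes, so four processes in total reach the printf.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    fork();     /* after this call there are 2 processes       */
    fork();     /* each of them forks again: 4 processes total */
    printf("hello from pid %d\n", (int)getpid());   /* printed 4 times */
    return 0;
}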
There are two types of threads:
User threads (example: Java threads, POSIX threads)
Kernel threads (example: Windows, Solaris)

User level thread:
User threads are implemented by users.
The OS does not recognize user level threads.
Implementation of user threads is easy.
Context switch time is less.
Context switch requires no hardware support.
If one user level thread performs a blocking operation, then the entire process will be blocked.

Kernel level thread:
Kernel threads are implemented by the OS.
Kernel threads are recognized by the OS.
Implementation of kernel threads is complicated.
Context switch time is more.
Hardware support is needed.
If one kernel thread performs a blocking operation, then another thread can continue execution.

Process:
A process is a program under execution. The value of program counter (PC) indicates the address of the
current instruction of the process being executed. Each process is represented by a Process Control Block
(PCB).

Process Scheduling:
Below are different times associated with a process.
Arrival Time:
Time at which the process arrives in the ready queue.
Completion Time:
Time at which process completes its execution.
Burst Time:
Time required by a process for CPU execution.
Turn Around Time:
Time Difference between completion time and arrival time.
Turn Around Time = Completion Time - Arrival Time
Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time - Burst Time
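For example (illustrative numbers): a process that arrives at time 2, needs 4 units of CPU time (burst time), and completes at time 9 has Turn Around Time = 9 - 2 = 7 and Waiting Time = 7 - 4 = 3.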

Why do we need scheduling?


A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS, time
spent waiting for I/O is wasted and CPU is free during this time. In multiprogramming systems, one
process can use CPU while another is waiting for I/O. This is possible only with process scheduling.

Objectives of Process Scheduling Algorithm


Max CPU utilization [Keep CPU as busy as possible]
Fair allocation of CPU.
Max throughput [Number of processes that complete their execution per time unit]
Min turnaround time [Time taken by a process to finish execution]
Min waiting time [Time a process waits in ready queue]
Min response time [Time when a process produces first response]

Different Scheduling Algorithms


First Come First Serve (FCFS): Simplest scheduling algorithm that schedules according to arrival times
of processes.

Shortest Job First (SJF): The process which has the shortest burst time is scheduled first.
Shortest Remaining Time First (SRTF): It is the preemptive mode of the SJF algorithm, in which jobs are
scheduled according to the shortest remaining time.
Round Robin Scheduling: Each process is assigned a fixed time slice in a cyclic way.
Priority Based Scheduling (Non-Preemptive): In this scheduling, processes are scheduled according to
their priorities, i.e., the highest priority process is scheduled first. If the priorities of two processes match,
they are scheduled according to arrival time.
Highest Response Ratio Next (HRRN): In this scheduling, the process with the highest response ratio is
scheduled next. This algorithm avoids starvation.
Response Ratio = (Waiting Time + Burst time) / Burst time

Multilevel Queue Scheduling: According to the priority of a process, processes are placed in different
queues. Generally, high priority processes are placed in the top-level queue. Only after completion of
processes from the top-level queue are the lower-level queued processes scheduled.
Multilevel Feedback Queue Scheduling: It allows a process to move between queues. The idea is to
separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU
time, it is moved to a lower-priority queue.
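As a rough illustration of FCFS together with the turnaround-time and waiting-time formulas above, the following sketch (the burst times are made up, and all processes are assumed to arrive at time 0) computes per-process and average values:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};               /* hypothetical burst times          */
    int n = sizeof(burst) / sizeof(burst[0]);
    int time = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {           /* processes run in arrival order    */
        int waiting = time;                 /* waited from arrival (0) to start  */
        time += burst[i];                   /* completion time of process i      */
        int tat = time;                     /* turnaround = completion - arrival */
        printf("P%d: waiting = %d, turnaround = %d\n", i + 1, waiting, tat);
        total_wait += waiting;
        total_tat += tat;
    }
    printf("average waiting time    = %.2f\n", (double)total_wait / n);
    printf("average turnaround time = %.2f\n", (double)total_tat / n);
    return 0;
}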

Some useful facts about Scheduling Algorithms:


1) FCFS can cause long waiting times, especially when the first job takes too much CPU time.
2) Both SJF and Shortest Remaining Time First algorithms may cause starvation. Consider a situation
where a long process is in the ready queue and shorter processes keep coming.
3) If the time quantum for Round Robin scheduling is very large, then it behaves the same as FCFS scheduling.
4) SJF is optimal in terms of average waiting time for a given set of processes. SJF gives minimum
average waiting time, but the problem with SJF is how to know/predict the burst time of the next job.

The Critical Section Problem


Critical Section: The portion of the code in the program where shared variables are accessed and/or
updated.
Remainder Section: The remaining portion of the program excluding the Critical Section.
Race Condition: The final output of the code depends on the order in which the shared variables are
accessed. This is termed a race condition.
A solution for the critical section problem must satisfy the following three conditions:

1. Mutual Exclusion: If a process Pi is executing in its critical section, then no other process is
allowed to enter into the critical section.
2. Progress: If no process is executing in the critical section, then the decision of a process to enter a
critical section cannot be made by any other process that is executing in its remainder section. The
selection of the process cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times other processes can enter
the critical section after a process has made a request to access the critical section and before the
request is granted.
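A classic illustration of a solution that meets these three conditions for two processes is Peterson's algorithm (not covered in these notes; shown here only as a sketch, and assuming sequentially consistent memory). The single-threaded main() merely exercises the entry/exit protocol for process 0.

#include <stdio.h>
#include <stdbool.h>

volatile bool flag[2] = { false, false };  /* flag[i]: process i wants to enter */
volatile int turn = 0;                     /* which process yields priority     */

void enter_region(int i)                   /* entry section for process i */
{
    int other = 1 - i;
    flag[i] = true;                        /* announce intent                */
    turn = other;                          /* let the other process go first */
    while (flag[other] && turn == other)
        ;                                  /* busy-wait until it is safe     */
}

void leave_region(int i)                   /* exit section for process i */
{
    flag[i] = false;                       /* no longer interested */
}

int main(void)
{
    enter_region(0);
    printf("process 0 inside its critical section\n");
    leave_region(0);
    return 0;
}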

Synchronization Tools
Semaphores: A semaphore is an integer variable that is accessed only through two atomic operations,
wait() and signal(). An atomic operation executes as a single, indivisible step, without any preemption.
Semaphores are of two types:
1. Counting Semaphore: A counting semaphore is an integer variable whose value can range over
an unrestricted domain.
2. Binary Semaphore (Mutex): Binary semaphores are often called mutexes. They can have only two
values, 0 or 1. The operations wait() and signal() operate on them in a similar fashion.
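A conceptual sketch of the above follows (busy-waiting for brevity; a real kernel makes wait()/signal() atomic and blocks the caller instead of spinning). The struct and function names, and the trivial single-threaded main(), are assumptions for illustration only.

#include <stdio.h>

typedef struct { int value; } semaphore;

void wait_sem(semaphore *s)     /* P / wait(): waits (here: spins) while value is 0, then decrements */
{
    while (s->value <= 0)
        ;                       /* busy-wait until a permit is available */
    s->value--;
}

void signal_sem(semaphore *s)   /* V / signal(): increments, potentially letting a waiter proceed */
{
    s->value++;
}

int main(void)
{
    semaphore s = { 2 };        /* counting semaphore with two permits    */
    wait_sem(&s);               /* acquire one permit  -> value becomes 1 */
    wait_sem(&s);               /* acquire the second  -> value becomes 0 */
    printf("permits left: %d\n", s.value);
    signal_sem(&s);             /* release one permit  -> value back to 1 */
    printf("permits left: %d\n", s.value);
    return 0;
}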

Deadlock
A situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Deadlock can arise if following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: One or more than one resource are non-sharable (Only one process
can use at a time)
Hold and Wait: A process is holding at least one resource and waiting for resources.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
Circular Wait: A set of processes are waiting for each other in circular form.
Methods for handling deadlock
There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is to not let the system enter a deadlock state.
2) Deadlock detection and recovery: Let deadlock occur, then detect it and recover, for example by
preempting resources once it has occurred.
3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot
the system. This is the approach that both Windows and UNIX take.
Bankers Algorithm:
This algorithm handles multiple instances of the same resource.
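A sketch of the safety check at the heart of the Banker's Algorithm is shown below; the snapshot values are classic illustrative numbers and are not taken from these notes.

#include <stdio.h>
#include <stdbool.h>

#define P 5  /* number of processes (illustrative) */
#define R 3  /* number of resource types           */

/* Returns true if the system is in a safe state.
   alloc[i][j] = instances of resource j currently held by process i
   max[i][j]   = maximum demand of process i for resource j
   avail[j]    = currently available instances of resource j */
bool is_safe(int alloc[P][R], int max[P][R], int avail[R])
{
    int need[P][R], work[R];
    bool finished[P] = { false };

    for (int j = 0; j < R; j++) work[j] = avail[j];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    for (int done = 0; done < P; ) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                /* pretend process i runs to completion and releases its resources */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finished[i] = true;
                found = true;
                done++;
            }
        }
        if (!found) return false;   /* no process can proceed -> unsafe */
    }
    return true;
}

int main(void)
{
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[R]    = {3,3,2};
    printf("The system is %s.\n",
           is_safe(alloc, max, avail) ? "in a safe state" : "NOT in a safe state");
    return 0;
}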

Memory Management:
These techniques allow the memory to be shared among multiple processes.
Overlays: The memory should contain only those instructions and data that are required at
a given time.
Swapping: In a multiprogramming environment, processes that have used up their time slice
are swapped out of memory.

Memory Management Techniques:


1: Single Partition Allocation Schemes: The memory is divided into two parts. One part
is kept for use by the OS and the other for use by the users.
2: Multiple Partition Schemes:
Fixed Partition: The memory is divided into fixed size partitions.
Variable Partition: The memory is divided into variable sized partitions.
Variable partition allocation schemes:
First Fit: The arriving process is allotted the first hole of memory in which it fits
completely.
Best Fit: The arriving process is allotted the hole of memory in which it fits the best by
leaving the minimum memory empty.
Worst Fit: The arriving process is allotted the hole of memory in which it leaves the
maximum gap. Note: Best fit does not necessarily give the best results for memory allocation.

1. Paging: The physical memory is divided into equal-sized frames, and the logical (virtual)
memory is divided into fixed-size pages. The size of a physical memory frame is equal to the
size of a virtual memory page.
2. Segmentation: Segmentation is implemented to give the user's view of memory. The
logical address space is a collection of segments. Segmentation can be implemented with
or without the use of paging.
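A small sketch of logical-to-physical address translation under the paging scheme described above; the 1 KB page size and the 4-entry page table are illustrative assumptions.

#include <stdio.h>

#define PAGE_SIZE 1024                        /* bytes per page / frame (assumed) */

int main(void)
{
    /* page_table[p] = frame number holding logical page p (hypothetical mapping) */
    int page_table[4] = {5, 2, 7, 0};

    unsigned logical  = 2 * PAGE_SIZE + 100;  /* byte 100 of logical page 2 */
    unsigned page     = logical / PAGE_SIZE;  /* page number                */
    unsigned offset   = logical % PAGE_SIZE;  /* offset within the page     */
    unsigned frame    = page_table[page];     /* page-table lookup          */
    unsigned physical = frame * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);
    return 0;
}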
Page Fault
A page fault is a type of interrupt, raised by the hardware when a running program
accesses a memory page that is mapped into the virtual address space, but not loaded in
physical memory.

Page Replacement Algorithms


First In First Out
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps
track of all pages in memory in a queue; the oldest page is at the front of the queue. When
a page needs to be replaced, the page at the front of the queue is selected for removal.
For example, consider the page reference string 1, 3, 0, 3, 5, 6 and 3 page slots.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots -> 3 page faults.
When 3 comes, it is already in memory -> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page slot, i.e., 1 -> 1 page fault.
Finally 6 comes; it is also not available in memory, so it replaces the oldest page slot, i.e., 3 -> 1 page fault.
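The sketch below simulates FIFO replacement for exactly this reference string and 3 frames; it should report the 5 page faults counted above.

#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {1, 3, 0, 3, 5, 6};            /* the reference string above */
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES];
    int next = 0;        /* index of the oldest frame (front of the FIFO queue) */
    int used = 0;        /* how many frames are filled so far                   */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (hit)
            continue;                          /* page already in memory  */
        faults++;
        if (used < FRAMES) {
            frame[used++] = ref[i];            /* fill an empty frame     */
        } else {
            frame[next] = ref[i];              /* replace the oldest page */
            next = (next + 1) % FRAMES;
        }
    }
    printf("page faults = %d\n", faults);      /* prints 5 for this string */
    return 0;
}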

Belady's anomaly
Belady's anomaly shows that it is possible to have more page faults when increasing the
number of page frames while using the First In First Out (FIFO) page replacement
algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots we get
9 total page faults, but if we increase the number of slots to 4, we get 10 page faults.

Optimal Page Replacement

In this algorithm, the page that will not be used for the longest duration of time in the future
is replaced.
Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots -> 4 page
faults.
0 is already there -> 0 page faults.
When 3 comes it takes the place of 7 because 7 is not used for the longest duration of
time in the future -> 1 page fault.
0 is already there -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
For the rest of the reference string -> 0 page faults, because those pages are already
available in memory.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page Replacement is to set up a
benchmark so that other replacement algorithms can be analyzed against it.

Least Recently Used

In this algorithm, the page that has been least recently used is replaced.
Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots, all initially empty.
So when 7, 0, 1, 2 come they are allocated to the empty slots -> 4 page faults.
0 is already there -> 0 page faults.
When 3 comes it takes the place of 7 because 7 is the least recently used page -> 1 page fault.
0 is already in memory -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
For the rest of the reference string -> 0 page faults, because those pages are already
available in memory.

Print a pattern without using any loop


Given a number n, print the following pattern without using any loop.
Input: n = 16
Output: 16, 11, 6, 1, -4, 1, 6, 11, 16
Input: n = 10
Output: 10, 5, 0, 5, 10

SOLUTION
// C++ program to print a pattern that first reduces 5 one
// by one, then adds 5 back, without any loop or extra variable.
#include <iostream>
using namespace std;

// Recursive function to print the pattern without any extra variable
void printPattern(int n)
{
    // Base case (when n becomes 0 or negative)
    if (n == 0 || n < 0)
    {
        cout << n;
        return;
    }

    // First print in decreasing order
    cout << n << ", ";
    printPattern(n - 5);

    // Then print in increasing order
    cout << ", " << n;
}

// Driver program
int main()
{
    int n = 16;
    printPattern(n);
    return 0;
}
Output:
16, 11, 6, 1, -4, 1, 6, 11, 16
