
COMSATS INSTITUTE OF INFORMATION TECHNOLOGY,

ISLAMABAD.
Department of Computer Science
OPERATING SYSTEM CONCEPTS – CSC322
____________________________________________________________________________________________
CLASS ASSIGNMENT 02
Mapped to CLO2
DEADLINE : 11 NOVEMBER 2019 MARKS: 30

Instruction:
• This is a group assignment; you may submit it in a group of two.
• Submit a handwritten report.
• You may consult the book for better understanding.
• Copied assignments will be graded zero.

Q1. Can a process transition from waiting for an I/O operation to the terminated state?
Why or why not? (5)

A process passes through several states between creation and completion. The state names are arbitrary and vary across operating systems, but a typical model has five states: new, ready, running, waiting, and terminated. A process cannot move directly from the waiting state to the terminated state. In the waiting state, the process is blocked until some event occurs, such as I/O completion, receipt of a signal, or the availability of a shared resource. Once that event occurs, the process moves to the ready state, where it waits for the CPU to be assigned to it. Only when the CPU is allocated does the process execute its statements, and it terminates by executing its last statement. Hence there is no direct transition from waiting to terminated: the process must pass through the ready and running states first.
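The five-state model described above can be written down as a small transition table. This is a minimal sketch; the state names and the dictionary-based representation are illustrative, but the transitions themselves follow the standard textbook diagram.

```python
# A minimal model of the five-state process diagram. Each key maps a state
# to the set of states it may transition to directly.
ALLOWED = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},
    "waiting":    {"ready"},          # note: no direct edge to "terminated"
    "terminated": set(),
}

def can_transition(src, dst):
    """Return True if a direct transition from src to dst is allowed."""
    return dst in ALLOWED.get(src, set())

if __name__ == "__main__":
    print(can_transition("waiting", "terminated"))  # False
    print(can_transition("waiting", "ready"))       # True
```

As the table shows, a waiting process can only move to ready; termination is reachable only from the running state.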

Q2. What are the differences between user-level and kernel-level threads? Under what
circumstances is one type better than the other? What is the essential cause of the difference
in cost between a context switch for kernel-level threads and a switch that occurs between
user-level threads? (5)
Differences between user-level and kernel-level threads:
• User-level threads are implemented and managed by a user-level thread library, whereas kernel-level threads are implemented and managed by the operating system.
• Because no kernel involvement is needed, user-level threads are faster to create, switch, and manage; kernel-level thread operations are slower because each one requires a system call.
• The kernel schedules kernel-level threads individually and can block and dispatch them independently; user-level threads are scheduled by the library, so if one thread makes a blocking system call, the whole process may block.
• Thread libraries such as POSIX Pthreads and Java threads can be implemented at user level, while Windows, Solaris, and Linux provide kernel-level thread support.
When kernel-level threads are better:
Kernel-level threads are better in a multiprocessor environment, because the kernel can schedule them on multiple processors simultaneously. The user-level threads of a process, by contrast, run on a single processor even when multiple processors are available.

When user-level threads are better:
User-level threads need less time for a context switch and require no kernel support. On a time-sharing system, where context switches occur frequently, user-level threads are therefore preferable: switching between kernel-level threads is more time-consuming and carries overhead almost equal to switching between processes, while switching between user-level threads is handled entirely by the library and is much cheaper.

Difference between a context switch for kernel-level threads and a switch that
occurs between user-level threads:
Context switch for kernel-level threads:
With kernel-level threads, the application does not manage the threads itself; the OS is aware of each kernel thread. A context switch takes place when an interrupt or system call occurs: the kernel saves the context of the current thread and restores the context of another. Only the registers, stack pointer, and program counter need to be changed, while the address space is shared by all threads of the process. Context-switch time is pure overhead, because the system does no useful work while switching. Switching between two kernel-level threads is faster than switching between two processes, but it is still somewhat expensive because it requires a trap into the kernel.
Context switch for user-level threads:
The operating system is not aware of the user-level threads within a process. The cost of a context switch between user-level threads is much lower than for kernel-level threads because no system call is required; the switch is performed entirely in user space. Like a kernel-level thread, each user-level thread has its own registers, program counter, stack, and a small thread control block (TCB). A thread library such as POSIX Pthreads or Windows threads manages the threads: it creates new threads and performs switching and synchronization between them through ordinary library calls, which are far cheaper than system calls. Thread scheduling can also be more flexible, since it happens in user-level code without OS involvement.
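The idea that a user-level switch is just library code, with no system call, can be sketched with Python generators playing the role of user-level threads and a tiny round-robin "scheduler" playing the role of the thread library. All names here are illustrative; this is a toy model, not a real thread library.

```python
# Toy "user-level threads" built from Python generators: yielding control
# back to the scheduler is the context switch, and it happens entirely in
# user space with no kernel involvement.

def make_task(name, steps):
    """A cooperative task that yields control back to the scheduler."""
    def task():
        for i in range(steps):
            yield f"{name}:{i}"   # yielding is the "context switch"
    return task()

def run_round_robin(tasks):
    """Run tasks round-robin until all finish; return the execution trace."""
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))   # resume the task
            tasks.append(task)         # rotate to the back of the ready queue
        except StopIteration:
            pass                       # task finished; drop it
    return trace

if __name__ == "__main__":
    tasks = [make_task("A", 2), make_task("B", 2)]
    print(run_round_robin(tasks))  # ['A:0', 'B:0', 'A:1', 'B:1']
```

Every switch here is an ordinary function call (`next`), which is why user-level switching is so much cheaper than trapping into the kernel.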
Q3: Consider a multicore system and a multithreaded program written using the many-to-
many threading model. Let the number of user-level threads in the program be greater than
the number of processing cores in the system. Discuss the performance implications of the
following scenarios. (6)
a. The number of kernel threads allocated to the program is less than the number of processing
cores.
b. The number of kernel threads allocated to the program is equal to the number of processing
cores.
c. The number of kernel threads allocated to the program is greater than the number of processing
cores but less than the number of user-level threads.
a. If fewer kernel threads than cores are allocated, some cores sit idle: user-level threads can run only when mapped onto a kernel thread, so the program can never use all the processing cores and performance suffers.
b. If the number of kernel threads equals the number of cores, all cores can potentially be fully utilized, with one kernel thread scheduled per core; however, if a kernel thread blocks (e.g., on I/O), its core may go idle until another kernel thread becomes available.
c. If there are more kernel threads than cores (but fewer than user-level threads), a blocked kernel thread can be replaced by another kernel thread that is ready to run, keeping all cores busy at the cost of some additional context-switching overhead.
Q4. Is it possible to have concurrency but not parallelism? Explain in detail. (5)
Yes. Concurrency and parallelism are related but distinct forms of multitasking: a concurrent system supports more than one task making progress over the same period, while a parallel system performs more than one task at the same instant. The distinction is easiest to see by comparing single-core and multicore systems. Suppose we have four threads. If we run them on a single computing core, we have concurrency, because their execution is interleaved over time: each task makes progress during its time slices. There is no parallelism, however, because with only one processor the system performs exactly one task at any instant. If instead we run the four threads on a multicore system, we again have concurrency, and now also parallelism, because threads on different cores execute simultaneously.
Thus, it is possible to have concurrency without parallelism.
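A concrete, if implementation-specific, illustration: in CPython, four threads running CPU-bound code are concurrent (all make progress) but not parallel, because the global interpreter lock (GIL) lets only one thread execute Python bytecode at a time. This is a hedged sketch, not a benchmark; the function names are our own.

```python
# Four CPU-bound tasks run by four threads in CPython: all make progress
# (concurrency), but the GIL serializes bytecode execution, so there is
# no true parallelism for this workload.
import threading

def count_up(results, index, limit):
    total = 0
    for _ in range(limit):
        total += 1          # CPU-bound work; the GIL serializes these steps
    results[index] = total

def run_tasks(num_threads=4, limit=100_000):
    results = [0] * num_threads
    threads = [threading.Thread(target=count_up, args=(results, i, limit))
               for i in range(num_threads)]
    for t in threads:
        t.start()           # all four tasks are now in progress (concurrency)
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    print(run_tasks())  # [100000, 100000, 100000, 100000]
```

All four tasks complete, interleaved over time on whatever cores the OS provides, which is exactly concurrency without (bytecode-level) parallelism.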

Q5. A system with two dual-core processors has four processors available for scheduling. A
CPU-intensive application is running on this system. All input is performed at program start-
up, when a single file must be opened. Similarly, all output is performed just before the
program terminates, when the program results must be written to a single file. Between
startup and termination, the program is entirely CPU-bound. Your task is to improve the
performance of this application by multithreading it. The application runs on a system that
uses the one-to-one threading model (each user thread maps to a kernel thread). (4)
• How many threads will you create to perform the input and output? Explain.
The application runs on a system that uses the one-to-one threading model, which maps each user-level thread to a kernel thread and therefore allows more concurrency than the many-to-one or many-to-many models. There should be as many threads as there are blocking operations. In this scenario there are two: input (opening and reading the file at startup) and output (writing the results before termination). If the input thread invokes a blocking system call, another thread can still execute on its own kernel thread and make useful progress.
Creating more threads than there are blocking operations gives no benefit, since the I/O here is inherently sequential (a single file at startup, a single file at the end). Therefore a single thread for input and a single thread for output makes sense.

• How many threads will you create for the CPU-intensive portion of the application? Explain.
Multithreading yields the most benefit on a multiprocessor system rather than on a single processor. With n processors, n threads achieve the highest efficiency for CPU-bound work, because each thread can execute on its own processor simultaneously. In this scenario, the two dual-core processors provide four cores, so we should create four threads for the CPU-intensive portion of the application. Fewer threads would waste processor resources, and more threads would only add scheduling overhead without extra parallelism.
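The design above can be sketched as follows: split the CPU-bound work into one chunk per core and hand each chunk to a worker thread, which under a one-to-one model maps to its own kernel thread. This is an illustrative sketch; the work function and the chunking scheme are our own inventions, and in CPython the GIL would prevent true parallelism, whereas with a GIL-free runtime each worker could run on its own core.

```python
# Split CPU-bound work across four worker threads, one per core under a
# one-to-one threading model. cpu_work is a stand-in for the real workload.
from concurrent.futures import ThreadPoolExecutor

def cpu_work(chunk):
    """Stand-in for the CPU-intensive portion: sum a range of numbers."""
    return sum(chunk)

def process(data, num_workers=4):
    # One chunk per core; each worker maps to a kernel thread that the OS
    # can schedule on its own processing core.
    size = (len(data) + num_workers - 1) // num_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partials = list(pool.map(cpu_work, chunks))
    return sum(partials)   # combine the partial results

if __name__ == "__main__":
    print(process(list(range(1000))))  # 499500
```

Note that input and output would each be handled by a single additional thread (or by the main thread), as argued in the previous part.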

Q6. Under what circumstances does a multithreaded solution using multiple kernel threads
provide better performance than a single-threaded solution on a single-processor system?

A multithreaded solution provides better performance when the program must wait for system events. Consider, for instance, a page fault during normal execution: the program accesses a block of memory that is not resident in RAM, and the operating system must locate the data in virtual memory and transfer it from the storage device into RAM. While one kernel thread is blocked on this fault, the kernel can shift control to another thread of the same program, so the waiting time is used for useful work. In a single-threaded solution, the same fault blocks the entire process, because there is no other thread to which control can pass. Hence, even on a single processor, multiple kernel threads win whenever threads block frequently on I/O or page faults; a single-threaded solution cannot overlap such waits with useful work.
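The overlap described above can be sketched with two Python threads: one blocks on a simulated wait (standing in for a page fault or I/O event), while the other keeps doing CPU work in the meantime. The task names and the sleep duration are illustrative.

```python
# While one kernel thread blocks (simulated wait), another thread of the
# same program keeps doing useful work -- the benefit a multithreaded
# solution has over a single-threaded one, even on a single processor.
import threading
import time

def blocked_task(log):
    time.sleep(0.2)          # simulate waiting on a page fault / I/O event
    log.append("io-done")

def cpu_task(log):
    total = sum(range(10_000))
    log.append(f"cpu-done:{total}")

def run():
    log = []
    io_thread = threading.Thread(target=blocked_task, args=(log,))
    io_thread.start()
    cpu_task(log)            # main thread works while io_thread is blocked
    io_thread.join()
    return log

if __name__ == "__main__":
    print(run())  # ['cpu-done:49995000', 'io-done']
```

The CPU work finishes while the other thread is still blocked, which a single-threaded version could not achieve: it would sit idle for the full wait before doing the computation.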
