JSS MAHAVIDYAPEETHA

SRI JAYACHAMARAJENDRA COLLEGE OF ENGINEERING


(AUTONOMOUS), MYSURU – 570 006
Under Visvesvaraya Technological University, Belagavi

Operating System
Unit - 2

M.S.Maheshan
Asst. Professor
Dept. of IS&E
SJCE
©MSM ®
PROCESS CONCEPT

•The design objectives of an OS must meet requirements such as optimal performance, maximum utilization of resources, and minimum response time while managing many users.
•To meet these goals, the computational model used in the OS introduces the concept of an executable unit known as a process.
•This is considered the primary mechanism for defining and designing an appropriate OS.
•The execution of an individual program is sometimes referred to as a process or task.

Process States Model

• As a process executes, it changes state:
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution
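The five-state model above can be sketched as a small transition table; this is an illustrative sketch only, and the class and method names are our own, not from any OS API:

```java
import java.util.EnumSet;
import java.util.Map;

// Illustrative sketch of the classic five-state process model.
public class ProcessStateModel {
    public enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

    // Legal transitions of the five-state diagram described above.
    private static final Map<State, EnumSet<State>> LEGAL = Map.of(
        State.NEW,        EnumSet.of(State.READY),
        State.READY,      EnumSet.of(State.RUNNING),
        State.RUNNING,    EnumSet.of(State.READY, State.WAITING, State.TERMINATED),
        State.WAITING,    EnumSet.of(State.READY),
        State.TERMINATED, EnumSet.noneOf(State.class));

    public static boolean canMove(State from, State to) {
        return LEGAL.get(from).contains(to);
    }
}
```

Note that a waiting process cannot go straight back to running; it must first become ready again.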

•Note: for a detailed explanation of swapping and the suspended state, refer to the PC book, page 167.
• Each process is represented in the OS by a process control block (PCB).
• A PCB contains many pieces of information associated with a specific process, including:
 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information


•Process state: The state may be new, ready, running, waiting, halted, and so on.
•Program counter: The counter indicates the address of the next instruction to be executed for this process.
•CPU registers: The registers vary in number and type, depending on the computer architecture. They include index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterwards.

•CPU-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
•Memory-management information: This information may include the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
•Accounting information: This information includes the
amount of CPU and real time used, time limits, account
numbers, job or process numbers, and so on.
•I/O status information: This information includes the list
of I/O devices allocated to the process, a list of open files,
and so on.
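The PCB fields described above can be pictured as a plain data structure. A minimal sketch, assuming illustrative types and names:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a PCB holding the fields listed above;
// the field types and the class itself are illustrative assumptions.
public class ProcessControlBlock {
    public enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

    public State state;                      // process state
    public long programCounter;              // address of the next instruction
    public long[] cpuRegisters;              // saved register contents
    public int priority;                     // CPU-scheduling information
    public long baseRegister, limitRegister; // memory-management information
    public long cpuTimeUsed;                 // accounting information
    public List<String> openFiles = new ArrayList<>(); // I/O status information
}
```

A real kernel packs far more into a PCB, but this captures the categories named in the slides.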
CPU Switch From Process to Process

THREADS

•In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler (typically as part of an operating system).
•The implementation of threads and processes differs from one operating system to another, but in most cases a thread is a component of a process.
•Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.

• Source: Wikipedia
• A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.
• If a process has multiple threads of control, it can perform more than one task at a time.
• The diagram below shows the difference between a traditional single-threaded process and a multithreaded process.

• So it can be said that if a program has more than one part that can run independently of the others, then that program has multiple threads.
• Just as two programs execute independently of each other, two threads also execute independently.
• So if there are two threads in a program, both of them can be executed separately from each other.
• That is, they are two separate paths of execution, and each path is independent of the other.
• In essence, a program is logically divided into more than one part through threads.
• Therefore, creating many threads to execute the code of an application process is generally more advantageous than creating many processes to execute the same application code.
Conventional Thread States

•Since a thread is an alternative schedulable unit of computation to the traditional notion of a process, when a thread is created in a program, the programmer can control it.
•Example: the execution of a thread can be suspended for a period of time (known as sleeping) so that another thread can continue its execution.
• The different states of a thread are:
 Ready
 Running
 Waiting
 Dead
Single Shot Threads

•Single shot threads have only 3 states:
 ready
 running
 terminated
• When a thread becomes ready to run, it enters the ready state.
• Once it gets CPU time, it enters the running state.
• If it is preempted by a higher-priority thread, it can go back to the ready state.
• When it finishes, it enters the terminated state.

•What should be noticed here is that a single shot thread has no waiting state.
• So what can be done before it terminates is to make sure that it can be restarted when a timer expires or an event occurs.
• This type of thread is well suited for time-critical systems where one wants to create a schedule offline.
• Single shot threads can be implemented using very little RAM and are therefore used in small systems.
• Summary: single shot threads behave much like an interrupt service routine – something starts one, it can be preempted by higher-priority interrupts, and when it is finished, it terminates.

Types of Threads
• Threads can be classified into 2 distinct categories:
 kernel threads
 user threads
• The former are managed and scheduled by the kernel, whereas the latter are managed and scheduled in user space.
• Here, when we use the term “thread”, it refers to kernel threads, whereas the term “fiber” is sometimes used to refer to user threads.
• If the OS does not provide fibers, an application may implement its own fibers using repeated calls to worker functions.
• A fiber can be scheduled to run in any thread of the same process.
1. Kernel-Level Threads

•Some OS kernels support the notion of threads and implement kernel-level threads.
• There are system calls to create, terminate, and check the status of kernel-level threads, and to manipulate them in ways similar to processes.
• Synchronization and scheduling may be provided by the kernel.

•Note: for a diagram of the creation and operation of kernel-level threads, follow pg-201 of the PC book.

Creation and Operation of kernel-level Threads

1. A process issues a system call create_thread to create a new kernel-level thread.
2. The new thread is created, and the kernel assigns an id to the thread and allocates a thread control block (TCB).
3. The TCB contains a pointer to the PCB of the corresponding process.
4. The thread now becomes ready for “scheduling”.
5. In the running state, when the execution of the thread is interrupted due to the occurrence of an event, or because it exceeds its quantum, the kernel saves the CPU state of the interrupted thread in its TCB.

6. The scheduler considers the TCBs of all the ready threads and chooses one of them to dispatch.
7. The dispatcher then checks whether the chosen thread belongs to a different process than the interrupted thread (done by examining the PCB pointer in the TCB).
8. If so, a process switch occurs: the dispatcher saves all the related information of the interrupted process and loads the context of the chosen thread’s process.
9. If the chosen thread and the interrupted thread belong to the same process, the overhead of process switching is redundant and hence can be avoided.
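Steps 7-9 above can be sketched as a comparison of the PCB pointers held in two TCBs; all class, field, and method names here are illustrative assumptions:

```java
// Sketch of the dispatcher's process-switch check: a full process
// switch is needed only when the chosen thread's TCB points at a
// different PCB than the interrupted thread's TCB.
public class Dispatcher {
    public static class PCB {
        public final String name;
        public PCB(String name) { this.name = name; }
    }

    public static class TCB {
        public final PCB owner;   // pointer to the PCB of the owning process
        public TCB(PCB owner) { this.owner = owner; }
    }

    /** True if dispatching 'chosen' after 'interrupted' requires a process switch. */
    public static boolean needsProcessSwitch(TCB interrupted, TCB chosen) {
        return interrupted.owner != chosen.owner; // same PCB: switch overhead avoided
    }
}
```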

Merits and Drawbacks

1. A kernel-level thread is similar to a process, with the exception that it contains less state information.
2. This similarity helps the programmer in that programming for threads is almost the same as for processes.
3. This similarity also makes it possible to assign multiple threads within the same process to different processors in a multiprocessor system.
4. This leads to true parallel execution.
5. This form of parallelism cannot be achieved with user-level threads.
6. However, whenever switching between threads is carried out, it substantially increases overhead.
7. Thus kernel-level threads increase overheads in the kernel and in their use.
2. User-Level Threads (fibers)


•User-level threads are implemented by a thread library, which is linked to the process code; hence they are implemented entirely in user space.
•The thread library provides the programmer with an API for creating and managing threads.
•The thread library contains the code to synchronize threads and to context switch them, and they all run within one process.
•The OS (kernel) is totally unaware of user-level threads; it can see only the process.
•The scheduler works on processes: it takes the PCBs into account and selects a ready process.
•The dispatcher finally dispatches it for execution.
Creation and Operation of user-level Threads

1. A process invokes the library function create_thread to create a new thread.
2. The new thread is created, its TCB is set up by the library function, and it becomes ready for “scheduling”.
3. The TCBs of all threads are mapped into the PCB of the corresponding process by the thread library.
4. During the running state, if a thread invokes a library function to synchronize its functioning with other threads, the library function performs “scheduling” and switches to another targeted thread.
5. This gives rise to a thread switch.

6. Thus the kernel remains unaware of the switching activities between threads; it is aware only that the process is in continuous operation.
7. If the thread library cannot find a ready thread in the process, it makes a system call to block itself.
8. The kernel now intervenes and blocks the process.
9. The process will be unblocked only when some event occurs that activates one of its threads, and it will then resume execution.

• The scheduling of user-level threads (selecting a particular thread and arranging its execution) is carried out by the thread library.
• Note: for a diagram of the creation and operation of user-level threads, follow pg-204 of the PC book.
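The creation and switching steps above can be sketched as a toy user-space library that the kernel never sees; every class and method name here is an illustrative assumption:

```java
import java.util.ArrayDeque;

// Toy user-level "thread library": it keeps TCBs in a ready queue and
// switches between them entirely in user space, without kernel help.
public class UserThreadLib {
    public static class TCB {
        public final String id;
        public TCB(String id) { this.id = id; }
    }

    private final ArrayDeque<TCB> ready = new ArrayDeque<>();
    private TCB running;

    // Steps 1-2: create the TCB and mark the thread ready for "scheduling".
    public TCB createThread(String id) {
        TCB t = new TCB(id);
        ready.addLast(t);
        return t;
    }

    // Steps 4-5: a library-level thread switch; the kernel is unaware of it.
    public TCB yieldThread() {
        if (running != null) ready.addLast(running);
        running = ready.pollFirst();
        return running;
    }
}
```

A real library would also save and restore register state on each switch; this sketch only models the queueing.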
•The thread library code is a part of each process; it maps the TCB of the selected thread into the PCB of the corresponding process.
•The information in the TCBs is used by the thread library to decide which thread should operate, and it dispatches that thread accordingly.
•While a thread is dispatched, the CPU state of the process is the CPU state of that thread.
Merits and drawbacks
•The thread library avoids the overhead of executing a system call for synchronization and communication between threads.
•Since the kernel is not involved at all, the thread switching overhead is lower than that of kernel-level threads.
•The drawback arises from managing threads without involving the kernel.
•Since the kernel cannot differentiate between a thread and a process, if a thread were to block in a system call, the kernel would block its parent process.
•So in effect, all threads belonging to that process would get blocked.
•To avoid this, the OS would have to make adequate arrangements so that a non-blocking version of each system call is available.

3. Hybrid Threads Models

•The hybrid thread model consists of both user-level and kernel-level threads.
•Different methods of associating user-level and kernel-level threads give rise to 3 different combinations:

Many-to-One
One-to-One
Many-to-Many

Many-to-One
•Many user-level threads mapped to a single kernel thread
•Examples:
Solaris Green Threads
GNU Portable Threads

One-to-One
•Each user-level thread maps to a kernel thread
•Examples:
Windows NT/XP/2000
Linux
Solaris 9 and later

Many-to-Many
•Allows many user-level threads to be mapped to many kernel threads
•Allows the operating system to create a sufficient number of kernel threads
•Examples:
Solaris prior to version 9
Windows NT/2000 with the ThreadFiber package

Threads: Priority
•Thread priorities are used by the thread scheduler when deciding which thread should be allowed to run.
•Usually higher-priority threads get more CPU time than lower-priority threads.
•Priorities are integers that specify the relative priority of one thread to another.
•Priority is used to decide when to switch from one running thread to the next. This is called a context switch.
•The rules that determine when a context switch takes place are simple:
A thread can voluntarily relinquish control: this is done by explicitly sleeping.
 A higher-priority thread can preempt a lower-priority thread.
•The setPriority() method, belonging to Thread, is used to set a thread’s priority.
•The syntax is:
 final void setPriority(int level)
•The value of level ranges from
 MIN_PRIORITY
 MAX_PRIORITY
usually values from 1 to 10.
•To return a thread to default priority, specify NORM_PRIORITY, which has the value 5.
•final int getPriority(): returns the current priority of the invoking thread.
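A short demo of the Thread priority methods described above; the class PriorityDemo and its helper method are our own names, while setPriority, getPriority, and the priority constants are Java's:

```java
// Demonstrates Java's Thread priority API: read the default priority,
// raise it to MAX_PRIORITY, and read it back.
public class PriorityDemo {
    public static int raiseToMax(Thread t) {
        t.setPriority(Thread.MAX_PRIORITY);   // legal values: MIN_PRIORITY..MAX_PRIORITY
        return t.getPriority();
    }

    public static void main(String[] args) {
        Thread t = new Thread(() -> { /* some work */ });
        // A new thread inherits the priority of its creator,
        // which is usually NORM_PRIORITY (5) for the main thread.
        System.out.println("default: " + t.getPriority());
        System.out.println("raised:  " + raiseToMax(t));
    }
}
```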
Usefulness of thread priorities:

1. They enable the thread scheduler to determine the order in which threads should be allowed to execute.
2. They are integer values that state the relative priority of one thread to another.
3. They facilitate context switching.

MULTITHREADING

• The concept of multithreading enables us to write very efficient programs.
• Breaking up a single application into multiple threads enables one to exercise greater control over the modularity of the application and the timing of application-related events.
• This maximizes utilization of the CPU, because idle time can be kept to a minimum.
• A multithreaded program consists of 2 or more parts that can run concurrently.
• Each part of such a program is a separate thread that defines a separate path of execution.

• Multiple threads within the same process may be allocated to separate processors and can execute concurrently, resulting in excellent performance.
• The recent trend in system design is to place multiple computing cores (multicore) on a single processor chip, where each core appears as a separate processor to the OS.
• The multithreaded approach can be efficiently implemented on a multicore system.
• Examples of concurrent execution on a single-core system and parallel execution on a multicore system are as follows.

Concurrent Execution on a Single-core System

Parallel Execution on a Multicore System

SCHEDULING
•The term scheduling refers to the set of defined policies and suitable mechanisms built into the OS that determine the order in which work is carried out by the computer.
•In Unit-1 we have already discussed different forms of scheduling:
• long-term scheduling
• medium-term scheduling
• short-term scheduling
•The short-term scheduler, also known as the dispatcher, governs the order of execution of runnable processes.
•This module decides which process in the pool of ready processes in memory gets the processor.
•The process scheduler must perform the following functions:
 Keeping track of the status of each process (i.e., running/ready/blocked) – traffic controller.
 Deciding which process gets the processor, when, and for how long – traffic scheduler.
 Allocation of the processor to a process – traffic controller.
 Deallocation of the processor when the running process exceeds its current time slice – traffic controller.

• Based on certain pre-defined criteria, the scheduler always attempts to maximize system performance by switching the state of deserving processes from ready to running.

•Any such change may result in the interruption of the currently running process, or may even preempt the currently running process in favor of another deserving process.
•So the scheduler must determine whether such significant changes have to be made or not.
•Some of the events that enforce changes in system state, and thereby call for rescheduling, include the following:
Clock ticks (clock-time based interrupts)
I/O interrupts
Operating-system calls
Sending and receiving signals
Interactive program activation
•In general, whenever one of these events occurs, the short-term scheduler is invoked by the OS to judge the situation and take the needed action.
Scheduling Criteria

1. CPU utilization – keep the CPU as busy as possible
2. Throughput – # of processes that complete their execution per time unit
3. Turnaround time – amount of time to execute a particular process
4. Waiting time – amount of time a process has been waiting in the ready queue
5. Response time – amount of time from when a request was submitted until the first response is produced, not the output (for time-sharing environments)

Several kinds of process schedulers:

1. Preemptive
2. Nonpreemptive
•Cooperative
•Run-to-completion

•Run-to-completion schedulers, in non-preemptive scheduling, are the easiest to understand: once scheduled, a selected job runs to completion.
•Such a process leaves the running state exactly once, and never goes to the blocked state.
•Examples are the early batch systems.

•Processes in a cooperative multitasking environment tell the OS when to switch them.
•They explicitly give up the CPU to another process.
•This technique was incorporated in Apple’s original multitasking system, in earlier versions of Mac OS, and in some Java systems.
•This approach is problematic in situations where processes do not voluntarily cooperate with one another.
•The situation appears even worse if the running process happens to be executing an infinite or time-consuming loop containing no resource requests.
•The process will never give up the processor, so all the ready processes eager to gain access to the CPU will have to keep waiting.
•This may lead to deadlock or starvation.
•This drawback could be resolved entirely if the OS itself could devise some arrangement that would force the running process to stop at an arbitrary instant.
•The strategy of insisting that processes that are logically runnable be temporarily suspended is known as preemptive scheduling.
•With preemptive scheduling, a running process may be interrupted and moved to the ready state by the OS, allowing any other deserving process to replace it at any time.
•This is accomplished by activating the scheduler whenever an event in the system is detected.
•But suspending a running process without warning at an arbitrary instant, allowing another process to run, may lead to a race condition (discussed later) that requires additional remedial actions.
•The cost of preemption may be kept low by using efficient process-switching mechanisms.

Non-Preemptive Strategies
•FCFS Scheduling (First Come First Serve)
•SJF/SJN/SPN (Shortest Process Next)
•Priority scheduling
•Deadline scheduling
Preemptive Strategies
•SRTN (Shortest Remaining Time Next)
•Round Robin scheduling
•Multilevel Queue Scheduling
•Multilevel Feedback Queue Scheduling

•Before describing the scheduling algorithms, let us identify a few parameters that are taken into consideration:
1. Service time, t: the total amount of time a process needs to be in the running state before it is completed, i.e., the amount of time the process will use the CPU to accomplish its useful work.
2. Wait time, W: the time the process spends waiting in the ready state before its first transition to the running state (response time).
3. Turnaround time, T: the duration of time that a process p is present, i.e., (finish time – arrival time).
4. Missed time, M: T – t. The missed time is the same as turnaround time, except that we do not count the time during which process p is actually running.
5. Response ratio, R: t/T. The response ratio represents the fraction of the time that p is receiving service.
6. Penalty ratio, P: T/t. The penalty ratio P is the inverse of R.

FCFS:
Process   Burst Time
P1        24
P2        3
P3        3

•Suppose that the processes arrive in the order: P1, P2, P3
•The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

•Waiting time for P1 = 0; P2 = 24; P3 = 27
•Average waiting time: (0 + 24 + 27)/3 = 17
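The FCFS arithmetic above can be sketched as a small routine; each process waits for the total burst time of the processes ahead of it (all arrivals at time 0; the class and method names are ours):

```java
// FCFS: processes run in arrival order; a process's waiting time is
// the sum of the burst times of everything scheduled before it.
public class Fcfs {
    public static double averageWaitingTime(int[] bursts) {
        int elapsed = 0, total = 0;
        for (int b : bursts) {
            total += elapsed;   // this process waited for all earlier bursts
            elapsed += b;
        }
        return (double) total / bursts.length;
    }
}
```

With bursts {24, 3, 3} this yields the average of 17 computed above; with the order {3, 3, 24} it yields 3, matching the next slide.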

•Suppose that the processes arrive in the order: P2, P3, P1
•The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

•Waiting time for P1 = 6; P2 = 0; P3 = 3
•Average waiting time: (6 + 0 + 3)/3 = 3
•Much better than the previous case
•Convoy effect – short process behind long process

SJF:
•Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
•SJF is optimal – it gives the minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user

Note: Ref - page-232 of PC-book for tabular example

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

•SJF scheduling chart

| P4 | P1 | P3 | P2 |
0    3    9    16   24

•Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
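The non-preemptive SJF computation above can be sketched by sorting the bursts and then applying the FCFS arithmetic (all arrivals at time 0; the class and method names are ours):

```java
import java.util.Arrays;

// Non-preemptive SJF with all arrivals at time 0: run jobs in order of
// increasing burst length, then waiting times follow as in FCFS.
public class Sjf {
    public static double averageWaitingTime(int[] bursts) {
        int[] sorted = bursts.clone();
        Arrays.sort(sorted);                // shortest job first
        int elapsed = 0, total = 0;
        for (int b : sorted) {
            total += elapsed;
            elapsed += b;
        }
        return (double) total / sorted.length;
    }
}
```

For bursts {6, 8, 7, 3} this reproduces the average of 7 computed above.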


SRTN (Shortest Remaining Time Next):
•The SJF algorithm can be either preemptive or nonpreemptive.
•The choice arises when a new process arrives at the ready queue while a previous process is still executing.
•The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process.
•A preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst.
•Preemptive SJF scheduling is sometimes called Shortest Remaining Time Next scheduling.

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

•Preemptive SJF Gantt Chart

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
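The preemptive SJF (SRTN) schedule above can be reproduced by a one-unit-per-tick simulation that always runs the arrived process with the least remaining time; the class and method names are ours:

```java
// SRTN simulation: at each time unit, pick the arrived, unfinished
// process with the shortest remaining time and run it for one unit.
public class Srtn {
    public static double averageWaitingTime(int[] arrival, int[] burst) {
        int n = arrival.length, done = 0, time = 0, totalWait = 0;
        int[] remaining = burst.clone();
        while (done < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)     // shortest remaining time, already arrived
                if (arrival[i] <= time && remaining[i] > 0
                        && (pick == -1 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick == -1) { time++; continue; }  // CPU idle until next arrival
            remaining[pick]--;
            time++;
            if (remaining[pick] == 0) {
                done++;
                totalWait += time - arrival[pick] - burst[pick]; // turnaround - burst
            }
        }
        return (double) totalWait / n;
    }
}
```

For the table above (waits 9, 0, 15, 2) it yields an average waiting time of 6.5.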

Priority Scheduling:
•A priority number (integer) is associated with each process
•The CPU is allocated to the process with the highest priority (smallest integer → highest priority)
•Priority-based scheduling may be,
Preemptive
Nonpreemptive
•Problem → Starvation – low-priority processes may never execute
•Solution → Aging – as time progresses, increase the priority of the process

•Note: Ref - page-236 of PC-book for tabular example

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Priority scheduling Gantt Chart

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Average waiting time = 8.2 msec
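The priority schedule above can be sketched by sorting processes on their priority number (smallest integer first, all arrivals at time 0) and applying the FCFS arithmetic; the class and method names are ours:

```java
import java.util.Arrays;

// Non-preemptive priority scheduling: run processes in order of their
// priority number (smallest integer = highest priority).
public class PriorityScheduling {
    public static double averageWaitingTime(int[] burst, int[] priority) {
        Integer[] order = new Integer[burst.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> priority[a] - priority[b]); // highest priority first
        int elapsed = 0, total = 0;
        for (int i : order) {
            total += elapsed;   // waiting time accumulated before this process runs
            elapsed += burst[i];
        }
        return (double) total / burst.length;
    }
}
```

For the table above (waits 0, 1, 6, 16, 18) it reproduces the 8.2 msec average.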

Deadline Scheduling:
•Deadline scheduling can be considered one form of priority scheduling, where the priority assigned in this case is the deadline associated with each process.
•Deadline scheduling may be,
Preemptive
Nonpreemptive
•The system workload here consists of a combination of available processes, and the scheduler must have complete knowledge of how much time is required to execute each process.
•The scheduler will admit a process to the ready list only if it can guarantee that it will be able to meet each deadline imposed by the process.
•Note: Ref - page-237 of PC-book for tabular example

•An optimal scheduling strategy in such environments is earliest-deadline scheduling, in which the ready process with the earliest deadline is scheduled for execution.
•The middle schedule in the tabular example is an example of this strategy, known as earliest deadline first scheduling.
•Another form of scheduler that is considered equally optimal is the least laxity scheduler, which selects the ready process with the least difference between its deadline and its service time.
•These schedulers are found to be optimal in single-processor systems, but not in multiprocessor environments.
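The earliest-deadline-first rule described above can be sketched as a simple minimum search over the deadlines of the ready processes; the class and method names are ours (a least-laxity scheduler would instead minimize deadline minus remaining service time):

```java
// Earliest deadline first: among the ready processes, choose the one
// whose deadline is nearest; returns the index of that process.
public class Edf {
    public static int pickEarliestDeadline(int[] deadline) {
        int pick = 0;
        for (int i = 1; i < deadline.length; i++)
            if (deadline[i] < deadline[pick]) pick = i;
        return pick;
    }
}
```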

Round Robin Scheduling:


•RR is designed for time-sharing systems.
•It is similar to FCFS, but preemption is added to enable the
system to switch b/w processes.
•Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
•A timer interrupts after every quantum in order to schedule the
next process.
Implementation:
•To implement RR scheduling, we keep the ready queue as a FIFO
queue of processes.
•New processes are added to the tail of the ready queue.
•The CPU scheduler picks the first process from the ready queue,
sets a timer to interrupt after 1 time quantum, and
dispatches the process.
•One of two things will then happen,
If the process has a CPU burst of less than 1 time
quantum, the process will release the CPU voluntarily.
The scheduler then proceeds to the next process in the
ready queue.
Otherwise, if the CPU burst of the currently running
process is longer than 1 time quantum, the timer will go off
and cause an interrupt to the OS.
A context switch will be executed, and the process will be
put at the tail of the ready queue.
The CPU scheduler will then select the next process in the
ready queue.

Process Burst Time


P1 24
P2 3
P3 3
The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
•If we use a time quantum of 4 milliseconds, then P1 gets the
first 4 msec. Since it requires another 20 msec, it is preempted
after the first time quantum, and the CPU is given to the next process.
•P2 does not need 4 msec, so it quits before its time quantum
expires; the CPU is given to the next process, P3.
•Once each process has received 1 time quantum, the CPU is
returned to process P1 for an additional time quantum.
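The Gantt chart above can be reproduced with a small round-robin simulation (a sketch assuming all three processes arrive at time 0):

```python
from collections import deque

# Round-robin with quantum = 4 ms on the processes above (all arrive at 0).
quantum = 4
ready = deque([("P1", 24), ("P2", 3), ("P3", 3)])

time, gantt = 0, []
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)
    gantt.append((name, time, time + run))      # (process, start, end)
    time += run
    if remaining > run:
        ready.append((name, remaining - run))   # preempted: back to the tail

print(gantt)
```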
Time Quantum and Context Switch Time:

•We also need to consider the effect of context
switching on the performance of RR scheduling.

•Note: Ref - page-195 of Galvin book for written explanation.


Multilevel Queue Scheduling:

•Ready queue is partitioned into separate queues, e.g.:


foreground (interactive)
background (batch)
•Processes are permanently in a given queue, based on some
property of the process, such as memory size, process priority
or process type.
•Each queue has its own scheduling algorithm:
foreground – RR
background – FCFS
•Scheduling must be done between the queues:
Fixed priority scheduling; (i.e., serve all from foreground
then from background). Possibility of starvation.
•Each queue has absolute priority over lower-priority queues.


•No process in the batch queue, for example, could run unless
the queues for system processes, interactive processes, and
interactive editing processes were all empty.
•If an interactive editing process entered the ready queue
while a batch process was running, the batch process would
be preempted.
•Alternatively, each queue gets a certain amount of CPU time,
Time slice – which it can schedule amongst its processes;
e.g., 80% to foreground on an RR basis
20% to background on an FCFS basis

Multilevel Feedback Queue Scheduling:


•The Multilevel Feedback Queue Scheduling algorithm allows a
process to move between queues.
•The idea is to separate processes according to the
characteristics of their CPU bursts.
•If a process uses too much CPU time, it will be moved to a lower-
priority queue.
•In addition, a process that waits too long in a lower-priority
queue may be moved to a higher-priority queue.
•This form of aging prevents starvation.
•Consider example, three queues:
Q0 – with time quantum 8 milliseconds
Q1 – time quantum 16 milliseconds
Q2 – FCFS
•Scheduling
A new job enters queue Q0 which is to be served
When it gains CPU, job receives 8 milliseconds
If it does not finish in 8 milliseconds, job is moved to
queue Q1
At Q1 job is again served and receives 16 additional
milliseconds
If it still does not complete, it is preempted and moved
to queue Q2
Processes in queue Q2 are run on an FCFS basis.
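The three-queue behaviour above can be sketched as follows (a simplified simulation assuming a single batch of jobs and no new arrivals, so a running lower queue is never preempted by Q0):

```python
from collections import deque

# Simplified simulation of the three queues described above: Q0 (quantum 8),
# Q1 (quantum 16), Q2 (FCFS). A job that exhausts its quantum is demoted.
def mlfq(jobs):
    q0, q1, q2 = deque(jobs), deque(), deque()
    trace = []                                  # (job name, ms run) segments
    for queue, quantum, lower in ((q0, 8, q1), (q1, 16, q2)):
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            trace.append((name, run))
            if remaining > run:                 # quantum exhausted: demote
                lower.append((name, remaining - run))
    while q2:                                   # FCFS: run to completion
        name, remaining = q2.popleft()
        trace.append((name, remaining))
    return trace

# A 5 ms job finishes in Q0; a 30 ms job runs 8 ms in Q0, 16 ms in Q1,
# and its final 6 ms in Q2.
print(mlfq([("A", 5), ("B", 30)]))
```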

•This scheduling algorithm gives highest priority to any process
with a CPU burst of 8 msec or less.
•Such a process will quickly get the CPU, finish off its CPU burst,
and go off to its next I/O burst.
•Processes that need more than 8 and less than 24 msec are also
served quickly, although with lower priority than shorter
processes.
•Long processes automatically sink to queue Q2 and are served in
FCFS order with any CPU cycles left over from queues Q0 and Q1.

•In general, a multilevel-feedback-queue scheduler is defined
by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will
enter when that process needs service
•Multilevel feedback queue scheduling can be configured to
match a specific system under design.
•Unfortunately, it is also the most complex algorithm.

INTERPROCESS SYNCHRONIZATION
Concept:
•Concurrent access to shared data may result in data
inconsistency
•Maintaining data consistency requires mechanisms to ensure
the orderly execution of cooperating processes
•Suppose that we wanted to provide a solution to the consumer-
producer problem that fills all the buffers. We can do so by
having an integer count that keeps track of the number of full
buffers. Initially, count is set to 0. It is incremented by the
producer after it produces a new buffer and is decremented by
the consumer after it consumes a buffer
Producer:

while (true) {

/* produce an item and put in nextProduced */


while (counter == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}

Consumer:

while (true) {
while (counter == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;

/* consume the item in nextConsumed */


}

•Although both the producer and consumer routines shown


above are correct separately, they may not function correctly
when executed concurrently.
•As an illustration, suppose that the value of the variable
counter is currently 5 and that the producer and consumer
processes execute the statements "counter++" and "counter--"
concurrently.
•Following the execution of these two statements, the value of
the variable counter may be 4, 5, or 6!
•The only correct result, though, is counter == 5, which is
generated correctly if the producer and consumer execute
separately.

•We can show that the value of counter may be incorrect as


follows.
•counter++ could be implemented in machine language as

register1 = counter
register1 = register1 + 1
counter = register1

•counter-- could be implemented as

register2 = counter
register2 = register2 - 1
counter = register2
Proof: Consider this execution interleaving, with "counter =
5" initially:

S0: producer executes register1 = counter {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = counter {register2 = 5}
S3: consumer executes register2 = register2 - 1 {register2 = 4}
S4: producer executes counter = register1 {counter = 6}
S5: consumer executes counter = register2 {counter = 4}

•Notice that we have arrived at the incorrect state


"counter == 4", indicating that four buffers are full, when,
in fact, five buffers are full. If we reversed the order of the
statements at S4 and S5, we would arrive at the incorrect
state "counter== 6"
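The interleaving S0–S5 can be replayed deterministically in a few lines:

```python
# Deterministic replay of the interleaving S0-S5 above, starting from
# counter = 5. Each statement mirrors one machine-level step of the
# concurrent counter++ (producer) and counter-- (consumer).
counter = 5
register1 = counter          # S0: producer loads counter      {register1 = 5}
register1 = register1 + 1    # S1: producer increments         {register1 = 6}
register2 = counter          # S2: consumer loads counter      {register2 = 5}
register2 = register2 - 1    # S3: consumer decrements         {register2 = 4}
counter = register1          # S4: producer stores             {counter = 6}
counter = register2          # S5: consumer stores             {counter = 4}
print(counter)   # 4: the producer's increment has been lost
```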

•A situation like this, where several processes access and


manipulate the same data concurrently and the outcome of
the execution depends on the particular order in which the
access takes place, is called a race condition.
•To guard against the race condition above, we need to ensure
that only one process at a time can be manipulating the
variable counter.
•To make such a guarantee, we require that the processes be
synchronized in some way.

Critical-Section Problem:
•Consider system of n processes {p0, p1, … pn-1}
•Each process has critical section segment of code
Process may be updating table, writing file, etc
When one process in critical section, no other may be in
its critical section
•Critical section problem is to design protocol to solve this, i.e.,
Each process must ask permission to enter critical
section in entry section, may follow critical section with
exit section, then remainder section
•This is especially challenging with preemptive kernels

General structure of process Pi is:

do {
   entry section
      critical section
   exit section
      remainder section
} while (TRUE);

Solution to Critical-Section Problem must satisfy the


following:
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections
2. Progress - If no process is executing in its critical section and
there exist some processes that wish to enter their critical
section, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter
its critical section next, and this selection cannot be postponed
indefinitely
3. Bounded Waiting - There exists a bound, or limit, on the
number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its
critical section and before that request is granted.
Two general approaches are used to handle critical sections in


operating systems:
1. preemptive kernels
2. nonpreemptive kernels.

•A preemptive kernel allows a process to be preempted while it


is running in kernel mode.

•A nonpreemptive kernel does not allow a process running in


kernel mode to be preempted; a kernel-mode process will run
until it exits kernel mode, blocks, or voluntarily yields control of
the CPU

•A nonpreemptive kernel is essentially free from race conditions on
kernel data structures, as only one process is active in the kernel at
a time.
•Preemptive kernels are especially difficult to design for SMP
architectures, since in these environments it is possible for two
kernel-mode processes to run simultaneously on different
processors.
•Why, then, would anyone favor a preemptive kernel over a
nonpreemptive one?
A preemptive kernel is more suitable for real-time
programming, as it will allow a real-time process to preempt a
process currently running in the kernel.
Furthermore, a preemptive kernel may be more responsive,
since there is less risk that a kernel-mode process will run for
an arbitrarily long period before relinquishing the processor to
waiting processes
Peterson’s Solution:
•Is a classic software-based solution to the critical-section
problem.
•Because of the way modern computer architectures perform
basic machine-language instructions, such as load and store,
there are no guarantees that Peterson's solution will work
correctly on such architectures.
• However we present the solution because it provides a good
algorithmic description of solving the critical-section problem
and illustrates some of the complexities involved in designing
software that addresses the requirements of mutual exclusion,
progress, and bounded waiting.

•Peterson's solution is restricted to two processes P0 & P1.


•Consider Pi & Pj
•When presenting Pi, we use Pj to denote the other process

•The two processes share two variables:


int turn;
boolean flag[2];

•The variable turn indicates whose turn it is to enter the


critical section
•E.g.: if turn == i, then process Pi is allowed to execute in its
critical section.

•The flag array is used to indicate if a process is ready to enter


the critical section. flag[i] = true implies that process Pi is
ready to enter its critical section.
Algorithm for process Pi:

do {
   flag[i] = TRUE;
   turn = j;
   while (flag[j] && turn == j)
      ;   // busy wait
      // critical section
   flag[i] = FALSE;
      // remainder section
} while (TRUE);
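A sketch of Peterson's algorithm using two Python threads (this works here only because CPython's interpreter lock keeps these loads and stores sequentially consistent; as the slides note, on real hardware the algorithm needs memory fences to survive instruction reordering):

```python
import threading

# Sketch of Peterson's algorithm for two threads (i = 0 or 1).
flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to defer
counter = 0             # shared data protected by the protocol
N = 5000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True
        turn = j
        while flag[j] and turn == j:
            pass                    # busy wait (entry section)
        counter += 1                # critical section
        flag[i] = False             # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 10000: no increments lost
```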

Synchronization Hardware:
•Software-based solutions such as Peterson's are not guaranteed
to work on modern computer architectures.
•Instead, we can generally state that any solution to the critical-
section problem requires a simple tool-a lock.
•Race conditions are prevented by requiring that critical regions
be protected by locks.
•That is, a process must acquire a lock before entering a critical
section; it releases the lock when it exits the critical section
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

•We explore several more solutions to the critical-section


problem using techniques ranging from hardware to software-
based APIs available to application programmers.
•All these solutions are based on the premise of locking;
however, as we shall see, the designs of such locks can be quite
sophisticated.
•We start by presenting some simple hardware instructions that
are available on many systems and showing how they can be
used effectively in solving the critical-section problem.
•The critical-section problem could be solved simply in a
uniprocessor environment if we could prevent interrupts from
occurring.
•Unfortunately, this solution is not as feasible in a multiprocessor
environment. Disabling interrupts on a multiprocessor can be
time consuming
TestAndSet Instruction

• Definition:

boolean TestAndSet (boolean *target)
{
   boolean rv = *target;
   *target = TRUE;
   return rv;
}

Solution using TestAndSet

• Solution:

do {
   while (TestAndSet(&lock))
      ;   // do nothing
      // critical section
   lock = FALSE;
      // remainder section
} while (TRUE);
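TestAndSet can be modelled in software for illustration. The internal lock below only stands in for the hardware's atomicity; a real TAS instruction performs the read-and-set as one indivisible step:

```python
import threading

# Software model of the TestAndSet instruction above.
class AtomicFlag:
    def __init__(self):
        self._guard = threading.Lock()
        self._value = False
    def test_and_set(self):
        with self._guard:        # models the indivisible hardware step
            old = self._value
            self._value = True
            return old
    def clear(self):             # releasing the lock is a plain store
        self._value = False

lock = AtomicFlag()
print(lock.test_and_set())   # False: lock was free, caller now holds it
print(lock.test_and_set())   # True: already held, caller would spin
lock.clear()
print(lock.test_and_set())   # False: free again after release
```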

Swap Instruction

• Definition:

void Swap (boolean *a, boolean *b)
{
   boolean temp = *a;
   *a = *b;
   *b = temp;
}

Solution using Swap

• Solution:

do {
   key = TRUE;
   while (key == TRUE)
      Swap(&lock, &key);
      // critical section
   lock = FALSE;
      // remainder section
} while (TRUE);

Semaphore:

•The hardware-based solutions to the critical-section problem


presented in previous slides are complicated for application
programmers to use.
•To overcome this difficulty, we can use a synchronization tool
called a Semaphore.
•A semaphore S is an integer variable that, apart from
initialization, is accessed only through two standard atomic
operations: wait () and signal ()
•The wait () operation was originally termed P.
•The signal() was originally called V.

The definition of wait() is as follows:

wait(S)
{
   while (S <= 0)
      ;   // no-op
   S--;
}

The definition of signal() is as follows:


signal(S)
{
S++;
}

•All modifications to the integer value of the semaphore in the


wait () and signal() operations must be executed indivisibly.
•i.e., when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore
value.
•Operating systems often distinguished between counting and
binary semaphores.

•Counting semaphore – integer value can range over an


unrestricted domain
•Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
Also known as mutex locks
•Can implement a counting semaphore S as a binary semaphore
•Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
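The mutex pattern above maps directly onto Python's threading.Semaphore, whose acquire()/release() play the roles of wait()/signal():

```python
import threading

# The loop above with threading.Semaphore: acquire() is wait(mutex),
# release() is signal(mutex).
mutex = threading.Semaphore(1)   # initialized to 1
shared = 0

def worker():
    global shared
    for _ in range(50000):
        mutex.acquire()          # wait (mutex)
        shared += 1              # critical section
        mutex.release()          # signal (mutex)
        # remainder section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)   # 200000: no updates lost
```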

Semaphore Implementation:
•Must guarantee that no two processes can execute wait () and
signal () on the same semaphore at the same time

•Thus, implementation becomes the critical section problem


where the wait and signal code are placed in the critical
section
Could now have busy waiting in critical section
implementation
But implementation code is short
Little busy waiting if critical section rarely occupied

•Note that applications may spend lots of time in critical


sections and therefore this is not a good solution
Semaphore Implementation with no Busy waiting:

•With each semaphore there is an associated waiting queue


•Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list

•Two operations:
block – place the process invoking the operation on the
appropriate waiting queue
wakeup – remove one of processes in the waiting queue
and place it in the ready queue

Implementation of wait:
wait(semaphore *S)
{
S->value--;
if (S->value < 0)
{
add this process to S->list;
block();
}
}
Implementation of signal:
signal(semaphore *S)
{
S->value++;
if (S->value <= 0)
{
remove a process P from S->list;
wakeup(P);
}
}
Deadlock and Starvation:


•Deadlock – two or more processes are waiting indefinitely for an event
that can be caused by only one of the waiting processes
Let S and Q be two semaphores
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);

•Starvation – indefinite blocking


A process may never be removed from the semaphore queue in which
it is suspended
•Priority Inversion – Scheduling problem when lower-priority process
holds a lock needed by higher-priority process
Solved via priority-inheritance protocol
Classical Problems of Synchronization:


•Classical problems used to test newly-proposed synchronization
schemes

Bounded-Buffer Problem

Readers and Writers Problem

Dining-Philosophers Problem

Bounded-Buffer Problem:

•N buffers, each can hold one item

•Semaphore mutex initialized to the value 1

•Semaphore full initialized to the value 0

•Semaphore empty initialized to the value N

The structure of the producer process:

do {
   // produce an item in nextp
   wait (empty);
   wait (mutex);
   // add the item to the buffer
   signal (mutex);
   signal (full);
} while (TRUE);

The structure of the consumer process:

do {
   wait (full);
   wait (mutex);
   // remove an item from buffer to nextc
   signal (mutex);
   signal (empty);
   // consume the item in nextc
} while (TRUE);
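The producer and consumer above can be exercised with threading.Semaphore (a sketch; the item values and thread setup are illustrative):

```python
import threading
from collections import deque

# The producer/consumer loops above, with threading.Semaphore standing in
# for wait()/signal(): mutex = 1, empty = N, full = 0.
N = 5
buffer = deque()
mutex = threading.Semaphore(1)
empty = threading.Semaphore(N)
full = threading.Semaphore(0)
consumed = []

def producer(items):
    for item in items:                 # "produce an item in nextp"
        empty.acquire()                # wait (empty)
        mutex.acquire()                # wait (mutex)
        buffer.append(item)            # add the item to the buffer
        mutex.release()                # signal (mutex)
        full.release()                 # signal (full)

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait (full)
        mutex.acquire()                # wait (mutex)
        item = buffer.popleft()        # remove an item from the buffer
        mutex.release()                # signal (mutex)
        empty.release()                # signal (empty)
        consumed.append(item)          # "consume the item in nextc"

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed)   # all 20 items arrive in order; at most 5 ever buffered
```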

Dining-Philosophers Problem:

•Philosophers spend their lives thinking and eating


•They do not interact with their neighbors; occasionally a philosopher tries to pick up
2 chopsticks (one at a time) to eat from the bowl
Needs both chopsticks to eat, then releases both when done
•In the case of 5 philosophers
Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
Readers-Writers Problem:
•A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any
updates
Writers – can both read and write
•Problem – allow multiple readers to read at the same time
Only one single writer can access the shared data at the same
time
•Several variations of how readers and writers are treated – all
involve priorities
•Shared Data
Data set
Semaphore mutex initialized to 1
Semaphore write initialized to 1
Integer readcount initialized to 0
The structure of a writer process:

do {
   wait (write);
   // writing is performed
   signal (write);
} while (TRUE);

The structure of a reader process:

do {
   wait (mutex);
   readcount++;
   if (readcount == 1)
      wait (write);
   signal (mutex);
   // reading is performed
   wait (mutex);
   readcount--;
   if (readcount == 0)
      signal (write);
   signal (mutex);
} while (TRUE);
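The reader/writer pseudocode above translates almost line for line into Python (a sketch using threading.Semaphore; the log list is only there to observe the runs):

```python
import threading

# The reader/writer structures above: readcount is guarded by mutex;
# the first reader locks out writers, the last reader readmits them.
mutex = threading.Semaphore(1)
write = threading.Semaphore(1)
readcount = 0
log = []

def reader(i):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        write.acquire()          # first reader blocks writers
    mutex.release()
    log.append(("read", i))      # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        write.release()          # last reader readmits writers
    mutex.release()

def writer(i):
    write.acquire()              # wait (write)
    log.append(("write", i))     # writing is performed
    write.release()              # signal (write)

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))   # three reads and one write, all completed
```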

DEADLOCK AND STARVATION

The Deadlock Problem:


•A set of blocked processes each holding a resource and
waiting to acquire a resource held by another process in the
set

•Example
System has 2 disk drives
P1 and P2 each hold one disk drive and each needs
another one

Bridge Crossing Example

•Traffic only in one direction


•Each section of a bridge can be viewed as a resource
•If a deadlock occurs, it can be resolved if one car backs up
(preempt resources and rollback)
•Several cars may have to be backed up if a deadlock occurs
•Starvation is possible
•Note – Most OS’s do not prevent or deal with deadlocks

System Model:
•A system consists of a finite number of resources to be
distributed among a number of competing processes.
•The resources are partitioned into several types, each
consisting of some number of identical instances.
•Memory space, CPU cycles, files, and I/0 devices (such as
printers and DVD drives) are examples of resource types.
•If a system has two CPUs, then the resource type CPU has
two instances.
•Similarly, the resource type printer may have five instances
•If a process requests an instance of a resource type, the
allocation of any instance of the type will satisfy the request.

•A process must request a resource before using it and must


release the resource after using it.
•A process may request as many resources as it requires to
carry out its designated task
•Obviously, the number of resources requested may not
exceed the total number of resources available in the system.
•In other words, a process cannot request three printers if the
system has only two.
•Each process utilizes a resource as follows:
Request
Use
Release

Conditions for Deadlock:


Deadlock can arise if four conditions hold simultaneously.
•Mutual exclusion: only one process at a time can use a
resource
•Hold and wait: a process holding at least one resource is
waiting to acquire additional resources held by other processes
•No preemption: a resource can be released only voluntarily
by the process holding it, after that process has completed its
task
•Circular wait: there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is held by
P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is
waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
Resource-Allocation Graph:
•Deadlocks can be described more precisely in terms of a
directed graph called system resource-allocation Graph.
•Graph consists of a set of vertices V and a set of edges E.
•V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in
the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in
the system
•request edge – directed edge Pi  Rj
•assignment edge – directed edge Rj  Pi
•Pictorially we represent each process Pi as a circle and each
resource type Rj as a rectangle
•Since resource type Rj may have more than one instance, we
represent each such instance as a dot within the rectangle
•A process Pi is represented as a circle.
•A resource type Rj with 4 instances is represented as a rectangle containing 4 dots.
•Pi requests an instance of Rj: request edge Pi → Rj
•Pi is holding an instance of Rj: assignment edge Rj → Pi

Example of a Resource Allocation Graph

The sets P, R and E:

•P = {P1, P2, P3}
•R = {R1, R2, R3, R4}
•E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

Resource instances:
•One instance of resource type R1
•Two instances of resource type R2
•One instance of resource type R3
•Three instances of resource type R4

Process states:

•Process P1 is holding an instance of resource type R2 and is


waiting for an instance of resource type R1 .

•Process P2 is holding an instance of R1 and an instance of


R2 and is waiting for an instance of R3.

•Process P3 is holding an instance of R3 .

•Resource allocation graph with a deadlock (figure)
•Graph with a cycle but no deadlock (figure)

Basic facts:


•If graph contains no cycles  no deadlock
•If graph contains a cycle 
if only one instance per resource type, then deadlock
if several instances per resource type, possibility of
deadlock
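The basic fact above suggests a simple check: detect a cycle in the graph with depth-first search (a sketch; the edge list is taken from the example graph):

```python
# A cycle in the resource-allocation graph is necessary for deadlock, and
# sufficient when every resource type has a single instance. A depth-first
# search finds back edges (cycles) in the directed graph.
def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True                    # back edge: cycle found
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(succ) for succ in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(node) for node in list(graph))

# Edge list from the example graph above (request and assignment edges).
E = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
     ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
print(has_cycle(E))                     # False: no cycle, no deadlock
print(has_cycle(E + [("P3", "R2")]))    # True: P3 -> R2 closes a cycle
```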

Methods for Handling Deadlocks,


•Ensure that the system will never enter a deadlock state
•Allow the system to enter a deadlock state and then recover
•Ignore the problem and pretend that deadlocks never occur in
the system; used by most operating systems, including UNIX

•To ensure that deadlocks never occur, the system can use either
a deadlock-prevention or a deadlock-avoidance scheme.

•Deadlock prevention provides a set of methods for ensuring


that at least one of the necessary conditions cannot hold.
•These methods prevent deadlocks by constraining how
requests for resources can be made.

•Deadlock-avoidance requires that the operating system be


given in advance additional information concerning which
resources a process will request and use during its lifetime.
•With this additional knowledge, it can decide for each request
whether or not the process should wait.


DEADLOCK PREVENTION:
•For a deadlock to occur, each of the four necessary conditions
must hold.
•By ensuring that at least one of these conditions cannot hold, we
can prevent the occurrence of a deadlock.

•Mutual Exclusion – not required for sharable resources (e.g.,
read-only files); must hold for non-sharable resources (e.g., a printer).
•Hold and Wait – must guarantee that whenever a process
requests a resource, it does not hold any other resources
    Require a process to request and be allocated all its resources
    before it begins execution, or allow a process to request
    resources only when it holds none
    Low resource utilization; starvation possible


•No Preemption –
If a process that is holding some resources requests
another resource that cannot be immediately allocated to
it, then all resources currently being held are released
Preempted resources are added to the list of resources
for which the process is waiting
Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting

•Circular Wait – impose a total ordering of all resource types,
and require that each process requests resources in an
increasing order of enumeration
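The circular-wait rule can be sketched with locks. This is a hedged illustration: the resource names and their numbering (`ORDER`) are assumptions, not from the slides; the point is only that every process acquires in the same global order, so no cycle of waits can form.

```python
import threading

# Assumed total ordering of resource types (illustrative names).
ORDER = {"tape": 1, "disk": 2, "printer": 3}
locks = {name: threading.Lock() for name in ORDER}

def acquire_in_order(*names):
    """Acquire the named locks in increasing ORDER, regardless of the
    order the caller listed them in; this prevents circular wait."""
    ordered = sorted(names, key=ORDER.get)
    for name in ordered:
        locks[name].acquire()
    return ordered

def release(names):
    for name in names:
        locks[name].release()

# Even though the caller asks for printer first, tape is locked first.
held = acquire_in_order("printer", "tape")
release(held)
```

Because every thread that needs both "tape" and "printer" takes them in the same order, one thread can never hold "printer" while waiting for "tape" held by another thread doing the reverse.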


DEADLOCK AVOIDANCE:
•An alternative method for avoiding deadlocks is to require
additional information about how resources are to be
requested
•For example, in a system with one tape drive and one printer,
the system might need to know that process P will request
first the tape drive and then the printer before releasing both
resources, whereas process Q will request first the printer and
then the tape drive.
•With this knowledge of the complete sequence of requests
and releases for each process, the system can decide for each
request whether or not the process should wait in order to
avoid a possible future deadlock.


•The various algorithms that use this approach differ in the
amount and type of information required.

•The simplest and most useful model requires that each process
declare the maximum number of resources of each type that it
may need.

•The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a
circular-wait condition.

•The resource-allocation state is defined by the number of
available and allocated resources, and the maximum demands
of the processes.

Safe State:
•A state is safe if the system can allocate resources to each process
in some order and still avoid a deadlock
•The system is in a safe state if there exists a sequence <P1, P2, …, Pn>
of ALL the processes in the system such that for each Pi, the
resources that Pi can still request can be satisfied by the currently
available resources plus the resources held by all Pj, with j < i
•That is:
    If Pi's resource needs are not immediately available, then Pi
    can wait until all Pj have finished
    When Pj is finished, Pi can obtain the needed resources,
    execute, return the allocated resources, and terminate
    When Pi terminates, Pi+1 can obtain its needed resources,
    and so on

Basic facts:
•If a system is in a safe state ⇒ no deadlocks
•If a system is in an unsafe state ⇒ possibility of deadlock
•Avoidance ⇒ ensure that the system will never enter an
unsafe state

Safe, Unsafe, Deadlock State


Avoidance algorithms:

•Single instance of a resource type:
    use a resource-allocation graph

•Multiple instances of a resource type:
    use the banker's algorithm


Resource-Allocation Graph Scheme:

•Claim edge Pi  Rj indicated that process Pi may request


resource Rj; represented by a dashed line
•Claim edge converts to request edge when a process
requests a resource
•Request edge converted to an assignment edge when the
resource is allocated to the process
•When a resource is released by a process, assignment edge
reconverts to a claim edge
•Resources must be claimed a priori in the system


Unsafe State In Resource-Allocation Graph

Resource-Allocation Graph Algorithm

•Suppose that process Pi requests a resource Rj
•The request can be granted only if converting the
request edge to an assignment edge does not result
in the formation of a cycle in the resource-allocation
graph


Banker’s Algorithm:

•When a new process enters the system, it must declare the
maximum number of instances of each resource type that it
may need.
•When a user requests a set of resources, the system must
determine whether the allocation of these resources will leave
the system in a safe state.
•If it will, the resources are allocated; otherwise, the process
must wait until some other process releases enough resources.


Several data structures must be maintained to implement the banker's
algorithm:

Available. A vector of length m indicates the number of available resources of
each type. If Available[j] equals k, then k instances of resource type Rj are
available.

Max. An n x m matrix defines the maximum demand of each process.
If Max[i][j] equals k, then process Pi may request at most k instances of
resource type Rj.

Allocation. An n x m matrix defines the number of resources of each type
currently allocated to each process. If Allocation[i][j] equals k, then process Pi
is currently allocated k instances of resource type Rj.

Need. An n x m matrix indicates the remaining resource need of each process.
If Need[i][j] equals k, then process Pi may need k more instances of resource
type Rj to complete its task.

Note that Need[i][j] = Max[i][j] - Allocation[i][j].
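The relation Need = Max − Allocation can be computed with a short sketch; the numbers below are taken from the banker's-algorithm example later in this unit (5 processes, resource types A, B, C), and the list-of-lists layout is an illustrative assumption.

```python
available = [3, 3, 2]
maximum = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]

# Need[i][j] = Max[i][j] - Allocation[i][j], element by element
need = [[m - a for m, a in zip(max_row, alloc_row)]
        for max_row, alloc_row in zip(maximum, allocation)]

print(need)   # [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
```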



Safety Algorithm:
Algorithm for finding out whether or not a system is in a safe state.

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
    Work = Available
    Finish[i] = false for i = 0, 1, …, n-1

2. Find an index i such that both:
    (a) Finish[i] == false
    (b) Needi ≤ Work
   If no such i exists, go to step 4

3. Work = Work + Allocationi
   Finish[i] = true
   Go to step 2

4. If Finish[i] == true for all i, then the system is in a safe state
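The four steps above translate almost directly to code. This is a minimal sketch, assuming the vector/matrix data structures are plain Python lists; it returns the safe sequence it finds (there may be more than one valid sequence for a given state).

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return (True, safe_sequence) if the state is
    safe, else (False, [])."""
    n, m = len(allocation), len(available)
    work = list(available)                  # step 1: Work = Available
    finish = [False] * n                    #         Finish[i] = false
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):                  # step 2: find a runnable Pi
            if not finish[i] and all(need[i][j] <= work[j]
                                     for j in range(m)):
                for j in range(m):          # step 3: Pi finishes and
                    work[j] += allocation[i][j]   # releases its resources
                finish[i] = True
                sequence.append(i)
                progressed = True
    # step 4: safe iff every process could finish
    return all(finish), sequence if all(finish) else []
```

Running it on the snapshot from the example later in this unit (Available = [3,3,2]) yields True together with one valid safe sequence.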


Resource-Request Algorithm:
Algorithm for determining whether requests can be safely granted.

Let Requesti be the request vector for process Pi. If Requesti[j] == k, then
process Pi wants k instances of resource type Rj.

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since
   the process has exceeded its maximum claim

2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the
   resources are not available

3. Pretend to allocate the requested resources to Pi by modifying the state as
   follows:
    Available = Available – Requesti
    Allocationi = Allocationi + Requesti
    Needi = Needi – Requesti
   •If safe ⇒ the resources are allocated to Pi
   •If unsafe ⇒ Pi must wait, and the old resource-allocation state is
    restored
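The resource-request algorithm can be sketched as follows. The function names and list-based data layout are illustrative assumptions; the safety check is a compact restatement of the safety algorithm from the previous slide.

```python
def state_is_safe(available, allocation, need):
    """Compact form of the safety algorithm."""
    work, finish = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Try to grant `request` from process Pi; return True if granted."""
    if any(r > n for r, n in zip(request, need[i])):        # step 1
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(request, available)):      # step 2
        return False                     # Pi must wait
    for j, r in enumerate(request):      # step 3: pretend to allocate
        available[j] -= r
        allocation[i][j] += r
        need[i][j] -= r
    if state_is_safe(available, allocation, need):
        return True                      # safe: the grant stands
    for j, r in enumerate(request):      # unsafe: restore the old state
        available[j] += r
        allocation[i][j] -= r
        need[i][j] += r
    return False
```

On the example state later in this unit, granting P1's request (1,0,2) succeeds, while P0's subsequent request (0,2,0) is refused and rolled back because the resulting state would be unsafe.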

Example of Banker’s Algorithm

•5 processes, P0 through P4
•3 resource types:
    A (10 instances), B (5 instances), and C (7 instances)

•Snapshot at time T0:

            Allocation    Max      Available
            A B C         A B C    A B C
    P0      0 1 0         7 5 3    3 3 2
    P1      2 0 0         3 2 2
    P2      3 0 2         9 0 2
    P3      2 1 1         2 2 2
    P4      0 0 2         4 3 3


• The content of the matrix Need is defined to be Max – Allocation

            Need
            A B C
    P0      7 4 3
    P1      1 2 2
    P2      6 0 0
    P3      0 1 1
    P4      4 3 1

The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies
safety criteria
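The claim can be checked by replaying the sequence: each process's Need must fit in Work before it runs, and when it finishes it returns its Allocation to Work. This is a small illustrative sketch using the numbers above.

```python
available = [3, 3, 2]
allocation = {"P0": [0, 1, 0], "P1": [2, 0, 0], "P2": [3, 0, 2],
              "P3": [2, 1, 1], "P4": [0, 0, 2]}
need       = {"P0": [7, 4, 3], "P1": [1, 2, 2], "P2": [6, 0, 0],
              "P3": [0, 1, 1], "P4": [4, 3, 1]}

work = list(available)
for p in ["P1", "P3", "P4", "P2", "P0"]:
    # p can run only if its remaining need fits in the free resources
    assert all(n <= w for n, w in zip(need[p], work)), f"{p} cannot run"
    # p finishes and releases everything it held
    work = [w + a for w, a in zip(work, allocation[p])]

print("safe:", work)   # all instances free again: [10, 5, 7]
```

At the end, Work equals the total instance counts (A = 10, B = 5, C = 7), confirming every process could finish.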


•Suppose now that process P1 requests one additional instance of resource
type A and two instances of resource type C; i.e., Request1 = (1,0,2)
•Check that Request1 ≤ Available (that is, (1,0,2) ≤ (3,3,2) ⇒ true)
•We then arrive at the following new state:

            Allocation    Need     Available
            A B C         A B C    A B C
    P0      0 1 0         7 4 3    2 3 0
    P1      3 0 2         0 2 0
    P2      3 0 2         6 0 0
    P3      2 1 1         0 1 1
    P4      0 0 2         4 3 1

•Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2>
satisfies the safety requirement
•Can a request for (3,3,0) by P4 be granted? – No, as the resources are not
available
•Can a request for (0,2,0) by P0 be granted? – No, since the resulting state is
unsafe.

Recovery from Deadlock: Process Termination

•Abort all deadlocked processes
•Abort one process at a time until the deadlock cycle is
eliminated
•In which order should we choose to abort?
    Priority of the process
    How long the process has computed, and how much longer
    until completion
    Resources the process has used
    Resources the process needs to complete
    How many processes will need to be terminated
    Is the process interactive or batch?


Recovery from Deadlock: Resource Preemption

•Selecting a victim – minimize cost

•Rollback – return to some safe state, and restart the process
from that state

•Starvation – the same process may always be picked as the
victim; include the number of rollbacks in the cost factor
