
CPU Scheduling

PROCESS SCHEDULERS
A process travels through various scheduling queues
throughout its lifetime.
Its journey from one queue to another correspondingly
changes its process state.
Consider that in a multiprogramming system,
multiple programs reside in the system
until each exhausts its lifetime.
Since several jobs may reside in a queue, a part of the
operating system known as the scheduler selects among
them systematically.
There are three types of scheduler. The long-term scheduler,
or job scheduler, selects processes from secondary
storage and loads them into memory for execution.

The short-term scheduler, or dispatcher, selects a
process from among the processes that are ready
to execute and allocates the CPU to one of them.
Hence, it is also called the CPU scheduler.
Finally, the medium-term scheduler, or swapper,
swaps processes in and out of memory.
The long-term scheduler executes the least frequently
among the three schedulers.
The job scheduler is in charge of the admission of
jobs into the system.

It must be noted that process creation in the system
may take a while, which means that the long-term
scheduler is commonly idle much of the time.
The long-term scheduler controls the degree of
multiprogramming, that is, the number of processes that
will be kept in memory.
The long interval between executions enables the long-term
scheduler to take more time selecting a process for
execution.
The long-term scheduler must select a good combination
of I/O-bound and CPU-bound processes in order to
maximize utilization of the CPU and I/O devices.

Some systems have a medium-term scheduler.
This scheduler operates quite in contrast with the
long-term scheduler.
The swapper suspends a process by temporarily
removing it from memory (swapping it out)
and transferring it to a backing store.
It then replaces the swapped-out process with another
job from secondary storage.
This practice improves the performance of the
system.
The short-term scheduler must select a new process
from memory to be given the CPU.

A process may execute for only a few milliseconds before
waiting for an I/O request.
Due to the brief time frame between executions, the
short-term scheduler must be very fast in order to keep
the CPU from starving or going idle.
Switching the CPU from one process to another incurs
overhead.
The overhead time is spent saving the state of the old
process and loading the saved state of the new process.
This is known as a context switch.
Since context-switch time is pure overhead, it must be
minimized.

CPU SCHEDULERS
Whenever the CPU becomes idle, the operating system
(particularly the CPU scheduler) must select one of the
processes in the ready queue for execution.
CPU scheduling decisions may take place under the
following four circumstances:
1. When a process switches from the running state to the
waiting state (for example, an I/O request, or invocation
of wait() for the termination of a child process).
2. When a process switches from the running state to the
ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the
ready state (for example, completion of I/O).
4. When a process terminates.

For circumstances 1 and 4, there is no choice in terms of
scheduling.
A new process (if one exists in the ready queue) must be
selected for execution.
There is a choice, however, for circumstances 2 and 3.
When scheduling takes place only under circumstances 1
and 4, the scheduling scheme is non-preemptive;
otherwise, the scheduling scheme is preemptive.
Under non-preemptive scheduling, once the CPU has
been allocated to a process, the process keeps the CPU
until it releases the CPU either by terminating or switching
states.
Preemptive scheduling incurs a cost.

Consider the case of two processes sharing data.
One may be in the midst of updating the data
when it is preempted, and the second process is
run.
The second process may try to read the data,
which are currently in an inconsistent state.
New mechanisms thus are needed to coordinate
access to shared data.

CPU SCHEDULING
ALGORITHMS
Different CPU-scheduling algorithms have different
properties and may favor one class of processes over
another.
Many criteria have been suggested for comparing
CPU-scheduling algorithms.
The characteristics used for comparison can make a
substantial difference in the determination of the best
algorithm.
The criteria should include the following:
1. CPU Utilization. This measures how busy the CPU is.
CPU utilization may range from 0 to 100 percent. In a
real system, it should range from 40% (for a lightly
loaded system) to 90% (for a heavily loaded system).

2. Throughput. This is a measure of work done:
the number of processes completed per time unit.
For long processes, this rate may be one
process per hour; for short transactions,
throughput might be 10 processes per second.
3. Turnaround Time. This measures how long it
takes to execute a process. Turnaround time is
the interval from the time of submission to the
time of completion. It is the sum of the periods spent
waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.


4. Waiting Time. A CPU scheduling algorithm does not
affect the amount of time during which a process
executes or does I/O; it affects only the amount of
time a process spends waiting in the ready queue.
Waiting time is the total amount of time a process
spends waiting in the ready queue.
5. Response Time. This is the time from the submission of a
request until the system makes the first response. It
is the amount of time it takes to start responding,
but not the time that it takes to output that
response. The turnaround time is generally limited
by the speed of the output device.

A good CPU scheduling algorithm maximizes CPU utilization
and throughput and minimizes turnaround time, waiting time,
and response time.
In most cases, the average measure is optimized.
However, in some cases, it is desirable to optimize the minimum
or maximum values rather than the average.
For example, to guarantee that all users get good service, it
may be better to minimize the maximum response time.
For interactive systems (time-sharing systems), some analysts
suggest that minimizing the variance in the response time is
more important than minimizing the average response time.
A system with reasonable and predictable response times may be
considered more desirable than a system that is faster on
average but highly variable.

First-Come First-Served
(FCFS) Scheduling Algorithm
This is the simplest CPU-scheduling algorithm.
The process that requests the CPU first gets the
CPU first.
Example:
Consider the following set of processes that arrive
at time 0, with the length of the CPU burst given
in milliseconds:

PROCESS    BURST TIME
P1         24
P2         3
P3         3

If the processes arrive in the order P1, P2, P3 and
are served in FCFS order, the system gets the
result shown in the following Gantt chart:

|       P1       | P2 | P3 |
0                24   27   30

Therefore, the waiting time for each process is:

WT for P1 = 0 - 0 = 0
WT for P2 = 24 - 0 = 24
WT for P3 = 27 - 0 = 27

Average waiting time = (0 + 24 + 27) / 3 = 17 ms

The turnaround time for each process would be:

TT for P1 = 24 - 0 = 24
TT for P2 = 27 - 0 = 27
TT for P3 = 30 - 0 = 30

Average turnaround time = (24 + 27 + 30) / 3 = 27 ms
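The computations above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original slides: the function name `fcfs` and the list interface are our own choices, the burst times are those of the example, and all processes are assumed to arrive at time 0.

```python
# Minimal FCFS sketch (all processes assumed to arrive at time 0).
def fcfs(bursts):
    """Return (waiting_times, turnaround_times) for FCFS order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)      # time spent before first getting the CPU
        clock += burst
        turnaround.append(clock)   # completion time minus arrival time (0)
    return waiting, turnaround

w, t = fcfs([24, 3, 3])            # P1, P2, P3 from the example
print(w, sum(w) / 3)               # [0, 24, 27] 17.0
print(t, sum(t) / 3)               # [24, 27, 30] 27.0
```

Feeding the reordered bursts `[3, 3, 24]` reproduces the P3, P2, P1 result of the next slide.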

However, if the processes arrive in the order P3,
P2, P1, the results will be:

| P3 | P2 |       P1       |
0    3    6                30

Therefore, the waiting time for each process
would be:

WT for P1 = 6 - 0 = 6
WT for P2 = 3 - 0 = 3
WT for P3 = 0 - 0 = 0

Average waiting time = (6 + 3 + 0) / 3 = 3 ms

With this new job sequence, what will be the value of
the turnaround time?
Comparing the two waiting-time computations reveals
that if the smaller-burst jobs get processed before
the large-burst process, the average wait of each job
is lessened.
But because FCFS is a non-preemptive algorithm that
serves jobs strictly in arrival order, the computation
in the given example must be done using the sequence
P1, P2, P3, and not P3, P2, P1.
If jobs with smaller bursts have to wait for a long job
to finish its long burst, the convoy effect exists.

This is quite annoying when the smaller jobs have
higher priorities than the big-burst job that is
occupying the processor.
Since the CPU is tied up by the job with a very long
burst, jobs with smaller bursts have no choice but to
wait.
A real-life parallel would take the form of a
waiting line for a photocopying machine.
It is not pleasant for a one-page memo to wait for
the very long process of copying a lengthy thesis
and a moderately lengthy report.

This may seem unjust. Another clear analogy
involves a tricycle that must wait for a huge
convoy to clear a rigid checkpoint.
Thus, the FCFS algorithm is particularly troublesome
for time-sharing systems, which require frequent
interaction with different processes.

Example 1:

JOB    BURST TIME
J1     20
J2
J3

Example 2:

JOB    ARRIVAL TIME    BURST TIME
J1
J2
J3
J4
Example 3:

JOB    ARRIVAL TIME    BURST TIME
J1
J2     11
J3
J4     16              21

Shortest-Job-First (SJF)
Scheduling Algorithm
This algorithm is concerned with the length of the
CPU burst that a particular process maintains.
When the CPU is available, it is assigned to the
process that has the smallest next CPU burst.
If two processes have the same CPU burst length,
FCFS scheduling is used to break the tie by
considering which job arrived first.

Example:
Consider the following set of processes that arrive
at time 0, with the length of the CPU burst given
in milliseconds:

PROCESS    BURST TIME
P1         6
P2         8
P3         7
P4         3

Using SJF, the system would schedule these processes
according to the following Gantt chart:

| P4 |   P1   |   P3    |   P2    |
0    3        9         16        24

Therefore, the waiting time for each process is:

WT for P1 = 3 - 0 = 3
WT for P2 = 16 - 0 = 16
WT for P3 = 9 - 0 = 9
WT for P4 = 0 - 0 = 0

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms

The turnaround time for each process is:

TT for P1 = 9 - 0 = 9
TT for P2 = 24 - 0 = 24
TT for P3 = 16 - 0 = 16
TT for P4 = 3 - 0 = 3

Average turnaround time = (9 + 24 + 16 + 3) / 4 = 13 ms
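With simultaneous arrivals, SJF is simply "sort by burst length, then apply FCFS". A minimal sketch (the function name and dict interface are ours; Python's stable sort preserves insertion order on ties, which matches the FCFS tie-break rule above):

```python
# SJF for processes that all arrive at time 0: shortest burst first.
def sjf(bursts):
    """bursts: {name: burst}. Returns (waiting, turnaround) per process."""
    order = sorted(bursts, key=lambda p: bursts[p])  # stable: FCFS on ties
    waiting, turnaround, clock = {}, {}, 0
    for p in order:
        waiting[p] = clock
        clock += bursts[p]
        turnaround[p] = clock
    return waiting, turnaround

w, t = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)   # P4: 0, P1: 3, P3: 9, P2: 16  -> average 7.0
print(t)   # P4: 3, P1: 9, P3: 16, P2: 24 -> average 13.0
```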

If the system were using FCFS scheduling, the
average waiting time would be 10.25 ms.
Also, under the FCFS computation, the average
turnaround time would be 16.25 ms.
Although the SJF algorithm is optimal, it cannot be
implemented at the level of short-term scheduling:
there is no way to know the length of the next CPU
burst.
The only alternative is to predict the value of the
next CPU burst.
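The usual prediction technique (not detailed in these slides) is exponential averaging: the next predicted burst is tau_next = alpha * t_recent + (1 - alpha) * tau_old, where t_recent is the most recent measured burst. A small sketch; the burst history, alpha = 0.5, and the initial guess of 10 are illustrative values of our own:

```python
# Exponential averaging, a standard way to predict the next CPU burst:
#   tau_next = alpha * t_recent + (1 - alpha) * tau_old
def predict_next_burst(history, alpha=0.5, tau0=10):
    """history: measured burst lengths, oldest first. tau0 is an assumed
    initial guess; alpha weighs recent bursts against past predictions."""
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next_burst([6, 4, 6, 4]))   # -> 5.0
```

With alpha = 0.5, each new measurement counts as much as the entire prior prediction; alpha = 0 ignores measurements, alpha = 1 uses only the last burst.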

Shortest Remaining Time
First (SRTF) Algorithm
The SJF algorithm has a preemptive version
commonly referred to as shortest-remaining-time-first.
A newly arriving process may have a shorter next
CPU burst than what is left of the currently
executing process.
A preemptive SJF, or SRTF, algorithm will preempt
the currently executing process in that case.

Example:
Consider the following set of processes with the
length of the CPU burst given in milliseconds:

PROCESS    ARRIVAL TIME    BURST TIME
P1         0               8
P2         1               4
P3         2               1
P4         3               5

If the processes arrive at the ready queue at the times shown
and need the indicated burst times, then the resulting
preemptive SJF schedule is as depicted in the following Gantt
chart:

| P1 | P2 | P3 |  P2  |   P4   |    P1    |
0    1    2    3      6        11         18

Therefore, the waiting time for each process is:

WT for P1 = 11 - 0 - (1) = 10
WT for P2 = 3 - 1 - (1) = 1
WT for P3 = 2 - 2 = 0
WT for P4 = 6 - 3 = 3

Average waiting time = (10 + 1 + 0 + 3) / 4 = 3.5 ms

The schedule can be traced through the remaining burst
times at each scheduling decision:

PROCESS    ARRIVAL TIME    BURST TIME
P1         0               8
P2         1               4
P3         2               1
P4         3               5

| P1 | P2 | P3 |  P2  |   P4   |    P1    |
0    1    2    3      6        11         18

At t = 1 (P2 arrives): P1 = 7, P2 = 4          -> run P2
At t = 2 (P3 arrives): P1 = 7, P2 = 3, P3 = 1  -> run P3
At t = 3 (P4 arrives): P1 = 7, P2 = 3, P4 = 5  -> run P2
At t = 6 (P2 done):    P1 = 7, P4 = 5          -> run P4
At t = 11 (P4 done):   P1 = 7                  -> run P1

On the other hand, the turnaround time for each
process is:

TT for P1 = 18 - 0 = 18
TT for P2 = 6 - 1 = 5
TT for P3 = 3 - 2 = 1
TT for P4 = 11 - 3 = 8

Average turnaround time = (18 + 5 + 1 + 8) / 4 = 8 ms
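A millisecond-by-millisecond simulation reproduces the SRTF schedule above. This is a sketch of ours, not code from the slides; the helper name `srtf` and the tuple encoding are assumptions:

```python
# SRTF: each millisecond, run the ready process with least remaining time.
def srtf(procs):
    """procs: {name: (arrival, burst)}. Returns completion time per process."""
    remaining = {p: b for p, (a, b) in procs.items()}
    done, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= clock]
        if not ready:                 # CPU idle until the next arrival
            clock += 1
            continue
        p = min(ready, key=lambda q: remaining[q])
        remaining[p] -= 1
        clock += 1
        if remaining[p] == 0:
            done[p] = clock
            del remaining[p]
    return done

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 1), "P4": (3, 5)}
finish = srtf(procs)
for name in sorted(finish):
    arrival, burst = procs[name]
    print(name, "turnaround:", finish[name] - arrival,
          "waiting:", finish[name] - arrival - burst)
```

The completion times come out as P3 at 3, P2 at 6, P4 at 11, and P1 at 18, matching the Gantt chart.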

Nonpreemptive SJF scheduling would result in the
following schedule:

|      P1      | P3 |   P2   |   P4   |
0              8    9        13       18

Therefore, the waiting time for each process is:

WT for P1 = 0 - 0 = 0
WT for P2 = 9 - 1 = 8
WT for P3 = 8 - 2 = 6
WT for P4 = 13 - 3 = 10

Average waiting time = (0 + 8 + 6 + 10) / 4 = 6 ms

Subsequently, the turnaround time for each process
is:

TT for P1 = 8 - 0 = 8
TT for P2 = 13 - 1 = 12
TT for P3 = 9 - 2 = 7
TT for P4 = 18 - 3 = 15

Average turnaround time = (8 + 12 + 7 + 15) / 4 = 10.5 ms

Priority (Prio) Scheduling
Algorithm
A priority is associated with each process, and the
CPU is allocated to the process with the highest
priority.
Equal-priority processes are scheduled in FCFS
order.
An SJF algorithm is simply a priority algorithm
where the priority p is the inverse of the (predicted)
next CPU burst τ:
p = 1 / τ
The larger the CPU burst, the lower the priority,
and vice versa.

Example (a smaller priority number denotes a higher
priority):

PROCESS    PRIORITY    BURST TIME
P1         3           10
P2         1           1
P3         4           2
P4         5           1
P5         2           5

Using the priority algorithm, the schedule will follow the
Gantt chart below:

| P2 |   P5   |     P1     | P3 | P4 |
0    1        6            16   18   19

Therefore, the waiting time for each process is:

WT for P1 = 6 - 0 = 6
WT for P2 = 0 - 0 = 0
WT for P3 = 16 - 0 = 16
WT for P4 = 18 - 0 = 18
WT for P5 = 1 - 0 = 1

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

The turnaround time for each process is:

TT for P1 = 16 - 0 = 16
TT for P2 = 1 - 0 = 1
TT for P3 = 18 - 0 = 18
TT for P4 = 19 - 0 = 19
TT for P5 = 6 - 0 = 6

Average turnaround time = (16 + 1 + 18 + 19 + 6) / 5 = 12 ms
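Nonpreemptive priority scheduling for simultaneous arrivals can be sketched the same way as SJF, sorting by priority instead of burst. The function name and tuple encoding are ours, and a smaller number is assumed to mean a higher priority:

```python
# Nonpreemptive priority scheduling, all arrivals at time 0.
# Convention assumed here: smaller number = higher priority.
def priority_schedule(procs):
    """procs: {name: (priority, burst)} -> (waiting, turnaround)."""
    order = sorted(procs, key=lambda p: procs[p][0])  # stable: FCFS on ties
    waiting, turnaround, clock = {}, {}, 0
    for p in order:
        waiting[p] = clock
        clock += procs[p][1]
        turnaround[p] = clock
    return waiting, turnaround

w, t = priority_schedule({"P1": (3, 10), "P2": (1, 1), "P3": (4, 2),
                          "P4": (5, 1), "P5": (2, 5)})
print(w)   # P2: 0, P5: 1, P1: 6, P3: 16, P4: 18 -> average 8.2
```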

Preemptive Priority (P-Prio)
Scheduling Algorithm
Priority scheduling can be either preemptive or
nonpreemptive.
When a process arrives at the ready queue, its
priority is compared with the priority of the process
currently executing on the CPU.
A preemptive priority scheduling algorithm will
preempt the CPU if the priority of the newly arrived
process is higher than that of the currently running
process.
A major problem with priority scheduling
algorithms, whether preemptive or non-preemptive,
is indefinite blocking, or starvation.

In a heavily loaded computer system, a steady
stream of higher-priority processes can prevent a
low-priority process from ever getting the CPU.
A solution to the problem of indefinite blocking is
aging.
Aging is the technique of gradually increasing
the priority of processes that wait in the system for
a long time.
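A toy illustration of aging; every name and number here is hypothetical (not from the slides), and the smaller-number-is-higher-priority convention is assumed:

```python
# Toy aging sketch: every tick, bump the priority of waiting processes.
def age(ready_queue, boost=1, ceiling=0):
    """Raise priority (smaller = higher) of every waiting process,
    never going past the highest priority (`ceiling`)."""
    for proc in ready_queue:
        proc["priority"] = max(ceiling, proc["priority"] - boost)

queue = [{"name": "P1", "priority": 127},   # hypothetical starved process
         {"name": "P2", "priority": 5}]
for _ in range(10):                          # ten aging ticks
    age(queue)
print(queue)   # P1 has climbed from 127 to 117; P2 is capped at 0
```

Given enough ticks, even a priority-127 process eventually reaches the top and cannot be starved forever.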

Example:
Consider the following set of processes, with the
length of the CPU burst given in milliseconds
(a smaller priority number denotes a higher priority):

PROCESS    ARRIVAL TIME    BURST TIME    PRIORITY
P1         1               5             5
P2         2               10            4
P3         3               18            3
P4         4               7             2
P5         5               3             1

Example:
Using the preemptive priority algorithm, the schedule
will result in the Gantt chart below (the CPU is idle
before the first arrival):

| idle | P1 | P2 | P3 | P4 |  P5  |   P4   |    P3    |   P2   |  P1  |
0      1    2    3    4    5      8        14         31       40     44

Therefore, the waiting time for each process is:

WT for P1 = 40 - 1 - (1) = 38
WT for P2 = 31 - 2 - (1) = 28
WT for P3 = 14 - 3 - (1) = 10
WT for P4 = 8 - 4 - (1) = 3
WT for P5 = 5 - 5 = 0

Average waiting time = (38 + 28 + 10 + 3 + 0) / 5 = 15.8 ms

The turnaround time for each process is:

TT for P1 = 44 - 1 = 43
TT for P2 = 40 - 2 = 38
TT for P3 = 31 - 3 = 28
TT for P4 = 14 - 4 = 10
TT for P5 = 8 - 5 = 3

Average turnaround time = (43 + 38 + 28 + 10 + 3) / 5 = 24.4 ms
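Changing the SRTF-style tick loop to pick by priority instead of remaining time gives a preemptive priority simulator. This is a sketch of ours: the table values are a reconstruction of the example (arrivals 1 through 5, smaller number = higher priority), and the function name is an assumption:

```python
# Preemptive priority: each millisecond, run the highest-priority ready
# process (smaller number = higher priority).
def preemptive_priority(procs):
    """procs: {name: (arrival, burst, priority)} -> completion times."""
    remaining = {p: b for p, (a, b, pr) in procs.items()}
    done, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= clock]
        if not ready:
            clock += 1                # CPU idle before the first arrival
            continue
        p = min(ready, key=lambda q: procs[q][2])
        remaining[p] -= 1
        clock += 1
        if remaining[p] == 0:
            done[p] = clock
            del remaining[p]
    return done

table = {"P1": (1, 5, 5), "P2": (2, 10, 4), "P3": (3, 18, 3),
         "P4": (4, 7, 2), "P5": (5, 3, 1)}
done = preemptive_priority(table)
print(done)   # P5 at 8, P4 at 14, P3 at 31, P2 at 40, P1 at 44
```

Each arriving process preempts the last one for a single millisecond until P5 (highest priority) arrives, exactly as in the Gantt chart.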

Round-Robin (RR)
Scheduling Algorithm
This algorithm is designed specifically for time-sharing
systems.
A small unit of time, called a time quantum or
time slice, is defined.
The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready queue,
allocating the CPU to each process for a time
interval of up to 1 time quantum.
The RR algorithm is therefore preemptive.

Example:
Consider the following set of processes that arrive
at time 0, with the length of the CPU burst given
in milliseconds:

PROCESS    BURST TIME
P1         24
P2         3
P3         3

If the system uses a time quantum of 4 ms, then the
resulting RR Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Therefore, the waiting time for each process is:

WT for P1 = 26 - 0 - (20) = 6
WT for P2 = 4 - 0 = 4
WT for P3 = 7 - 0 = 7

Average waiting time = (6 + 4 + 7) / 3 = 5.67 ms
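A queue-based sketch reproduces these completion times (the function name and dict interface are ours; quantum = 4 ms and all arrivals at time 0, as in the example):

```python
# Round-robin via a circular (FIFO) ready queue.
from collections import deque

def round_robin(bursts, quantum=4):
    """bursts: {name: burst} -> completion time per process."""
    queue = deque(bursts.items())
    done, clock = {}, 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)            # up to one quantum
        clock += run
        if left > run:
            queue.append((name, left - run))  # preempted: back to the tail
        else:
            done[name] = clock
    return done

finish = round_robin({"P1": 24, "P2": 3, "P3": 3})
print(finish)   # P2 done at 7, P3 at 10, P1 at 30
```

Waiting time follows as completion minus burst: 30 - 24 = 6 for P1, 7 - 3 = 4 for P2, 10 - 3 = 7 for P3, matching the slide.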

The performance of the RR algorithm depends
heavily on the size of the time quantum.
If the time quantum is too large, the RR policy
degenerates into the FCFS policy.
If the time quantum is too small, on the other
hand, the effect of the context-switch time
becomes a significant overhead.
As a general rule, 80% of CPU bursts should
be shorter than the time quantum.

Multilevel Queue Scheduling
Algorithm
This algorithm partitions the ready queue into
several separate queues.
Processes are permanently assigned to one
queue, generally based on some property of the
process, such as memory size or process type.
Each queue has its own scheduling algorithm,
and there must also be scheduling between the queues.
In the example below, no process in the batch queue
could run unless the queues for system
processes, interactive processes, and interactive
editing processes were all empty.

Higher priority
  1. System processes
  2. Interactive processes (foreground)
  3. Interactive editing processes
  4. Batch processes
  5. Student processes
Lower priority

Multilevel Feedback Queue
Scheduling Algorithm
This algorithm is similar to the multilevel queue
scheduling algorithm except that it allows processes
to move between queues.
The idea is to separate processes with different
CPU-burst characteristics.
If a process uses too much CPU time, it will be
moved to a lower-priority queue.
This scheme leaves I/O-bound and interactive
processes in the higher-priority queues.
Similarly, a process that waits too long in a
lower-priority queue may be moved to a higher-priority
queue.

Queue 0: RR, quantum = 8 ms
Queue 1: RR, quantum = 16 ms
Queue 2: FCFS

In this example, the scheduler will first execute all
processes in the first queue.
Only when this queue is empty will the CPU execute
processes in the second queue.
If a process in the first queue does not finish in 8 ms,
it is moved to the tail end of the second queue.
If a process in the second queue does not finish in 16
ms, it is likewise preempted and put into the third
queue.
Processes in the third queue are run on an FCFS
basis, and only when the first and second queues are
empty.

In general, the following parameters define a multilevel
feedback queue scheduler:
1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to determine when to upgrade a
process to a higher-priority queue.
4. The method used to determine when to demote a
process to a lower-priority queue.
5. The method used to determine which queue a
process will enter when that process needs service.
Although the multilevel feedback queue is the most
general scheme, it is also the most complex.
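The three-queue example can be sketched compactly. This is a deliberate simplification of ours: all processes are assumed present at time 0, there is no mid-run arrival handling and no promotion/aging, and the sample burst values are hypothetical:

```python
# MLFQ sketch for the three queues above: Q0 (RR, q=8), Q1 (RR, q=16),
# Q2 (FCFS). A process that exhausts its quantum is demoted one level;
# a lower queue runs only when every higher queue is empty.
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """bursts: {name: burst}. Returns [(name, completion_time), ...]."""
    queues = [deque(bursts.items()), deque(), deque()]
    finished, clock = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, left = queues[level].popleft()
        slice_ = left if level == 2 else min(quanta[level], left)  # Q2 = FCFS
        clock += slice_
        if left > slice_:
            queues[level + 1].append((name, left - slice_))  # demote
        else:
            finished.append((name, clock))
    return finished

print(mlfq({"P1": 30, "P2": 6, "P3": 20}))
# P2 (short burst) finishes in Q0; P3 needs Q1; P1 drains down to Q2
```

Tracing the sample: P1 and P3 each burn their 8 ms quantum in Q0 and drop to Q1, while 6 ms P2 completes immediately; P1 then also exhausts Q1's 16 ms quantum and finishes in the FCFS queue, which illustrates how the scheme keeps short bursts in the high-priority queues.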
