Processor Management

‡ Processor management is concerned with the
management of the physical processors, especially the
assignment of the processor to processes.
‡ The modules of processor management are the
job scheduler, the process scheduler, and the traffic
controller.
‡ The Job Scheduler
It creates the processes in a non-multiprogramming
environment, and it decides which process is to
receive the processor.
‡ The Process Scheduler
In the multiprogramming environment, it decides which
of the ready processes receives the processor, at what
time, and for how long.
‡ The Traffic Controller
‡ It keeps track of the status of the processes. In most
systems it is necessary to synchronize between the
processes and the jobs. The modules of the processor
management usually perform this.
‡ Fig. (1.6) illustrates the domains of job scheduling,
process scheduling, and the traffic controller. The job
scheduler can be viewed as a macrolevel scheduler,
choosing which jobs will be run, while the process
scheduler can be viewed as a microlevel scheduler,
assigning the processor to the processes associated with
scheduled jobs. The user views his job as a collection of
tasks that he wants the computer system to perform for
him. The user may divide his job into job steps (e.g.,
compile, load, execute); the system creates
processes to do the computation of the job steps.
‡ So, the job scheduling is concerned with the
management of jobs, and the process scheduling is
concerned with the management of the processes.
‡ In the non-multiprogramming system, no distinction
is made between process and job scheduling, since
only one job is allowed in the system at a time. In
this simple system, the job scheduler chooses one
job to run. Once the job is chosen, a process is
created and assigned to the processor.
‡ For the multiprogramming system, the job
scheduler chooses a small subset of the jobs
submitted and lets them into the system; it then
creates processes for these jobs
and assigns the processes some resources.
‡ The process scheduler decides which of the
processes within the subset will be assigned the
processor, at what time, and for how long.
Functions of the Job Scheduler
The job scheduler is the "super" manager, which must do
the following:
‡ Keep track of the status of all jobs. It must note which
jobs are trying to get some service (Hold State) and the
status of all jobs being served (Ready State, Running
State, or Wait State).
‡ Choose the policy by which jobs will enter the system
(i.e., go from Hold State to Ready State). This decision
must be based on such characteristics as priority,
resources required, and/or system balance.
‡ Allocate the necessary resources for the scheduled job
by use of memory, device, and processor management.
‡ Deallocate these resources when the job is terminated.
The Process Scheduler
‡ Once the job scheduler has moved a job from the Hold State to the
Ready State, it will create one or more processes for this job. At this
point, the process scheduler decides which process gets the
processor, when, and for how long.

The functions of process scheduling are the following:
‡ Keeping track of the status of the processes (all processes are either
running, ready, or waiting). The module that performs this function is
called the traffic controller.
‡ Deciding which process gets the processor, and for how long. The
process scheduler performs this.
‡ Allocating the processor to a process. This requires resetting the
processor registers to correspond to the process's correct state. The
traffic controller performs this task.
‡ Deallocating the processor, such as when the running process
exceeds its current time slice or must wait for an I/O operation. This
requires that all processor state registers be saved to allow future
reallocation. The traffic controller performs this task.
Summary

In summary, the processor management operates on two
levels:
‡ Assigning the processor to jobs (Macrolevel).
‡ Assigning the processor to processes (Microlevel).

‡ Once a job is scheduled, the system must perform the
functions of creating processes, destroying processes,
and sending messages between processes.
‡ In the multiprogramming environment, the process
scheduler may be called by all modules of the system.

Job Scheduling Policies

‡ The functions of the job scheduler may be implemented
simply. For example, in the Compatible Time-Sharing
System (CTSS), the job policy may consist of two levels:
‡ In the first level, the first 30 users to log in are admitted
to the system.
‡ In the second level, a priority algorithm allows a user
with a high priority to force a low-priority user to log out
(i.e., termination).
‡ Here we are concerned with the factors (policies) that go
into the scheduling of jobs.

‡ The job scheduler must choose, from among the jobs in the
Hold State, those that will be made "ready" to run. In a small
computing center, an external operator may perform this
function by choosing jobs arbitrarily, such as choosing his
friend's job or a short job.
‡ In a large system (e.g., OS/360), all submitted jobs
may first be stored on secondary storage. Then the
job scheduler examines all these submitted jobs and,
according to specific policies, decides which jobs will
get the system resources and hence will be run. The
key concept here is that there are more jobs wishing to run
than can be satisfied by the system resources.
Therefore, scheduling involves a policy issue, and
this policy might change from time to time,
because its goals are subjective and contradictory.
‡ One goal is concerned with running as
many jobs as possible per day (Only run
short jobs).
‡ Another goal is concerned with keeping
the processor busy (Only computation
intensive long jobs).
‡ Another goal is concerned with fairness
to all jobs (what does "fair" mean?).
The considerations that must be weighed in
determining job scheduler policies are:
‡ Availability of special limited resources:
± If the system gives preference, some users can "cheat". For example, if the
system runs jobs that require a plotter first, then some users will always request
the plotter.
± If the system doesn't give preference, some users will suffer extra delay.
‡ Cost: higher rates for faster service.
‡ System commitments: processor time and memory; the more the job wants,
the longer it waits.
‡ Guaranteed service: setting a specific time limit.
‡ System balancing: mixing I/O-intensive and CPU-intensive jobs.
‡ Completing the job at a specific time.

Once the job scheduler has selected a collection of jobs to run, the process
scheduler handles the Microlevel (process) scheduling. On the
other hand, the job and process schedulers may interact; the process
scheduler may choose to "postpone" a process and require that it go
through the Macrolevel (job) scheduling again in order to complete.
Scheduling Criteria

‡ Many criteria have been suggested for comparing CPU scheduling policies
(algorithms). Criteria that are used include the following:
‡ CPU utilization
We want to keep the CPU as busy as possible. CPU utilization may range from
0 to 100%. In real systems, it should range from 40% to 90%.
‡ Throughput
The number of jobs that are completed per time unit.
‡ Turnaround time
The interval from the time of submission of a process to the time of
completion.
‡ Waiting time
The sum of the periods spent waiting in the ready queue.
‡ Response time
The time from the submission to the first response.
Job Scheduling in the Non-Multiprogramming System

In the non-multiprogramming system, once a
process has been assigned a processor, it
does not release it until it is finished. In
this section, job scheduling using a
policy to reduce the average turnaround
time will be examined. The terms process
and job may be used interchangeably in
the non-multiprogramming environment.
Job Scheduling: First-Come-First-Served (FIFO)

‡ Assume jobs arrive as indicated in Fig. (2.1). Each
arriving job is queued First-Come-First-Served, and an
estimate is recorded of the time needed to run each job.
According to the First-In-First-Out
(FIFO) algorithm, the jobs will be run as depicted in Fig.
(2.2):

Job No.   Arrival time   Estimated run time
1         10.00          2.00 hrs
2         10.10          1.00 hrs
3         10.25          0.25 hrs

Fig. (2.1) Sample job arrival times.
‡ The average turnaround time is computed
as follows:

    T = F - A
    Average turnaround time = (T1 + T2 + ... + Tn) / n

‡ Where:
F : Finish time
A : Arrival time
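This computation can be checked with a few lines of Python. The arrival times come from Fig. (2.1); the finish times are those produced by running the jobs in FIFO order, as in Fig. (2.2):

```python
# Turnaround time per job: T = F - A (finish minus arrival).
arrival = [10.00, 10.10, 10.25]   # hours, from Fig. (2.1)
finish  = [12.00, 13.00, 13.25]   # hours, FIFO finish times

turnaround = [f - a for a, f in zip(arrival, finish)]
average_T = sum(turnaround) / len(turnaround)

print([round(t, 2) for t in turnaround])   # [2.0, 2.9, 3.0]
print(round(average_T, 2))                 # 2.63
```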
Fig. (2.2) FIFO schedule.

Job No.   Arrival time   Start time   Finish time   T      W
1         10.00          10.00        12.00         2.00   1.00
2         10.10          12.00        13.00         2.90   2.90
3         10.25          13.00        13.25         3.00   12.00

Average turnaround time, T = 2.63 hrs
Average weighted turnaround time, W = 5.30 hrs
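A minimal sketch of this FIFO scheduler for a non-multiprogramming machine (assuming jobs are listed in arrival order and each runs to completion once started):

```python
def fifo_schedule(jobs):
    """jobs: list of (arrival, run_time) in arrival order; returns finish times."""
    clock = 0.0
    finish = []
    for arrival, run in jobs:
        clock = max(clock, arrival)   # the CPU may sit idle until the job arrives
        clock += run                  # the job then runs to completion
        finish.append(clock)
    return finish

jobs = [(10.00, 2.00), (10.10, 1.00), (10.25, 0.25)]   # Fig. (2.1)
finish = fifo_schedule(jobs)
T = [f - a for (a, _), f in zip(jobs, finish)]
print(finish)                       # [12.0, 13.0, 13.25]
print(round(sum(T) / len(T), 2))    # 2.63
```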
Job Scheduling: Shortest Job First

Fig. (2.3) illustrates a job-scheduling
algorithm that runs the "Hold" job with the
shortest estimated run time first. According to this
algorithm, when job 1 arrives, it will be run.
While job 1 is running, jobs 2 and 3 arrive.
After job 1 is finished, job 3 will be run,
because it has a shorter estimated run time
than job 2.
Fig. (2.3) Shortest-job-first schedule.

Job No.   Arrival time   Start time   Finish time   T      W
1         10.00          10.00        12.00         2.00   1.00
2         10.10          12.25        13.25         3.15   3.15
3         10.25          12.00        12.25         2.00   8.00

Average turnaround time, T = 2.38 hrs
Average weighted turnaround time, W = 4.05 hrs
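The schedule of Fig. (2.3) can be sketched the same way; `sjf_schedule` below is an illustrative helper (non-preemptive: whenever the CPU frees up, it picks the waiting job with the smallest estimated run time):

```python
def sjf_schedule(jobs):
    """jobs: list of (arrival, est_run_time); returns finish time per job index."""
    n = len(jobs)
    finish = [None] * n
    clock = 0.0
    while any(f is None for f in finish):
        held = [i for i in range(n) if finish[i] is None]
        ready = [i for i in held if jobs[i][0] <= clock]
        if not ready:                              # CPU idle: jump to next arrival
            clock = min(jobs[i][0] for i in held)
            continue
        i = min(ready, key=lambda j: jobs[j][1])   # shortest estimated run time
        clock += jobs[i][1]
        finish[i] = clock
    return finish

jobs = [(10.00, 2.00), (10.10, 1.00), (10.25, 0.25)]   # Fig. (2.1)
finish = sjf_schedule(jobs)
avg_W = sum((f - a) / r for (a, r), f in zip(jobs, finish)) / 3
print(finish)            # [12.0, 13.25, 12.25]
print(round(avg_W, 2))   # 4.05
```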
‡ This algorithm reduces the average
turnaround time, but it is not fairer than
the FIFO algorithm, especially for job 2.
Job Scheduling Using Future Knowledge

In this algorithm, the turnaround time is
reduced by using future knowledge.
According to this algorithm, if at 10 AM the
system knew that two shorter jobs were about
to arrive, it would not run job 1 but would
wait for them. Fig. (2.4) depicts
the result of this algorithm.
Fig. (2.4) Schedule with future knowledge.

Job No.   Arrival time   Start time   Finish time   T      W
1         10.00          11.50        13.50         3.50   1.75
2         10.10          10.50        11.50         1.40   1.40
3         10.25          10.25        10.50         0.25   1.00

Average turnaround time, T = 1.72 hrs
Average weighted turnaround time, W = 1.38 hrs
‡ By this algorithm, the average turnaround
time is reduced, but the CPU was idle 0.25
hrs, which probably made job 1 unhappy.
Actually, many computation centers would
prefer to leave the CPU idle during the
busy afternoon rather than start up a low-
priority job.
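The future-knowledge schedule of Fig. (2.4) is easy to replay directly: the CPU idles until 10.25 and then runs the jobs shortest-first (job 3, then job 2, then job 1):

```python
jobs = {1: (10.00, 2.00), 2: (10.10, 1.00), 3: (10.25, 0.25)}

clock = 10.25                     # idle 0.25 hrs waiting for the short jobs
T, W = {}, {}
for j in (3, 2, 1):               # shortest run time first
    arrival, run = jobs[j]
    clock += run
    T[j] = clock - arrival        # turnaround time
    W[j] = T[j] / run             # weighted turnaround time

print(round(sum(T.values()) / 3, 2))   # average turnaround      -> 1.72
print(round(sum(W.values()) / 3, 2))   # average weighted W      -> 1.38
```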
Why the Shortest-Job-First Policy Is Not Always Used

‡ When a computation center uses this
algorithm, the long jobs might never be run. This
would eventually cause the center to close down.
‡ Future knowledge is rare.
‡ Run time is usually only estimated approximately.
‡ Other resources must be considered, such as
memory requirements, I/O devices, etc.
Weighted Turnaround Time

In the above examples, the turnaround time is used to measure the scheduling
performance. In order to normalize the scheduling performance, another
parameter must be defined, called the weighted turnaround time (W), where

    W = T / R

T: the turnaround time.
R: the actual run time.
In the above examples (2.2, 2.3, and 2.4) the average weighted turnaround
times are as follows. For example (2.2):
    W1 = 2.00 / 2.00 = 1.00
    W2 = 2.90 / 1.00 = 2.90
    W3 = 3.00 / 0.25 = 12.00
Average weighted turnaround time = (1.00 + 2.90 + 12.00) / 3 = 5.30
For examples (2.3) and (2.4) the averages are 4.05 and 1.38 respectively.
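The same arithmetic in Python, for the FIFO schedule of example (2.2):

```python
# Weighted turnaround W = T / R; a W of 12 means the job spent 12 times
# its own run time inside the system.
run        = [2.00, 1.00, 0.25]     # R, actual run times (Fig. (2.1))
turnaround = [2.00, 2.90, 3.00]     # T, from the FIFO schedule of Fig. (2.2)

W = [t / r for t, r in zip(turnaround, run)]
print(W)                            # [1.0, 2.9, 12.0]
print(round(sum(W) / len(W), 2))    # 5.3
```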

Accordingly, we find that the scheduling policies that improved the
turnaround time also improved the weighted turnaround time. This is not
always true, as we will see later.

Job Scheduling in the Multiprogramming System

The function of the job scheduler in the
multiprogramming system is to select which
jobs to run.

Job Scheduling with CPU Time Only (No I/O)

When two jobs are in memory and are being
multiprogrammed, but perform no I/O (i.e., all jobs use
the CPU only), the CPU spends a time slice with each
one.
CPU Headway:
It is the amount of CPU time spent on a job. If two
jobs are being multiprogrammed, each job's CPU
headway will be equal to half of the clock time
elapsed. In this case, multiprogramming hurts performance
as measured by average turnaround time.
To explain this, assume jobs arrive as indicated in
Fig. (2.5).
Fig. (2.5) Sample job arrival times.

Job No.   Arrival time   Run time
1         10.00          0.3 hrs
2         10.20          0.5 hrs
3         10.40          0.1 hrs
4         10.50          0.4 hrs
5         10.80          0.1 hrs


Fig. (2.6) FIFO schedule without multiprogramming.

Job No.   Arrival time   Start time   Finish time   T      W
1         10.0           10.0         10.3          0.3    1.00
2         10.2           10.3         10.8          0.6    1.20
3         10.4           10.8         10.9          0.5    5.00
4         10.5           10.9         11.3          0.8    2.00
5         10.8           11.3         11.4          0.6    6.00
                                      Total         2.8    15.20

Average turnaround time, T = 0.56 hrs
Average weighted turnaround time, W = 3.04 hrs
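These totals can be verified with a short FIFO sketch over the five jobs of Fig. (2.5):

```python
# FIFO without multiprogramming: each job runs to completion in turn.
jobs = [(10.0, 0.3), (10.2, 0.5), (10.4, 0.1), (10.5, 0.4), (10.8, 0.1)]

clock, T, W = 0.0, [], []
for arrival, run in jobs:
    clock = max(clock, arrival) + run   # wait for arrival, then run to the end
    T.append(clock - arrival)           # turnaround time
    W.append((clock - arrival) / run)   # weighted turnaround time

print(round(sum(T) / 5, 2))   # average turnaround        -> 0.56
print(round(sum(W) / 5, 2))   # average weighted W        -> 3.04
```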
FIFO with Multiprogramming

In multiprogramming, assume that a job may
be started as soon as it arrives. Job 1 arrived at 10 AM and
needed to run for 0.3 hours. After 0.2 hours, job
2 arrived, so the processor was time-sliced
between them during the interval 10.2
through 10.4. Thus, even though job 1 had only 0.1
hours of execution left, the processor was
servicing two jobs, so it took 0.2 hours to
complete job 1. The other times are verified in
Fig. (2.7a); Fig. (2.7b) shows the results of the
FIFO algorithm with multiprogramming.
Fig. (2.7a) FIFO with multiprogramming (timing diagram).

Fig. (2.7b) FIFO schedule with multiprogramming.

Job No.   Arrival time   Finish time   T      W
1         10.0           10.4          0.40   1.33
2         10.2           11.35         1.15   2.30
3         10.4           10.65         0.25   2.50
4         10.5           11.4          0.90   2.25
5         10.8           11.1          0.30   3.00
                         Total         3.00   11.38

Average turnaround time, T = 0.6 hrs
Average weighted turnaround time, W = 2.276 hrs
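The finish times of Fig. (2.7b) can be reproduced with an idealized "processor sharing" sketch: every job in memory advances at rate 1/n, where n is the number of unfinished jobs present. This idealization (an assumption added here) ignores slice granularity and switching overhead:

```python
def shared_schedule(jobs):
    """jobs: list of (arrival, run_time); returns finish time per job index."""
    n = len(jobs)
    remaining = [r for _, r in jobs]
    finish = [None] * n
    clock = min(a for a, _ in jobs)
    while any(f is None for f in finish):
        active = [i for i in range(n)
                  if jobs[i][0] <= clock + 1e-9 and finish[i] is None]
        future = [jobs[i][0] for i in range(n) if jobs[i][0] > clock + 1e-9]
        if not active:                 # nothing in memory: jump to next arrival
            clock = min(future)
            continue
        # run until the next arrival or the first completion, whichever is sooner
        step = min(remaining[i] for i in active) * len(active)
        if future:
            step = min(step, min(future) - clock)
        for i in active:
            remaining[i] -= step / len(active)
        clock += step
        for i in active:
            if remaining[i] <= 1e-9:
                finish[i] = clock
    return finish

jobs = [(10.0, 0.3), (10.2, 0.5), (10.4, 0.1), (10.5, 0.4), (10.8, 0.1)]
finish = shared_schedule(jobs)
print([round(f, 2) for f in finish])   # [10.4, 11.35, 10.65, 11.4, 11.1]
print(round(sum(f - a for (a, _), f in zip(jobs, finish)) / 5, 2))   # 0.6
```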
‡ As shown in Fig. (2.6) and Fig. (2.7b), the average
turnaround time is 0.56 hrs without
multiprogramming and 0.6 hrs with it. Is
multiprogramming always bad? No.
‡ To answer this question, let us study the
following example.
‡ Fig. (2.8a) and (2.8b) show a schedule with
multiprogramming, while Fig. (2.9a) and (2.9b)
show the same jobs scheduled without multiprogramming.
Fig. (2.8a) Schedule with multiprogramming (timing diagram).

Fig. (2.8b) Schedule with multiprogramming (all jobs arrive at 10.0).

Job No.   Run time   Start time   Finish time   T      W
1         3.0        10.0         14.0          4.0    1.3
2         0.5        10.0         11.5          1.5    3.0
3         0.25       10.0         11.0          1.0    4.0
4         0.25       10.0         11.0          1.0    4.0
                                  Total         7.5    12.3

Average turnaround time, T = 1.88 hrs
Average weighted turnaround time, W = 3.1 hrs
Fig. (2.9a) Schedule without multiprogramming (timing diagram).

Fig. (2.9b) Schedule without multiprogramming (all jobs arrive at 10.0).

Job No.   Run time   Start time   Finish time   T      W
1         3.0        10.0         13.0          3.0    1.0
2         0.5        13.0         13.5          3.5    7.0
3         0.25       13.5         13.75         3.75   15.0
4         0.25       13.75        14.0          4.0    16.0
                                  Total         14.25  39.0

Average turnaround time, T = 3.56 hrs
Average weighted turnaround time, W = 9.75 hrs
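Because all four jobs arrive together at 10.0, both schedules can be reproduced without a full simulator. Under idealized time-slicing, the k-th shortest job (run times sorted, 1-indexed) finishes at F_k = F_(k-1) + (n - k + 1) * (R_k - R_(k-1)), while the FIFO machine simply runs the jobs back to back:

```python
runs = [3.0, 0.5, 0.25, 0.25]            # run times in submission order

# time-sliced (multiprogrammed) finish times, shortest job first
t, prev, sliced = 0.0, 0.0, []
for k, r in enumerate(sorted(runs)):
    t += (len(runs) - k) * (r - prev)    # remaining jobs share the CPU equally
    sliced.append(10.0 + t)
    prev = r
print(sliced)                            # [11.0, 11.0, 11.5, 14.0]

# FIFO finish times in the submission order 1, 2, 3, 4
t, fifo = 0.0, []
for r in runs:
    t += r
    fifo.append(10.0 + t)
print(fifo)                              # [13.0, 13.5, 13.75, 14.0]

avg_T_sliced = sum(f - 10.0 for f in sliced) / 4
avg_T_fifo = sum(f - 10.0 for f in fifo) / 4
print(round(avg_T_sliced, 2), round(avg_T_fifo, 2))   # 1.88 3.56
```

For this workload, multiprogramming clearly wins on average turnaround time.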
" !  $  
7

,

The CPU can be used for other computations
while the I/O is being handled by the I/O processor
(channel). In the non-multiprogramming
environment, 25% of the CPU time would be
wasted waiting for I/O operations. However, in the
multiprogramming environment the processor
can be assigned to another computation during
this wait time. At times it is possible for all
jobs to be waiting for I/O at the same time, so the
CPU may still be idle part of the time.
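A rough, classical estimate of this effect (an assumption added here, not a figure from the text): if each job waits for I/O a fraction p of its time, and the n jobs in memory wait independently, the CPU is idle only when all n wait at once, so utilization is about 1 - p**n:

```python
# CPU utilization under the independent-I/O-wait approximation.
p = 0.25                        # the 25% I/O wait fraction mentioned above
for n in range(1, 5):
    print(n, round(1 - p ** n, 4))
# 1 0.75
# 2 0.9375
# 3 0.9844
# 4 0.9961
```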
"" !  $  9
>7

,

In addition to CPU time, some jobs may
need a certain amount of memory. An analysis
of the jobs using FIFO scheduling is
depicted in Fig. (2.11). Assume that the
available memory equals 100K and there is
multiprogramming with no I/O overlap. Note that
although job 3 arrived at 10.4, it could not
start running till 11.1, because it needed 50K
of core, which was not available till 11.1.
Fig. (2.10) Sample jobs with memory requirements.

Job No.   Arrival time   Run time   Memory needed
1         10.00          0.3        10K
2         10.20          0.5        60K
3         10.40          0.1        50K
4         10.50          0.4        30K
5         10.80          0.1        70K
Fig. (2.11a) FIFO schedule with memory requirements (timing diagram).

Fig. (2.11b) FIFO schedule with memory requirements.

Job No.   Run time   Arrival time   Finish time   T      W
1         0.3        10.0           10.4          0.4    1.33
2         0.5        10.2           11.1          0.9    1.80
3         0.1        10.4           11.3          0.9    9.00
4         0.4        10.5           11.3          0.8    2.00
5         0.1        10.8           11.4          0.6    6.00
                                    Total         3.6    20.13

Average turnaround time, T = 0.72 hrs
Average weighted turnaround time, W = 4.02 hrs

As shown in Fig. (2.11a) and (2.11b), due to the
memory restriction, the average weighted
turnaround time is increased.
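The schedule of Fig. (2.11b) can be reproduced by adding a memory admission check to the time-slicing sketch. This is an illustrative model, not the book's algorithm: jobs are reconsidered in arrival order whenever memory frees up, so a later small job may be admitted past a blocked larger one:

```python
def schedule_with_memory(jobs, total_mem=100):
    """jobs: list of (arrival, run_time, memory_K); returns finish times."""
    n = len(jobs)
    remaining = [r for _, r, _ in jobs]
    finish = [None] * n
    in_mem = []                                   # admitted, unfinished jobs
    clock = min(a for a, _, _ in jobs)
    while any(f is None for f in finish):
        free = total_mem - sum(jobs[i][2] for i in in_mem)
        for i in range(n):                        # admit arrived jobs that fit
            if (finish[i] is None and i not in in_mem
                    and jobs[i][0] <= clock + 1e-9 and jobs[i][2] <= free):
                in_mem.append(i)
                free -= jobs[i][2]
        future = [jobs[i][0] for i in range(n) if jobs[i][0] > clock + 1e-9]
        if not in_mem:                            # memory empty: jump ahead
            clock = min(future)
            continue
        # advance to the next arrival or the first completion in memory
        step = min(remaining[i] for i in in_mem) * len(in_mem)
        if future:
            step = min(step, min(future) - clock)
        for i in in_mem:
            remaining[i] -= step / len(in_mem)
        clock += step
        for i in list(in_mem):
            if remaining[i] <= 1e-9:
                finish[i] = clock
                in_mem.remove(i)
    return finish

jobs = [(10.0, 0.3, 10), (10.2, 0.5, 60), (10.4, 0.1, 50),
        (10.5, 0.4, 30), (10.8, 0.1, 70)]         # Fig. (2.10)
print([round(f, 2) for f in schedule_with_memory(jobs)])
# -> [10.4, 11.1, 11.3, 11.3, 11.4]
```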
"& !  $  9
|7

,
We now consider that the jobs also require a
number of tape drives, in addition to memory
and CPU time. The effect of this
requirement on the turnaround time can be
studied with the following example. The job
requirements are indicated in Fig. (2.12),
and the resulting schedule in Fig. (2.13a) and
(2.13b), assuming that the system has 100K of
memory and five tape drives.
Fig. (2.12) Sample jobs with memory and tape requirements.

Job No.   Arrival time   Run time   Memory needed   Tapes needed
1         10.00          0.3        10K             1
2         10.20          0.5        60K             3
3         10.40          0.1        50K             3
4         10.50          0.4        30K             1
5         10.80          0.1        70K             4


Fig. (2.13a) Schedule with memory and tape requirements (timing diagram).

Fig. (2.13b) Schedule with memory and tape requirements.

Job No.   Run time   Arrival time   Finish time   T      W
1         0.3        10.0           10.4          0.4    1.33
2         0.5        10.2           11.1          0.9    1.80
3         0.1        10.4           11.3          0.9    9.00
4         0.4        10.5           11.3          0.8    2.00
5         0.1        10.8           11.4          0.6    6.00
                                    Total         3.6    20.13

Average turnaround time, T = 0.72 hrs
Average weighted turnaround time, W = 4.02 hrs
