
CHAPTER 1

INTRODUCTION
1.1 TASK MANAGEMENT
The basic terms concerning tasks and their attributes are most easily explained by an
example. A real-time system is a system that interacts with its environment by
performing pre-defined actions on events within a certain time. The action for a specific
event is typically defined in a task, and the time within which it must complete forms the
deadline of the task. A real-time task can be classified as periodic or aperiodic depending
on its arrival pattern, and as soft or hard based on its deadline. Tasks with regular arrival
times are called periodic; tasks with irregular arrival times are aperiodic. Each hard task
must complete execution before some fixed time has elapsed since its request, i.e., it must
finish before its deadline. Soft tasks do not have any demands in time, which means that
soft tasks do not have deadlines. Other attributes associated with a real-time task that are
usually mentioned in scheduling and task management contexts include:

• Worst Case Execution Time (WCET) – the maximum time the processor needs to
execute the task without interruption.
• Release time – the time at which a task becomes ready for execution.
• Sporadic task – an aperiodic task with a known minimum inter-arrival time (MINT),
i.e., the minimum time between two activations.
• Precedence constraints – constraints on the order of task execution; some tasks
must execute in a defined order.
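The attributes above can be collected into a small data type. The following is a minimal sketch in Java; the class and field names are illustrative and do not come from any particular RTOS API:

```java
// Hypothetical sketch of the task attributes listed above.
final class RtTask {
    final String name;
    final long wcet;        // Worst Case Execution Time
    final long releaseTime; // time at which the task becomes ready
    final long deadline;    // absolute deadline; 0 models a soft task with no deadline
    final long mint;        // minimum inter-arrival time (sporadic tasks)

    RtTask(String name, long wcet, long releaseTime, long deadline, long mint) {
        this.name = name;
        this.wcet = wcet;
        this.releaseTime = releaseTime;
        this.deadline = deadline;
        this.mint = mint;
    }

    boolean isHard() { return deadline > 0; }

    // A hard task can only meet its deadline if its WCET fits between its
    // release time and its deadline, even with exclusive use of the processor.
    boolean feasibleInIsolation() {
        return !isHard() || releaseTime + wcet <= deadline;
    }
}
```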

In almost all real-time operating systems, the information about a task is stored in
a data structure called the task control block (TCB). The TCB typically contains the task
state, the desired or required set of attributes from those defined above, a pointer to the
procedure that represents the task, and a stack pointer. If the scheduling algorithm is
preemptive, the TCB must contain everything that is needed to save and reload the state
of the task (registers, etc.). The rest of the content of the TCB varies from one RTOS to
another; many of the special features of an RTOS will affect the TCB.

In almost all operating systems, a task can at any point in time be in one of the
following states: Running, Ready, or Waiting. The states may have different names in
different operating systems, but the semantic meaning is always the same. Additional
states exist in most operating systems, but these three are the most important and basic
states. Only one task per processor can be in the Running state at any instant in time; it is
this task that currently uses the processor. A task that has everything it needs to execute,
but is waiting for the processor, is said to be in the Ready state. Finally, a task that lacks
something, e.g., a shared resource, an external event, or its release time, is said to be in
the Waiting state.
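The three basic states and their usual transitions can be sketched as follows. The transition rules are the textbook ones, not taken from any specific operating system:

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of the three basic task states and the usual legal transitions
// between them (state names vary between operating systems).
enum TaskState {
    READY, RUNNING, WAITING;

    Set<TaskState> legalNext() {
        switch (this) {
            case READY:   return EnumSet.of(RUNNING);        // dispatched
            case RUNNING: return EnumSet.of(READY, WAITING); // preempted or blocked
            case WAITING: return EnumSet.of(READY);          // resource/event arrives
            default:      return EnumSet.noneOf(TaskState.class);
        }
    }
}
```

Note that a Waiting task never goes directly to Running: when its resource or event arrives it first becomes Ready, and the scheduler then decides when it gets the processor.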

CHAPTER 2
SCHEDULING ALGORITHM

2.1 INTRODUCTION

Scheduling real-time systems is all about guaranteeing the temporal constraints
(deadlines, release times, and so on). Two main approaches to real-time scheduling exist.
On one side we have off-line scheduling, where all scheduling decisions are calculated by
the system designer before runtime and stored in a runtime dispatch table. The other
approach is on-line scheduling, where all scheduling decisions are calculated by the
scheduling algorithm at run-time. Throughout this text, both the terms on-line and
off-line scheduling and the terms runtime and pre-runtime scheduling are used.

Which algorithm is best suited depends on the scheduling problem to solve.
For instance, algorithms based on the off-line scheduling approach are more deterministic,
and it is easy to prove and show that a task will meet its deadline, since these methods in
some sense apply a "proof-by-construction" approach. Off-line scheduling methods
can also solve tough scheduling problems with high CPU utilization and complicated
precedence constraints. An off-line scheduler can spend a long time finding a suitable
schedule, since the system is not yet up and running and no deadlines will be missed during
the search; at run-time, the only scheduling mechanism needed is a simple dispatcher
that performs a table lookup. On the other hand, we need to know almost everything
about the system's timing constraints to be able to create a suitable schedule before run-
time. Algorithms based on the on-line scheduling approach in general offer higher
flexibility and are better at adapting to changes in the environment, at a higher cost for
calculating scheduling decisions on-line, and they cannot offer the same degree of
provable determinism. The higher flexibility of the on-line approach is why most existing
algorithms that schedule aperiodic tasks use on-line scheduling.

Most off-line scheduling algorithms that have been implemented are based on
some kind of search technique with applied heuristics. Examples of such scheduling
algorithms are A* and IDA*; other examples are based on branch-and-bound. Many
practitioners working on safety-critical hard real-time systems that use the off-line
scheduling approach have been observed creating schedules by hand instead of letting a
computer search for a schedule; the resulting hand-made schedule is often hard to verify
and maintain. Examples of on-line scheduling algorithms that handle periodic tasks are
Earliest Deadline First and Rate Monotonic, both introduced in the early seventies. Most
on-line scheduling algorithms use one of these two as a base algorithm. Examples of
on-line algorithms that handle both periodic and aperiodic tasks are the Sporadic Server,
Robust Earliest Deadline, and the Total Bandwidth Server.
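For independent periodic tasks with deadlines equal to their periods on one processor, the classic Liu and Layland results give simple utilization-based checks for these two base algorithms: EDF succeeds exactly when the total utilization U ≤ 1, while U ≤ n(2^(1/n) − 1) is a sufficient (but not necessary) condition for Rate Monotonic. A small Java sketch (the class and method names are made up):

```java
// Utilization-based schedulability checks for n periodic tasks
// (Liu and Layland): exact for EDF, sufficient-only for RM.
final class Schedulability {
    static double utilization(double[] wcet, double[] period) {
        double u = 0;
        for (int i = 0; i < wcet.length; i++) u += wcet[i] / period[i];
        return u;
    }

    static boolean edfSchedulable(double[] wcet, double[] period) {
        return utilization(wcet, period) <= 1.0;
    }

    static boolean rmGuaranteed(double[] wcet, double[] period) {
        int n = wcet.length;
        return utilization(wcet, period) <= n * (Math.pow(2.0, 1.0 / n) - 1.0);
    }
}
```

A task set can thus pass the EDF test while failing the RM bound; in that case RM may still work, but the bound alone cannot guarantee it.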

2.2 PRIORITY INHERITANCE PROTOCOL


The priority inheritance protocol, proposed by Sha, Rajkumar and Lehoczky in
[LEH90], minimizes the blocking time of a high-priority task by increasing the priority of
the low-priority task when the high-priority task becomes blocked. A task can only hold
semaphores during its execution, i.e., when a task has finished its execution it is not
allowed to hold any semaphores.
Definition of the priority inheritance protocol:
1. Task A executes and tries to obtain semaphore S. If semaphore S is locked, task A is
blocked because it cannot lock the semaphore; otherwise task A locks semaphore S. When
A unlocks S, the task with the highest priority among those blocked by A becomes ready.
2. Task A uses its assigned priority during execution unless it has locked semaphore S
and blocks higher-priority tasks. If task A blocks higher-priority tasks, it executes
with the highest priority of the tasks blocked by A (A inherits the highest priority).
When A unlocks semaphore S, A returns to its original priority.

3. Priority inheritance is transitive. Assume three tasks A, B and C in descending priority
order. If task C blocks B and B blocks A, task C will receive task A’s priority.
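The inheritance rule, including its transitivity (rule 3), can be sketched as a toy model. All names here are hypothetical, a higher number means a higher priority, and blocking chains are assumed to be cycle-free (i.e., deadlock-free):

```java
import java.util.List;

// Minimal sketch of priority inheritance: while a task blocks others,
// its effective priority is the maximum of its own base priority and the
// base priorities of every task it (directly or transitively) blocks.
final class PipTask {
    final String name;
    final int basePriority;
    PipTask blockedOn; // task currently holding the semaphore we want, or null

    PipTask(String name, int basePriority) {
        this.name = name;
        this.basePriority = basePriority;
    }

    // Effective priority of 'holder': walk every task's blocking chain;
    // if the chain reaches 'holder', the holder inherits that priority.
    static int effectivePriority(PipTask holder, List<PipTask> all) {
        int p = holder.basePriority;
        for (PipTask t : all) {
            for (PipTask h = t.blockedOn; h != null; h = h.blockedOn) {
                if (h == holder) { p = Math.max(p, t.basePriority); break; }
            }
        }
        return p;
    }
}
```

With A (priority 3) blocked on B (priority 2) and B blocked on C (priority 1), C's effective priority becomes 3, exactly the transitive case of rule 3.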

Scheduling of a Real-Time Process

The figure below shows the scheduling of a real-time process, i.e., the scheduling
time on each processor from processor 1 to N.

FIGURE 2.1: Scheduling of a Real-Time Process

FIGURE 2.2: Execution Profile for Two Tasks

TABLE 2.1: Execution Profile for Five Tasks

FIGURE 2.3: Output for Five Tasks on Two Processors

TABLE 2.2: Output for Five Processors

2.3 UNBOUNDED PRIORITY INVERSION

The duration of a priority inversion depends on unpredictable actions of other,
unrelated tasks.

FIGURE -2.4
2.4 PRIORITY INHERITANCE
A lower-priority task inherits the priority of any higher-priority task pending
on a resource they share.

FIGURE -2.5

FIGURE 2.8: Typical flow for solving energy-aware scheduling for dependent tasks

CHAPTER 3
PROBLEM FORMULATION

The objective is to find a static schedule for the tasks in the task precedence graph on
the heterogeneous processors at particular voltage levels such that the total energy
consumption is minimized, while the task precedence constraints are observed and all the
tasks meet their deadline requirements. We shall describe the power, system, and task
models in this section.

3.1 SYSTEM MODEL


Our system consists of a set of Np heterogeneous processors, {PE1, PE2, . . . ,
PENp}, connected to a single bus. Each processor is equipped with DVS functionality.
The available discrete voltage levels of PEj are given by V(j, k), k = 1, 2, . . . , N(j), where
N(j) denotes the total number of discrete voltage levels of PEj. Without loss of generality,
we let N(1) = N(2) = . . . = N(Np) = Nv in this paper for simplicity. The power consumption
and processor frequency of PEj at voltage level V(j, k) are given by P(j, k) and f(j, k),
respectively. The power consumption of the bus is denoted by Pb. We assume that
negligible power is consumed by the processors and the bus when they are idle.

3.2 TASK MODEL


We consider a set of Nt dependent tasks {T1, T2, . . . , TNt} that are related by
some precedence constraints as given in the task precedence graph. The amount of time
required to execute a task might vary across processors and voltage levels.
Suppose Ti is executed on PEj at the voltage level V(j, k); the worst-case execution time
needed to execute Ti in this case is given by t(i, j, k), while its energy consumption is
given by e(i, j, k). In addition, for a task Ti and its predecessor Tp, if they are executed on
different processors, a communication time of C(p, i) is incurred. Let d be the deadline
(the latest possible time) by which all the Nt tasks in the task precedence graph must be
completed. In our model, when a task is assigned to a processor, we force it to run to
completion on the same processor without task migration. In addition, we consider two
scenarios depending on whether intratask voltage scaling is used.
In the first scenario, we do not allow intratask voltage scaling and the task has to
run at the same voltage level until completion. In the second scenario, we allow the task
to run at more than one voltage level during its execution on the same processor. The
total energy consumption of the Nt dependent tasks is then given by

E = Σi Σj Σk x(i, j, k) · e(i, j, k) + Pb · tc

where tc denotes the total duration of time for which the bus is used to transfer data. For
the scenario without intratask voltage scaling, we define x(i, j, k) as follows:

x(i, j, k) = 1 if Ti is scheduled on PEj at the voltage level V(j, k); 0 otherwise.

On the other hand, when intratask voltage scaling is used, x(i, j, k) will denote the
fraction of Ti that is scheduled on PEj at the voltage level V(j, k).
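Per the definitions above, the total energy of a schedule is the assignment-weighted sum of the task energies e(i, j, k), plus the bus power Pb multiplied by the bus usage time tc. A minimal Java sketch, where the array shapes and names are illustrative (x[i][j][k] and e[i][j][k] are indexed by task, processor, and voltage level):

```java
// Sketch of the total-energy expression: sum of x(i,j,k) * e(i,j,k)
// over all tasks, processors, and voltage levels, plus bus energy Pb * tc.
final class EnergyModel {
    static double totalEnergy(double[][][] x, double[][][] e, double pb, double tc) {
        double total = pb * tc; // bus energy
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x[i].length; j++)
                for (int k = 0; k < x[i][j].length; k++)
                    total += x[i][j][k] * e[i][j][k]; // task energy
        return total;
    }
}
```

The same expression covers both scenarios: without intratask voltage scaling x(i, j, k) is 0 or 1, and with it x(i, j, k) is the fraction of Ti run at that level.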

3.3 ENERGY-AWARE HETEROGENEOUS EMBEDDED MULTIPROCESSOR
SCHEDULING ALGORITHMS

In this section, we describe our energy-aware heterogeneous embedded
multiprocessor scheduling algorithms. The first algorithm, the Energy Gradient-based
Multiprocessor Scheduling algorithm (EGMS), is designed to schedule task precedence
graphs under deadline constraints. The second algorithm, the Energy Gradient-based
Multiprocessor Scheduling with Intratask Voltage scaling algorithm (EGMSIV), extends
EGMS and utilizes an LP method for intratask voltage scaling. We first define the terms
that are used in this section:

• Makespan: the period of time required to completely process all the tasks in a schedule.
• Slack: the time interval between the time when the execution of a task at a processor is
completed and the deadline requirement of that task.

3.3.1 DESIGN OF EGMS
In energy-aware scheduling of task precedence graphs on heterogeneous
embedded multiprocessor platforms, there are three main factors that affect the quality of
the solution obtained: the mapping of tasks onto processors (TM), the ordering of the
tasks, and the selection of the voltage levels of the processors (together, task scheduling
and voltage selection, TSVS). However, the mapping of tasks to processors is usually
considered separately. During TM optimization, the TSVS algorithm that is used has to
be invoked repeatedly to obtain the best feasible and energy-efficient schedule for every
TM that is generated in the process, regardless of the quality of the TM. In addition, only
the final schedule that is generated by the TSVS for each TM is considered during this
optimization process. We feel that the TSVS process itself may also be useful in guiding
the TM optimization process toward a more energy-efficient schedule at a faster rate.
This is one of the main factors that we consider in the design of our algorithms.
Our EGMS algorithm takes into consideration TM, task ordering, and voltage
scaling in an integrated manner. A schedule is first generated based on an initial TM. In
each optimization step, we try to remap a task to a new processor and/or voltage level
such that we reduce the total energy consumption of the schedule as much as possible
while decreasing the slack/increasing the makespan as little as possible. In this way, we
are optimizing the TM as well as the TSVS at the same time based on the current
partially optimized schedule. In doing so, we hope to arrive at an optimized energy-
efficient schedule in a shorter time.

Before we describe the EGMS algorithm in detail, we define some notations that
we will be using in Fig. The pseudocode of our EGMS algorithm is presented in
Algorithm 1. We first generate an initial schedule by assigning tasks to the processors
that can complete their execution in the shortest amount of time at the highest voltage
level (lines 2-5). We then schedule the tasks using our Critical Path-based Task Ordering
algorithm (line 6). In each iteration of the while loop (lines 8-31), we first select, using
the SELECTREMAPTASK() algorithm (line 9), a task to be remapped to a new
processor and/or a new voltage level such that the total energy consumption is reduced
and no deadlines are violated.
If such a task can be found, the current schedule is updated (lines 10-14). This
process continues until no tasks can be remapped without violating the deadlines. When
this happens, the energy consumption of the schedule cannot be reduced further. If the
current schedule is feasible and has lower energy consumption than the best schedule
obtained so far, we update the best schedule with the current schedule (lines 18-26).

However, this best schedule may not be the globally optimal schedule. In order to
obtain a better solution, we randomly reassign 50 percent of the tasks in the current
schedule to other processors at the highest voltage levels and generate a new initial
schedule (lines 28-29). The whole process of task remapping is then repeated, starting
from the new initial mapping, until there is no significant improvement in the energy
consumption (> 1 percent) over n successive schedules. Here, n is a user-defined
parameter that determines the terminating condition of our algorithm. It shall be noted
that by reassigning the tasks and applying the algorithm repeatedly, we try to lower the
total energy consumption further at the expense of an increase in optimization time.

TABLE-3.1

Algorithm 1: EGMS()
1: ebest ← ∞
2: for all Ti do /* Assign tasks to fastest processors */
3:   Mp(i) ← fastest-executing processor for Ti
4:   Mv(i) ← maximum voltage level
5: end for
6: CPTO(Mp, Mv, e, ms) /* Schedule tasks based on initial processor and voltage mapping */
7: numIter ← 0
8: while numIter < n do
9:   Tselected ← SELECTREMAPTASK(Pselected, Vselected, eselected, msselected) /* Find a task to be remapped */
10:  if Tselected ≠ -1 then /* Task to be remapped is found, update current mapping */
11:    Mp(Tselected) ← Pselected
12:    Mv(Tselected) ← Vselected
13:    e ← eselected
14:    ms ← msselected
15:  else /* No tasks can be remapped without violating deadline */
16:    numIter++
17:    if feasible schedule found then
18:      if e < ebest then /* Better schedule is found, update the best schedule found so far */
19:        if e / ebest < 0.99 then /* Improvement > 1% */
20:          numIter ← 0
21:        end if
22:        ebest ← e
23:        msbest ← ms
24:        Mpbest ← Mp
25:        Mvbest ← Mv
26:      end if
27:    end if
28:    Randomly assign 50 percent of tasks to other processors at maximum voltage level
29:    CPTO(Mp, Mv, e, ms)
30:  end if
31: end while

We shall now give the complexity of our EGMS algorithm. Let CCPTO and
CSEL be the complexities of CPTO() and SELECTREMAPTASK(), respectively. (We
shall show the complexities of these two algorithms later.) Lines 2 to 5 execute in
O(Nt · Np) time. In the while loop, we repeat the steps from lines 9 to 14 until the current
schedule cannot be optimized further. Let r be the average number of times these steps
are repeated. When we cannot optimize the current schedule further, we execute the steps
from lines 15 to 30 to generate a new initial schedule before repeating the steps from
lines 9 to 14 again. The steps from lines 15 to 30 are repeated O(n) times.
The complexity of the while loop is therefore given by O(n · r · (CSEL + CCPTO)).
Hence, the total complexity of EGMS is O(Nt · Np + n · r · (CSEL + CCPTO)). We shall
show later that this complexity is dominated by CSEL, so it can be given as
O(n · r · CSEL). Algorithm 2 shows the CPTO() algorithm that we use to
generate a schedule. Based on the given processor and voltage level mapping, we first
replace all the communication edges between two tasks that are mapped on different
processors with tasks whose execution time equals the communication time (line
3). For each task, we calculate the length of the critical path from that task and initialize
its start and end times (lines 4-8).
We then schedule the tasks based on their critical paths (lines 9-26). Tasks with
longer critical paths have higher priorities and are scheduled first. The makespan and
energy consumption of the schedule are also calculated in the process. The complexity of
CPTO() is given as follows. Let Na be the total number of computational and
communication tasks and Nsucc be the average number of successors of a task. Let us
assume that the tasks are already in topological order. For lines 4-8, the lengths of all the
critical paths can be calculated in O(Na · Nsucc) time.

We use a Fibonacci heap to implement the priority queue Q. Therefore, insertion
into Q (line 9) takes O(1) and removal from Q (line 11) takes O(log Na) amortized time.
Line 24 requires O(Nsucc) time for execution. The while loop is executed Na times.
Thus, the total complexity of CPTO() is O(Na · Nsucc + Na · (log Na + Nsucc)) =
O(Na · (log Na + Nsucc)).

Algorithm 2: CPTO(Mp, Mv, e, ms)
1: e ← 0
2: ms ← 0
3: Replace the communication between any 2 tasks that are scheduled on different
processors with a task, where the execution time is the communication time
4: for all Tj do /* Both computational and communication tasks */
5:   Calculate lcp(j) /* Length of critical path starting from Tj */
6:   tstart(j) ← 0 /* Initialize start time of Tj to 0 */
7:   tend(j) ← 0 /* Initialize end time of Tj to 0 */
8: end for
9: Insert tasks with no incoming edges into priority queue Q, where tasks are sorted in
decreasing values of lcp
10: while Q is not empty do
11:   Remove Tj from front of Q /* Get task with largest critical path length */
12:   if Tj is communication task then
13:     e ← e + communication energy
14:     tstart(j) ← earliest time communication bus is free
15:     tend(j) ← tstart(j) + communication time
16:   else /* Tj is computational task */
17:     e ← e + e(j, Mp(j), Mv(j))
18:     tstart(j) ← earliest time PEMp(j) is free
19:     tend(j) ← tstart(j) + t(j, Mp(j), Mv(j))
20:   end if
21:   if tend(j) > ms then
22:     ms ← tend(j)
23:   end if
24:   Remove Tj and all outgoing edges from task graph
25:   Insert tasks with no incoming edges into priority queue Q
26: end while
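The critical-path ordering that CPTO() relies on (longest critical path first among ready tasks) can be sketched in Java. This sketch uses java.util.PriorityQueue instead of the Fibonacci heap discussed above, which changes the asymptotic constants but not the resulting order, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of critical-path task ordering: compute, for each task, the
// length of the longest path from it to a sink (including its own time),
// then repeatedly dispatch the ready task with the longest critical path.
final class CriticalPathOrder {
    // succ[i] = successors of task i in the DAG; time[i] = execution time.
    static List<Integer> order(int[][] succ, long[] time) {
        int n = time.length;
        long[] lcp = new long[n];
        int[] indeg = new int[n];
        for (int[] s : succ) for (int v : s) indeg[v]++;
        // Critical path lengths by repeated relaxation (n passes suffice
        // for a DAG; a reverse topological sweep would be faster).
        for (int pass = 0; pass < n; pass++) {
            for (int u = 0; u < n; u++) {
                long best = 0;
                for (int v : succ[u]) best = Math.max(best, lcp[v]);
                lcp[u] = time[u] + best;
            }
        }
        // Ready queue ordered by decreasing lcp, as in lines 9-11 of CPTO().
        PriorityQueue<Integer> q =
            new PriorityQueue<>((a, b) -> Long.compare(lcp[b], lcp[a]));
        for (int u = 0; u < n; u++) if (indeg[u] == 0) q.add(u);
        List<Integer> out = new ArrayList<>();
        while (!q.isEmpty()) {
            int u = q.poll();
            out.add(u);
            for (int v : succ[u]) if (--indeg[v] == 0) q.add(v);
        }
        return out;
    }
}
```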
Algorithm 3 shows the SELECTREMAPTASK() algorithm that is used to select
the best task to be remapped, as well as the processor and voltage level it should be
remapped to. In this algorithm, we consider the cases when Ti is remapped to processor Pj
at voltage level V(j, k) for all i, j and k. For each <Ti, Pj, V(j, k)> triplet, we invoke
CPTO() to obtain the energy consumption e′ and makespan ms′ of the new schedule
generated by the remapping (lines 7-11). We then calculate the priority pr if this new
schedule is feasible and has a lower energy consumption than the current schedule:

Algorithm 3: SELECTREMAPTASK(Pselected, Vselected, eselected, msselected)
1: Tselected ← -1
2: prselected ← -1
3: for i ← 1 to Nt do
4:   for j ← 1 to Np do
5:     Select the highest voltage level l such that the total energy consumption is reduced
6:     for k ← l down to 1 do /* No need to consider voltage levels higher than l */
7:       curProc ← Mp(i), Mp(i) ← j /* Re-map Ti to PEj at V(j, k) */
8:       curVoltage ← Mv(i), Mv(i) ← k
9:       CPTO(Mp, Mv, e′, ms′) /* Obtain schedule based on new mapping */
10:      Mp(i) ← curProc /* Revert to original mapping */
11:      Mv(i) ← curVoltage
12:      if ms′ ≤ d then /* Schedule is feasible */
13:        Calculate priority pr using (7)
14:        if pr > prselected then /* Highest priority found so far */
15:          prselected ← pr
16:          Tselected ← i
17:          Pselected ← j
18:          Vselected ← k
19:          eselected ← e′
20:          msselected ← ms′
21:        end if
22:      else /* Deadline violated; no need to consider lower voltage levels */
23:        break
24:      end if
25:    end for
26:  end for
27: end for
28: return Tselected

Here, we consider two cases. In the first case, the new schedule has a lower
energy consumption but a longer makespan. Here, we use the concept of energy gradient
to calculate the priority, so that schedules that give the largest reduction in energy
consumption with the least increase in makespan are assigned higher priorities. Most
of the schedules fall into this case. In the second case, the new schedule has both a
lower energy consumption and a shorter makespan. In this case, we assign higher
priorities to schedules that result in a larger reduction of energy consumption. We use an
arbitrarily large constant w in the calculation of the priority so as to assign higher
priorities to these schedules compared to the schedules in the first case.

This is because schedules that reduce both the energy consumption and the
makespan are much preferable to those in the first case. It shall be noted that we
do not need to invoke CPTO() and calculate the priorities for all <Ti, Pj, V(j, k)>
triplets. For a task that is remapped to a particular processor, we can obtain the highest
voltage level l such that the total energy consumption is lower than the current
consumption for all voltage levels k ≤ l (line 5). This step takes O(log Nv) time.
Therefore, we do not need to consider the schedules for voltages greater than l in the
innermost k-loop (lines 6-25). In addition, whenever an infeasible schedule is generated
for a particular voltage level, we do not need to consider the lower voltage levels either
(lines 22-23). As the schedule becomes more optimized, the number of feasible voltage
levels that tasks can be mapped to also decreases. Although the overall worst-case
complexity of SELECTREMAPTASK() is O(Nt · Np · (log Nv + Nv · CCPTO)) =
O(Nt · Np · Nv · CCPTO), the average-case complexity is usually much smaller.
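Equation (7) itself is not reproduced in this text, so the following Java sketch only encodes the two cases as described in the prose, which is an assumption about the exact form: energy saved per unit of added makespan in the first case, and an arbitrarily large constant w times the energy saving in the second.

```java
// Hypothetical sketch of the remapping priority described above.
final class RemapPriority {
    static final double W = 1e9; // arbitrarily large constant w

    // e, ms: current energy and makespan; eNew, msNew: after remapping.
    // Returns -1 when the candidate does not reduce energy (rejected).
    static double priority(double e, double ms, double eNew, double msNew) {
        if (eNew >= e) return -1;               // no energy saving: reject
        if (msNew <= ms) return W * (e - eNew); // case 2: both improve
        return (e - eNew) / (msNew - ms);       // case 1: energy gradient
    }
}
```

Because W dwarfs any realistic gradient value, every case-2 candidate outranks every case-1 candidate, matching the preference stated above.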

To implement this, the XML dataset below is used as a reference:

<system name="JTRS">
<resLibrary name="">
<resType name="gpp0">
<functions>
<f name="f0" duration="200" pwrFactor="7"/>
<f name="f1" duration="100" pwrFactor="8"/>
<f name="f2" duration="140" pwrFactor="5"/>
<f name="f3" duration="420" pwrFactor="5"/>
<f name="f4" duration="80" pwrFactor="6"/>
<f name="f5" duration="120" pwrFactor="8"/>
<f name="f6" duration="150" pwrFactor="8"/>
<f name="f7" duration="170" pwrFactor="5"/>
</functions>
</resType>
<resType name="gpp1">
<functions>
<f name="f0" duration="600" pwrFactor="3"/>

<f name="f1" duration="200" pwrFactor="5"/>
<f name="f2" duration="180" pwrFactor="3"/>
<f name="f3" duration="250" pwrFactor="3"/>
<f name="f4" duration="180" pwrFactor="3"/>
<f name="f5" duration="160" pwrFactor="5"/>
<f name="f6" duration="400" pwrFactor="4"/>
<f name="f7" duration="300" pwrFactor="5"/>
</functions>
</resType>

<resType name="asic0">
<functions>
<f name="f0" duration="50" pwrFactor="1"/>
<f name="f1" duration="20" pwrFactor="1"/>
<f name="f2" duration="130" pwrFactor="2"/>
<f name="f3" duration="10" pwrFactor="0.5"/>
<f name="f4" duration="4" pwrFactor="0.7"/>
<f name="f5" duration="110" pwrFactor="3"/>
<f name="f6" duration="30" pwrFactor="1"/>
<f name="f7" duration="20" pwrFactor="2"/>
</functions>
</resType>
</resLibrary>

<architecture name="JTRS1" profiling="false" ctrlByCPM="false">

<res resRef="gpp0" name="r0" profiling="false" DVS="false" ctrlByCPM="false">


<port name="p0"/>
</res>

<res resRef="gpp1" name="r1" profiling="false" DVS="false" ctrlByCPM="false">

<port name="p0"/>
</res>

<res resRef="asic0" name="r2" profiling="false" DVS="false" ctrlByCPM="false">


<port name="p0"/>
</res>

<taskGraphs>

<taskGraph name="TG">
<root name="n0" res="r2" inPort="p0" func="f0" deadline="0"/>
<node name="n1" res="r0" func="f1" deadline="0"/>
<node name="n2" res="r2" func="f2" deadline="0"/>
<node name="n3" res="r1" func="f3" deadline="1250"/>
<node name="n4" res="r0" func="f4" deadline="1700"/>
<node name="n5" res="r2" func="f5" deadline="0"/>
<node name="n6" res="r2" func="f6" deadline="0"/>
<node name="n7" res="r0" func="f7" deadline="1800"/>
<edge sNode="n0" dNode="n1" dataSize="10"/>
<edge sNode="n0" dNode="n2" dataSize="15"/>
<edge sNode="n1" dNode="n3" dataSize="20"/>
<edge sNode="n2" dNode="n3" dataSize="10"/>
<edge sNode="n2" dNode="n4" dataSize="5"/>
<edge sNode="n2" dNode="n5" dataSize="15"/>
<edge sNode="n3" dNode="n7" dataSize="10"/>
<edge sNode="n5" dNode="n6" dataSize="20"/>
<edge sNode="n6" dNode="n7" dataSize="25"/>
</taskGraph>
</taskGraphs>
</architecture>

<application name="app">

<phase name="phase0">
<task name="thisTask" TG="TG" start="0" period="1800" deadline="1800"/>
</phase>
</application>
</system>

Three tables are used in this project. They are:


• Task Table
• Resource Duration table
• Task Resource Map table

Task Table:

Task ID   Task Name   Assumed Name
1         T0          n0
2         T1          n1
3         T2          n2
4         T3          n3
5         T4          n4
6         T5          n5
7         T6          n6
8         T7          n7
TABLE-3.2
This task table lists the tasks considered here.

Resource Duration Table:

Resource Duration ID   Resource Name   Resource Assumed Name   Function Name   Duration
1 Gpp0 r0 f0 200
2 Gpp0 r0 f1 100
3 Gpp0 r0 f2 140
4 Gpp0 r0 f3 420
5 Gpp0 r0 f4 80
6 Gpp0 r0 f5 120

24
7 Gpp0 r0 f6 150
9 Gpp0 r0 f7 170
10 Gpp1 r1 f0 600
11 Gpp1 r1 f1 200
12 Gpp1 r1 f2 180
13 Gpp1 r1 f3 250
14 Gpp1 r1 f4 180
15 Gpp1 r1 f5 160
16 Gpp1 r1 f6 400
17 Gpp1 r1 f7 300
18 Asic0 r2 f0 465
19 Asic0 r2 f1 478
20 Asic0 r2 f2 551
21 Asic0 r2 f3 10
22 Asic0 r2 f4 45
23 Asic0 r2 f5 47
24 Asic0 r2 f6 782
25 Asic0 r2 f7 691
TABLE-3.3
Each resource provides eight functions, and each function has a time delay, i.e.,
an execution delay. Resources are allocated to tasks based on the duration of each
function on the three resources, as shown in the table below.
Task Resource Mapping:

ID   Process ID   Task Name   Resource Name   Start Time   End Time   Execution Delay   Duration
1    1            T0          r1              0            200        200               1800
2    1            T1          r0              200          300        100               1800
3    1            T2          r0              300          440        140               1800
4    1            T3          r2              440          450        10                1800
5    1            T4          r2              450          495        45                1800
6    1            T5          r2              495          542        47                1800
7    1            T6          r0              542          692        150               1800
8    1            T7          r0              692          862        170               1800
TABLE-3.4
Before executing this in Java, the mapping is implemented as a stored procedure;
the stored procedure is listed in Annexure 1.
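The selection logic of that stored procedure (for each task's function, pick the resource with the minimum duration, and chain start/end times serially) can be sketched in Java. The duration data used below is a made-up subset, not the full TABLE-3.3:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch of the greedy mapping done by the stored procedure in Annexure 1:
// each function is assigned to its minimum-duration resource, and tasks
// are scheduled back-to-back (serial start/end times, as in TABLE-3.4).
final class GreedyMapper {
    // durations.get(func) -> map of resource name to duration for that function
    static List<String> map(List<String> funcs,
                            Map<String, Map<String, Integer>> durations) {
        List<String> rows = new ArrayList<>();
        int start = 0;
        for (String f : funcs) {
            Map<String, Integer> d = durations.get(f);
            String best = Collections.min(d.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
            int dur = d.get(best);
            rows.add(f + " -> " + best + " [" + start + "," + (start + dur) + ")");
            start += dur; // serial schedule: next task starts when this one ends
        }
        return rows;
    }
}
```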

CHAPTER 4

LITERATURE REVIEW

[1] The problem of scheduling dependent tasks was considered by M.R. Garey and
D.S. Johnson, who showed that scheduling tasks with precedence constraints on a
heterogeneous multiprocessor system with the objective of minimizing the total
energy consumption is NP-complete [6].

[2] List scheduling was used by F. Gruian and K. Kuchcinski in LEneS, a
scheduling heuristic with a priority function based on the average energy
consumption. Whenever an infeasible schedule is found, the priorities of the tasks
are dynamically increased and the tasks are rescheduled [5].

[3] A fast heuristic algorithm was proposed by B. Gorjiara, P. Chou, N. Bagherzadeh,
M. Reshadi, and D. Jensen. It randomly slows down some of the high-power tasks;
tasks with higher power consumption have higher probabilities of being slowed
down [3].

[4] A stochastic-based scheduling algorithm was proposed by B. Gorjiara,
N. Bagherzadeh, and P. Chou. It randomly slows down or speeds up tasks based on
their energy gradients and execution delays. Tasks with higher energy gradients and
lower execution delays are assigned higher probabilities of being slowed down [3].

[5] An Integer Linear Programming (ILP) method was proposed by Zhang et al. It
formulates the voltage scaling problem for a fixed task ordering, without
considering communication time and energy [2].

CHAPTER 5

CONCLUSION

In this design, fast and efficient processor scheduling is performed using a
time-based algorithm. The figure below shows the output of the time scheduling for each
resource. The first bar diagram shows resource r0 allocated to tasks T1, T2, T6 and T7;
resource r1 is allocated to T0 only; and resource r2 to T3, T4 and T5. Resources are
allocated based on the time the processors need to complete the tasks before their
deadlines; a particular resource is chosen for a task based on the execution delay of that
processor for the task. The output mapping table is also shown below.

The table below shows that every task is completed before its deadline.

ID   Process ID   Task Name   Resource Name   Start Time   End Time   Execution Delay   Duration
1    1            T0          r1              0            200        200               1800
2    1            T1          r0              200          300        100               1800
3    1            T2          r0              300          440        140               1800
4    1            T3          r2              440          450        10                1800
5    1            T4          r2              450          495        45                1800
6    1            T5          r2              495          542        47                1800
7    1            T6          r0              542          692        150               1800
8    1            T7          r0              692          862        170               1800

ANNEXURE-1

Stored Procedure for Task Resource Mapping

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go

ALTER PROCEDURE [dbo].[EGMS]

28
As
BEGIN

truncate table TaskResourceMapping

Declare @StartTime int;


Declare @EndTime int
Declare @Count int
Declare @i int
Declare @TaskSelected varchar(100)
Declare @TempResourceSelected varchar(100)
Declare @ResourceSelected varchar(100)
Declare @Duration int

set @StartTime=0;
set @TaskSelected=''
set @ResourceSelected=''
set @Count=0
set @i=1
set @TempResourceSelected=''
set @Duration=0

select @Count=count(*) from task

while(@i<=@count)
begin
select @TaskSelected=taskname from task where TaskID=@i

select @Duration=min(duration) from resourceduration where
Substring(FunctionName,2,1)=substring(@TaskSelected,2,1)

select @ResourceSelected=ResourceAssumedName from resourceduration where
duration=@Duration

insert into TaskResourceMapping


values(1,@TaskSelected,@ResourceSelected,@StartTime,@StartTime+@Duration,@Duration,1800)

set @StartTime=@StartTime+@Duration

set @i=@i+1
end

END

select ResourceName,taskname,ExecutionDelay from TaskResourceMapping
order by ResourceName

Coding in Java:

import java.sql.*;

public class SQLConnectionClass
{
    public static void main(String[] args)
    {
        Connection con = null;
        CallableStatement proc_stmt = null;
        ResultSet rs = null;
        try
        {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
            con = DriverManager.getConnection(
                "jdbc:sqlserver://sparksystech;databaseName=EGMS", "sa", "sa123#");
            proc_stmt = con.prepareCall("{ call EGMS }");
            rs = proc_stmt.executeQuery();
            while (rs.next())
            {
                String resourcename = rs.getString(1);
                String taskname = rs.getString(2);
                int executiondelay = rs.getInt(3);
                System.out.println("Resource Name : " + resourcename
                    + " Mapped Task : " + taskname
                    + " Execution Delay : " + executiondelay);
            }
        } catch (ClassNotFoundException ex) {
            ex.printStackTrace();
        } catch (SQLException ex) {
            ex.printStackTrace();
        } finally {
            try {
                if (rs != null) rs.close();
                if (proc_stmt != null) proc_stmt.close();
                if (con != null) con.close();
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }
    }
}

31
ANNEXURE-2

SCREEN SHOTS

Snapshot of the output for resource allocation to the tasks.

FIGURE 3.1: Task Resource Mapping Graph

References

[1] http://sourceforge.net/projects/lpsolve/, 2008.
[2] http://cecs02.cecs.uci.edu/DVS/, 2008.
[3] B. Gorjiara, N. Bagherzadeh, and P. Chou, “Ultra-Fast and Efficient Algorithm for
Energy Optimization by Gradient-Based Stochastic Voltage and Task Scheduling,” ACM
Trans. Design Automation of Electronic Systems, vol. 12, Article 39, no. 4, Sept. 2007.
[4] Y. Yu and V.K. Prasanna, “Energy-Balanced Task Allocation for Collaborative
Processing in Wireless Sensor Networks,” Mobile Networks and Applications, vol. 10,
pp. 115-131, 2005.
[5] F. Gruian and K. Kuchcinski, “LEneS: Task Scheduling for Low- Energy Systems
Using Variable Supply Voltage Processors,” Proc. Asia and South Pacific Design
Automation Conf. (ASP-DAC ’01), pp. 449-455, Jan. 2001.
[6] M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory
of NP-Completeness. W.H. Freeman, 1979.
